
Hadoop: How Open Source Can Whittle Big Data Down to Size

March 2, 2012

SOURCE: Computerworld

Techworld Australia caught up with Doug Cutting to talk about Apache Hadoop, a software framework he created for processing massive amounts of data.

In 2011, 'Big Data' was, next to 'Cloud', the most-dropped buzzword of the year. In 2012, Big Data is set to become a serious issue that many IT organisations across the public and private sectors will need to come to grips with.

The challenge essentially comes down to this: How do you store the massive amounts of often-unstructured data generated by end users and then transform it into meaningful, useful information?

One tool that enterprises have turned to for help with this is Hadoop, an open source framework for the distributed processing of large amounts of data.
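To give a concrete sense of the kind of workload Hadoop handles, the sketch below is the canonical MapReduce word-count job, written against the standard Hadoop MapReduce API (org.apache.hadoop.mapreduce). It is not taken from the article or from Doug Cutting's remarks; it is simply an illustrative example, assuming a reasonably recent Hadoop release. The mapper emits a (word, 1) pair for every word it sees, and the reducer sums those pairs per word across the cluster.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: for each line of input, emit (word, 1) for every token.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sum the counts emitted for each distinct word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a jar and submitted with `hadoop jar wordcount.jar WordCount <input> <output>`, the framework splits the input files across map tasks running on the cluster's data nodes and aggregates the per-word counts in the reduce phase, which is the distributed-processing pattern the article refers to.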


DATA and ANALYTICS, Featured Articles, OPEN SOURCE, SOCIAL BUSINESS
