Hadoop: How Open Source Can Whittle Big Data Down to Size
March 2, 2012
Techworld Australia caught up with Doug Cutting to talk about Apache Hadoop, a software framework he created for processing massive amounts of data.
In 2011, 'Big Data' was, next to 'Cloud', the most-dropped buzzword of the year. In 2012, Big Data is set to become a serious issue that many IT organisations across the public and private sectors will need to come to grips with.
The challenge essentially comes down to this: How do you store the massive amounts of often-unstructured data generated by end users and then transform it into meaningful, useful information?
One tool enterprises have turned to for help with this is Hadoop, an open source framework for the distributed processing of large amounts of data.
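At the heart of Hadoop's processing model is MapReduce: a map step turns each input record into key-value pairs, a shuffle groups pairs by key, and a reduce step aggregates each group. The sketch below is an illustrative pure-Python simulation of that flow (not Hadoop's actual Java API); the word-count job and the sample log lines are hypothetical examples.

```python
from collections import defaultdict

def map_phase(record):
    # Map step: emit a (word, 1) pair for every word in one input record.
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Reduce step: aggregate all counts emitted for one word.
    return (key, sum(values))

def run_job(records):
    # Shuffle step: group mapped pairs by key, as Hadoop does
    # between the map and reduce phases (here, in one process).
    grouped = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            grouped[key].append(value)
    return dict(reduce_phase(k, v) for k, v in grouped.items())

# Hypothetical unstructured input: lines of log text.
logs = ["error disk full", "error network down", "info disk ok"]
print(run_job(logs))  # e.g. {'error': 2, 'disk': 2, ...}
```

In a real cluster, the map and reduce functions run in parallel across many machines over data stored in HDFS; the single-process version above only shows the shape of the computation.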