
Hadoop: How Open Source Can Whittle Big Data Down to Size

March 2, 2012

SOURCE: Computerworld

Techworld Australia caught up with Doug Cutting to talk about Apache Hadoop, a software framework he created for processing massive amounts of data.

In 2011 ‘Big Data’ was, next to ‘Cloud’, the most dropped buzzword of the year. In 2012 Big Data is set to become a serious issue that many IT organisations across the public and private sectors will need to come to grips with.

The challenge essentially comes down to this: How do you store the massive amounts of often-unstructured data generated by end users and then transform it into meaningful, useful information?

One tool that enterprises have turned to for help with this is Hadoop, an open source framework for the distributed processing of large amounts of data.
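
To make that concrete, below is a minimal sketch of the kind of job Hadoop runs: the classic MapReduce word count, written against Hadoop's Java MapReduce API. Mappers each process a slice of the input and emit (word, 1) pairs, the framework shuffles those pairs by key across the cluster, and reducers sum the counts for each word. The class names and input/output paths are illustrative only and are not taken from the article.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: each mapper reads one split of the input and emits (word, 1) pairs.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: the framework groups values by key, so each reducer
  // receives one word together with all of its counts and sums them.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

A job like this is typically packaged into a jar and launched with something like `hadoop jar wordcount.jar WordCount <input> <output>` against data already loaded into HDFS; the same program runs unchanged whether the cluster has one node or thousands, which is the scaling property the article is pointing to.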


