Inside the Briefcase

FICO Scales with Oracle Cloud

Doug Clare, Vice President at FICO, describes how Oracle...

Is Your Enterprise IT the Best It Can Be?

Enterprise IT is a driver of the global economy....

The IoT Imperative for Consumer Industries

This IDC white paper examines current and future...

How to align your visual brand guidelines and create consistently on-brand content

In this ebook, we’ll explore the various themes leading...

Your B2B Content Strategy in 2017: How To Think Like A Movie Studio + 6 Other Tactics

Jon Lombardo, Creative Lead, LinkedIn, reveals in this presentation...

Hadoop: How Open Source Can Whittle Big Data Down to Size

March 2, 2012

SOURCE: Computerworld

Techworld Australia caught up with Doug Cutting to talk about Apache Hadoop, a software framework he created for processing massive amounts of data.

In 2011, ‘Big Data’ was, next to ‘Cloud’, the most-dropped buzzword of the year. In 2012, Big Data is set to become a serious issue that many IT organisations across the public and private sectors will need to come to grips with.

The challenge essentially comes down to this: How do you store the massive amounts of often-unstructured data generated by end users and then transform it into meaningful, useful information?

One tool that enterprises have turned to for help is Hadoop, an open source framework for the distributed processing of large amounts of data across clusters of commodity machines.
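To make the MapReduce model concrete, here is a minimal sketch of the canonical word-count job written against Hadoop's Java mapreduce API: mappers emit a (word, 1) pair for every token in their input split, and reducers sum the counts for each word. The HDFS input and output paths (args[0] and args[1]) are hypothetical command-line arguments, not anything prescribed by the article.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: runs in parallel over input splits, emitting (word, 1) per token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: receives all the counts for one word and sums them.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregate on each node to cut shuffle traffic
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // hypothetical HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // hypothetical HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar and submitted with the hadoop jar command, a job like this lets the framework handle splitting the input, scheduling map and reduce tasks across the cluster, and re-running any tasks that fail.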

Read More

DATA and ANALYTICS, Featured Articles, OPEN SOURCE, SOCIAL BUSINESS
