
Large Scale Analytics in the Enterprise

August 9, 2012

SOURCE: Think Big Analytics

The growth of Internet businesses has led to a whole new scale of data processing
challenges. Companies like Google, Facebook, Yahoo, Twitter, and Quantcast now
routinely collect and process hundreds to thousands of terabytes of data every day.
This represents a significant increase in the volume of data that can be processed, a
major reduction in the processing time required, and a sharp drop in the cost of
storing data. The most important technique these companies use is to store data across
a cluster of servers and process it with a distributed data processing technique
invented by Google, called MapReduce. Facebook, Yahoo, Twitter, and Quantcast all
process data with Hadoop, an open-source implementation of MapReduce.
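To make the MapReduce idea concrete, the classic illustration is a word count: a map
step emits an intermediate (word, 1) pair for every word, the framework groups those
pairs by key, and a reduce step sums the values for each word. The Python sketch below
is a minimal, single-process illustration of that pattern under assumed function names
(map_phase, shuffle, reduce_phase); it is not Hadoop's actual API. On a real Hadoop
cluster, the same map and reduce logic would run in parallel across many servers, with
the framework handling data distribution and the shuffle between phases.

    from collections import defaultdict

    # Minimal single-machine sketch of the MapReduce pattern (illustrative
    # names, not the Hadoop API). A real cluster runs map and reduce tasks
    # in parallel on many servers and handles the shuffle automatically.

    def map_phase(document):
        # Map: emit an intermediate (key, value) pair for each word.
        for word in document.split():
            yield (word.lower(), 1)

    def shuffle(pairs):
        # Shuffle: group intermediate values by key.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        # Reduce: combine all values for one key into a single result.
        return (key, sum(values))

    if __name__ == "__main__":
        documents = ["big data needs big clusters",
                     "hadoop implements mapreduce"]
        intermediate = [pair for doc in documents for pair in map_phase(doc)]
        counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
        print(counts)  # e.g. {'big': 2, 'data': 1, 'needs': 1, ...}

Because each map call touches only its own input and each reduce call touches only one
key's values, the work splits cleanly across machines, which is what lets Hadoop scale
this pattern to the data volumes described above.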

Click here to view this white paper

