
It’s High Time for Web-Scale Storage

September 12, 2016

Featured article by Stefan Bernbo, founder and CEO of Compuverde

What is web-scale IT? Gartner coined the phrase back in 2013 to refer to an architectural approach that enables organizations to achieve extreme levels of agility, scalability and service delivery as compared to many of their enterprise counterparts. The analyst firm predicts that web-scale IT will be an architectural approach found in 50 percent of global enterprises by 2017, up from less than 10 percent in 2013.

Data is in a seemingly unending growth cycle, due largely to cloud services, mobility and the IoT, which is driving the development of new storage architectures to store all of this newly generated information. It is becoming increasingly clear that even a linear growth trajectory for storage is insufficient to deliver the quantity of storage needed for data produced by the Internet of Things. Current architectures have bottlenecks that, while merely inconvenient for legacy data, are simply untenable for the scale of storage needed today.

Enterprises must adapt quickly or be left behind. Many are choosing, in keeping with Gartner’s prediction, to deploy web-scale architectures that enable virtualization, compute and storage functionality on a tremendous scale.

Overcoming Performance Issues

In the web-scale world there is no room for bottlenecks, so this type of storage focuses relentlessly on removing them from the storage architecture. A bottleneck that functions as a single point of entry can become a single point of failure, especially under the demands that cloud computing places on Big Data storage. Adding redundant, expensive, high-performance components to alleviate the bottleneck, as most service providers do today, quickly adds cost and complexity to a system. A horizontally scalable web-scale system that distributes data among all nodes, on the other hand, makes it possible to choose cheaper, lower-energy hardware.
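As a rough illustration of how distributing data among all nodes avoids a single point of entry, consistent hashing on a hash ring is one common way to spread objects evenly across commodity servers. The node names, virtual-node count, and hash scheme below are illustrative assumptions, not details from the article:

```python
# Sketch: spreading objects across storage nodes with a hash ring so that
# no single node becomes an entry-point bottleneck. Names are hypothetical.
import hashlib
from bisect import bisect_right

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Each physical node gets many virtual points on the ring,
        # which evens out load across cheap commodity servers.
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect_right(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("photos/vacation-001.jpg")
```

Because placement is computed from the key itself, any node can answer "who owns this object?" without consulting a central directory, and adding a node only remaps a fraction of the keys.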

Cloud providers must manage far more users and greater performance demands than do enterprises, so solving performance problems like data bottlenecks is a chief concern. While the average user of an enterprise system demands high performance, these systems typically have fewer users, and those users can access their files directly through the local network. Furthermore, enterprise system users are typically accessing, sending and saving relatively low-volume files like document files and spreadsheets, using less storage capacity and alleviating performance load.

This is not the case for someone using the Cloud outside the enterprise, however. The system is being accessed simultaneously over the Internet by an order of magnitude more users, which itself becomes a performance bottleneck. The cloud provider’s storage system not only has to scale to each additional user, but must also maintain performance across the aggregate of all users. Significantly, the average cloud user is accessing and storing far larger files – music, photo and video files – than does the average enterprise user. Web-scale architectures are designed to prevent the bottlenecks that this volume of usage causes in traditional legacy storage setups.

The Cost of Failure

Web-scale architecture cannot depend on any particular piece of hardware; it must be built on software exclusively. Since hardware inevitably fails (at any number of points within the machine), traditional appliances – storage hardware with proprietary software built in – typically include multiple copies of expensive components to anticipate and prevent failure. These extra layers of identical hardware drive up energy costs and add complication to a single appliance. Because the cost per appliance is quite high compared with commodity servers, cost estimates often skyrocket when companies begin examining how to scale out their data centers. One way to avoid this is to use a software-defined vNAS or vSAN in a hypervisor environment, both of which offer a way to build out servers at a web-scale rate.
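A minimal sketch of the software-defined approach described above: rather than duplicating components inside one appliance, software keeps several copies of each object on cheap commodity nodes and re-replicates when a node fails. The replication factor, node names, and placement scheme here are hypothetical:

```python
# Sketch: durability in software via replication to commodity nodes,
# instead of redundant hardware inside a single appliance.
import random

REPLICATION_FACTOR = 3  # illustrative choice, not from the article

def place_replicas(obj_key, healthy_nodes, rf=REPLICATION_FACTOR):
    """Pick rf distinct healthy nodes to hold copies of obj_key."""
    if len(healthy_nodes) < rf:
        raise RuntimeError("not enough healthy nodes for requested durability")
    # Seeding with the key keeps placement deterministic across calls.
    rng = random.Random(obj_key)
    return rng.sample(sorted(healthy_nodes), rf)

def repair(placement, failed_node, healthy_nodes):
    """Re-replicate objects that lost a copy when failed_node died."""
    for obj_key, nodes in placement.items():
        if failed_node in nodes:
            survivors = [n for n in nodes if n != failed_node]
            candidates = [n for n in healthy_nodes if n not in survivors]
            placement[obj_key] = survivors + candidates[:1]
    return placement

healthy = {"n1", "n2", "n3", "n4"}
replicas = place_replicas("photos/cat.jpg", healthy)
```

The point of the sketch is that failure handling is a routine software operation over interchangeable nodes, not a property bought with duplicated hardware.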

Maintaining Consistency

Distributed storage is the best way to build at web-scale levels, even though the data center trend has been moving toward centralization. This is because there are now ways to improve performance at the software level that neutralize the performance advantage of a centralized data storage approach.

The nature of cloud services requires that they be accessible to any user from anywhere in the world, so service providers must be able to offer data centers located across the globe to minimize load time. Global availability, however, brings a number of challenges. User load concentrates in the data center serving each company's region, yet the data stored in every location must remain in sync. From an architectural point of view, it is better to solve these problems at the storage layer than up at the application layer, where they become more difficult and complicated to address.
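One simplified way to picture sync at the storage layer is a merge rule that lets replicas in different data centers converge on the same state. Real systems use richer mechanisms (version vectors, quorum writes); this last-write-wins sketch, with made-up keys and versions, only illustrates the idea:

```python
# Sketch: last-write-wins merge of replicated object metadata, so copies in
# different data centers converge. Keys and versions are invented examples.

def merge(local, remote):
    """Merge two {key: (version, value)} replicas; higher version wins."""
    merged = dict(local)
    for key, (ver, val) in remote.items():
        if key not in merged or ver > merged[key][0]:
            merged[key] = (ver, val)
    return merged

eu = {"obj1": (1, "v1"), "obj2": (3, "eu-edit")}
us = {"obj1": (2, "us-edit"), "obj3": (1, "new")}
synced = merge(eu, us)
# Merging in either direction yields the same state, so sites converge.
```

Because the merge is commutative, each data center can apply updates from its peers in any order and still end up with identical contents.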

Power outages and other natural or man-made disasters can take a local server farm offline, so global data centers must be resilient to localized disaster. If a local data center or server goes down, global data centers must reroute data quickly to available servers to minimize downtime. While there are certainly solutions today that solve these problems, they do so at the application layer. Attempting to solve these issues that high up in the hierarchy of data center infrastructure – instead of solving them at the storage level – presents significant cost and complexity disadvantages. Solving these issues directly at the storage level through web-scale architectures delivers significant benefits in efficiency, time and cost savings.
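The rerouting behavior described above can be sketched as picking the lowest-latency data center that still reports healthy. The site names and latency figures below are invented for illustration:

```python
# Sketch: route each request to the nearest healthy data center, falling
# back automatically when the local site goes offline. Values are made up.

def route(sites):
    """sites: {name: {"latency_ms": int, "healthy": bool}} -> chosen site."""
    healthy = {n: s for n, s in sites.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy data center available")
    return min(healthy, key=lambda n: healthy[n]["latency_ms"])

sites = {
    "eu-west":  {"latency_ms": 12,  "healthy": False},  # local outage
    "us-east":  {"latency_ms": 85,  "healthy": True},
    "ap-south": {"latency_ms": 140, "healthy": True},
}
chosen = route(sites)  # skips the downed local site
```

When the local site recovers and reports healthy again, the same rule routes traffic back to it without any special-case logic.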

Agility for Ongoing Change

As the mobile, connected world continues to generate data, enterprises need cheap storage. If companies continue to rely on expensive, inflexible appliances in their data centers, they will be forced to outlay significant funds to develop the storage capacity they need to meet customer needs.

Agility is necessary across all business settings—be it budgets, network environments or corporate priorities—in order to be able to respond to market demands. Having an expansive, rigid network environment locked into configurations determined by an outside vendor severely curtails the ability of the organization to react nimbly to market demands, much less anticipate them in a proactive manner. Web-scale storage philosophies enable major enterprises to “future proof” their data centers. Since the hardware and the software are separate investments, either may be switched out to a better, more appropriate option as the market dictates, at minimal cost.

Storing the Future

Web-scale IT is a global class of computing used to deliver the capabilities of large cloud service providers within an enterprise IT setting. For organizations dealing with the data deluge, web-scale must include a new storage solution. Software-defined storage and hyper-converged infrastructures are newer concepts that serve this model well, allowing enterprises to scale to huge compute environments with integrated virtualization components. Web-scale storage architecture is the answer for those who need to store quantities of data the way Facebook or Amazon do – whether they need that capacity now or will need it soon.


About the Author:

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
