Are You ‘Virtually’ Stuck in the Past?

October 2, 2017

Featured article by David Logue, Kroll Ontrack

After almost a decade of recovering data from virtual platforms, I have seen a steady increase in virtualization adoption. Most IT departments now use virtualization within their companies in some way. Servers were among the first to be virtualized, and thanks to new developments in the market (virtualized desktops [VDI], cloud environments, and virtualized networking), virtualization is now part of every area of the IT infrastructure. However, as the years pass, it is apparent that virtualized systems are still far from immune to data loss, especially when users lack adequate training and continue to make poor decisions.

As a senior lead data recovery engineer, I have watched my team receive a growing number of data recovery jobs involving some form of virtualization every year since 2008. We conducted an internal analysis of the most common reasons for data loss in virtual systems reported across our worldwide data recovery labs. The results showed five prominent categories of data loss in virtual systems for 2016:

* Deleted VMs (Virtual Machines) (40%)
* Hardware Failures (30%)
* Migration Failures (10%)
* Snapshots gone wrong (10%)
* Other (10%)

What is so striking about these numbers is that they have not changed much over the years; the primary causes of data loss in virtualized environments differ only slightly from years past. A study asking the same questions in 2012 revealed the following results:

* Hardware/RAID failures (40%)
* Deleted virtual disks and/or snapshots (36%)

These two categories remain dominant. What has become less of a factor over the years is formatting and reinstallation problems that lead to data loss: they accounted for 11% of all cases in 2012, and today they are folded into the 10% "other" category.

While virtualization remains a complicated technology, some aspects have improved over the years, such as the user experience, with newer interfaces and administration panels. As an example, migrating hundreds or thousands of VMs from one storage platform to another has become easier.

What businesses should take away from this information is that data loss on virtual platforms is still possible and the risk to critical data is real. To prevent this type of data loss, it is imperative that businesses do the following:

1. Use the right backup software for your virtual environment. When choosing backup software for your virtual environment, the most essential factor is whether the software can truly back up the virtualized environment. Once that is determined, compare backup time, recovery time, security, and compression. Good backup solutions for VMs are able to mount the backup while the files are transferred back to the main host system.

2. Understand the backup software. Always create backups if you want to make sure that everything can be fully recovered in the event of a failure. Businesses should fully understand how the backup software works and what its limitations are. Don't forget to check your backups regularly to make sure everything is functioning properly.

3. Do not save backups and active VMs on the same storage. If you save your backups on the same hard drive or storage space where your active VMs are located, you risk ending up with a total data loss. If a backup fails and a VM is active, it is possible that the active VM will overwrite the blocks allocated by the failed backup. In order to prevent this from occurring, always store your backups and your active VMs on different storage, preferably offsite. It is also important to make use of a backup application that allows users to schedule the backups based on the needs of the business.

4. Use technologies available with your virtualization solution wisely. Several technologies have an impact when used with virtualization. One example is thin provisioning, which means that only the storage space actually needed at the moment is allocated; when additional space is needed, free space is allocated to the virtual disk on demand. However, thin provisioned virtual disks require additional care in a data loss event. When a data loss occurs, other thin provisioned virtual machines on the same datastore must be stopped immediately; otherwise, running virtual machines may claim the blocks that the deleted virtual disk used to occupy, making data recovery much more difficult. Keeping this in mind, it is always a good idea to think through a complex technology before implementing it.

5. Think and plan before you use VMs. Always remember that virtualization and VMs are not error-free and are just as prone to failure as every other technology. Before creating a virtual environment for sensitive applications, think first. Some applications with a high input/output (I/O) rate may be better suited for physical server environments. Planning ahead with virtualization is key when it comes to preventing data loss.
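Parts of tips 2 and 3 above can be automated. The following is a minimal Python sketch, with hypothetical paths and an assumed 24-hour freshness threshold, that checks whether a backup is recent and whether it lives on different storage than the active VM datastore:

```python
import os
import time

# Hypothetical checks for backup hygiene; the 24-hour threshold and the
# paths passed in are illustrative assumptions, not product settings.

def backup_is_recent(backup_path, max_age_hours=24):
    """True if the backup file was modified within max_age_hours."""
    age_seconds = time.time() - os.path.getmtime(backup_path)
    return age_seconds <= max_age_hours * 3600

def on_separate_storage(backup_path, vm_store_path):
    """True if the two paths live on different devices (st_dev differs),
    i.e. the backup does not share storage with the active VMs."""
    return os.stat(backup_path).st_dev != os.stat(vm_store_path).st_dev
```

A scheduled job could run checks like these after every backup window and raise an alert whenever either test fails.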

One of the main data loss problems with virtualization stems from the setup of the virtualized server and storage. When documentation is missing for how VMs and virtual servers were built and connected to applications and sensitive data, data recovery can become costly and time-consuming.
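One lightweight way to keep such documentation current is to maintain a machine-readable inventory mapping each VM to its datastore and workloads. A minimal sketch, where the VM names and datastores are invented for illustration (in practice this would be exported from the hypervisor's API rather than hand-written):

```python
# Hypothetical VM inventory: names, datastores, and workloads are made up.
inventory = {
    "web-01": {"datastore": "ds-prod-1", "apps": ["nginx"], "sensitive": False},
    "db-01":  {"datastore": "ds-prod-2", "apps": ["postgres"], "sensitive": True},
}

def vms_on_datastore(inventory, datastore):
    """List the VMs on a given datastore, so a recovery engineer can see
    immediately what a storage failure affects."""
    return sorted(vm for vm, info in inventory.items()
                  if info["datastore"] == datastore)
```

Given the sample data above, `vms_on_datastore(inventory, "ds-prod-2")` identifies `db-01`, the sensitive database VM, as the machine affected by a failure of that datastore.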

Every virtualized IT environment has its own advantages and disadvantages. When dealing with data loss, it is best to consult a data recovery professional.


About the Author:

David Logue assists customers around the world as the Senior Lead Data Recovery Engineer for Kroll Ontrack (www.krollontrack.com). With over 15 years of data recovery experience, Mr. Logue leads a team that specializes in the remote recovery of high-end SAN and NAS storage technologies using Kroll Ontrack’s patented Remote Data Recovery solution. He also works closely with development teams to communicate client needs, build innovative tools, and create solutions for specific data loss needs. Follow Kroll Ontrack on Twitter: @KrollOntrack.
