
IT Briefcase Exclusive Interview: New Era of Performance Optimization in SAN Storage

May 22, 2015

Featured interview with Brian Morin, Global VP of Marketing, Condusiv Technologies

  • Q. What are some of the driving factors affecting SAN performance?

A. One of the biggest silent killers of SAN performance has nothing to do with the SAN itself, but with the volume of small, fractured, random I/O that overwhelms bandwidth from the SAN as a result of a fragmented logical disk. When people think of fragmentation, they usually think of it as a physical disk platter problem that increases disk latency through added head movement. Fragmentation in a SAN environment is very different: it is not a physical platter problem at all, but a fragmented logical disk outside the SAN that inflates the IOPS requirement for any given workload, regardless of whether the underlying media is disk or SSD. Fragmentation is inherent to the behavior of Windows, so as the OS fragments the logical disk, it takes multiple I/O operations to process a single file when a single I/O would have sufficed had Windows written the file contiguously in the first place. This Windows “I/O tax” in a SAN environment steals throughput from server to storage and makes the SAN device more IOPS dependent than it really needs to be.
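
As a rough illustration of that “I/O tax,” here is a minimal sketch in Python (with hypothetical file and fragment sizes, not a description of any Condusiv mechanism) comparing the number of I/O operations needed to read the same file when it sits in one contiguous extent versus scattered across small fragments:

import math

def io_ops(file_kb, extent_kb, max_io_kb=1024):
    # Each extent is read with requests no larger than max_io_kb,
    # and a single request cannot span two extents.
    extents = math.ceil(file_kb / extent_kb)
    ops_per_extent = math.ceil(min(extent_kb, file_kb) / max_io_kb)
    return extents * ops_per_extent

file_kb = 8 * 1024                              # an 8 MB file (hypothetical)
print(io_ops(file_kb, extent_kb=file_kb))       # contiguous: 8 I/O operations
print(io_ops(file_kb, extent_kb=64))            # 64 KB fragments: 128 I/O operations

The same 8 MB of data costs sixteen times as many I/O operations once the logical disk has chopped it into 64 KB pieces, and that extra load is exactly what the SAN ends up absorbing.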

  • Q. A common perception in IT is that fragmentation is not an issue for a shared storage system. Do SANs need defragmentation?

A. The answer is actually yes and no. Yes, SANs need to be free of fragmentation at the logical disk software layer for optimal performance, but No, you can’t just run a traditional defragmentation utility on a live, production SAN. In that case, the remedy is worse than the problem. As a result, most organizations just learn to live with this I/O inefficiency and mask the problem with more spindles or flash. VMware’s Monitoring and Performance Guide for vSphere is replete with recommendations around defragmentation. Whereas its number one recommendation is to increase virtual machine memory, its number two recommendation is to defragment the file system on all guests. In fact, Storage Switzerland just released a best practice video on steps to increase MS-SQL performance before simply adding more flash, and its number one recommendation was to solve logical disk fragmentation.

The problem with “defragging” in a SAN environment is that the utility ends up competing with SAN technologies for physical layer management, moving blocks that the SAN purposefully laid out, and the change block activity ends up triggering and skewing all sorts of advanced features like thin provisioning, replication, snapshots, etc. It’s for this very reason that we developed our patented inline fragmentation prevention technology that is included in both our Diskeeper® Server and V-locity® I/O reduction product lines, so administrators can keep their systems running like new.
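
For administrators who want to quantify the problem before deciding anything, analysis is safe even where a full defrag pass against a live SAN-backed volume is not. Here is a minimal sketch (Python run inside a Windows guest, calling the stock defrag.exe in analyze-only mode; it assumes administrator rights and that the /A and /V switches are available on the guest's Windows version):

import subprocess

# Analyze-only run of the built-in Windows defragmenter: it reports
# fragmentation for the guest's C: volume but moves no blocks.
result = subprocess.run(
    ["defrag", "C:", "/A", "/V"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)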

  • Q. Why is fragmentation prevention so important? What are some of the benefits of planning ahead?

A. Ultimately, the benefit is putting an end to any and all performance degradation to keep systems running like new. Most admins are simply unaware of how much performance has actually degraded due to this I/O inefficiency. In our experience with thousands of customers, we find fragmentation dampens overall performance by 25% or more on I/O-intensive applications, and in more severe cases it is much more than that. In fact, for some organizations it is not just an issue of sluggish performance but of reliability, as they have to regularly reboot servers and run into issues with certain data sets. When spending tens to hundreds of thousands of dollars on SAN storage systems and new flash systems, why would anyone want to give back 25% of the performance they paid for when it can be solved so easily and inexpensively? That’s why our customers who understand the I/O overhead issue of a fragmented logical disk in SAN environments view our solution as a “no brainer.”

  • Q. What advice would you give about the steps companies should take regarding SAN fragmentation?

A. There are organizations that run defragmentation processes as part of storage management; however, it is a very laborious process in a SAN environment, and you would never “defrag” an SSD. Because of the performance-dampening change block activity that defragmentation generates, an administrator has to migrate data, take the fragmented volume offline, perform the defragmentation, and then bring it back online.

The beauty of our inline approach is that fragmentation is prevented from ever occurring in the first place. This enables systems to process more data in less time and benefits SSDs as much as mechanical disks. The smartest thing a company could do is simply evaluate our software to see exactly how badly systems are affected, and to see the real-world performance benefit in a before/after comparison prior to any purchase commitment.

  • Q. Are solutions available to optimize virtualized workloads connected to SAN as well as those in Windows-based physical server environments?

A. The benefit of fragmentation prevention is actually more pronounced in a virtual environment than in a physical environment, and that’s because of the “I/O blender” effect specific to virtual environments. It’s bad enough in a physical environment when you’re dealing with a lot of unnecessarily small, fractured I/O from a fragmented logical disk that requires more I/O operations than necessary to process any unit of data, but at least the traffic is more sequential in nature. In a virtual environment, the disparate I/O streams from multiple VMs are mixed and randomized at the point of the hypervisor before being sent out to storage as a very random I/O pattern. By preventing fragmentation and increasing I/O density, systems can process more data in less time while also reducing the amount of I/O per GB that gets randomized.
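
The “I/O blender” is easy to see in a toy simulation (Python, hypothetical block ranges, purely illustrative): each VM’s stream is sequential on its own, but once the hypervisor interleaves them, the pattern arriving at the SAN looks nearly random:

import random

# Each VM issues a short sequential run of logical block addresses (hypothetical).
vm_streams = {
    "vm1": list(range(1000, 1020)),
    "vm2": list(range(5000, 5020)),
    "vm3": list(range(9000, 9020)),
}

def blend(streams):
    # Interleave per-VM requests the way a hypervisor multiplexes them.
    pending = {vm: list(blocks) for vm, blocks in streams.items()}
    mixed = []
    while any(pending.values()):
        vm = random.choice([v for v, blocks in pending.items() if blocks])
        mixed.append(pending[vm].pop(0))
    return mixed

def sequential_fraction(addresses):
    # Share of requests whose address immediately follows the previous one.
    seq = sum(1 for a, b in zip(addresses, addresses[1:]) if b == a + 1)
    return seq / max(len(addresses) - 1, 1)

one_vm_at_a_time = [a for blocks in vm_streams.values() for a in blocks]
blended = blend(vm_streams)
print(f"sequential fraction, streams kept separate: {sequential_fraction(one_vm_at_a_time):.2f}")
print(f"sequential fraction, blended by hypervisor: {sequential_fraction(blended):.2f}")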

  • Q. How does a fragmentation prevention solution work along the SAN?

A. The key thing to understand about fragmentation prevention is that it doesn’t touch the SAN at all. It’s an approach that adds a layer of intelligence to the Windows OS, helping it find proper allocations within the logical disk layer (instead of the very next available address regardless of size) so files are written in a contiguous manner, requiring minimal I/O. Although this happens outside the SAN, it’s the SAN that receives the most benefit as all the I/O overhead of small, fractured I/O is eliminated. This approach significantly improves throughput and reduces the dependency on IOPS since the relationship between data and I/O is no longer eroding at the logical layer.
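
Conceptually (and only as an illustration of the allocation idea, not Condusiv’s actual allocator), the difference is between handing a new file the very next free addresses regardless of size and steering it toward a free extent large enough to hold the whole file:

def next_available(free_extents_kb, file_kb):
    # Naive allocation: consume free extents in address order, splitting the
    # file across them. Returns the number of fragments created.
    remaining, fragments = file_kb, 0
    for extent in free_extents_kb:
        if remaining == 0:
            break
        remaining -= min(extent, remaining)
        fragments += 1
    return fragments

def size_aware(free_extents_kb, file_kb):
    # Size-aware allocation: prefer a single free extent that fits the file,
    # so it is written contiguously in one extent.
    if any(extent >= file_kb for extent in free_extents_kb):
        return 1
    return next_available(sorted(free_extents_kb, reverse=True), file_kb)

free = [64, 128, 64, 256, 2048, 64]        # free extent sizes in KB (hypothetical)
print(next_available(free, 512))           # -> 4 fragments, so 4+ I/Os to read back
print(size_aware(free, 512))               # -> 1 extent, one large sequential I/O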

  • Q. How are customers benefiting from innovations in solutions designed to prevent SAN fragmentation in the first place? 

A. The typical customer experience is reclaiming 25% or more throughput from server to storage. Virtualized customers who use our V-locity I/O reduction platform see even more pronounced performance gains because of the “I/O blender” effect and because the V-locity product line has an additional technology engine beyond fragmentation prevention: server-side caching. This further reduces I/O to SAN storage for any given workload and further reduces the performance penalty of the “I/O blender” effect, since even fewer I/Os are being randomized. It is a holistic approach to I/O reduction.
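
The server-side caching piece can be pictured with a generic LRU read cache (a minimal Python sketch under that assumption, not V-locity’s implementation): reads served from memory on the host never reach the SAN, so fewer I/Os are left to be blended and randomized downstream:

from collections import OrderedDict

class ReadCache:
    # Tiny LRU read cache: blocks served from host memory never hit the SAN.
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.san_reads = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # cache hit, no SAN I/O
            return self.blocks[block_id]
        self.san_reads += 1                     # cache miss, fetch from the SAN
        data = f"<data for block {block_id}>"   # stand-in for the real read
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict the least recently used block
        return data

cache = ReadCache(capacity=100)
workload = [i % 80 for i in range(10_000)]      # hypothetical hot working set of 80 blocks
for block in workload:
    cache.read(block)
print(f"reads that reached the SAN: {cache.san_reads} of {len(workload)}")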


About Brian Morin, Senior Vice President, Global Marketing, Condusiv Technologies

Brian is Senior Vice President, Global Marketing, responsible for the corporate marketing vision and for driving demand and awareness worldwide. Efforts over the last year led to growing adoption of V-locity®, which has quickly amassed over 1,000 new customers looking to accelerate their virtual environments with a 100% software approach.

Prior to Condusiv, Brian served in leadership positions at Nexsan that touched all aspects of marketing, from communications to demand generation, as well as product marketing and go-to-market strategies with the channel. Brian notably steered rebranding efforts and built the demand generation model from scratch as an early marketing automation adopter. Growth led to the successful acquisition by Imation.

With 15+ years of marketing expertise, Brian has spent recent years on the forefront of revenue marketing models that leverage automation for data-driven determinations. His earlier background has roots on the agency side as creative director, helping companies build brands and transition to online engagement.

 

 
