
Five Best Practices for a Successful VDI Deployment

December 4, 2014

Featured article by S. “Sundi” Sundaresh, President and CEO of Xangati

VDI (Virtual Desktop Infrastructure) has been a boon to both users and IT departments. For users, it means having access to their desktop from virtually anywhere. For IT, it means that new virtual desktops can be rolled out to users en masse, anywhere they are located, and without travel. It also means that user data can be corralled on centralized servers, rather than distributed across thousands of disparate – and frequently unsecured – physical desktop computers. And VDI makes software upgrades and patches infinitely easier to deploy. What’s more, the benefits of VDI come at significantly lower costs than the alternative, which entails the administration of physical desktops for thousands of users.

But VDI can exact its own price. Many VDI users cite performance and the user experience as their primary issues. That is not surprising given the challenges involved: VDI generates traffic patterns vastly different from those of traditional physical desktops, applications place unpredictable demands on shared infrastructure, and large deployments run the risk of "contention storms" that can cause serious performance problems.

So How Do You Ensure a Successful VDI Deployment?

Consider the U.S. Army. The Army decided to convert its desktop healthcare patient record application to virtual desktop infrastructure (VDI). This would give doctors a roaming capability, with the record system available to them no matter where they moved around the hospital.

Typically, Army doctors have about ten to fifteen minutes to treat each patient. But logins to the patient record system had ballooned to three to five minutes, cutting deeply into the limited time allocated for patient examination and discussion. The system had become so slow that it was seriously hampering the doctors' ability to see patients.

To ensure the success of its VDI deployment, the Army deployed an infrastructure intelligence solution that provided application-aware, second-by-second monitoring proven to isolate the sources of problems and provide recommendations for quick remediation.

Best Practice #1: Visualize in Real Time with High Fidelity

Users are sensitive to a single second of delay, so a performance management tool should have that granularity. It needs to collect metrics from the entire VDI infrastructure on a real-time, second-by-second basis so resource contention and user performance issues can be quickly identified and resolved. Continuous monitoring enables real-time responsiveness where alerts can be generated instantaneously to identify the location and origin of an issue, and even to provide root cause analysis, predictions and recommendations for remediation.
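
The article does not tie this practice to any particular tool, so as a rough illustration only, here is a minimal Python sketch of second-by-second collection with instantaneous threshold alerts. The collect_vdi_metrics stub and the threshold values are hypothetical placeholders for whatever your hypervisor, storage and connection-broker APIs actually expose.

import time

def collect_vdi_metrics():
    # Hypothetical stub: real values would come from hypervisor, storage
    # and connection-broker APIs rather than hard-coded numbers.
    return {"host_cpu_pct": 72.0, "datastore_latency_ms": 18.5, "login_time_s": 12.0}

# Illustrative alert thresholds; tune these to your own environment.
THRESHOLDS = {"host_cpu_pct": 90.0, "datastore_latency_ms": 30.0, "login_time_s": 60.0}

def monitor(poll_interval_s=1):
    # Poll every second and alert the instant a threshold is crossed.
    while True:
        sample = collect_vdi_metrics()
        for metric, value in sample.items():
            if value > THRESHOLDS[metric]:
                print(f"ALERT: {metric}={value} exceeds {THRESHOLDS[metric]}")
        time.sleep(poll_interval_s)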

Best Practice #2: Understand End-to-End Behavior Piecewise

Second-by-second metrics collected from the servers, switches and storage systems and correlated in real time provide an excellent foundation for the higher-level functions delivered by advanced analytics. Metrics collected on the client devices and the networks that connect them to the datacenter complete the picture. Now you can divide a problem in two: within the datacenter, and out to the client devices, for a true end-to-end perspective. Add information about the users, the desktops they are using, and the groups and servers those desktops are currently running on, and the picture becomes clear enough that problems can be identified and remediated in record time.
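
As a hedged sketch of how that split might work in practice (all names, metrics and thresholds below are invented for illustration), per-user measurements from the datacenter side and the client side can be compared to decide which half of the end-to-end path owns the problem:

# Hypothetical per-user samples from the two halves of the end-to-end path.
datacenter_side = {
    "user_a": {"host": "esx-07", "host_cpu_pct": 95.0, "datastore_latency_ms": 4.0},
    "user_b": {"host": "esx-03", "host_cpu_pct": 40.0, "datastore_latency_ms": 5.0},
}
client_side = {
    "user_a": {"network_rtt_ms": 12.0, "display_fps": 28},
    "user_b": {"network_rtt_ms": 180.0, "display_fps": 9},
}

def localize(user):
    # Decide which half of the end-to-end path a user's problem lives in.
    dc, client = datacenter_side[user], client_side[user]
    if dc["host_cpu_pct"] > 90 or dc["datastore_latency_ms"] > 30:
        return f"{user}: datacenter side (host {dc['host']})"
    if client["network_rtt_ms"] > 100 or client["display_fps"] < 15:
        return f"{user}: client or network side"
    return f"{user}: no obvious bottleneck"

for user in ("user_a", "user_b"):
    print(localize(user))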

Best Practice #3: Combine Best-Practice Alerting with Self-Learned Alerting

There may be one contention within an hour, or there may be five, or even fifty. So how do you make sure that each one is caught? The answer is live, continuous alerting. Continuously monitoring activity allows appropriate alerts to be generated, and a DVR-like recording can be captured when thresholds are crossed. Advanced analytics should not only look at the data in real time, but also cross-reference that data with your environment's own history, as well as with experience from similar environments.
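
One way to read "best-practice alerting plus self-learned alerting" is a fixed limit combined with a baseline learned from the environment's own history. The sketch below assumes that interpretation; the latency limit, history length and sigma value are illustrative only.

from statistics import mean, stdev

BEST_PRACTICE_LIMIT_MS = 30.0  # illustrative fixed threshold for storage latency

def should_alert(history_ms, current_ms, sigmas=3.0):
    # Alert if the value breaks the fixed limit, or deviates sharply from
    # the baseline this environment has established for itself.
    if current_ms > BEST_PRACTICE_LIMIT_MS:
        return True, "fixed best-practice threshold exceeded"
    if len(history_ms) >= 30:  # need enough history to trust the learned baseline
        baseline, spread = mean(history_ms), stdev(history_ms)
        if current_ms > baseline + sigmas * spread:
            return True, "anomalous versus self-learned baseline"
    return False, "normal"

# Well under the fixed limit, but far outside this environment's own baseline.
print(should_alert([5.0] * 60, 12.0))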

Best Practice #4: Understand Toxic Cross-Silo Interactions to Resolve Resource Contentions

Cross-silo interactions can be toxic to the end-user experience, even for users who are outside of the silo in which a glitch, or an apparent glitch, has been discovered. But, with app-aware understanding of the interactions between IT infrastructure components in the various silos, you can learn what specifically is causing an issue, such as contention for a single resource by several devices or applications.
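
A minimal sketch of that idea, with invented workload names and an arbitrary IOPS ceiling: group consumers from different silos by the shared resource they touch and flag any resource where their combined demand exceeds what it can serve.

from collections import defaultdict

samples = [  # hypothetical consumers from different silos hitting shared storage
    {"consumer": "vdi-pool-radiology", "resource": "datastore-01", "iops": 4200},
    {"consumer": "nightly-backup-job", "resource": "datastore-01", "iops": 5100},
    {"consumer": "vdi-pool-er", "resource": "datastore-02", "iops": 800},
]

def find_contention(samples, iops_limit=8000):
    # Flag any shared resource where several silos together exceed its capacity.
    by_resource = defaultdict(list)
    for s in samples:
        by_resource[s["resource"]].append(s)
    for resource, users in by_resource.items():
        total = sum(u["iops"] for u in users)
        if len(users) > 1 and total > iops_limit:
            names = ", ".join(u["consumer"] for u in users)
            print(f"Contention on {resource}: {names} demand {total} IOPS combined")

find_contention(samples)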

Best Practice #5: Empower End Users to Submit Trouble Tickets

Recording the actual user activity is your "ticket" to solving the problem, particularly if you choose a solution that lets users report an issue with a trouble ticket attached to the recording for speedy resolution by helpdesk personnel. Access to playbacks allows IT personnel to step back in time to the moment a resource contention happened, and to see whether the issue is recurring. From there, a "proof" of performance, together with the ability to see how the same performance issue was solved in the past, enables you to resolve the problem.
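
The article does not describe a specific ticketing API; as a purely illustrative sketch, a ticket could simply carry a reference to the window of recorded metrics surrounding the user's report so that the helpdesk can replay it.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TroubleTicket:
    # Hypothetical ticket that pins a user report to a replayable metric window.
    user: str
    description: str
    reported_at: datetime
    recording_window: tuple  # (start, end) of the captured replay

def open_ticket(user, description, lookback_minutes=10):
    # Attach the last few minutes of recorded metrics so the helpdesk can
    # step back to the moment the contention actually occurred.
    now = datetime.utcnow()
    return TroubleTicket(user, description, now,
                         (now - timedelta(minutes=lookback_minutes), now))

print(open_ticket("clinician_01", "Desktop froze during patient lookup"))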

Taking these five practices into consideration when deploying your VDI can make the process smoother and less stressful. The initial VDI deployment for the U.S. Army was taking the doctors away from what they were there to do: care for patients. But once the new VDI environment was tuned under the watchful and intelligent eye of an app-aware performance management solution, contention and performance storms were averted, and Army doctors were able to log in to the system with consistent 30-second login times.

VDI is a great organizational asset, allowing businesses to minimize risk, enhance performance and greatly improve the end-user experience. However, no new deployment is without risk, which is why an infrastructure performance intelligence solution should be engaged as early as possible in the process, not only after problems with your new VDI infrastructure occur.

To address the five best practices outlined here, find a solution that provides app-aware, continuous, scalable performance intelligence – and doesn’t just tell you that a problem exists, but tells you how to fix it before it seriously impacts the end user. A successful VDI deployment should enable end users to focus on the business at hand. Performance issues shouldn’t impact their day or, in the case of the U.S. Army doctors, impact their ability to care adequately for their patients.
