
Ensuring User Satisfaction and Performance while Moving Away from IT Infrastructure Siloes

June 25, 2015

By Srinivas Ramanathan, President and CEO, eG Innovations

User experience is top of mind these days as companies both large and small look to provide the best user experience – regardless of where the user is located or what type of device is being used. At the same time, many organizations are still managing their IT infrastructure silo by silo. They are finding that outstanding user experience is difficult to deliver when each component of the IT infrastructure is managed as a discrete silo.

It is difficult because when a user accesses multiple services at once, the IT department is confronted with a number of interrelated problems that require a dynamic, agile infrastructure. Applications and servers must work in conjunction with each other to deliver the best user experience. If that doesn’t happen, the result is user frustration, administration challenges and lengthy problem-resolution cycles.

In order to deliver outstanding user experience, many organizations are beginning to see their entire IT environment as a set of services that are linked together to support workloads. They have embarked on a journey to understand how their IT services interact holistically. Managing the environment as an interconnected set of services is a more efficient and effective way to meet new requirements.
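As a toy illustration of that service-oriented view, the environment can be modeled as a small dependency graph in which each service's effective health rolls up from everything it relies on. The service names, topology, and statuses below are purely illustrative, not drawn from any real product:

```python
# Illustrative service dependency graph with health roll-up.
DEPENDENCIES = {
    "web-portal": ["app-server"],
    "app-server": ["database", "auth-service"],
    "database": ["storage"],
    "auth-service": [],
    "storage": [],
}

# Locally observed health of each component (True = healthy).
LOCAL_HEALTH = {
    "web-portal": True,
    "app-server": True,
    "database": True,
    "auth-service": True,
    "storage": False,  # a fault in one low-level silo
}

def effective_health(service):
    """A service is only as healthy as itself and everything it depends on."""
    if not LOCAL_HEALTH[service]:
        return False
    return all(effective_health(dep) for dep in DEPENDENCIES[service])

print(effective_health("web-portal"))    # False: the storage fault propagates up
print(effective_health("auth-service"))  # True: unaffected branch
```

In this sketch a single storage fault propagates up through the database and application tiers to the user-facing portal, which is exactly the cross-silo impact that a silo-by-silo view fails to surface.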

Managing Objectives

When trying to provide the best performance for users, it is important to quickly see what is happening and where, and to either prevent performance issues from occurring or address them quickly when they appear. Organizations striving to meet agreed-upon service levels often find themselves piecing together operational data to gain a holistic view of what is happening in the infrastructure. Depending on the organization, getting to the bottom of a performance issue may require levels of expertise that are not available in-house.

If outages do occur despite efforts to prevent them, IT needs to isolate the root cause and quickly provide a solution. This also means addressing user complaints that an application is slow.

IT typically finds that the hardest problems to solve are the ones where users complain that the service is slow or not working. The struggle is to determine the root-cause of the problem. Is it the network, the database, the application, storage or the virtualization tier?

The traditional approach to solving such issues is to use different domain-specific tools. Since multiple administrators and tools are involved, problem diagnosis can be very manual and time consuming. Root-cause diagnosis tools have been used in physical infrastructures to reduce the manual process. The problem is that these tools rely on static “if-then-else” rules. Virtual IT environments are highly dynamic and have dynamic interdependencies, which makes rules-based correlation and analysis inadequate.
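A minimal sketch of such a static rules engine makes the limitation concrete. The metric names and thresholds here are invented for illustration:

```python
# Sketch of the static "if-then-else" rule-based diagnosis described above.
def diagnose(metrics):
    """Each hard-coded rule maps one anticipated symptom to a root cause."""
    if metrics.get("db_query_ms", 0) > 500:
        return "database"
    elif metrics.get("net_latency_ms", 0) > 100:
        return "network"
    elif metrics.get("app_cpu_pct", 0) > 90:
        return "application"
    else:
        return "unknown"

# The rules work when the failure mode was anticipated...
print(diagnose({"db_query_ms": 800}))          # "database"

# ...but a dynamic event such as a VM migration causing hypervisor
# CPU contention surfaces as a metric no rule ever anticipated:
print(diagnose({"hypervisor_ready_pct": 35}))  # "unknown"
```

Because the rules encode a fixed topology and a fixed set of failure modes, any change in interdependencies silently falls through to "unknown" – the inadequacy the text describes.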

Performance Management Tools

Given the need to provide users with the performance they expect – and the impact of new technologies, new devices, and evolving regulations, along with the need to keep IT solutions operating efficiently and reliably – it is clear that many performance management tools are inadequate.

Some of the drawbacks to traditional performance management are:

– Blind spots – single-vendor or single-product management tools don’t extend visibility beyond their immediate component.

– Static – traditional tools were built for static, physical environments and aren’t designed for today’s dynamic environments.

– Slow – often organizations have to manually piece together information from a multitude of disparate systems, which slows down analysis, diagnosis and repair of performance issues.

– Complexity – Manually piecing together fragmented operational data requires expertise in a number of separate disciplines and is time consuming. Discovering why systems behave abnormally requires an in-depth understanding of application dependencies, and without tools that offer a holistic view, this may be beyond a team’s capabilities.

– Costly – Often organizations don’t have staff members who are knowledgeable in different systems, application frameworks, databases, etc. that may be affected.

Dealing with Dynamic Environments

Many of today’s tools were designed for static environments and provide only one siloed view. Their view of the environment as a set of services, or through the lens of user experience, is very limited, so seeing the big picture is difficult. Because of this limitation, tools that can see into virtual servers as well as application frameworks, databases, and storage are an absolute requirement.

Using these traditional tools often means piecing together a view of the entire environment, and even then the result isn’t fast enough to find problems in real time. That’s when customers experience slowdowns and outages – without knowing the root cause of a problem, it is impossible to provide a solid user experience.

To overcome these issues, a new approach to performance management and monitoring is required. Below are the requirements for an ideal solution:

– Complete visibility – performance management requires a solution that enables deep insight into the IT environment along with all performance dependencies.

– Virtualization awareness – be able to gather data, analyze that data and provide a holistic view of a wide variety of virtualized environments, from desktops to application servers and virtual networks.

– Accelerated auto-diagnosis – have the ability to continuously monitor the environment so root-cause analysis and auto-diagnosis of slowdowns and other abnormal system behavior is a few clicks away. Real-time response sharply reduces resolution times and reduces the need to over-provision solutions.

– Simplified – look for built-in intelligence that offers useful information to everyone without requiring deep technical expertise. The tool should be simple to use without a specialized staff.

– Pre-emptive – proactively identify issues so they can be addressed before users experience any slowdown or degradation in service.

– Cost-effective – the ideal platform should reduce the cost of operations by being simple enough for administrators who are not experts in all the technical details of the systems they manage.

– Comprehensive – the tool should be powerful enough to manage performance across all silos from the end user to the datacenter – across virtual, cloud and physical environments.
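One common way to make monitoring pre-emptive rather than reactive is dynamic baselining: flag a metric when it deviates sharply from its own recent history instead of comparing it against a fixed threshold. The sketch below is a minimal illustration; the window size, warm-up length, and sensitivity are illustrative choices, not settings from any particular product:

```python
# Minimal dynamic-baselining sketch: flag values that deviate several
# standard deviations from a rolling window of recent observations.
from collections import deque
from statistics import mean, stdev

class BaselineMonitor:
    def __init__(self, window=60, sigmas=3.0):
        self.history = deque(maxlen=window)  # rolling baseline
        self.sigmas = sigmas                 # sensitivity

    def observe(self, value):
        """Return True if value is anomalous against the rolling baseline."""
        anomalous = False
        if len(self.history) >= 10:  # require some history before judging
            mu = mean(self.history)
            sd = stdev(self.history)
            if sd > 0 and abs(value - mu) > self.sigmas * sd:
                anomalous = True
        self.history.append(value)
        return anomalous

mon = BaselineMonitor()
for ms in [100, 102, 98, 101, 99, 100, 103, 97, 100, 101, 99]:
    mon.observe(ms)      # build a normal response-time baseline
print(mon.observe(100))  # False: within normal variation
print(mon.observe(400))  # True: flagged before users start complaining
```

The design choice here is that "normal" is learned from the metric itself, so the same monitor adapts as a dynamic environment shifts – unlike the fixed thresholds of traditional tools.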

Performance problems can lead directly to both lost customers and lost revenues. When time is money, it is important to quickly see what is happening and where, and to either prevent performance issues from occurring or quickly address them when they do appear.


Srinivas Ramanathan is the president and CEO of eG Innovations, an award-winning provider of automated performance management solutions for today’s virtualized IT services. For more information, visit the eG Innovations website.

