
Sabermetrics for the Data Center

June 3, 2014

How giving the data center the “Moneyball” treatment can help IT performance

As both the book and the film “Moneyball” demonstrated, the old ways of measuring baseball performance, which relied on a set of traditionally accepted metrics, didn’t necessarily predict future success. “Moneyball” introduced many of us to the concepts of sabermetrics, a way of measuring complex and often misunderstood data relationships.

For over 100 years it was “known” that building a team around players with the best individual performance statistics was the sure path to winning. But this was also expensive and didn’t always deliver the desired championships. The revolutionary (to baseball tradition!) approach of advanced analytics (“sabermetrics”) has fundamentally changed this equation.

Today, cost-effective and winning teams are being built by integrating holistic understandings and measurements that aim to yield a result greater than the sum of the parts: measuring what matters.

First: Measure What Matters

The situation is similar in today’s data center environments. The metrics that matter no longer relate to how busy or idle any given technology resource is. Rather, it is now all about how much work is getting done, how fast that work is getting done, and how cost-effectively the IT environment is getting it done.

With virtual servers now deployed across almost every IT environment, and with the rapid adoption of virtualized storage and networking technologies, the underlying physical IT resources doing business work are almost totally abstracted from the work they support. In addition, these configurations now change dynamically in response to business change, at a speed never before considered by basic and traditional monitoring and management tools. It no longer matters whether any given physical resource underneath these virtualized pools is fully utilized or underutilized; utilization bears no direct relationship to how fast or how efficiently work is getting done.

Approaches are needed that directly measure how much work is getting done, in terms that matter most to the business and to users. IT needs a simultaneous understanding of how fast transactions are being processed, whether they are waiting for resources and, if so, for which resources, when, and why. Dynamic business demand requires proactively predicting when business transactions will wait for resources, what those resources are, and why they will be critical to success. And the total cost picture for all resources processing those business transactions and services must be understood well into the future.
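To make this concrete, the short Python sketch below shows the flavor of such work-centric measurements: throughput, response time, wait time, and cost per transaction derived from transaction records rather than from device utilization counters. The field names and structure are entirely hypothetical; this is an illustration of the idea, not a description of any particular product.

# Minimal sketch: derive business-facing "metrics that matter" from transaction
# records rather than from device utilization. All field names are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Transaction:
    start: float          # seconds since epoch
    end: float            # seconds since epoch
    wait_seconds: float   # time spent queued for any shared resource
    cost_dollars: float   # infrastructure cost allocated to this transaction

def business_metrics(txns: list[Transaction], window_seconds: float) -> dict:
    """Summarize how much work got done, how fast, where it waited, and at what cost."""
    response_times = [t.end - t.start for t in txns]
    return {
        "throughput_per_sec": len(txns) / window_seconds,                  # how much work
        "avg_response_sec": mean(response_times),                          # how fast
        "avg_wait_sec": mean(t.wait_seconds for t in txns),                # where it waits
        "cost_per_txn": sum(t.cost_dollars for t in txns) / len(txns),     # how cost-effectively
    }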

Next: Apply Analytics That Matter

As with baseball, it isn’t enough to measure what matters; even more important is applying the right analytics to solve the business problems at hand and, in doing so, to grade current productivity, service delivery, and cost efficiency and, most significantly, to predict a path toward ongoing future success across all of these dimensions.

Knowing which analytics really matter would seem to be a straightforward question. But with technologies like virtualization, it’s not as easy to answer as you might think. Since the underlying physical IT resources doing the work are completely abstracted from the work they’re doing, how do you know which physical resources are driving which workload? How can you drive greater efficiencies if you don’t know? You won’t find the answer with traditional IT measurement and monitoring approaches. Two key analytic approaches are required: Correlative Analysis and Predictive Analysis.

Correlative Analysis: Correlation links the business “metrics that matter” to the underlying metrics associated with IT resource performance in support of the business, on an ongoing basis. To accomplish this, one must be able to continuously correlate a wide variety of disparate metrics, both IT resource performance metrics and business metrics, to identify the causal relationships that underpin real-world performance, throughput, response time, and ultimately cost. This necessarily implies the existence of a logical “data mart” of all appropriate metrics to feed the analysis.
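As a rough illustration of the correlative step (not any vendor’s actual method), the following Python sketch ranks hypothetical resource metrics by how strongly they track a business metric over the same measurement intervals. Correlation like this only surfaces candidate relationships; causality still has to be established.

# Sketch: correlate a business metric with IT resource metrics sampled over the
# same intervals. Column names are hypothetical; pandas is an assumed tool choice.
import pandas as pd

# One row per measurement interval, pulled from a logical "data mart" of metrics.
df = pd.DataFrame({
    "orders_per_min":    [120, 150, 180, 90, 200, 210],   # business metric
    "db_io_wait_ms":     [5, 7, 11, 4, 14, 16],           # resource metrics
    "cpu_util_pct":      [40, 45, 50, 35, 60, 62],
    "net_throughput_mb": [80, 95, 110, 60, 130, 140],
})

# Rank resource metrics by how strongly they move with the business metric.
correlations = (
    df.corr(numeric_only=True)["orders_per_min"]
      .drop("orders_per_min")
      .sort_values(ascending=False)
)
print(correlations)  # candidate relationships to investigate for causality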

Predictive Analysis: Once these relationships are understood, you can begin to predict IT and business performance based on them, combined with historical and current performance. Predictive analytic approaches range from simple trending, through multiple types of statistical analysis, up to predictive modeling. The curveball to consider: these predictions must not be based solely on simple trending or other linear approaches, because they do not account for contention for resources, the inevitable “traffic jam” when dynamic workloads compete for shared IT infrastructure. When that infrastructure is insufficient to meet dynamic workload demand, performance and response time degrade rapidly and in a nonlinear (i.e., difficult to understand) fashion.
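To see why linear trending misses the “traffic jam,” here is a toy single-queue (M/M/1) model in Python. The service time and arrival rates are made-up numbers, and real capacity planning relies on far richer models, but even this simple formula shows response time climbing nonlinearly as utilization approaches 100 percent.

# Sketch: why linear trending misses contention. A single-queue (M/M/1) model
# shows response time growing nonlinearly as utilization approaches 100%.
# The numbers below are illustrative, not taken from the article.

service_time = 0.05  # seconds of resource time each transaction needs

def predicted_response(arrival_rate_per_sec: float) -> float:
    """M/M/1 response time: R = S / (1 - U), where U = arrival_rate * S."""
    utilization = arrival_rate_per_sec * service_time
    if utilization >= 1.0:
        return float("inf")  # demand exceeds capacity: the queue grows without bound
    return service_time / (1.0 - utilization)

for rate in (5, 10, 15, 18, 19.5):
    u = rate * service_time
    print(f"load {rate:>5.1f} txn/s  utilization {u:4.0%}  "
          f"response {predicted_response(rate) * 1000:7.1f} ms")

Doubling the load from 5 to 10 transactions per second only moves response time from roughly 67 ms to 100 ms, but pushing utilization past 90 percent sends it past half a second, which is the nonlinear behavior simple trend lines cannot anticipate.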

Further, IT configurations are now dynamically changing in response to business needs at a speed and frequency never before considered by existing monitoring and management tools.

This change affects the traditional management stack and creates a very big, and constantly changing, data problem. It reinforces the requirement that all appropriate metrics (IT resource performance, IT configuration and asset, financial costing, and business service and performance metrics) be made continuously available to both correlative and predictive analytic processes and tools.

As with “Moneyball,” cost-effective and optimized IT requires measuring what matters and using appropriate analytic technologies. When done effectively, the results speak for themselves – in both domains.


TeamQuest Director of Market Development Dave Wagner has more than three decades of systems hardware and software experience including product management, marketing, sales, project management, customer support, and training. He is responsible for the successful business integration of TeamQuest Surveyor. He has authored many articles on the topic of capacity management and has presented at CMG, AFCOM, Pink, and other industry events. For more information on Dave and his company, please visit www.teamquest.com or visit him on Twitter and YouTube.
