Data Center Virtualization: Benefits of Power and Cooling

December 27, 2010

By Dennis Bouley, strategic research analyst, Data Center Science Center, APC by Schneider Electric

Virtualization of data center servers and storage devices creates new challenges for the data center power and cooling infrastructure. As IT load decreases, the efficiency of existing power and cooling equipment also decreases. Upgrades to power and cooling systems are not required to make virtualization “work”. However, an additional efficiency benefit can be realized if power and cooling systems are “rightsized” to support the new, virtualized load.

Virtualization will almost always lower the electric bill, but it affects power consumption and efficiency in two somewhat counterintuitive ways. First, power consumption will not be reduced as much as might be expected, because of the fixed losses in the power and cooling systems. Second, in spite of the reduction in power consumption, the data center’s infrastructure efficiency (PUE) is typically worse after virtualizing.

The Fixed Loss Dilemma

Why does PUE get worse after virtualizing? The answer lies in the fact that the power and cooling infrastructure is now oversized. Running more power and cooling equipment than needed is like leaving your car running at idle when you’re not using it – energy is consumed, but no useful work is accomplished.  All power and cooling devices have electrical losses (inefficiency) dispersed as heat.  A portion of this loss is fixed loss – power consumed regardless of load.  At no load (idle), all the power consumed by a data center is attributed to fixed losses and the data center is 0% efficient (PUE is infinite), doing no useful work.  As IT load increases, fixed loss becomes a smaller portion of the total energy used.
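
To make the fixed loss dilemma concrete, the short sketch below runs the PUE arithmetic (total facility power divided by IT power) with purely illustrative numbers; the kilowatt figures and loss fractions are assumptions for the example, not measurements from any particular facility.

# Illustrative only: hypothetical figures showing why PUE worsens when
# the IT load drops but the fixed losses in power and cooling stay constant.

def pue(it_load_kw, fixed_loss_kw, proportional_loss_fraction):
    """PUE = total facility power / IT power."""
    proportional_loss_kw = it_load_kw * proportional_loss_fraction
    total_kw = it_load_kw + fixed_loss_kw + proportional_loss_kw
    return total_kw / it_load_kw

# Before virtualization: 100 kW of IT load.
before = pue(it_load_kw=100, fixed_loss_kw=40, proportional_loss_fraction=0.3)

# After virtualization: the IT load falls to 40 kW, but the oversized
# power and cooling plant still carries the same 40 kW of fixed loss.
after = pue(it_load_kw=40, fixed_loss_kw=40, proportional_loss_fraction=0.3)

print(f"PUE before virtualization: {before:.2f}")  # about 1.70
print(f"PUE after virtualization:  {after:.2f}")   # about 2.30

With these assumed figures the total facility draw falls from 170 kW to 92 kW, so the electric bill does go down, but PUE worsens from roughly 1.7 to roughly 2.3 because the 40 kW of fixed loss is now spread over a much smaller IT load.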

In a data center that is already at low loading because of redundancy or other reasons, virtualization can reduce loading to exceptionally low levels.  These effects could result in expenses that negate some of the virtualization energy savings (e.g., having to run a hot-gas bypass in order to prevent the chiller from shutting down).

Rightsizing Solutions

In order to minimize fixed losses and maximize electrical efficiency, the following best practices should be considered:

Row-based cooling

Row-based cooling allows the cooling source to be situated close to the load. This shortens air paths from the air conditioner to the server inlet and reduces mixing of cool supply air with hot return air.

On the cool air supply side, row-based cooling allows operation of cooling infrastructure at a higher coil temperature. This consumes less chiller energy and is much less likely to cause wasteful condensation.  On the hot air return side, the cooling system produces a higher return temperature which increases the heat removal efficiency. 

Scalable power and cooling

Scalable power and cooling architecture allows capacity to be scaled down, removing what is not needed at the time of initial virtualization, with the option to grow again later as the new virtualized environment re-populates. “Right-sized” infrastructure keeps capacity at a level that is appropriate for the actual demand.
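
As a rough illustration of the rightsizing arithmetic, the sketch below sizes a hypothetical modular UPS against the post-virtualization load; the module size, headroom factor, and load figures are assumptions chosen for the example rather than vendor specifications.

import math

# Assumed figures for a hypothetical modular UPS.
MODULE_KW = 25     # capacity of one plug-in power module
HEADROOM = 1.2     # 20% growth and safety margin

def modules_needed(it_load_kw):
    """Smallest number of modules that covers the load plus headroom."""
    return math.ceil(it_load_kw * HEADROOM / MODULE_KW)

print(modules_needed(180))  # pre-virtualization load: 9 modules (225 kW installed)
print(modules_needed(75))   # post-virtualization load: 4 modules (100 kW installed)

The same calculation can be re-run as the virtualized environment grows back, adding modules only when the projected load approaches the installed capacity.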

Capacity management tools

A capacity management system uses automated intelligence and modeling to monitor power, cooling, and physical space capacities at the room, row, rack, and server level. This facilitates the task of placing new equipment, helps predict the effect of equipment changes on power and cooling, and recognizes negative trends, such as the emergence of hot spots, in time for corrective actions to be taken.  Capacity management systems increase data center efficiency by optimizing the use of available resources (e.g., utilizing the full capacity of a specific air conditioner).
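
The sketch below is a minimal illustration, not any vendor’s product, of the kind of rule a capacity management system applies: compare measured rack power draw and inlet temperature against thresholds and flag racks that are approaching rated capacity or developing a hot spot. The rack readings and thresholds are invented for the example (the 27 °C inlet limit reflects the upper end of the ASHRAE recommended range).

# Hypothetical rack readings: (rack_id, measured_kw, rated_kw, inlet_temp_c)
RACK_READINGS = [
    ("A01", 3.2, 5.0, 23.5),
    ("A02", 4.8, 5.0, 27.4),   # near rated power and running warm
    ("B01", 1.1, 5.0, 22.0),
]

POWER_WARN_FRACTION = 0.9   # flag racks above 90% of rated power
INLET_WARN_C = 27.0         # upper end of the ASHRAE recommended inlet range

def check_capacity(readings):
    """Return human-readable alerts for racks trending toward trouble."""
    alerts = []
    for rack_id, kw, rated_kw, inlet_c in readings:
        if kw >= rated_kw * POWER_WARN_FRACTION:
            alerts.append(f"{rack_id}: drawing {kw:.1f} kW, {kw / rated_kw:.0%} of rated capacity")
        if inlet_c >= INLET_WARN_C:
            alerts.append(f"{rack_id}: inlet {inlet_c:.1f} C suggests an emerging hot spot")
    return alerts

for alert in check_capacity(RACK_READINGS):
    print(alert)

A real tool would also model physical space and cooling capacity at the room, row, and rack level, but the same threshold-and-trend logic underlies the hot-spot warnings described above.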

Some data center professionals will have a five-year-old non-modular UPS or seven-year-old perimeter cooling units that they cannot afford to upgrade. Some practical steps can still be taken to lessen the impact of fixed losses. An assessment can be performed to determine whether some of the perimeter cooling units can be turned off without creating unwanted hot spots. Blanking panels can be installed in racks to reduce mixing of hot and cold air. The area beneath the raised floor can be examined to determine whether obstructions to airflow can be removed. All of these actions improve the overall energy efficiency of the data center, and they are good practices regardless of whether the data center has been virtualized.
