Over the past few years, average power consumption per server has increased more than 20 percent. Consolidations and build-outs are packing data centers and their racks ever more densely with power-hungry IT equipment, such as blade servers. Over the last decade, the typical power required at a rack has increased from 2 to 10 kilowatts and is still rising. Because every watt of power consumed by IT equipment becomes a watt of heat that must be removed, increases in IT equipment power demand have driven increases in data center cooling power consumption. To give a sense of the magnitude, data centers in the San Francisco Bay/Silicon Valley area today draw roughly 375 megawatts of power.

Figure 1. Typical breakdown of data center power consumption. (IT equipment consuming 50% of data center power implies a PUE of 2.0.) Source: EYP Mission Critical Facilities Inc., New York

The owner/operators of these data centers could minimize their power consumption by focusing on the purpose of a data center and reducing the power used by the infrastructure that supports that work. For example, turning up the temperature in a data center reduces the energy consumed by air conditioning equipment without harming the IT equipment doing computational work.

Defining PUE

The Green Grid defined Power Usage Effectiveness (PUE) as:

    PUE = Total Facility Power / IT Equipment Power

Total Facility Power in this equation is all the power required to operate the entire data center, including IT equipment such as servers, storage, and network gear, plus support infrastructure such as CRAC units, fans, condensers, UPS, and lighting. IT Equipment Power is the power required to operate the servers and other IT equipment alone. PUE can range from 1.0 (where all power is consumed by IT equipment) to infinity. PUE serves as a metric defining how effectively facility infrastructure supports IT activities in a data center.
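
To make the ratio concrete, the calculation can be expressed in a few lines of Python. This is a minimal sketch; the function name and kilowatt inputs are illustrative, not part of The Green Grid's definition.

    def pue(total_facility_kw, it_equipment_kw):
        """Power Usage Effectiveness: total facility power divided by IT equipment power."""
        if it_equipment_kw <= 0:
            raise ValueError("IT Equipment Power must be positive")
        ratio = total_facility_kw / it_equipment_kw
        if ratio < 1.0:
            # Facility power includes the IT load, so a value below 1.0
            # indicates a metering or attribution error.
            raise ValueError("PUE below 1.0 indicates a measurement error")
        return ratio

    # A facility drawing 1,800 kW in total while its IT equipment draws 1,000 kW:
    print(pue(1800, 1000))  # 1.8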

Table 1. PUE levels explained.
Source: greengrid.org

Calculating PUE

Determining PUE requires that energy use be measured throughout the data center and attributed to either the facilities or the IT category for calculation. It is best to gather energy use data over time to ensure that peak periods have been captured. A one-time snapshot of PUE can be very misleading.
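
Because a single snapshot can mislead, one simple approach is to compute PUE from readings accumulated over a window. The sketch below assumes paired facility and IT readings taken at a fixed interval; the values are invented for illustration.

    # PUE computed over a measurement window rather than from one snapshot.
    # With equally spaced readings, the sampling interval cancels in the ratio.
    samples = [
        # (Total Facility kW, IT Equipment kW), e.g. hourly readings
        (1600, 900), (1750, 950), (2100, 1050), (1900, 1000),
    ]

    window_pue = sum(f for f, _ in samples) / sum(i for _, i in samples)
    print("PUE over the window:", round(window_pue, 2))

    # Individual snapshots can tell a different story, especially at peak load:
    print("Snapshot PUEs:", [round(f / i, 2) for f, i in samples])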

Measurements taken on branch circuits provide the current draw and energy use of non-IT equipment, including CRAC units, lighting, UPS, and other loads. For data centers where facilities are shared, such as in office buildings with common HVAC systems, it may be difficult to gather the necessary data accurately. But facility managers can help develop reasonably good estimates.

Metered or intelligent rack PDUs in the IT equipment racks provide the total IT Equipment Power data for the denominator of the PUE formula. Combined with intelligent PDUs, power management software tools can capture IT energy data every few seconds, over whatever time period makes sense.

PUE Levels

The Green Grid has defined three levels of PUE: Basic or Level 1, Intermediate or Level 2, and Advanced or Level 3, depending on the quality and frequency of the measurements.

At the Basic level, IT Equipment Power may be taken from the UPS units supporting the IT equipment. These data are a reasonable approximation provided the UPS does not also supply power to infrastructure such as rack-side cooling units and supplemental fans. Data taken at the UPS output also exclude UPS losses (typical UPS efficiencies are 85 to 95 percent), which do not belong in IT Equipment Power.

In addition, the power entering the data center can be used as Total Facility Power, which ignores the effect of shared facilities such as HVAC on the data. At this Basic level, PUE can be measured monthly or weekly. Optimally, measurements should be taken at the same time each month or week to account for the cyclical way loads change over time.
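
As a rough sketch of where the Basic-level numbers come from, the following uses an illustrative UPS efficiency and support load; it shows only the arithmetic, not a measurement procedure.

    # Basic (Level 1) sketch: IT power read at the UPS output, facility power
    # taken at the building input. All numbers are illustrative.
    ups_output_kw = 500            # IT Equipment Power measured at the UPS output
    ups_efficiency = 0.92          # within the typical 85-95 percent range
    ups_input_kw = ups_output_kw / ups_efficiency    # about 543 kW drawn by the UPS

    support_loads_kw = 450         # CRAC units, fans, condensers, lighting, etc.
    total_facility_kw = ups_input_kw + support_loads_kw

    # UPS losses (input minus output) count toward Total Facility Power,
    # not IT Equipment Power.
    print("UPS losses:", round(ups_input_kw - ups_output_kw, 1), "kW")
    print("Basic PUE:", round(total_facility_kw / ups_output_kw, 2))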

The Intermediate PUE category requires IT equipment power data taken from PDUs. If the PDUs supply power to IT equipment only, these data will be accurate, less potential transmission losses from the PDU to the rack. Total Facility Power includes all data center input power less HVAC capacity shared with other areas. Intermediate PUE requires daily measurement, which should be taken at the same time each day.

At the Advanced level, IT equipment power information comes from the device or server level. Intelligent rack PDUs that can meter at an outlet level and aggregate the power drawn across all the power supplies into a combined device total will be needed for IT equipment with multiple power supplies. The Advanced PUE also includes the energy use of additional resources such as lighting and security. Advanced PUE measurements include continuously measured data with a suggested frequency of at least every 15 minutes.
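
One way to picture the Advanced-level aggregation is the sketch below. The outlet-reading format and device names are hypothetical, not any particular PDU vendor's API.

    # Advanced (Level 3) sketch: aggregate outlet-level readings from intelligent
    # rack PDUs into per-device totals for dual-corded equipment.
    from collections import defaultdict

    # (device, outlet, watts) -- a dual-corded device draws from two PDU outlets
    outlet_readings = [
        ("blade-chassis-01", "pdu-a:04", 1850.0),
        ("blade-chassis-01", "pdu-b:04", 1790.0),
        ("db-server-07", "pdu-a:11", 310.0),
        ("db-server-07", "pdu-b:11", 295.0),
    ]

    device_watts = defaultdict(float)
    for device, _outlet, watts in outlet_readings:
        device_watts[device] += watts        # combine all power supplies per device

    it_equipment_kw = sum(device_watts.values()) / 1000.0
    print(dict(device_watts))
    print("IT Equipment Power:", round(it_equipment_kw, 2), "kW")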

Why Level 3

Many industry analysts consider the Intermediate/Level 2 PUE adequate. They consider outlet-level or device-level power consumption “nice to have” but not necessary. Power data gathered at each individual outlet indicate the draw of each device. This detailed IT load data provides the granularity of information required to make recommendations on how to reduce energy consumption, not just improve the PUE metric. The detailed data help efforts to document data center power efficiency improvements. Unfortunately, in some cases, these improvements can actually raise PUE.

Figure 2. Monitoring in a simple rack.
Source: ASHRAE, “Thermal Guidelines for Data Processing Environments”

Continuous, detailed power consumption monitoring, and the ability to prepare reports by location, department or IT equipment type, enables further comparisons and analysis. For example, a data center in San Francisco can be compared to one in Las Vegas, or a bank’s credit card division can be compared to its retail banking division, or one manufacturer’s blade server can be compared to another vendor’s model. More importantly, one can determine the energy requirements of the IT load and understand how applications consume power. Finally, granular data can help identify servers for decommissioning.
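
A minimal sketch of that kind of roll-up might look like this, with invented departments and readings:

    # Roll detailed readings up by department (or location, or equipment type)
    # so that divisions, facilities, or vendors can be compared.
    readings = [
        {"device": "web-01", "department": "credit-card", "kwh": 412.0},
        {"device": "web-02", "department": "credit-card", "kwh": 398.5},
        {"device": "core-db", "department": "retail-banking", "kwh": 655.0},
    ]

    totals = {}
    for r in readings:
        totals[r["department"]] = totals.get(r["department"], 0.0) + r["kwh"]

    for department, kwh in sorted(totals.items()):
        print(f"{department}: {kwh:.1f} kWh")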

Individual PUE snapshots are not as useful as detailed power consumption information gathered over time. At a minimum, data should be gathered frequently over a period long enough to ensure that times of peak demand have been captured. Continuously monitoring PUE and the ability to view these data in consolidated reports become important for evaluating how changes in a data center affect the whole data center. For example, how does changing the temperature set points in the room from 70 to 77 F affect server inlet temperatures?

Flaws

A lower PUE is generally better than a higher one, but it is possible to implement measures that reduce data center energy use while raising PUE. For example, replacing older, less efficient servers with more efficient ones will raise PUE because the change reduces the IT Equipment Power value in the denominator of the PUE calculation. Reducing the number of physical servers and using those that remain more efficiently through virtualization raises PUE in the same way.
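
A quick sketch with invented numbers shows the effect: the facility uses less total power, yet the metric gets worse.

    # Consolidation cuts the IT load while the supporting infrastructure is,
    # at least initially, unchanged, so PUE rises even though the facility as
    # a whole uses less energy. Numbers are illustrative.
    support_kw = 800                 # cooling, UPS losses, lighting

    it_before_kw = 1000
    it_after_kw = 700                # after replacing or virtualizing servers

    pue_before = (support_kw + it_before_kw) / it_before_kw    # 1.8
    pue_after = (support_kw + it_after_kw) / it_after_kw       # about 2.14

    print("PUE:", round(pue_before, 2), "->", round(pue_after, 2))
    print("Total demand reduced by", it_before_kw - it_after_kw, "kW")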

Similarly, replacing several “pizza box” 1U servers with a few new blade servers might reduce a facility’s PUE while wasting energy. Overprovisioning energy-hungry blade servers in a data center increases energy use compared to the old 1U servers while lowering PUE, because it increases the IT Equipment Power value.

Chasing the lowest PUE in the industry isn’t for everyone. Google is to be commended for its extremely low PUEs of 1.21 and below, but that achievement was made possible by techniques including evaporative cooling, recycled water, containers with close-coupled cooling, and the elimination of bypass airflow (see article elsewhere in this issue). Such techniques may not be applicable in all data centers. Data center managers should compare their PUE with the PUEs of comparable facilities.

Finally, PUE does not capture power per unit of computational load, or useful work. In a 10-megawatt (MW) data center with a PUE of 2.0, 5 MW powers the IT equipment and 5 MW operates the support infrastructure. Suppose more efficient servers perform the same computational work using only 4 MW, and the facility reduces cooling energy consumption to 4 MW by installing more efficient cooling systems and because the more efficient servers generate less heat. These two expensive and difficult efficiency improvements reduce power demand by an impressive 2 MW, yet PUE remains unchanged at 2.0.
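
The same arithmetic in a few lines, using the numbers from the example above:

    # Efficiency gains on both sides of the ratio save 2 MW, yet PUE is unchanged.
    it_mw, support_mw = 5.0, 5.0
    print("Before:", (it_mw + support_mw) / it_mw)    # PUE 2.0 at 10 MW total

    it_mw, support_mw = 4.0, 4.0                      # better servers, better cooling
    print("After: ", (it_mw + support_mw) / it_mw)    # PUE still 2.0 at 8 MW total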

Changing PUE

The PUE metric by itself can be misleading and can drive inefficient decisions. And PUE alone isn’t much help in actually implementing programs to reduce energy consumption. Where can energy be saved? Turning up the thermostat is a good place to start, since many data centers are kept colder than necessary. But by how much, and how can the data center manager be sure that troublesome hot spots won’t occur? Data center managers often say that no one complains if the data center is kept cold, but the phone starts ringing as soon as a server has problems.

Figure 3. Sun reduces energy use and captures the results in an improved PUE.

To confidently implement an energy conservation program, data center managers need a variety of tools. Environmental monitors at the rack are an excellent one. Hardware vendors specify the inlet temperature necessary to keep their equipment from overheating, and these specifications are often much higher than data center managers realize. For example, an HP ProLiant BL2x220c G5 server blade is specified to operate with an inlet temperature of up to 35°C (95°F) at sea level.

Taking inlet temperatures to the maximum vendor specification is generally not recommended. But with manufacturer specifications in hand, and temperature sensors mounted to the cool air inlet side of server racks, data center managers can turn up the thermostat several degrees to conserve energy and still be confident that their IT equipment won’t overheat.
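
A simple sketch of that check might look like the following; the sensor names, readings, and safety margin are illustrative assumptions.

    # Compare rack inlet sensor readings against a vendor specification,
    # keeping a safety margin below the maximum.
    VENDOR_MAX_INLET_C = 35.0      # e.g. the blade server specification cited above
    SAFETY_MARGIN_C = 5.0          # stay comfortably below the vendor maximum

    inlet_temps_c = {"rack-a1": 24.5, "rack-a2": 26.0, "rack-b1": 31.5}

    for rack, temp_c in inlet_temps_c.items():
        if temp_c > VENDOR_MAX_INLET_C - SAFETY_MARGIN_C:
            print(f"WARNING: {rack} inlet {temp_c}°C is within {SAFETY_MARGIN_C}°C "
                  f"of the vendor limit")
        else:
            print(f"{rack} inlet {temp_c}°C: headroom remains to raise the set point")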

It is good practice to have temperature sensors on each rack since changes in temperature and airflow can be hard to predict and may affect sections of the data center differently and in unintuitive ways.

Though equipment vendors typically only specify inlet temperatures, it is also a good idea to monitor exhaust temperatures. For high-power consumption devices such as blade servers and storage networks, setting both high and low exhaust temperature thresholds can identify potential problems such as a fan failure or a blocked vent. The hot aisle should be hot.
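
A minimal sketch of such threshold checks, with illustrative threshold values:

    # Exhaust-side thresholds for high-power equipment. A reading outside the
    # expected band can point to a failed fan, a blocked vent, or bypass airflow.
    EXHAUST_HIGH_C = 50.0
    EXHAUST_LOW_C = 28.0

    def check_exhaust(device, exhaust_c):
        if exhaust_c > EXHAUST_HIGH_C:
            return f"{device}: exhaust {exhaust_c}°C high - check fans and vents"
        if exhaust_c < EXHAUST_LOW_C:
            return f"{device}: exhaust {exhaust_c}°C low - check airflow and load"
        return f"{device}: exhaust {exhaust_c}°C within the expected range"

    print(check_exhaust("blade-chassis-01", 53.0))
    print(check_exhaust("storage-array-02", 41.0))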

Some other techniques for lowering PUE:
  • Minimize humidification, but not to less than 20 percent relative humidity
  • Reduce cold air/hot air mixing
  • Install blanking panels in racks to minimize recirculation
  • Install raised floor grommets to reduce bypass airflow around cables
  • Optimize floor layout (CFD analysis)
  • Move cool air supply and returns close to the load


Conclusion

Properly managing power in a data center requires gathering information detailed enough to support energy conservation actions. Establishing a baseline means that changes can be measured meaningfully rather than becoming a PUE numbers game.

Really improving efficiency in the data center means changing people’s behavior. The right power management solution can track power consumption and cost by department, location, and type of equipment. Graphs and reports show problem areas and the impact of actions taken.

No single number can describe a subject as complex as power consumption in data centers. PUE does help data center IT and facilities managers think about where electricity is being used and what broad categories of actions can be taken to become more efficient.