Figure 1. The problem with power usage effectiveness is that it does not describe what is going on in a data center

In 2007 The Green Grid introduced power usage effectiveness (PUE) as a metric to better understand and improve the energy efficiency of existing and new data centers. Until then, there was no broadly accepted method of evaluating the impact of conservation measures on energy efficiency. Last June PUE was designated by the U.S. Environmental Protection Agency as the basis for the Energy Star label. Today PUE is the principal efficiency metric used by the IT industry to compare the relative efficiency of data processing facilities.

The Green Grid actually defined two terms in 2007: PUE and its inverse, data center infrastructure efficiency (DCiE).

PUE is defined as total facility power/IT equipment power, and DCiE is defined as (1/PUE) × 100 percent. In these definitions, IT equipment power is the entire power load associated with all the IT equipment, including processing, storage, and network hardware, plus equipment used to monitor and control the data center. Total facility power is all the power dedicated solely to the data center, measured at the utility meter, so it includes IT equipment power plus everything else that moves the needle on the electric meter, such as UPS systems, switchgear, PDUs, batteries, the cooling system, data center lighting, and other miscellaneous loads and losses. By definition, perfection is achieved when PUE reaches 1.0 and DCiE reaches 100 percent, but as a practical matter PUE will always be greater than 1.0 and DCiE will always be less than 100 percent.
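
The arithmetic is simple enough to restate in a few lines of code. The Python sketch below (the article prescribes no language, and the meter readings are hypothetical) just encodes the two definitions:

```python
# Minimal sketch of the PUE and DCiE definitions given above.
# The wattage figures are hypothetical, for illustration only.

def pue(total_facility_power_w: float, it_equipment_power_w: float) -> float:
    """PUE = total facility power / IT equipment power; always > 1.0 in practice."""
    return total_facility_power_w / it_equipment_power_w

def dcie(total_facility_power_w: float, it_equipment_power_w: float) -> float:
    """DCiE = (1/PUE) x 100 percent; always < 100 percent in practice."""
    return 100.0 / pue(total_facility_power_w, it_equipment_power_w)

# Hypothetical readings: 1,000 kW at the utility meter, 625 kW at the IT load.
print(pue(1_000_000, 625_000))   # 1.6
print(dcie(1_000_000, 625_000))  # 62.5 (percent)
```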

PROBLEMS WITH PUE

The broad acceptance of PUE is surprising given its obvious deficiencies as a measurement tool, especially when comparing one data center with another. One problem with PUE is that the acronym makes no sense. PUE stands for “power usage effectiveness” but might as well stand for “pig with a unicorn horn and elephant trunk,” shown in Figure 1, because neither cluster of words meaningfully describes what is going on in a data center.

Table 1. The four categories of PUE measurement

A PUE of 1.0, where all of the power is used for IT equipment and there are no chillers, fans, or losses, is the asymptotic ideal. So what does a PUE of 2.0 mean? The acronym would suggest that power is used twice as effectively in a data center with a PUE of 2.0 as in a facility with a PUE of 1.0, when in fact it is only half as good. It seems that whatever PUE actually measures, the term “power usage effectiveness” fails the logic test as a descriptor.

It could be argued that DCiE resolves this contradiction, because a data center with a PUE of 2.0 has a DCiE of 50 percent, implying that it is only half as good as the ideal data center. Although this seems to make sense, DCiE does not really address the fundamental problem with the PUE/DCiE concept, because the definitions are not based on a common set of assumptions that would validate energy-efficiency comparisons among data centers, or even at different points in time within a single data center. Furthermore, for some reason the world has settled on PUE despite its logical inconsistencies, and no one really talks about DCiE.

Recognizing that PUE was not being applied consistently, causing confusion and leading to misunderstanding, various industry groups and government regulatory agencies came together in a task force to tighten the definition. The task force based its work on three guiding principles:

  • PUE using source energy consumption is the preferred energy efficiency metric for data centers.
     
  • IT energy consumption should be measured at the output of the uninterruptible power supply for now, but eventually IT energy consumption will be measured at the servers as measurement capabilities advance.
     
  • The PUE equation should include all energy sources at the point of utility handoff to the data center.

On July 15, 2010, the task force issued its report, “Recommendations for Measuring and Reporting Overall Data Center Efficiency, Version 1 – Measuring PUE at Dedicated Data Centers,” recommending four categories of PUE measurement (see Table 1).
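
To make the first and third guiding principles concrete, the sketch below weights each energy source delivered at the utility handoff into source energy before forming the ratio. The site-to-source weighting factors and the annual figures are illustrative placeholders, not the task force's published values:

```python
# Hedged sketch of a source-energy PUE, per the task force's first and
# third guiding principles. The weighting factors below are illustrative
# placeholders, NOT the task force's published conversion factors.

SOURCE_ENERGY_WEIGHTS = {        # site-to-source multipliers (assumed)
    "electricity_kwh": 3.0,      # grid generation/transmission losses (assumed)
    "natural_gas_kwh": 1.0,      # fuel burned on site, assumed close to 1
}

def source_energy_pue(facility_use: dict, it_energy_kwh: float) -> float:
    """Weight every energy source at the utility handoff, then divide by
    IT energy expressed in the same source-energy terms."""
    total_source = sum(SOURCE_ENERGY_WEIGHTS[k] * v for k, v in facility_use.items())
    it_source = SOURCE_ENERGY_WEIGHTS["electricity_kwh"] * it_energy_kwh
    return total_source / it_source

# Hypothetical annual figures: 9 GWh of electricity plus 1 GWh-equivalent
# of gas at the handoff, with 6 GWh reaching the IT equipment.
print(source_energy_pue({"electricity_kwh": 9e6, "natural_gas_kwh": 1e6}, 6e6))  # ~1.56
```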

ADDED COMPLEXITY

The original definition of PUE was too simple to be useful, and these expanded definitions incorporate some necessary improvements. For one thing, the original PUE definition was all about power, which is an instantaneous measure of electric draw and does not consider variations in day-to-day operations. Now only PUE Category 0 is an instantaneous measurement, and all the rest are energy measures calculated by integrating power use over a 12-month period.

PUE Category 0 reports the worst-case scenario, where power is measured during peak IT equipment utilization; although an improvement over the original definition, it is of limited use. The other three definitions are better. PUE Categories 1, 2, and 3 capture IT equipment power with increasing precision as power measurement technology advances, and they recognize that not all power used in data centers necessarily comes through the electric meter. Furthermore, by incorporating the annual weather cycle, Categories 1, 2, and 3 take into account the benefits of free cooling, a significant factor when comparing data centers in different climatic regions.
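
The shift from instantaneous power to annual energy amounts to integrating both meters over a full year. A sketch, using invented readings in place of real hourly meter data:

```python
# Sketch of an energy-based PUE in the spirit of Categories 1-3:
# integrate metered power over 12 months rather than taking one snapshot.
# The synthetic samples below stand in for real hourly meter data.

def annual_energy_pue(facility_kw: list, it_kw: list,
                      interval_hours: float = 1.0) -> float:
    """Integrate both meters over the same 12-month sample set, then take
    the ratio of kWh to kWh. Free-cooling seasons lower the facility side."""
    facility_kwh = sum(facility_kw) * interval_hours
    it_kwh = sum(it_kw) * interval_hours
    return facility_kwh / it_kwh

# Invented example: cooler months need less cooling energy than hot ones.
winter = [(1200.0, 1000.0)] * 4380   # (facility kW, IT kW), half the year
summer = [(1500.0, 1000.0)] * 4380
samples = winter + summer
print(annual_energy_pue([f for f, _ in samples], [i for _, i in samples]))  # 1.35
```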

These new definitions of PUE add considerable complexity, but they fail to address the fundamental problem. For an automobile, the key efficiency metric is miles per gallon: how far a car can go when consuming a prescribed amount of fuel. For an air conditioning system, the key efficiency metric is coefficient of performance: the amount of cooling effect accomplished with a prescribed amount of electrical energy. For a power plant, the key efficiency metric is heat rate: the amount of fuel it takes to generate a prescribed amount of electric energy. In each case there is a connection between a desired system output and the amount of energy required to produce that output.

LINKING FUEL AND WORK

An energy-efficiency measurement that does not link fuel consumption to useful work product is meaningless. Consider two cars, “gas guzzler” and “gas sipper,” where gas guzzler gets 12 miles to the gallon and gas sipper gets 26. It turns out that gas guzzler is a classic and stays in the garage most of the year, so it consumes only 400 gallons of fuel over a 12-month period. Gas sipper, on the other hand, is the high-efficiency family car and consumes 600 gallons. A comparison of fuel consumption alone, shown in Figure 2, gives the impression that gas guzzler is more efficient, when the opposite is true.
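
Running the numbers makes the point: ranking the two cars by raw fuel consumption inverts the ranking given by useful work. A quick sketch using the figures above:

```python
# Worked version of the gas guzzler / gas sipper comparison above.
cars = {
    "gas guzzler": {"mpg": 12, "gallons_per_year": 400},
    "gas sipper":  {"mpg": 26, "gallons_per_year": 600},
}

for name, c in cars.items():
    miles = c["mpg"] * c["gallons_per_year"]
    print(f"{name}: {c['gallons_per_year']} gallons -> {miles:,} miles")

# gas guzzler: 400 gallons -> 4,800 miles
# gas sipper:  600 gallons -> 15,600 miles
# Fuel consumption alone crowns the guzzler; useful work (miles) says otherwise.
```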

Figure 2. Which car has the better PUE?


Similarly, PUE calculated according to any of the new definitions can give the wrong answer because there is no link between fuel use and work product. For example, Hardcore Computer manufactures the Liquid Blade, a high-performance server that uses proprietary liquid cooling technology to eliminate the need for cooling fans entirely. In laboratory bench tests, the Liquid Blade was compared with an air-cooled server with an identical processor operating under the same computing load and at the same temperature. The air-cooled blade drew 506 watts, whereas the Liquid Blade drew 420 watts; the difference can be attributed to the fact that the Liquid Blade has no blade fans and as a result needs a smaller power supply.

Using these measurements and assuming that the data centers in both cases have a PUE of 1.3, which is pretty efficient, the total power needed to operate and support the air-cooled blade would be 506 × 1.3 = 658 watts. In other words, fans, chillers, lighting, auxiliary equipment, and electric equipment losses account for 152 watts. At the same PUE of 1.3, the total power required to operate the liquid-cooled blade would be 420 × 1.3 = 546 watts. The PUEs are identical, but the air-cooled data center uses 20 percent more energy.
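
The arithmetic above can be checked in a few lines, using the measured draws from the bench test:

```python
# Reproduces the comparison above: identical PUE, very different total draw.
PUE = 1.3
air_cooled_w = 506     # measured blade draw, air cooled
liquid_cooled_w = 420  # measured blade draw, Liquid Blade

air_total = air_cooled_w * PUE        # 657.8 W, ~658 W as stated above
liquid_total = liquid_cooled_w * PUE  # 546 W
overhead_w = air_total - air_cooled_w # ~152 W of fans, chillers, losses, etc.

print(f"air-cooled total:    {air_total:.0f} W")      # 658 W
print(f"liquid-cooled total: {liquid_total:.0f} W")   # 546 W
print(f"extra energy, air vs liquid: {100 * (air_total / liquid_total - 1):.0f}%")  # 20%
```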

DCOP: A POSSIBLE SOLUTION

If PUE is not measuring energy for a given computing output, what good is it? It is time to develop a tool that provides the information data center designers, owners, and operators can use to maximize efficiency and minimize operating cost. The metric would need to link a standardized measure of computing output, say gigaflops, to the energy consumed to achieve that computing output, say kilowatt-hours. True, a data center in operation may not consume energy exactly as a standardized compute test might predict, but the same could be said for automobiles, air conditioners, and power plants. Yet standardized rating systems developed for all of these products and facilities are commonly used to rank their relative efficiency.

It would be simple if the goal were just to compare individual servers, because they could be put through their paces on a lab bench. The objective here, however, is to rank the energy efficiency of entire data centers, which include servers, storage devices, networking equipment, lighting, cooling equipment, electric distribution systems, and auxiliaries. Location also is important because some data centers are able to take advantage of free cooling, a benefit that should be reflected in comparative ratings.

Figure 3. Data center operating performance


To take all of the key design and provisioning elements into account, it is necessary first to rank equipment individually using standardized test protocols. These energy consumption measurements are the building blocks upon which an energy model for a proposed data center can be constructed. This basic concept, in which data centers incorporate assumed configurations for purposes of uniformity and comparison, is similar to the approach used by Jonathan Koomey to compare the cost of owning data centers, as described in a white paper issued by the Uptime Institute: “A Simple Model for Determining True Total Cost of Ownership for Data Centers.”

Consider a new efficiency measurement tool, data center operating performance (DCOP), defined as computing output/facility energy consumption. To do the job right, it is first necessary to develop the energy profile of individual IT components. Similar to the comparative tests that rank automobile mileage, air conditioner efficiency, and power plant performance, a standardized test protocol, established by an industry association, would determine energy consumption for each category of IT equipment operating at a relatively high processing load. For example, during the test the processor would be run at a specified temperature, say 85°C, and a specified load factor, say 75 percent.
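
Since no standard protocol yet exists, any calculation is necessarily a sketch. Assuming computing output is counted in billions of floating-point operations delivered over the year and energy in kilowatt-hours, a DCOP computation might look like this:

```python
# Hedged sketch of DCOP = computing output / facility energy consumption.
# The article proposes the ratio; the units and test figures below are
# illustrative assumptions, not part of any published protocol.

def dcop(computing_output_gflop: float, facility_energy_kwh: float) -> float:
    """Useful computing work per unit of facility energy, e.g. billions of
    floating-point operations per kilowatt-hour, integrated over a year."""
    return computing_output_gflop / facility_energy_kwh

# Hypothetical year: 8.0 GWh consumed at the utility handoff while the
# standardized workload (say 85 degrees C, 75 percent load factor)
# delivered 3.0e9 billion floating-point operations of work.
print(dcop(3.0e9, 8.0e6))  # 375.0 per kWh
```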

Then a general data center design would be established in which equipment is specified, and this configuration would become the basis for the energy supply and cooling designs. The addition of site-specific data, including weather and fuel source information, would lead to an estimate of the DCOP for the data center integrated over a 12-month period. The steps to calculate DCOP are illustrated in Figure 3.

Once calculated, DCOP can be used to compare alternative designs for a particular data center, or to compare one facility with another. DCOP also can be used to compare anticipated energy consumption for data centers located in different climatic regions. During commissioning and at the end of each operating year, actual measurements of computing output and facility energy consumption, normalized for variances from standard weather data, could be compiled and compared with the projected DCOP. This confirmation process also could be used to tune the data center to ensure that it continues to operate over time as originally designed.

Developing a measurement of energy efficiency for an entire facility, such as a data center or a power plant, is much more complicated than measuring the relative efficiency of individual energy-consuming equipment, such as automobiles or air conditioners. But if the goal is to create a sound basis for evaluating and comparing data centers, it is necessary to follow a path based on solid engineering principles. PUE has served a useful purpose in monitoring progress between phases of improvement within a single data center, but it fails as a metric for comparing one data center with another. DCOP, on the other hand, would be a fair expression of data center efficiency and a reliable yardstick for the quality of ongoing maintenance. It’s time to move on.