Figure 1. The primary fuel source of the installed power plant base in the U.S. (2008)


 

On June 14, 2010, the U.S. Environmental Protection Agency (EPA) announced that stand-alone data centers and buildings that house large data centers can earn the Energy Star label if they are in the top 25 percent of their peers in energy efficiency as measured by power usage effectiveness (PUE).

The EPA designed the Energy Star label to identify and promote energy efficiency. The program supports the underlying policy of optimizing energy use and reducing the environmental impact of electricity-consuming applications. The inquiry that leads to Energy Star status should therefore center on the reduction in power-plant emissions achieved by the particular application being considered.

Data centers account for more than 1.5 percent of total U.S. electricity consumption at a cost of $4.5 billion annually, according to the EPA, an amount that is expected to double by 2012. It is appropriate that this concentrated energy demand be targeted for efficiency improvements, and the Energy Star label could provide a simple way for data center developers and tenants to distinguish good facilities from bad.

An important opportunity was missed, however, because PUE simply is the wrong metric to use in this application. To understand why, it is necessary to understand how electric energy is generated and distributed and how the PUE calculation works in practice.
 

Our Electric Power Supply

Electric distribution is a hub-and-spoke system. Energy is generated at power plants and distributed to end-users via a complex network referred to as the "grid." There is an electric distribution substation at each power plant where energy is transmitted between the power plant and the grid. Electricity can travel down a wire in either direction, so a power plant substation is the confluence of energy from that particular power plant as well as all the other power plants connected to the grid. In fact, power plant substations are the only places where onsite generation and the grid come together.
 

Figure 2. Economic and technical factors favor the use of power plants burning certain fuels to meet the demand for electricity

 

Electrical generating capacity in the U.S. totaled about 1,030 gigawatts (billion watts) in 2008 (see figure 1). When measured by generating capacity, natural gas is the largest source (40 percent), followed by coal (31 percent), nuclear (10 percent), and hydroelectric (8 percent) (see figure 2).

Not all power plants are dispatched equally. As measured by the amount of energy actually produced, also shown in figure 2, coal-fired power plants provided the most energy by far (48 percent), followed by natural gas (21 percent), nuclear (20 percent), and hydroelectric (6 percent).

The difference between the plants that supply capacity and those that provide energy reflects how quickly some plants can be dispatched and the differential in fuel costs, both of which play an important part in determining which generators are dispatched to meet the real-time demand for electric power. For example, on a hot summer day it may be necessary to run all available equipment regardless of fuel cost, whereas during off-peak hours the baseload generators, those that use the lowest-cost fuel, might be sufficient.

In order of increasing fuel costs, baseload generators start with hydroelectric power, followed by nuclear, coal, and natural gas. Wind generation, which accounted for 2.4 percent of generation capacity and 1.3 percent of energy produced in 2008, takes advantage of a free fuel, but wind turbines only operate when the wind is blowing and cannot be dispatched as baseload generators.
 

Figure 3. The Rankine Cycle

A Bit of Thermodynamics

Power plants fueled by coal and nuclear energy, which in 2008 generated almost 70 percent of the electric energy used in the U.S., produce electricity using a thermodynamic process known as the Rankine Cycle. In this process, fuel is fired in a boiler to convert water to high-pressure steam. This steam flows through a turbine that is connected to a generator where the electric power is produced. The steam then exits the turbine at low pressure and is converted in a condenser back to water, which is pumped back to the boiler to complete the cycle (see figure 3 and http://www.roymech.co.uk/Related/Thermos/Thermos_Steam_Turbine.html).

The Rankine Cycle has an inherent efficiency limitation, 35 to 40 percent for a conventional power plant, tied to the difference between the highest and lowest energy levels in the cycle. The typical efficiency for a gas-fired combustion turbine, which uses the Brayton Cycle, is in the same range.
 

Figure 4. Data center inefficiencies include power line losses

 

Energy sources such as hydroelectric, wind, and solar photovoltaic theoretically can achieve higher efficiencies, but combined they account for less than 8 percent of the total electric energy supply and cannot always be relied upon when power is needed. As a result, energy for most data centers comes from power plants that operate at efficiency levels in the 35 percent range (see figure 4).

If the fuel at the beginning of a power plant cycle has 100 units of embedded energy, 65 percent of this energy is rejected as waste heat in the process of generating electricity. On average another two units of energy, 6 percent of the electric power generated, are lost in transmitting and distributing this power to the data center. This means that every unit of energy saved in a data center is equivalent to three units of energy saved at the head end of the power plant, which directly correlates to a reduction in fuel use and CO2 emissions if the plant fires fossil fuel.
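The source-energy arithmetic above can be sketched in a few lines of Python. The 35 percent plant efficiency and 6 percent transmission-and-distribution loss are the article's figures; the function name is ours:

```python
# A rough sketch of the fuel-to-meter arithmetic described above.
PLANT_EFFICIENCY = 0.35   # Rankine/Brayton cycle plants, per the article
T_AND_D_LOSS = 0.06       # fraction of generated power lost in the grid

def fuel_energy_per_delivered_unit(plant_eff=PLANT_EFFICIENCY,
                                   td_loss=T_AND_D_LOSS):
    """Units of fuel energy burned at the plant for each unit of
    electric energy delivered to the data center meter."""
    delivered_fraction = plant_eff * (1 - td_loss)
    return 1 / delivered_fraction

print(round(fuel_energy_per_delivered_unit(), 1))  # prints 3.0
```

The result, roughly three units of fuel per delivered unit, is the multiplier the article applies to every kilowatt-hour saved inside the data center.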
 

PUE

PUE, the metric settled on by the EPA for its Energy Star label, is defined to be total facility power divided by IT equipment power. An ideal data center, where all of the power is used by IT equipment, would have a PUE of 1.0. Of course this is impossible because virtually all of the energy used by IT equipment is converted to heat, so cooling is necessary. Also transformers, UPS systems, switch gear, and other ancillary equipment all operate with some degree of inefficiency. For a typical data center where IT equipment represents 55 percent of the total power consumed, the PUE would be 1.8.
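The calculation itself is a one-liner; the power figures below are illustrative, assuming the article's typical case of IT gear drawing 55 percent of total facility power:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical 1,000 kW facility with 55% of power going to IT equipment.
total_kw = 1000.0
it_kw = 0.55 * total_kw
print(round(pue(total_kw, it_kw), 1))  # prints 1.8
```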

It is generally thought that the lower the PUE the better, but this is not always true. Ideally, PUE goes down as the entire data center becomes more efficient because power drawn by support equipment is reduced faster than power used by IT equipment. However, cooling and support loads do not directly correlate with IT equipment, so as a matter of simple calculation PUE can also improve if the IT equipment becomes less efficient. For example, if dry-cell batteries attached to servers are used for power conditioning and backup instead of central UPS systems, the PUE will look better even if the data center as a whole uses more energy.

There are other ways to game the PUE equation. Each server in a data center contains fans to keep the electronics from burning up. If power used by these cooling fans is counted as IT equipment power, the denominator increases and the PUE looks better, even though there is no change in total facility power. The time interval over which PUE is measured is also at play. Electric power, measured in units such as kilowatts, is the capacity to do work at an instant in time, and the PUE measured one minute will differ from the next, so a measurement interval can be engineered to make the PUE appear lower than it really is.
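The fan trick is easy to demonstrate numerically. The 50 kW fan figure below is a hypothetical assumption, not a number from the article:

```python
def pue(total_kw, it_kw):
    """Power usage effectiveness: total facility power / IT power."""
    return total_kw / it_kw

total_kw = 1000.0   # total facility power stays fixed
it_kw = 550.0       # IT equipment power, fans excluded
fan_kw = 50.0       # hypothetical server-fan power, normally overhead

honest = pue(total_kw, it_kw)           # fans counted as cooling overhead
gamed = pue(total_kw, it_kw + fan_kw)   # fans reclassified as IT load
print(round(honest, 2), round(gamed, 2))  # prints 1.82 1.67
```

Nothing about the facility changed, yet reclassifying the fans improves the reported PUE.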

There are other problems. PUE ignores all of the energy lost upstream of the data center electric meter, and some energy lost downstream as well. Even though a well-designed data center should be located close to a power plant to improve system-wide efficiency, PUE does not account for power line losses between the power plant and the data center. PUE also misses energy consumed in diesel engines when they are operated for testing or to provide electric power during emergencies. And, of course, PUE does not take into account the value of free cooling, a significant energy reduction strategy, unless the calculation is made when the chillers are not operating.
 

Figure 5. PUE is defined in a way that excludes power line losses

EUE: A Better Metric

According to published reports, as of late 2009 the EPA was leaning toward energy usage effectiveness (EUE), a better metric for the Energy Star program. EUE is defined as total source energy divided by total IT equipment energy. Energy is a measure of power used over a time period, and electric energy is expressed in units such as kilowatt-hours.

EUE is a broad measure of overall system efficiency (see figure 6). It does not solve every problem ignored by PUE, but EUE does take into account power-line losses and energy produced by diesels. EUE also appropriately rewards data centers that utilize free cooling because the metric incorporates a time interval that logically would cover operations over all four seasons.
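A minimal sketch of the EUE calculation follows, combining the article's 35 percent plant efficiency and 6 percent line-loss figures with the hypothetical 1,000 kW facility used earlier; annual energy figures are assumed for illustration:

```python
PLANT_EFFICIENCY = 0.35  # per the article
TD_LOSS = 0.06           # transmission-and-distribution loss

def eue(facility_kwh, it_kwh, plant_eff=PLANT_EFFICIENCY, td_loss=TD_LOSS):
    """Energy usage effectiveness: total source energy / IT energy,
    computed over a time period (ideally a full year)."""
    source_kwh = facility_kwh / (plant_eff * (1 - td_loss))
    return source_kwh / it_kwh

# Hypothetical year of operation: 1,000 kW facility, 550 kW of IT load.
facility_kwh = 1000 * 8760
it_kwh = 550 * 8760
print(round(eue(facility_kwh, it_kwh), 1))  # prints 5.5
```

Because the numerator is source energy, upstream generation and line losses raise the score, which is exactly the system-wide view PUE lacks.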
 

The Myth about Hydroelectric

Although subject to variations in rainfall, hydroelectric power can be generated with high efficiency and is a low-cost source of electric energy. That is why there has been a rush to locate cloud data centers near the Columbia River and Niagara Falls. Electric power is the largest single operating cost at these energy-intensive facilities, and power near hydroelectric plants is cheap.

However, inexpensive power should not be confused with "green" power. Mega data centers benefit from low power costs, but they simply grab baseload energy that would be sold anyway. In effect, they shift the burden of operating higher-cost intermediate dispatch and peaking plants to other users downstream.

With the possible exception of reduced power-line losses, there is no significant system-wide efficiency improvement just because data centers are located near hydroelectric energy sources. The fact is hydroelectric power plants generate just 6 percent of the energy produced in the U.S., and only the lucky few have access because most of the best locations are taken.
 

Figure 6. A better metric would capture all system inefficiencies and account for all energy-saving strategies

Location, Location, Location

Most data centers will be powered by a combination of coal, nuclear, and natural gas electric generating plants. These plants are about 35 percent efficient, so roughly three units of fuel energy are consumed at the power plant for every unit of electricity delivered. That is why location is so important. Two units of energy lost in the transmission lines between the power plant and the data center translate to six units of energy embedded in fuel, directly adding to fuel cost and the carbon footprint. Yet PUE completely ignores the impact of data center location on energy use. PUE also ignores the value of locating data centers in northern climates where free cooling is readily available. Cooling equipment can account for more than 35 percent of the total energy consumed in a data center, and minimizing chiller use can have a major impact on energy consumption.
 

Summary

It is easy to imagine how the Energy Star label can be applied to appliances such as refrigerators. Simply plug in a refrigerator, measure power consumed to achieve the desired cooling effect, and compare the results with other refrigerators; the most efficient appliances win the label. Location does not matter because, regardless of the model selected, the refrigerator will be located in the kitchen.

Data centers are different. Data travels at the speed of light, and location usually does not negatively impact data center utility but can significantly affect energy efficiency and the carbon footprint. Data centers have many moving parts, and the EPA should take a national view when awarding an Energy Star label.

Theoretically, it is possible to install solar collectors or wind turbines anywhere, but in a world of finite resources it makes sense to locate solar collectors where the sun shines and wind turbines where the wind blows. The same is true of data centers. The goal should be to minimize kilowatt-hours per year, rather than just kilowatts. A data center with a PUE of 1.3 in Dallas, where there are few free cooling hours, is not as efficient as an identical data center in Chicago, where there are 4,000 hours or more of free cooling per year, and the Energy Star label should point data center users to Chicago over Dallas. Within a geographic region, the label should take into account the distance between where power is generated and where it is used, and it certainly should count energy sources that PUE ignores, such as diesel engines.
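The kilowatt-hours-per-year argument can be sketched with a simple comparison. The 4,000 free-cooling hours for Chicago come from the article; the load figures and the 500 hours for Dallas are assumptions for illustration:

```python
HOURS_PER_YEAR = 8760

def annual_kwh(it_kw, overhead_kw, chiller_kw, free_cooling_hours):
    """Annual facility energy, assuming chillers shut off entirely
    during free-cooling hours (a simplification)."""
    chiller_hours = HOURS_PER_YEAR - free_cooling_hours
    return (it_kw + overhead_kw) * HOURS_PER_YEAR + chiller_kw * chiller_hours

# Identical hypothetical facilities, differing only in climate.
dallas = annual_kwh(it_kw=550, overhead_kw=150, chiller_kw=300,
                    free_cooling_hours=500)
chicago = annual_kwh(it_kw=550, overhead_kw=150, chiller_kw=300,
                     free_cooling_hours=4000)
print(dallas - chicago)  # prints 1050000
```

Under these assumptions the Chicago facility saves over a million kilowatt-hours per year, a difference an instantaneous PUE snapshot cannot see.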

As currently designed, the Energy Star program might prompt data center users that are limited to a small geographic area to think about energy efficiency, and it could be marginally useful in comparing one data center with another. However, most data center users are not regionally constrained, and a program that purports to recommend energy-efficient options ought to identify where data centers should be built across the U.S. to maximize system efficiency and minimize the carbon footprint nationwide. The EPA missed that opportunity when it selected PUE to measure data center efficiency.