Figure 1. Legacy volume data center with 4 kW/rack


This article is based on a white paper prepared by Eaton and UTC Power. The full white paper can be downloaded here.

Figure 2. Blade server data center

While new technologies that improve IT utilization, such as blade servers and virtualization, solve some issues, they also create new ones. Increased power densities free up floor space in the legacy data center, but they also quickly exhaust available power and cooling, leaving raised-floor space underutilized. This is especially true in data centers located in markets where incremental utility power is not readily available. The latest generation of service-based applications has minimum response-time criteria, which increases the importance of the physical location of the data centers hosting those applications and exacerbates the shortage of power and space in already constrained markets.

With the U.S. Environmental Protection Agency forecasting that energy consumption by servers and data centers in the U.S. will double in the next 5 years, organizations are now at a tipping point. They must develop strategies that meet their growth objectives, increase energy efficiency, and incorporate new technologies while maintaining or increasing system availability.

Data Centers

Historically, many data centers used volume servers in a dedicated, single-application model, meaning they could not be reprovisioned to run multiple software applications. For practical purposes, these servers were also physically constrained to their original racks. This resulted in average server utilization rates in the 10-20 percent range. Many of these servers were dual corded, which enabled them to operate on an “A” or “B” power system for increased availability. This architecture led to very inefficient power systems, since an idle server still consumes approximately 50 percent of its full-load power.

Similarly, UPS systems in a dual-bus power distribution configuration are loaded to less than 50 percent so that either bus can assume the full data center load. At these utilization rates, legacy UPS systems are typically less than 85 percent efficient, resulting in energy losses and additional stranded capacity. Inefficient legacy computer room air conditioning (CRAC) systems compound the problem, since they typically run at 100 percent capacity regardless of the true heat load.

As a result, such data centers supply only one-third to one-half of their input power to IT loads; the remainder supplies ancillary support systems. Figure 1 shows a data center with a 1-megawatt (MW) utility substation and 400 kilowatts (kW) of UPS output power feeding a server farm at a typical density of 4 kW/rack.
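As a rough illustration of the power budget just described, the short sketch below works through the figure 1 numbers; the split between UPS output and ancillary loads comes from the text, while treating everything outside the UPS output as support overhead is a simplifying assumption.

```python
# Rough power-budget sketch for the legacy data center in figure 1.
# Values from the text; treating all non-UPS power as ancillary load is a simplification.
utility_input_kw = 1000        # 1-MW utility substation
ups_output_kw = 400            # UPS output power available to IT racks
kw_per_rack = 4                # typical legacy density

racks_supported = ups_output_kw / kw_per_rack            # 100 racks
ancillary_kw = utility_input_kw - ups_output_kw          # UPS losses, CRAC units, etc.
it_fraction = ups_output_kw / utility_input_kw           # fraction of input reaching IT

print(f"Racks supported:            {racks_supported:.0f}")
print(f"Power to ancillary systems: {ancillary_kw} kW")
print(f"Share of input to IT loads: {it_fraction:.0%}")  # ~40%, i.e. one-third to one-half
```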

To increase server utilization and flexibility while reducing IT costs, organizations are incorporating blade servers and virtualization. These technologies enable rapid server hardware deployment and the ability to reprovision applications and services from one server to the next anywhere on the network within minutes. Power densities in these facilities easily reach 25 kW/rack. Densities in the range of 50 kW/rack are on the horizon. Figure 2 shows the legacy data center from figure 1 reprovisioned to house 25-kW/rack blade servers. Each row in the redesigned data center contains 10 racks, for a total of 250 kW. After two rows, the data center is out of UPS power and cooling capacity.
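A quick calculation, reusing the 400 kW of UPS output from the figure 1 example, shows how quickly blade densities consume that capacity:

```python
# How quickly 25-kW/rack blade rows exhaust the 400-kW UPS from figure 1.
ups_capacity_kw = 400
kw_per_rack = 25
racks_per_row = 10

row_load_kw = kw_per_rack * racks_per_row        # 250 kW per row
rows_supported = ups_capacity_kw / row_load_kw   # 1.6 rows

print(f"Load per row:   {row_load_kw} kW")
print(f"Rows supported: {rows_supported:.1f}")   # the UPS runs out partway into the second row
```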

Freeing Stranded Capacity

Liquid-cooled racks and in-row cooling solutions free some stranded capacity. High-efficiency power supplies can further reduce power and cooling loads. High-efficiency, double-conversion ac UPS systems operate at 97 percent efficiency across the entire load spectrum, while providing maximum protection (see figure 3). These reduced losses, coupled with the reduced cooling load, may free up to 25 percent of the UPS system for IT purposes. High-efficiency transformers in power distribution units add to the savings.
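A minimal comparison of UPS losses at the efficiencies cited above (85 percent legacy versus 97 percent high-efficiency, assumed constant with load for simplicity) shows where part of the freed capacity comes from; cooling-load savings would add to it.

```python
# Compare UPS input power at the legacy and high-efficiency ratings cited in the text.
# Efficiencies are assumed constant with load for this simple comparison.
it_load_kw = 400

legacy_input_kw = it_load_kw / 0.85    # ~471 kW drawn to deliver 400 kW
modern_input_kw = it_load_kw / 0.97    # ~412 kW
freed_kw = legacy_input_kw - modern_input_kw

print(f"Legacy UPS input:      {legacy_input_kw:.0f} kW")
print(f"High-efficiency input: {modern_input_kw:.0f} kW")
print(f"Input power freed:     {freed_kw:.0f} kW (before cooling-load savings)")
```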

Figure 3. High-efficiency UPS systems

Addressing inefficient cooling strategies and technologies can free incremental cooling capacity. Traditional CRAC systems met the cooling demand in legacy data centers but may not adequately or efficiently cool higher-density facilities. Cooling alternatives include hot aisle/cold aisle isolation, self-contained cabinets, and rear-door heat exchangers. These technologies place cooling directly at the heat source, using less energy and freeing cooling capacity.

Incremental Power and Cooling Solutions

Adding utility substations, generators, and cooling units is seldom practical due to the complexity of duplicating power and cooling for existing systems. For many facilities, on-site combined cooling, heating, and power (CCHP) systems may be a cost-effective solution, especially compared to building a new data center. Prefabricated microturbine or fuel cell solutions enable data center operators to:
  • Add incremental power and cooling quickly.

  • Generate electricity at 40-60 percent below the cost of equivalent grid-purchased energy.

  • Obtain free cooling from absorption chillers.

  • Augment absorption chilling with side-stream series-flow configurations, allowing the absorption machines to divide the workload with high-efficiency, VFD-controlled electric chillers. This results in expanded cooling production at reduced energy consumption (in a range of 0.15-0.35 kW/ton); see the sketch following this list. Such technology is under development at UTC Power under the trade name Active Redundant Cooling (ARC).

  • Divert surplus thermal energy to heat non-data center hydronic or ducted-air heating systems.

  • Mitigate the risk of rising energy prices by locking in fuel costs through various natural gas supply contract mechanisms.

  • Design the on-site generation system with redundancy that improves the overall availability of the data center.

  • Capitalize or lease the CCHP system, or use third-party companies to outsource its operation and maintenance.
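To put the kW/ton range above in context, here is a minimal sketch comparing the electrical demand of conventional electric compression chilling with the absorption/ARC hybrid approach; the 0.74 kW/ton conventional figure is the annualized assumption used later in the sidebar, and the 350-ton load corresponds to roughly 1 MW of IT load.

```python
# Electrical demand of cooling a given load at the efficiencies cited in the text.
cooling_load_tons = 350                  # roughly 1 MW of IT load (see Financial Analysis)

conventional_kw_per_ton = 0.74           # annualized electric compression chilling (sidebar assumption)
hybrid_kw_per_ton_range = (0.15, 0.35)   # absorption chilling augmented with ARC

conventional_kw = cooling_load_tons * conventional_kw_per_ton
hybrid_kw_range = [cooling_load_tons * e for e in hybrid_kw_per_ton_range]

print(f"Conventional chilling demand: {conventional_kw:.0f} kW")
print(f"Hybrid chilling demand:       {hybrid_kw_range[0]:.0f}-{hybrid_kw_range[1]:.0f} kW")
```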


Figure 4. A CCHP system with four 200-kW microturbines

Figure 4 shows a CCHP system with four 200-kW microturbines providing a nominal 750 kW of net electricity and 535°F waste heat fed into two absorption chillers that can deliver a nominal output of about 350 refrigeration tons (RT) of chilled water. Grid paralleling is handled by onboard system components, eliminating complex wiring. The microturbine generators feed the 480-Vac electrical system through any open 480-V three-phase breaker position.

An absorption chiller captures the turbine exhaust to produce cooling. In a perfectly sized system, the cooling output of the CCHP system will eliminate 95-100 percent of the electric power required to run conventional cooling equipment. In extreme ambient conditions, the conventional equipment remains available to augment the cooling produced by the CCHP system. Therefore the total available cooling is the sum of conventional cooling and CCHP system cooling. For larger data centers, absorption chillers can be integrated with Active Redundant Cooling.

Figure 5. UTC fuel cell

Thermal priority is given to cooling; however, any remaining surplus energy can simultaneously produce hot water at 175°F (79.4°C), or surplus exhaust energy can be diverted to create steam at up to 100 psi for applications where heating is necessary.

For new data centers, CCHP systems can be integrated into the design to achieve 99.9985 percent critical load availability in concert with the normal utility source, which reduces some of the need for diesel generators. Separate cooling redundancy can be completely eliminated, since CCHP systems designed for grid-independent load service typically provide 2N to 3N of mission-critical cooling capacity.
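For reference, the availability figure above translates into expected downtime as follows (a simple conversion, not a claim about any particular design):

```python
# Convert 99.9985 percent critical-load availability into expected downtime per year.
availability = 0.999985
minutes_per_year = 365 * 24 * 60

expected_downtime_min = (1 - availability) * minutes_per_year
print(f"Expected downtime: {expected_downtime_min:.1f} minutes/year")  # ~7.9 minutes
```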

Equipping the CCHP system for dual-mode operation enables it to make use of surplus power and thermal capacities. Dual mode allows the CCHP system to parallel the utility under normal conditions. Through paralleling, surplus power can flow to the entire data center or the surrounding building, or even be exported to the grid. Dual mode also allows electric base loading, so the system runs constantly at 100 percent electrical output. When base loaded, turbine exhaust mass flow is at its greatest, maximizing the production of useful cooling (or heating).

In emergencies, dual-mode controls activate the fast-transfer capabilities of the turbine system, switching the critical load to be powered solely by the CCHP system. Cooling systems are designed to ride through the momentary transfer outage and remain fully operational during grid-down conditions.

With data center retrofits, these same capabilities can be achieved provided the CCHP system is also sized for the full capacity of the IT load expansion, with its own inherent redundant power and cooling. Just as with new facilities, system redundancy can be put to full use through dual-mode configurations. Where the utility does not have capacity for expansion, CCHP becomes the “utility,” and conventional backup systems provide the contingency for high-tier operation.

Microturbine systems are also greener than central power plants, producing less CO2 and fewer nitrogen oxides while conserving water.

Figure 6. Side-by-side comparison of data centers with and without CCHP systems. In this example, a smaller CCHP system increases the available power and cooling for a 400-kW data center.

Financial Analysis

A CCHP system sized to match the cooling requirement will provide the shortest payback and best efficiency (see figure 6). For example, each megawatt of IT load normally requires about 350 tons of cooling. Two PureCell Model 400Ms with ARC matched to a 350-ton cooling requirement would supply 100 percent of the cooling, eliminating the need for conventional compression chilling. By displacing 350 tons of conventional chilling, metered power drops by about 250 kW. The prime movers (generators) displace about 750 kW of metered power, so the total reduction in metered load is roughly 1,000 kW. Total displacement is often slightly more, given the displacement of transformer losses upstream of the point of injection. At 1,000 kW of displaced energy, the CCHP system is 100 percent base loaded, both electrically and thermally (the ideal).
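The displacement arithmetic above can be laid out in a short sketch; the 0.74 kW/ton chiller figure comes from the sidebar, and the text rounds the resulting chiller displacement to 250 kW.

```python
# Metered-load displacement for two PureCell Model 400M units with ARC (figures from the text).
cooling_tons_displaced = 350
chiller_kw_per_ton = 0.74                 # annualized electric compression chilling (sidebar)
generator_output_kw = 750                 # net electrical output of the prime movers

chiller_kw_displaced = cooling_tons_displaced * chiller_kw_per_ton   # ~259 kW (~250 kW in the text)
total_displaced_kw = chiller_kw_displaced + generator_output_kw      # ~1,010 kW

print(f"Chiller power displaced:   {chiller_kw_displaced:.0f} kW")
print(f"Generator power displaced: {generator_output_kw} kW")
print(f"Total metered reduction:   {total_displaced_kw:.0f} kW")  # ~1,000 kW, before upstream transformer losses
```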

Cost savings depend on the relative cost of fuel versus electric power purchased from the utility. This cost difference is often called the “spark spread.” The table in the sidebar shows savings at various spark spreads and illustrates the importance of applying these systems where the cost differential is greatest.
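For illustration, a spark spread can be computed as the difference between the delivered price of grid electricity and the fuel cost of generating a kilowatt-hour on site. The grid price and heat rate below are illustrative assumptions, not figures from the white paper; only the $11.00/MMBtu gas price comes from the sidebar.

```python
# Illustrative spark-spread calculation; grid price and heat rate are assumed values.
grid_price_per_kwh = 0.14         # assumed delivered utility electricity price, $/kWh
gas_price_per_mmbtu = 11.00       # gas price used in the sidebar, $/MMBtu
heat_rate_btu_per_kwh = 9000      # assumed heat rate of the on-site generator

fuel_cost_per_kwh = gas_price_per_mmbtu * heat_rate_btu_per_kwh / 1_000_000
spark_spread = grid_price_per_kwh - fuel_cost_per_kwh

print(f"Fuel cost to generate: ${fuel_cost_per_kwh:.3f}/kWh")
print(f"Spark spread:          ${spark_spread:.3f}/kWh")
# Credit for displaced chilling would improve the effective spread further.
```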

Such a design does not capture costs for data center reliability and will not eliminate the need for diesel or chiller redundancy. If the grid is down, the system would separate from its grid connection and pause while traditional backup systems handle the emergency.

Sidebar: Fuel-Cell Solutions

Fuel cells augmented by combinations of absorption chilling and Active Redundant Cooling are a viable displaced-energy solution. These combinations allow thermal-to-power ratios to be matched to data center loads. Fuel cells in the 400-kW class are expected to be much lower in cost ($/kW) than earlier 200-kW versions. Additionally, improvements in technology have increased cell stack life to 10 years, so the units can last for 20 years with an overhaul at the end of year 11. These improvements, the tight integration of the balance of plant, and remote monitoring capabilities have driven down the O&M cost of a fuel cell to about half of previous costs.

In part because of their excellent emissions performance, fuel cells show good promise for data center operations, especially as data center operators become increasingly environmentally conscious. Each 1,000 kW of displaced power from fuel cells can avoid almost 2,000 metric tons of CO2 annually, compared to the baseline average for fossil fuel plants in the EPA sub-regions where CCHP is most often applied.

Applying Spark Spread

The table illustrates the potential annual savings at various spark spreads for a system of two 400-kW PureCell units with Active Redundant Cooling in concert with absorption cooling, assuming a gas price of $11.00/MMBtu and an installed system price of about $4.1M. The comparison is to the time- and temperature-weighted performance of electric compression chilling (annualized at an average of 0.74 kW/ton).

Using Newark, NJ, ambient conditions, this installation produces a time-weighted annual average of about 800 kW of electricity and up to 600 tons of cooling, or 1,010 kW of continuous total power displacement.

Such a system would be integrated into a data center with an IT load of 1.0 MW or greater. Considering the state and federal incentives now available under TARP and ARRA, the net cost can be under $2.0M, with resulting paybacks under 3.0 years.
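A back-of-the-envelope check of that payback claim, using the 1,010-kW displacement and $2.0M net cost above; the effective per-kWh savings rate is an assumption, since the actual value depends on the spark spread achieved.

```python
# Rough payback check; the effective savings rate is assumed, not taken from the white paper.
displaced_kw = 1010
hours_per_year = 8760
effective_savings_per_kwh = 0.08       # assumed net savings after fuel and O&M, $/kWh
net_installed_cost = 2_000_000         # after state and federal incentives, per the text

annual_savings = displaced_kw * hours_per_year * effective_savings_per_kwh
payback_years = net_installed_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")   # under 3 years, consistent with the text
```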

For a detailed list of these state-by-state incentives, please refer to www.dsire.org.