Figure 1. The number of servers per rack doubled from 1996 to 2005.


Conservative estimates suggest that energy costs exceed one-half of data center maintenance costs, and even these conservative figures show that energy costs are a substantial portion of the overall operations bill. “The real cost, however, is not just in the power being used but in the costs of the infrastructure equipment: generators, UPSs, PDUs, cabling and cooling systems. For the highest level of redundancy and reliability (a Tier 4 data center), for every kilowatt used for processing, some $22,000 is spent on power and cooling infrastructure,” wrote Drew Robb in a 2007 Computerworld article.

Figure 2. The increased number of servers and their increased power draws have driven power densities ever higher.

Server Density

The number of servers per rack enclosure is increasing, and servers are becoming smaller, more powerful, and more power dense. In 1996, the average rack enclosure housed seven servers; that number is expected to reach 20 servers per rack by 2010, according to research analyst firm IDC (see figures 1 and 2).

These dense solutions create new power and cooling challenges. Smaller and more powerful has been the mantra for design engineers, but while raw server computing performance has increased significantly, performance per watt has improved far more slowly (see figure 3).

Most of the industry data describe lackluster efficiencies for power supplies. Experts agree that improving data-center efficiency is what needs to be done to decrease our carbon footprint and reduce our energy costs. Industry efforts such as The Green Grid and new EPA initiatives are ringing the bell for change.

Ac/dc power supplies (ac/dc front ends) within server equipment provide an opportunity to improve data center efficiency. Although the front end is only one portion of the data center power train, it remains a key element in the efficiency challenge because the number of servers per rack enclosure keeps growing.

Figure 3. Server performance increased faster than performance per watt over the decade ending in 2007.

Figure 4 shows a typical power train in current data centers. This ac power distribution architecture is the most common, carries the lowest supply-base risk for IT equipment, and has an aggressive equipment cost base as well. It also has some associated weaknesses, including low system efficiency, low power density, and a higher component count that reduces mean time between failures.

The following are the efficiency ranges for the important elements of the typical ac power train as seen in “existing” data centers, according to Lawrence Berkeley National Labs and Cherokee International:

UPS: 85-92 percent

PDU: 98-99 percent

Power supply: 80 percent (Lawrence Berkeley National Labs (LBNL) research shows a range of 68-72 percent for power supply unit efficiencies, but those statistics reflect multiple-output, PC-grade power supplies rather than previous-generation, higher-efficiency front ends; Cherokee International’s experience shows efficiency levels of 80 percent or greater)

Dc/dc: 78-85 percent

Figure 4. Cherokee says its power supplies provide greater efficiencies that ripple further down the line.

In addition, individual data centers will demonstrate different efficiencies at the various stages of the power train. Numerous studies report varying efficiency ranges, yet no matter which statistics are used, these figures clearly leave ample room for improvement in several power-train areas, particularly the switch-mode power supply.
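
Because these stages sit in series, the overall power-train efficiency is the product of the stage efficiencies. The following sketch (in Python) multiplies out the ranges quoted above; pairing all the low ends together and all the high ends together is illustrative, not a measured best or worst case.

# Efficiency ranges for each stage of the typical ac power train,
# as quoted above (expressed as fractions, not percent).
stages = {
    "UPS":          (0.85, 0.92),
    "PDU":          (0.98, 0.99),
    "power supply": (0.68, 0.80),   # LBNL low end vs. the 80%+ figure
    "dc/dc":        (0.78, 0.85),
}

def overall(effs):
    """Series stages multiply: total = e1 * e2 * ... * en."""
    total = 1.0
    for e in effs:
        total *= e
    return total

worst = overall(lo for lo, _ in stages.values())
best = overall(hi for _, hi in stages.values())
print(f"end-to-end power-train efficiency: {worst:.1%} to {best:.1%}")
# Roughly 44% to 62%: a third or more of the power is lost before
# it ever reaches the server's point-of-load converters.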

Figure 5. Peak efficiencies are normally achieved at 80 percent load, while in practice power supplies are loaded at about 20 percent.

Current Power Solutions

As the industry has matured, so has the power supply industry’s understanding of IT equipment’s actual performance models. For many years, original equipment manufacturers requested power supplies that demonstrated optimum efficiency at or near full load. Over time, though, the industry observed that servers typically run at a 20 percent load because they operate in redundant configurations (see figure 5).

Power supplies in the actual field-installed base were designed for optimal efficiency under full load. Per an LBNL study, a typical power supply with 79 percent maximum efficiency yields only 63 percent efficiency under the typically experienced 20 percent load condition.
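
To put numbers on this effect, the short sketch below uses the LBNL figures quoted above (79 percent at peak, 63 percent at 20 percent load); the remaining curve points and the 700 W rating are hypothetical, for illustration only.

# Hypothetical efficiency-vs-load curve for a legacy server supply.
# The 20%-load (0.63) and peak (0.79) points are the LBNL figures
# quoted above; the other points and the rating are illustrative.
curve = {0.10: 0.50, 0.20: 0.63, 0.50: 0.76, 0.80: 0.79, 1.00: 0.78}

def losses_at(load_fraction, rated_watts=700.0):
    """Return (input watts, dissipated watts) at a load fraction."""
    eff = curve[load_fraction]
    out_w = load_fraction * rated_watts
    in_w = out_w / eff
    return in_w, in_w - out_w

for load in (0.20, 0.80):
    in_w, loss_w = losses_at(load)
    out_w = in_w - loss_w
    print(f"{load:.0%} load: {loss_w:.0f} W lost to deliver {out_w:.0f} W "
          f"({loss_w / out_w:.0%} overhead)")
# About 59% overhead at the typical 20% operating point versus 27%
# at the 80% design point: designing for full-load efficiency
# penalizes the load level at which servers actually run.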

Efficiency Challenges

Examining dissipation losses by mapping the areas slated for improvement is a productive way to analyze efficiency improvement within the front end. In general, the power train can be broken into sections: the PFC stage, bias and fan control, the primary dc/dc topology/stage, and the output section. The efficiencies of these stages are multiplicative, so concentrating on any of them has a direct impact on overall front-end efficiency. The areas with the largest power dissipation will, of course, have the most impact on the entire efficiency equation.
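
As a rough illustration of where the watts go, the sketch below walks an assumed 1,000 W of input power through the series stages; the stage efficiencies are assumptions chosen to sit in the ranges this article names, not measurements.

# Illustrative front-end stage efficiencies: PFC already in the high
# 90s, dc/dc in the low-to-mid 90s, bias/fan and output losses
# folded into one stage. All values are assumptions.
front_end = {"PFC": 0.97, "dc/dc": 0.92, "bias/fan + output": 0.96}

def stage_losses(p_in_w, stages):
    """Walk power through each series stage, recording the watts
    dissipated in each one."""
    losses = {}
    p = p_in_w
    for name, eff in stages.items():
        losses[name] = p * (1.0 - eff)
        p *= eff
    return losses, p

losses, delivered = stage_losses(1000.0, front_end)
for name, watts in sorted(losses.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18}: {watts:5.1f} W dissipated")
print(f"{'delivered':>18}: {delivered:5.1f} W")
# The dc/dc stage dominates the loss budget (~78 W of ~143 W lost),
# which is why the text targets it, and the bias/fan circuitry,
# rather than the already-efficient PFC stage.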

Figure 6. Savings can be captured over the life of a server.

The PFC stage is often already in the high-90-percent efficiency range and is not considered “low-hanging fruit” relative to other areas. A bigger impact can come from improvements to the dc/dc step-down stage along with the bias and fan control circuitry. Today, on an efficient front end, the dc/dc stage runs in the low- to mid-90-percent efficiency range. To improve on this performance, the losses must be categorized by type so that each can be addressed independently. Overall front-end efficiency can be improved by focusing on switching losses, for example through zero-voltage switching (ZVS) techniques using a full-bridge topology, which reduces losses during the MOSFETs’ switching transitions. Some patented circuits can also recover the energy associated with the reverse current of the output synchronous rectifiers, which would otherwise dissipate as heat; approximately 80 percent of this energy can be recovered and recycled back into the circuit.
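
The switching-loss term that ZVS attacks can be estimated with the standard first-order hard-switching formula; the device values below are hypothetical, for illustration.

# First-order hard-switching loss estimate for one MOSFET:
#   P_sw ~= 0.5 * V_ds * I_d * (t_rise + t_fall) * f_sw
# All device values below are hypothetical assumptions.
v_ds, i_d = 400.0, 5.0      # drain-source volts, drain amps at the switch
t_r, t_f = 20e-9, 30e-9     # rise and fall times, seconds
f_sw = 100e3                # switching frequency, Hz

p_hard = 0.5 * v_ds * i_d * (t_r + t_f) * f_sw
print(f"hard-switched transition loss: {p_hard:.1f} W per device")
# ~5 W per device here. ZVS brings the voltage across the MOSFET
# near zero before turn-on, largely eliminating this term.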

Coupling the above approach with synchronous rectification (replacing Schottky diodes with low-RDson MOSFETs) can further maximize performance in this area. Best-in-class MOSFETs with very low RDson minimize conduction losses even further. Finally, optimizing the dc/dc transformer turns ratio reduces the primary current and secondary voltage, minimizing losses and improving efficiency.
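
The conduction-loss argument can be made concrete with a back-of-the-envelope comparison; the 0.45 V Schottky forward drop, 2 milliohm RDson, and 40 A load below are illustrative assumptions.

# Conduction loss at a 12 V output carrying 40 A.
# Schottky diode: P = V_f * I.  Synchronous MOSFET: P = I^2 * RDson.
# The component values are illustrative assumptions.
i_out_a = 40.0
v_f_schottky = 0.45      # volts, typical-ish Schottky forward drop
rdson_ohm = 0.002        # 2 milliohms, a low-RDson part

p_schottky = v_f_schottky * i_out_a      # 18.0 W
p_sync_fet = i_out_a**2 * rdson_ohm      # 3.2 W
print(f"Schottky rectifier loss:  {p_schottky:.1f} W")
print(f"synchronous MOSFET loss:  {p_sync_fet:.1f} W")
# The MOSFET's I^2*R loss is a fraction of the diode's V_f*I loss,
# which is why synchronous rectification pays off at high currents.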

Further improvement in this area depends on new components and materials that are not yet commercially available.

Light Load Improvements

Several approaches can be taken to address the light-load efficiency issue within the front end (one of them, PFC frequency foldback, is sketched after this list). These include:
  • Lowering bias/fan supply power draw levels

  • Optimizing the power supply hold up time versus transformer turns ratio

  • Reducing PFC switching frequency as the load decreases

  • Using improved material (MPP) for the main output choke to reduce core losses at light load

  • Balancing RDson (primary MOSFET resistance) versus capacitance for optimal total loss solution

  • Optimizing synchronous FETs when the load is below 30 percent to conserve bias and housekeeping power
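
As a sketch of the third item above, the snippet below folds the PFC switching frequency back linearly as load drops; the frequency bounds and the 50 percent threshold are hypothetical design choices, not values from any specific controller.

# Minimal PFC frequency-foldback sketch: reduce switching frequency
# as load falls, trading ripple margin for lower switching loss.
# All thresholds and frequencies are hypothetical assumptions.
F_NOMINAL_HZ = 100e3
F_MINIMUM_HZ = 40e3
FOLDBACK_BELOW = 0.5     # start folding back under 50% load

def pfc_frequency(load_fraction: float) -> float:
    """Switching frequency for a given load fraction (0.0 to 1.0)."""
    if load_fraction >= FOLDBACK_BELOW:
        return F_NOMINAL_HZ
    # Linear foldback from nominal down to the minimum frequency.
    scale = max(load_fraction, 0.0) / FOLDBACK_BELOW
    return F_MINIMUM_HZ + scale * (F_NOMINAL_HZ - F_MINIMUM_HZ)

for load in (1.0, 0.5, 0.2, 0.1):
    print(f"{load:4.0%} load -> {pfc_frequency(load) / 1e3:5.1f} kHz")
# At the typical 20% server load this runs the PFC at 64 kHz instead
# of 100 kHz, cutting frequency-proportional switching losses ~36%.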


TCO Reduction

To calculate total cost of ownership (TCO) savings from power supply improvements, data from the current 80 PLUS Server Project can be used as a baseline for typical efficiencies in existing data centers. The 80 PLUS Server Project is furthering a Silver (75-87 percent efficiency) and a Gold (77-90 percent efficiency) standard for power supply efficiencies. At these levels, 80 PLUS calculates the following server savings over the lifetime of server equipment (estimated at four to six years depending on the type of server). Using this data, one can demonstrate the impact of these standards on data center TCO for server equipment. The following scenarios provide a representative overview.

The EPA reported an average US commercial electricity rate of 8.8 cents/kWh for 2006. At this rate, the savings from upgrading the installed base (as characterized by 80 PLUS) to either the Silver or Gold standard are substantial. Over the lifetime of a single high-end server (six years), the Silver standard saves $3,168 (36,000 kWh x $0.088) and the Gold standard saves $4,465 (50,742 kWh x $0.088). With enterprise-class data centers housing hundreds to thousands of servers (EPA, 2007), the reduction in TCO translates to large sums of money. For 1,000 servers, the Silver standard reduces costs by $3,168,000 and the Gold standard by roughly $4,465,000.
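
This arithmetic generalizes to any fleet size and electricity rate; the sketch below simply reuses the 80 PLUS lifetime kWh savings and the 2006 EPA rate quoted above.

# Lifetime TCO savings from higher-efficiency supplies, using the
# 80 PLUS lifetime kWh-savings figures and the 2006 EPA average
# commercial electricity rate quoted in the text.
RATE_USD_PER_KWH = 0.088
LIFETIME_KWH_SAVED = {"Silver": 36_000, "Gold": 50_742}  # per server

def fleet_savings(standard: str, servers: int) -> float:
    """Dollar savings over the server lifetime for a whole fleet."""
    return LIFETIME_KWH_SAVED[standard] * RATE_USD_PER_KWH * servers

for std in ("Silver", "Gold"):
    print(f"{std}: ${fleet_savings(std, 1):,.0f} per server, "
          f"${fleet_savings(std, 1000):,.0f} per 1,000 servers")
# Silver: $3,168 and $3,168,000.  Gold: $4,465 and $4,465,296
# (the article rounds the fleet figure to $4,465,000).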

Summary

Based on the preceding data, inherent inefficiencies are built into each component of the data center ac power train. These inefficiencies lead to greater server power consumption and recurring electricity costs. They also drive up the overall power usage of data center facilities through the added cooling load, making data centers among the commercial facilities with the lowest power efficiency levels and the highest carbon footprints.

In terms of efficiency, a prime area for improvement within the power train is the ac/dc front end, which allows data centers to leverage existing power architectures while achieving a substantial reduction in TCO through lower power consumption and cooling costs. Significant gains can be achieved in a data center by addressing just one component of the ac power train: the ac/dc front end.