Conservative estimates suggest that energy accounts for more than one-half of data center maintenance costs, and even these conservative figures show that energy is a substantial portion of the overall operations bill. "The real cost, however, is not just in the power being used but in the costs of the infrastructure equipment: generators, UPSs, PDUs, cabling and cooling systems. For the highest level of redundancy and reliability (a Tier 4 data center), for every kilowatt used for processing, some $22,000 is spent on power and cooling infrastructure," wrote Drew Robb in a 2007 Computerworld article.
Server Density

The number of servers per rack enclosure is increasing, and servers are becoming smaller, more capable, and more power dense. In 1996, the average rack enclosure housed seven servers. This number is expected to reach 20 servers per rack by 2010, according to research analyst firm IDC (see figures 1 and 2).
These dense solutions contribute to new power and cooling challenges. Smaller and more powerful has been the mantra for design engineers, but while raw server computing performance has increased significantly, performance per watt has improved far more slowly (see figure 3).
Most of the industry data describe lackluster efficiencies for power supplies. Experts agree that what needs to be done to decrease our carbon footprint and reduce our energy costs is to improve data center efficiency. Industry efforts such as The Green Grid and new EPA initiatives are ringing the bell for change.
Ac/dc power supplies (ac/dc front ends) within server equipment provide an opportunity to improve data center efficiency. Although the front end is only one portion of the data center power train, the ac/dc front-end power supply remains a key element in the efficiency challenge because the number of servers per rack enclosure is increasing.
The following are the efficiency ranges for the important elements of a typical ac power train in existing data centers, according to Lawrence Berkeley National Labs and Cherokee International:
UPS: 85-92 percent
PDU: 98-99 percent
Dc/dc: 78-85 percent
In addition, individual data centers will demonstrate different efficiencies at the various stages of the power train. Numerous studies report varying efficiency ranges, yet no matter which statistics are used, it is clear that these figures leave ample room for improvement in several power train areas, particularly the switch-mode power supply.
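Because the stages are in series, their efficiencies multiply. A short sketch using the LBNL/Cherokee ranges listed above shows how even high per-stage figures compound into sizable end-to-end losses (the calculation is illustrative; only the stage ranges come from the article):

```python
# End-to-end efficiency of the ac power train: stage efficiencies multiply.
# Stage ranges are the LBNL/Cherokee figures quoted in the article.
stages_worst = {"UPS": 0.85, "PDU": 0.98, "dc/dc": 0.78}
stages_best = {"UPS": 0.92, "PDU": 0.99, "dc/dc": 0.85}

def train_efficiency(stages):
    """Multiply the per-stage efficiencies of a series power train."""
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

worst = train_efficiency(stages_worst)  # ~0.65
best = train_efficiency(stages_best)    # ~0.77
print(f"power train efficiency: {worst:.0%} to {best:.0%}")
```

Roughly 23 to 35 percent of the power drawn is lost before the server's own dc/dc conversion, which is why improving any single stage pays off directly.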
Current Power Solutions

As the industry has matured, so has the power supply industry's understanding of IT equipment's actual performance models. For many years, original equipment manufacturers requested power supplies that demonstrated their optimum efficiency at or near full load. Over time, though, the industry observed that servers typically run at a 20 percent load because they operate in redundant configurations (see figure 5).
Power supplies in the actual field-installed base were designed for optimal efficiency under full load. According to a Lawrence Berkeley National Labs study, a typical power supply with a maximum efficiency of 79 percent would yield only 63 percent efficiency under the typically experienced 20 percent load condition.
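One way to see why light-load efficiency collapses is a simple loss model: total loss is a fixed part (bias, fans, gate drive) plus a part proportional to load. The model and its coefficients are my illustration, fitted so the curve reproduces the LBNL figures quoted above; they are not from the study itself:

```python
# Simple two-term loss model, fractions of rated output power.
# Coefficients fitted to match the LBNL figures: 79 percent efficiency
# at full load, 63 percent at 20 percent load.
A_FIXED = 0.0804  # fixed losses (bias, fans, gate drive)
B_PROP = 0.1854   # load-proportional losses, per unit of load

def efficiency(load):
    """Efficiency at `load`, a fraction of rated output (0..1)."""
    return load / (load + A_FIXED + B_PROP * load)

print(f"full load: {efficiency(1.0):.0%}")  # ~79%
print(f"20% load: {efficiency(0.2):.0%}")   # ~63%
```

The fixed term dominates at light load: at 20 percent load it is roughly twice the proportional loss, which is why the design techniques below target bias, fan, and housekeeping power.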
Efficiency Challenges

Examining dissipation losses by mapping the areas slated for improvement is a productive way to analyze efficiency improvement within the front end. In general, the power train can be broken into sections including the PFC stage, bias and fan control, the primary dc/dc topology/stage, and the output section. The efficiencies of these stages are multiplicative, so concentrating on any of these areas has a direct impact on overall front-end efficiency. The areas with the largest power dissipation will, of course, have the most impact on the overall efficiency equation.
Coupling the above approach with a technique called synchronous rectification (replacing Schottky diodes with low-RDson MOSFETs) can also maximize performance in this area. Using best-in-class MOSFETs with very low RDson resistance further minimizes conduction losses. And finally, optimizing the dc/dc transformer turns ratio reduces the primary current and secondary voltage, minimizing losses and improving efficiency performance.
Further improvement in this area depends on new components and materials that are not yet commercially available.
Light Load Improvements

Several approaches can be taken to address light-load efficiency within the front end. These include:
- Lowering bias/fan supply power draw levels
- Optimizing the power supply hold up time versus transformer turns ratio
- Reducing PFC switching frequency as the load decreases
- Using improved material (MPP) for the main output choke to reduce core losses at light load
- Balancing RDson (primary MOSFET resistance) versus capacitance for optimal total loss solution
- Optimizing Synch FET operation when the load is below 30 percent to conserve bias and housekeeping power
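Two of the tactics above lend themselves to a control-logic sketch: reducing PFC switching frequency as load drops, and backing off the synchronous FETs below the 30 percent threshold. The 30 percent figure is from the list above; the frequency values and the choice to disable the FETs outright (letting their body diodes conduct) are assumptions for illustration:

```python
# Sketch of light-load control decisions for the front end.
# Thresholds other than the 30 percent Synch FET point are assumptions.
def light_load_settings(load):
    """Return control settings for a given load fraction (0..1)."""
    return {
        # Lower the PFC switching frequency at light load to cut
        # switching losses (kHz values are illustrative).
        "pfc_freq_khz": 100 if load >= 0.5 else 65,
        # Below 30 percent load, stop driving the synchronous FETs to
        # save bias/housekeeping power; body diodes carry the current.
        "sync_fets_enabled": load >= 0.3,
    }

print(light_load_settings(0.2))
print(light_load_settings(0.8))
```

In practice such thresholds would include hysteresis so the supply does not chatter between modes near the boundary.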
TCO Reduction

To calculate total cost of ownership (TCO) savings from power supply improvements, data from the current 80 PLUS Server Project can be used as a baseline for typical efficiencies in existing data centers. The 80 PLUS Server Project is promoting a Silver (75-87 percent efficiency) and a Gold (77-90 percent efficiency) standard for power supply efficiencies. At these levels, 80 PLUS calculates the following server savings over the lifetime of server equipment (estimated at four to six years, depending on the type of server). Using these data, one can demonstrate the impact of these standards on data center TCO in relation to server equipment. The following alternative scenarios provide a representative overview.
The EPA reported an average commercial electricity rate of 8.8 cents/kWh in the US for 2006. Using this rate, the savings from moving the installed base efficiencies (as reported by 80 PLUS) to either the Silver or Gold Standard are substantial. Over the lifetime of a single high-end server (six years), the Silver Standard yields $3,168 in savings (36,000 kWh x $0.088 = $3,168) and the Gold Standard delivers $4,465 (50,742 kWh x $0.088 = $4,465). With enterprise-class data centers housing hundreds to thousands of servers (EPA, 2007), the reduction in TCO translates to large sums of money. For 1,000 servers, the Silver Standard reduces costs by $3,168,000 and the Gold Standard by $4,465,000.
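The arithmetic above is simply lifetime kWh saved per server, times the electricity rate, times the server count, and scales directly to any fleet size or local rate:

```python
# TCO savings arithmetic from the 80 PLUS figures in the article.
RATE = 0.088  # $/kWh, EPA-reported 2006 US average commercial rate

def lifetime_savings(kwh_saved, servers=1, rate=RATE):
    """Dollar savings: lifetime kWh saved per server x rate x fleet size."""
    return kwh_saved * rate * servers

silver = lifetime_savings(36_000)  # $3,168 per high-end server
gold = lifetime_savings(50_742)    # ~$4,465 per high-end server
print(f"Silver: ${silver:,.0f}, Gold: ${gold:,.0f}")
# Fleet of 1,000 servers (exact product; the article rounds per server first):
print(f"1,000 servers, Gold: ${lifetime_savings(50_742, 1000):,.0f}")
```

Substituting a local electricity rate for the 2006 average is the obvious first adjustment when applying this to a specific facility.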
Summary

Based on the previous data, inherent inefficiencies are currently built into each component of the data center ac power train. These inefficiencies lead to greater server power consumption and recurring electricity costs. The data also show that such inefficiencies compound the overall power usage of data center facilities through the added cooling load, creating an environment with some of the lowest power efficiency levels and some of the highest carbon footprints of any commercial facility.
In terms of efficiency, a key area for improvement within the power train is the ac/dc front end, which allows data centers to leverage existing power architectures while substantially reducing TCO through lower power consumption and cooling costs. Significant gains can be achieved in a data center by addressing just this one component of the ac power train.