Figure 1. Thermal schematic for the Clustered Systems design

The demand for energy efficiency is driving advances in today’s information technology and critical facilities systems. High-performance, low-energy servers, storage, and network devices are now being supported by closely coupled air and water cooling systems and redundant alternating and direct current power systems…all in the same rack. Leading data center operators are integrating innovative MEP solutions into today’s plug-and-play racks and server environments, providing operating efficiencies and flexibilities never before conceived.

Chill Off 2 Results

On October 14, 2010, the Data Center Energy Efficiency Summit celebrated its third successful gathering at Brocade Communications in Santa Clara, CA. Over 500 attendees witnessed the results of the much-anticipated Chill Off 2, which examined exactly these issues of bringing IT and facilities solutions closer together.

Chill Off 2 is the second such competition organized by Lawrence Berkeley National Laboratory (LBNL) and Data Center Pulse. This time, they set up several closely coupled cooling technologies side by side in a test environment and put them through their paces. Eleven rack-level and aisle-containment computer cooling technologies were tested, energy-efficiency comparison metrics were developed and applied, and quantitative comparisons were made.

Twelve different vendors submitted entries, which were tested under roughly equivalent conditions so that all categories of technology could be compared fairly, regardless of whether they used chilled water alone or chilled water and refrigerant.

Although free cooling with outside air may often be the most energy-efficient method of all, it was not considered in this contest because it utilizes a very different method of heat rejection and because it can be used in conjunction with each of the other methods tested.

Here is a brief explanation of the types of closely coupled cooling systems the Chill Off considered worthy of testing.

Rack Coolers. Rittal (LCP+), Knürr (CoolLoop, CoolTherm), and APC (InRow RC with RACS) provided samples of this technology, which is an enclosure system for a small number of racks that blocks hot server exhaust air from entering the computer room.

Row Coolers. Liebert (XDH, XDV) and APC (InRow RC with HACS) entered technology that is placed either directly adjacent to the computer racks or above them; it gathers exhaust air from the rear of the rack, cools it, and returns the cooled air near the server air inlets.

Rear Door Coolers. Vette (RDHX), IBM (Rear Door iDataPlex), SUN (SUN Glacier), and Liebert (XDR Passive Rear Door) offered a technology that cools hot air as it exits the IT equipment racks via heat exchangers that take the place of the rack rear door, returning cooled air into the computer room using only server fans to move the air.

Direct Touch Coolers. Clustered Systems brought in a unique prototype device that cools hot electronic components located inside rack-mounted IT equipment, directly utilizing conduction and refrigerant phase change to cool equipment.

Modular Datacenters. Oracle (previously Sun Microsystems) set up an independent container-type data center that was tested using the same server models as the other entries.

Dean Nelson and Brian Day of Data Center Pulse provide a great video tour and explanation of these technologies at http://www.datacenterpulse.org/TheChillOff. Data Center Pulse has published more videos on its website and on YouTube, and these are also available on the Mission Critical website.

Conclusions and Recommendations

Some valuable takeaways from the report:

  • The variation across all test parameters and devices in total power use was 13 percent. For a given test condition, total power use spanned a 6 to 8 percent range across all devices; a spread of that size is worth roughly $180,000 per year for a data center with a 1-megawatt IT load under peak-day pricing in PG&E’s demand-response program (a back-of-envelope version of this arithmetic appears after this list).
     
  • The analysis included two chilled-water plant energy-efficiency models: a code-minimum plant and a plant that included a water-side economizer. The advantage of operating a water-side economizer with these systems proved minimal, with only a 1.5 percent efficiency gain.
     
  • There was a significant energy-efficiency improvement as the temperature of the chilled water increased. The total energy savings was 5.3 percent when the chilled-water temperature was raised 15°F (from 45°F to 60°F) while the server air inlet temperature was held constant at 72°F.
     
  • Increased chilled-water temperatures could lead to higher server air inlet temperatures and cause server fans to speed up, increasing IT power; because fan power rises roughly with the cube of fan speed, even modest speed increases are costly. Therefore, an analysis of total energy use is warranted when raising chilled-water temperatures.
     
  • Devices referred to as passive (no fan power required), such as water- and refrigerant-cooled rear-door designs, tended to have better overall energy efficiency, and they performed best when installed without a water-to-water coolant distribution unit (CDU).
     
  • Some devices tested exhibited a significant increase in fan power when server air inlet temperatures were raised to 80°F and 90°F, causing energy efficiency to drop.
     
  • Encouraging higher chilled-water temperatures, higher server air inlet temperatures, and increased use of free cooling will yield improved energy efficiency.
     
  • Careful planning of chilled water distribution systems using variable-speed equipment along with eliminating bypasses through the use of two-way modulating valves can yield energy savings with existing equipment and improve energy efficiency of facilities still in the planning stages.
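
For a rough sense of how a single-digit efficiency spread becomes that $180,000 figure, here is a back-of-envelope sketch in Python. The PUE of 1.4 and the $0.21/kWh blended rate are illustrative assumptions of ours, not numbers from the LBNL report, which derives its figure from PG&E’s actual tariff.

    # Back-of-envelope check of the report's ~$180,000/year figure.
    # The PUE and $/kWh rate below are illustrative assumptions,
    # not values taken from the LBNL report.
    IT_LOAD_KW = 1000        # 1-megawatt IT load, per the report
    ASSUMED_PUE = 1.4        # assumed facility overhead factor
    SPREAD = 0.07            # midpoint of the 6 to 8 percent spread
    ASSUMED_RATE = 0.21      # assumed blended $/kWh, peak-day pricing
    HOURS_PER_YEAR = 8760

    total_power_kw = IT_LOAD_KW * ASSUMED_PUE
    avoidable_kw = total_power_kw * SPREAD       # ~98 kW
    annual_savings = avoidable_kw * HOURS_PER_YEAR * ASSUMED_RATE

    print(f"Avoidable load: {avoidable_kw:.0f} kW")
    print(f"Annual savings: ${annual_savings:,.0f}")   # ~$180,000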

All of the detailed results can be found on the LBNL website at http://hightech.lbl.gov/documents/DATA_CENTERS/chill-off2-final-rpt.pdf.

The Winner!

Other results from the Chill Off showed smaller performance differences among the remaining approaches, with rear-door liquid-cooling products faring slightly better than rack-level and row-level solutions.

And, although the “Chill-Offers” refrained from identifying an individual winner of the contest, the data clearly showed that the direct contact technology of Clustered Systems earned the best scores of all. Using LBNL’s special energy-efficiency metrics, its 36-server rack showed a clear advantage and offered energy savings of 12 to 16 percent compared to other cooling approaches.

One key to its success lies in the use of conductive cooling (through metal) instead of convective cooling (through air flow). Heat is conducted away so effectively that large air handlers are not needed to move air around the racks, and no server fans are needed to cool the processors. According to Clustered Systems CEO Phil Hughes, the new technology will soon be able to cool as much as 100 kW per rack.
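
To see why removing convection matters, consider the airflow a conventional air-cooled rack must sustain. The sketch below assumes a 10-kW rack and a roughly 20°F air temperature rise across the servers; both figures are our own illustrative assumptions, not measurements from the Chill Off.

    # Airflow a conventional rack needs to carry away its heat.
    # Rack load and temperature rise are assumed for illustration.
    RACK_LOAD_W = 10_000   # assumed 10 kW rack
    CP_AIR = 1005          # specific heat of air, J/(kg*K)
    RHO_AIR = 1.2          # density of air, kg/m^3
    DELTA_T_K = 11         # assumed ~20 F rise across the servers

    # Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
    mass_flow = RACK_LOAD_W / (CP_AIR * DELTA_T_K)   # kg/s
    volume_flow = mass_flow / RHO_AIR                # m^3/s
    cfm = volume_flow * 2118.88                      # cubic feet per minute

    print(f"Airflow needed: {volume_flow:.2f} m^3/s (~{cfm:.0f} CFM)")

A conduction-cooled rack rejects the same heat through cold plates and refrigerant phase change, so that roughly 1,600 CFM of fan-driven airflow, and the fan power behind it, simply drops out of the energy budget.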

But, don’t run off to your HVAC vendor just yet. The technology is still new and is only now becoming available on the commercial market through OEM sources, so “total cost of ownership” is still something to be weighed carefully at the outset of your purchase.

LBNL’s report on this promising technology can be found at http://hightech.lbl.gov/documents/DATA_CENTERS/clustered-systems-final-rpt.pdf.

The Critical Facilities Round Table (CFRT) is a non-profit organization based in Silicon Valley that is dedicated to the sharing of information among our critical facilities members and to the resolution of pressing issues in our data center industry. Please visit our website at www.cfroundtable.org or contact us at 415-748-0515 for more information.