In or around November 2006, I co-authored a white paper with Vali Sorell (vice president, Syska Hennessy Group) that calculated and compared the energy efficiency of the cooling strategies employed in most data centers. (The paper, titled “Will Liquid Cooling Solutions Save Energy?”, is available on Syska Hennessy Group’s website at http://www.syska.com/thought/whitepapers/wpabstract.asp?idWhitePaper=5.) In the paper we evaluated the following scenarios:

  • Conventional computer room air conditioners (CRACs)
  • Rear door-mounted cabinet cooler
  • Compressor-less liquid cooled cabinet
  • Water cooled cabinet
  • Conventional air handling units (AHUs)
  • Water cooled IT hardware with chillers (future)*
  • Liquid cooled hardware with chillers (future)*
  • Liquid cooled hardware without chillers (future)*

When the paper was first published, several of the major HVAC manufacturers challenged our calculations; we made minor adjustments based on their input, using equipment efficiencies that they agreed to. Interestingly, this had no significant effect on either the comparison results or the conclusions drawn from the study. The most efficient cooling strategy was, by far, the use of liquid cooled electronics (servers, mainframes, etc.). Our final conclusion was that “Liquid cooled equipment without a chiller plant (also not yet commercially available but technically very feasible) can offer significant energy savings compared to all other options. In addition, because of the system’s simplicity, it would be expected that this type of system would be easier to maintain and significantly more reliable than the other scenarios.”

At the time, we predicted that it would probably be three to five years before liquid cooled hardware was commercially available to mainstream customers as a reasonably priced alternative to air cooled hardware. Obviously, we were overly optimistic in this prediction, as seven years later we still have not seen this come to pass. However, the industry is making real progress in turning this into reality. In September 2010, Lawrence Berkeley National Laboratory (LBNL) published the results of a “Chill-Off” competition that tested various cooling strategies.1 The study tested 11 different cooling systems and compared their performance and energy efficiency.

One of the most significant findings of the LBNL study, and one consistent with the white paper, is that the chiller is by far the largest energy consumer in the cooling line-up, accounting for about four times as much energy consumption as all of the other cooling components combined (pumps, fans, etc.). It is no wonder that ASHRAE Standard 90.1-2010, Energy Standard for Buildings Except Low-Rise Residential Buildings, now mandates the use of economizers in data centers.
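
To put that finding in perspective, the short Python sketch below works through the arithmetic with purely illustrative numbers; the IT load and the fan/pump power are assumptions chosen only so that the chiller draws roughly four times as much as the rest of the cooling plant combined.

  # Illustrative only: assumed power figures for a hypothetical 1,000 kW IT load,
  # with the chiller drawing ~4x the combined power of the other cooling components.
  it_load_kw = 1000.0            # assumed IT load
  fans_pumps_towers_kw = 60.0    # assumed total for fans, pumps and cooling towers
  chiller_kw = 4.0 * fans_pumps_towers_kw

  cooling_with_chiller_kw = chiller_kw + fans_pumps_towers_kw
  cooling_without_chiller_kw = fans_pumps_towers_kw

  # Cooling-only PUE (electrical distribution losses ignored for simplicity)
  pue_with_chiller = (it_load_kw + cooling_with_chiller_kw) / it_load_kw
  pue_without_chiller = (it_load_kw + cooling_without_chiller_kw) / it_load_kw
  print(f"Cooling-only PUE with chiller:    {pue_with_chiller:.2f}")     # ~1.30
  print(f"Cooling-only PUE without chiller: {pue_without_chiller:.2f}")  # ~1.06

Under these assumed figures, removing the chiller eliminates roughly 80 percent of the cooling energy, which is exactly the leverage that economizers and chiller-less designs are chasing.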

One of the cooling systems tested in the LBNL study was a prototype system provided by Clustered Systems Company, Inc., referred to in the study as a “direct touch cooling system.” This system differed from the rest in that it used true “liquid cooled electronics.” The servers were modified to remove all server fans and instead used internal chassis “cold plates” that were cooled directly by the chilled water supplied to mating rack-mounted cold plates. One of the LBNL testing engineers informed me that a chiller plant outage occurred during testing. The system was thereby (inadvertently) shown to provide excellent cooling to the server internals, even at elevated chilled water temperatures, to the extent that it would be a viable solution using typical condenser water inlet temperatures consistent with today’s evaporative cooling towers (e.g., 85°F). In other words, the chiller plant would not be required, since the system could be supported by a cooling tower delivering 85°F cooling water to the rack (via a closed-loop heat exchanger to protect the water quality supplied to the IT racks).
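
As a back-of-the-envelope check on that conclusion, the sketch below estimates the water temperature a cooling tower and closed-loop heat exchanger could deliver to the racks without any chiller. The wet-bulb and approach temperatures are assumed values for a hypothetical site, not data from the LBNL study.

  # Rough estimate of chiller-less facility water temperature. All inputs are
  # assumptions for a hypothetical site; real values depend on local climate
  # and equipment selections.
  design_wet_bulb_f = 75.0   # assumed summer design wet-bulb temperature
  tower_approach_f = 7.0     # assumed cooling tower approach
  hx_approach_f = 3.0        # assumed plate-and-frame heat exchanger approach

  tower_leaving_f = design_wet_bulb_f + tower_approach_f
  rack_supply_f = tower_leaving_f + hx_approach_f   # closed loop serving the IT racks
  print(f"Tower leaving water:     {tower_leaving_f:.0f} F")  # ~82 F
  print(f"Water supplied to racks: {rack_supply_f:.0f} F")    # ~85 F

If the IT hardware can tolerate 85°F (or warmer) supply water, the chiller drops out of the critical cooling path entirely.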

Other manufacturers and HVAC vendors are actively working on developing “liquid-cooled hardware” solutions, some similar to the cold-plate technology and some radically different, including “evaporative immersion” technologies. The common themes are to accommodate higher heat density electronics and to improve energy efficiency. Clustered Systems has recently announced the deployment of a “hyper-efficient,” 100 kW per rack solution at the SLAC National Accelerator Laboratory, claiming that initial testing shows a power usage effectiveness (PUE) of 1.07. It is evident that these technologies have finally moved from the research and development arena into the available-for-purchase realm.
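
For perspective on what that claim implies, PUE is total facility power divided by IT power, so a PUE of 1.07 on a 100 kW rack leaves only about 7 kW of cooling and electrical overhead per rack, as the brief calculation below shows.

  # What a claimed PUE of 1.07 implies per rack, taking the 100 kW rack load at face value.
  it_power_kw = 100.0   # per-rack IT load from the SLAC deployment announcement
  pue = 1.07            # claimed power usage effectiveness

  total_facility_kw = pue * it_power_kw
  overhead_kw = total_facility_kw - it_power_kw
  print(f"Total facility power per rack: {total_facility_kw:.0f} kW")  # ~107 kW
  print(f"Cooling + electrical overhead: {overhead_kw:.0f} kW")        # ~7 kW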

The industry challenge is to develop sufficient standards and “plug & play” consistency between products such that hardware does not have to be customized and any IT manufacturer can offer products compatible with the available liquid-cooled solutions. Presently, this is not the case. The IT hardware must be customized to fit each available “liquid-cooled” solution, and so end-users become “married” to both the IT supplier and the cooling supplier. This is why this proven technology remains confined to a very small niche of the industry, namely the high-performance computing (HPC) or “super-computer” segment.

ASHRAE TC9.9 published a book in 2006, Liquid Cooling Guidelines for Datacom Equipment Centers, which provides a framework for establishing industry standards, definitions, and recommendations for liquid cooled solutions. ASHRAE TC9.9 followed up with a white paper in 2011 that proposed standards for liquid-cooled solutions, including five classes of liquid-cooled IT equipment (W1 through W5) and respective “facility cooling water” inlet temperatures. Classes W3, W4, and W5 would use inlet water temperatures high enough to preclude the need for mechanical (chiller) cooling.
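
A minimal sketch of how those classes could be used in practice appears below. The maximum facility water supply temperatures shown (roughly 17, 27, 32, and 45°C for W1 through W4, with W5 accepting warmer water) are the values commonly cited for these classes, but the TC9.9 publications remain the authoritative source, and the 29°C site temperature is purely a hypothetical input.

  # Screening check: can a site serve a given ASHRAE liquid-cooling class
  # without a chiller? Class limits below are as commonly cited (degrees C);
  # verify against the TC9.9 white paper before using them for design.
  max_supply_temp_c = {"W1": 17, "W2": 27, "W3": 32, "W4": 45, "W5": None}  # W5: above 45 C

  def chiller_likely_required(equipment_class, achievable_water_temp_c):
      """True if the site's achievable (chiller-less) water temperature exceeds
      the class limit. Ignores approach margins, transients and local extremes."""
      limit = max_supply_temp_c[equipment_class]
      if limit is None:                  # W5 accepts water warmer than 45 C
          return False
      return achievable_water_temp_c > limit

  # Hypothetical site whose towers can deliver 29 C water to the racks year-round
  for cls in ("W1", "W2", "W3", "W4", "W5"):
      print(cls, "requires chiller:", chiller_likely_required(cls, 29.0))

For this hypothetical 29°C site, only classes W3 and above could run year-round without mechanical cooling, which mirrors the statement above.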

One may ask: how does all of this relate to “sustainable operations”? Most data center facility managers will agree that the most unreliable, maintenance-intensive, energy-consuming aspect of the HVAC system is the chiller. Of all the mechanical cooling components, chillers take the longest time to restore following an outage. Eliminating the reliance on the chiller for critical operations may be the single best innovation for improving cooling system uptime and reducing overall energy consumption. It will also decrease maintenance costs while simultaneously simplifying cooling system architecture and controls. Couple this with the other benefits of liquid cooled IT hardware, such as the elimination of server fans and containment systems, the ability to accommodate extreme heat densities without “hot aisles,” and the use of next-generation high-performance computers, and the advantages become obvious.

This is not to say that liquid cooled hardware data centers will not come with unique challenges of their own. As unlikely as a cooling system outage may become, the extreme close-coupling of the cooling system to these extreme heat sources means the ensuing thermal transient will be accelerated, resulting in possible (if not probable) thermal damage to the IT equipment. “Uninterruptible cooling systems” would become the norm, and the associated redundancy and control strategies will have to counter these new difficulties. However, taking into consideration the continuing demand for ever-increasing IT equipment performance and for higher energy efficiencies, this may still be our best path forward.
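
To make the speed of that transient concrete, the lumped estimate below uses assumed values for rack load and coolant inventory; with so little thermal mass between the processors and the coolant, temperatures can climb by tens of degrees within a minute or two of losing heat rejection.

  # Crude lumped estimate of how fast a tightly coupled liquid loop heats up if
  # heat rejection is lost at full load. All numbers are assumptions for
  # illustration; real transients depend on loop volume, cold-plate and server
  # thermal mass, and how quickly the load sheds or throttles.
  rack_load_w = 100_000.0        # 100 kW rack, per the densities discussed above
  coolant_mass_kg = 50.0         # assumed water inventory in the rack's loop
  cp_water_j_per_kg_k = 4186.0   # specific heat of water

  rise_rate_k_per_s = rack_load_w / (coolant_mass_kg * cp_water_j_per_kg_k)
  print(f"Coolant temperature rise: {rise_rate_k_per_s:.2f} K/s "
        f"(about {rise_rate_k_per_s * 60:.0f} K per minute)")

That is why ride-through provisions, such as pumps and heat rejection on uninterruptible power or buffered reserves of cool coolant, become part of the design conversation.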

I called my old friend Vali Sorell and we discussed this topic again. It is our shared opinion that another three to five years is a reasonable prediction as to when liquid-cooled hardware solutions will become a viable option for mainstream data centers — but this time, our predictions are based on the migration of proven technology from the niche world of super-computers into the mainstream. 


1. Coles, Henry C. (Lawrence Berkeley National Laboratory). 2010. Evaluation of Rack-Mounted Computer Equipment Cooling Solutions. California Energy Commission.

 

* NOTE: ASHRAE TC9.9 published a white paper stating, “liquid cooled IT equipment refers to any liquid within the design control of the IT manufacturers which could be water, refrigerant, dielectric, etc.” In other words, liquid-cooled IT equipment is IT equipment that uses liquids internally within the IT equipment chassis for cooling IT components.