This paper explores the evolution of data center cooling technology to address the anticipated demand for higher rack densities for IT equipment in the white space. In our experience, most hyperscale data center operators have reached a density limit for IT equipment due to the inherent limitations of air-cooled cabinets. Much of the industry currently achieves rack densities of up to 11 kW/cabinet with existing designs. The emergence of artificial intelligence processing requirements and continued density increases for cloud computing will push that limit further. We anticipate that many data centers will require significantly higher density moving forward, with some systems demanding as much as 40 kW/cabinet in the near future.


Cooling IT equipment with a direct connection to chilled water has existed since the mid-1960s, dating back to some of the original water-cooled products from IBM. Traditionally, mainframes used chilled water and a heat exchanger to cool the processors, while the remainder of the equipment was air cooled using chilled-water Computer Room Air Handling Units (CRAHUs). That approach was used throughout the 1980s and 1990s for large supercomputers. However, due to concerns about water in data centers, direct connections to chilled water systems were largely eliminated from commercial and enterprise data center designs over the last 20 years, leaving the current industry heavily reliant on air-cooled racks in mission-critical applications.

As we advance the clock to 2023, various technologies for liquid cooling are becoming mainstream. They come in a wide range of deployments and applications, including liquid cooling of a single server, liquid cooling of an entire rack, and immersion cooling with servers submerged in a non-conductive fluid. These liquid-cooled technologies can collectively be referred to as Direct Liquid Cooling (DLC).
For any data center design, the gold standard used to evaluate cooling technologies is Power Usage Effectiveness (PUE).

PUE was adopted on February 2, 2010 by The Green Grid, a non-profit organization of IT professionals, and has become the most commonly used metric for reporting the energy efficiency of data centers¹. The metric is Total Facility Energy Use divided by IT Equipment Energy Use: the closer the result is to 1.0, the better the PUE. This paper will utilize PUE when discussing the relative efficiency and performance of various mechanical technologies.
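To make the ratio concrete, the calculation can be sketched as a small function. The facility and IT load figures below are hypothetical illustrations, not measurements from any real site:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility energy use divided by
    IT equipment energy use. A value of 1.0 would mean every watt drawn
    by the facility reaches the IT equipment."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 1,500 kW in total,
# of which 1,200 kW powers the IT equipment itself.
print(pue(1500, 1200))  # → 1.25
```

Under this illustration, 300 kW goes to cooling, power distribution losses, and other overhead; a more efficient cooling technology would shrink that overhead and push the PUE closer to 1.0.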