For decades now, air cooling with containment has been the preferred method for cooling the IT equipment (ITE) in data centers. However, in recent years liquid cooling has come to the forefront due to its increased efficiency and ability to cool high-density ITE. What does this mean for future data center cooling? When deciding on a cooling solution for a future data center, there are a number of factors that need to be considered, such as location, climate, power densities, workloads, efficiency, performance, and physical space.

These considerations highlight the need for data center designers to take a more holistic approach to cooling, with the realization that, moving forward, air and liquid cooling will coexist, especially in high-performance computing (HPC) data centers. This article examines why it's important to consider the total cost of ownership (TCO) for each individual business case when choosing which type of cooling to deploy.

Transitional shift

When it comes to cooling, it's easy to assume that a one-size-fits-all data center design will cover every power requirement, both now and in the future, but that's not accurate. It's more important to focus on the actual workload of the particular data center being designed or managed.

For example, a common assumption with air cooling used to be that once you went above 25 kW per rack, it was time to transition to liquid cooling. The industry has since revised that number, and data centers can now cool racks of 35 kW or more with traditional air cooling.

In 2022, the average rack density was approximately 10.5 kW, which underscores that data centers differ in workloads, growth rates, and power consumption. For many other workloads, such as cloud and typical enterprise applications, densities may be climbing, but air cooling still makes sense from a cost standpoint. The key is to look at the issue from a business perspective and determine what needs to be accomplished.

Coexistence

At some point, high-density servers and racks will need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500 W per processor within the next few years. But this transition is not automatic, and it isn't going to be right for everyone.

Again, this highlights the need for data center designers and managers to take a holistic approach to cooling. Air- and liquid-cooling technologies have coexisted for decades and will continue to do so, with experts anticipating both methodologies working alongside each other for many years to come. Here are two examples of how they are doing that today.

Direct to chip

Direct liquid cooling, also known as direct to chip (DTC), uses pipes to deliver liquid coolant to a cold plate that sits on top of the CPU/GPU and draws off the heat. The extracted heat is then transferred to a chilled-water loop and expelled outside. In this approach, a portion (50% to 75%) of the heat is absorbed by the liquid loop, while the remainder is removed by traditional room-based air-cooling units. Because traditional air cooling still handles that remaining heat, either hot- or cold-aisle containment will be needed for these deployments.
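To make that split concrete, the short sketch below estimates how much heat is left for room air cooling once the DTC loop has captured its share. The rack power used here is a hypothetical placeholder; only the 50% to 75% capture range comes from the discussion above.

```python
# Rough split of rack heat between the DTC liquid loop and room air cooling.
# The rack power is an illustrative placeholder; the 50%-75% capture range
# reflects the figures cited in this article.

def dtc_heat_split(rack_kw: float, liquid_capture: float) -> tuple[float, float]:
    """Return (heat to liquid loop, heat left for room air cooling) in kW."""
    liquid_kw = rack_kw * liquid_capture
    air_kw = rack_kw - liquid_kw
    return liquid_kw, air_kw

rack_kw = 40.0  # hypothetical high-density rack
for capture in (0.50, 0.75):
    liquid_kw, air_kw = dtc_heat_split(rack_kw, capture)
    print(f"{capture:.0%} capture: {liquid_kw:.1f} kW to liquid loop, "
          f"{air_kw:.1f} kW for containment-based air cooling")
```

Even at 75% capture, a 40 kW rack in this example still leaves 10 kW for the room-based air-cooling units, which is why containment remains part of a DTC deployment.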

Air-assisted liquid cooling

Air-assisted liquid cooling (AALC) can help data center designers and managers who need to cool high-density racks but don't want to make the shift to full liquid cooling. AALC can extend the life of existing air-cooled facilities and support new HPC facilities, serving as a transitional step toward liquid cooling while still using traditional air cooling (CRACs, CRAHs, fan walls, etc.) to flood the data center with cold supply air.

Several large hyperscale data centers are already considering AALC, since it can cool between 45 kW and 55 kW per rack in some cases. An internal reservoir pump unit (RPU) and heat exchanger assist the existing air cooling in handling these higher densities without the need to run liquid piping to and from the racks. Hot-aisle containment is required when using AALC. The small RPU located at the bottom of each rack is important because it means that liquid and its associated piping do not have to be brought into the data center; the RPU is a self-contained unit that can easily be maintained and replaced if needed.
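As a rough planning aid, the sketch below maps a projected rack density onto the cooling approaches discussed in this article, using the approximate thresholds cited above (roughly 35 kW for traditional air cooling with containment and 45 kW to 55 kW for AALC). These cutoffs are illustrative rather than hard limits; a real decision should also weigh TCO, climate, workload growth, and facility constraints.

```python
# Illustrative mapping of projected rack density to a cooling approach,
# using the approximate thresholds cited in this article. The limits are
# assumptions for planning discussions, not vendor specifications.

AIR_LIMIT_KW = 35.0    # upper range cited for traditional air cooling with containment
AALC_LIMIT_KW = 55.0   # upper range cited for air-assisted liquid cooling

def suggest_cooling(rack_kw: float) -> str:
    """Suggest a starting point for the cooling conversation at a given density."""
    if rack_kw <= AIR_LIMIT_KW:
        return "traditional air cooling with containment"
    if rack_kw <= AALC_LIMIT_KW:
        return "air-assisted liquid cooling (AALC)"
    return "direct-to-chip (DTC) or other liquid cooling"

for density in (10.5, 30.0, 50.0, 80.0):
    print(f"{density:>5.1f} kW/rack -> {suggest_cooling(density)}")
```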

Future-proofing

Once again, how do we move forward? There's no clear-cut answer. The choice will depend on each data center facility and whether it will be cooling both high-density and regular-density ITE. It will not be a one-size-fits-all solution.

To future-proof cooling for both current and future designs, data center designers and managers will want to separate high- and low-density ITE in the room. Future-proofing also underscores the need for designers and architects to take a holistic approach to cooling. It shouldn't be a case of considering only air or only liquid cooling. Instead, the key is to understand the tradeoffs of each cooling technology and what makes the most sense for each individual business case. That is how we'll drive future efficiency: not one type of cooling versus the other, but the two working cohesively.
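As a closing illustration of the TCO-driven comparison this article argues for, the sketch below totals capital and operating costs for two hypothetical cooling options over a planning horizon. Every input figure is a placeholder, not a benchmark; the point is the structure of the comparison, which should be populated with vendor quotes and measured energy use for each individual business case.

```python
# Minimal, undiscounted TCO comparison for two cooling options.
# All cost figures are hypothetical placeholders for illustration only.

def simple_tco(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership over the planning horizon (no discounting)."""
    return capex + annual_opex * years

YEARS = 10
options = {
    "air cooling with containment": simple_tco(capex=1_000_000, annual_opex=400_000, years=YEARS),
    "direct-to-chip liquid cooling": simple_tco(capex=1_800_000, annual_opex=250_000, years=YEARS),
}

for name, tco in options.items():
    print(f"{name}: ${tco:,.0f} over {YEARS} years")
```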