The data infrastructure industry is facing a number of challenges in today’s digital world.

Demand for data services is growing at a phenomenal rate, yet there has never been greater pressure, or duty, to deliver those services as efficiently and cleanly as possible.

As every area of operation comes under greater scrutiny to meet these demands, one area in particular, cooling, has come into sharp focus. It's an area not only ripe for innovation but also where significant progress has been made that shows a way forward for a greener future.

According to some estimates, the number of internet users worldwide has more than doubled since 2010, while internet traffic has increased some twentyfold. Furthermore, as technologies emerge that are predicted to be the foundation of future digital economies, such as streaming, cloud gaming, blockchain, machine learning and virtual reality, demand for digital services will rise not only in volume, but also sophistication and distribution. Increasingly, the deployment of edge computing, bringing compute power closer to where it is required and where data is generated, will see demand for smaller, quieter, remotely managed infrastructure. This one area alone is expected to grow at a compound annual growth rate (CAGR) of 16%, reaching a market value of $22 billion by 2026, according to GlobalData.

This level of development brings significant challenges for energy consumption, efficiency, and architecture. The International Energy Agency (IEA) already estimates that data centers and data transmission networks are responsible for nearly 1% of energy-related greenhouse gas (GHG) emissions. It acknowledges that since 2010, emissions have grown only modestly despite rapidly growing demand, thanks to energy efficiency improvements, renewable energy purchases by information and communications technology (ICT) companies, and broader decarbonization of electricity grids. However, it also warns that to align with the Net Zero Emissions (NZE) by 2050 Scenario, emissions must halve by 2030.

This is a significant technical challenge. Firstly, over the last several decades of ICT advancement, Moore's Law has been ever-present: it observes that compute power more or less doubles, with costs halving, every two years or so. As transistor densities approach the single-nanometer scale and become ever harder to increase, no less a figure than the CEO of Nvidia has asserted that Moore's Law is effectively dead. This means that, in the short term, more equipment and infrastructure will have to be deployed at greater density to meet demand. Added to this are recent developments from both Intel and AMD, whose high-end data center processors will operate in the 350- to 400-W range, further exacerbating energy demand.

The impact on cooling infrastructure and cost

In this scenario of increasing demand, higher densities, larger deployments, and greater individual energy demand, cooling capacity must be ramped up too.

Air as a cooling medium is already reaching its limits. As rack systems become more demanding, often mixing both CPU- and GPU-based equipment, individual rack demands are approaching or exceeding 30 kW each. Air-based systems at large scale also tend to demand very high water consumption, for which the industry has received criticism. One estimate equated the water usage of a mid-sized data center to that of three average-sized hospitals.

Liquid cooling technologies have developed as a means to meet the demands of both the volume and density needed for tomorrow's data services. Liquid cooling takes many forms, but the three primary techniques are direct-to-chip (DtC), rear door heat exchangers, and immersion cooling.

DtC, or direct-to-plate, cooling places a metal plate on the chip or component and circulates liquid within enclosed chambers, carrying heat away. It is a highly effective technique that is precise and easily controlled, and it is often used in specialist applications, such as high-performance computing (HPC) environments.

Rear door heat exchangers, as the name suggests, are close-coupled indirect systems that circulate liquid through embedded coils to remove server heat before it is exhausted into the room. They have the advantage of keeping the entire room at the inlet air temperature, making hot- and cold-aisle cabinet configurations and air containment designs redundant, since the exhaust air is cooled to inlet temperature and can recirculate back to the servers. The most efficient units are passive, meaning the servers' own fans move the necessary air. They are currently regarded as limited to 20 to 32 kW of heat removal, though units incorporating supplemental fans can handle higher loads of up to around 60 kW.
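The capacity figures above follow from the sensible-heat equation, Q = ṁ·cp·ΔT: heat removed equals coolant mass flow times specific heat times temperature rise. The sketch below is purely illustrative; the flow rate and temperature rise are assumed values, not vendor specifications.

```python
# Illustrative rear-door coil capacity estimate via Q = m_dot * cp * dT.
# Flow and temperature-rise figures are assumptions for illustration only.

WATER_CP = 4186.0  # specific heat of water, J/(kg*K)

def coil_capacity_kw(flow_lps: float, delta_t_k: float) -> float:
    """Heat removed (kW) by a water coil at a given flow (L/s) and rise (K)."""
    mass_flow_kg_s = flow_lps * 1.0  # ~1 kg per litre of water
    return mass_flow_kg_s * WATER_CP * delta_t_k / 1000.0

# Roughly 1 L/s of water warming by 6 K removes about 25 kW,
# consistent with the 20-32 kW passive range cited above.
print(round(coil_capacity_kw(1.0, 6.0), 1))  # → 25.1
```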

Immersion technology employs a dielectric fluid that submerges equipment and carries heat away through direct contact. Whilst for many, liquid immersion cooling immediately conjures up the image of a bath brimful of servers and dielectric, precision liquid immersion cooling operates at rack chassis level, with servers and fluid in a sealed container. This enables operators to immerse standard servers with certain minor modifications, such as fan removal, as well as sealed spinning disk drives. Solid-state equipment generally does not require modification.

A distinct advantage of the precision liquid cooling approach is that full immersion provides liquid thermal density: the fluid can absorb heat for several minutes after a power failure without the need for backup pumps. Liquid capacity equivalent to 42U of rack space can remove up to 100 kW of heat in most climate ranges, using an outdoor heat exchanger or condenser water, allowing the use of free cooling.
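The "several minutes" of ride-through follows from the fluid's thermal mass: the time available is roughly (fluid mass × specific heat × allowable temperature rise) ÷ IT load. The numbers below are assumptions chosen for illustration, not product data for any particular system.

```python
# Rough ride-through estimate for an immersion tank after pump/power failure.
# time = (fluid mass * specific heat * allowable temperature rise) / IT load.
# All figures are illustrative assumptions, not measured product data.

def ride_through_minutes(fluid_kg: float, cp_j_per_kg_k: float,
                         allowable_rise_k: float, load_kw: float) -> float:
    """Minutes of heat absorption available from the fluid's thermal mass."""
    energy_j = fluid_kg * cp_j_per_kg_k * allowable_rise_k
    return energy_j / (load_kw * 1000.0) / 60.0

# Assume ~700 kg of dielectric fluid (cp ~2100 J/(kg*K)), a 10 K
# allowable rise, and a 50 kW IT load:
print(round(ride_through_minutes(700, 2100, 10, 50), 1))  # → 4.9
```

Even under these conservative assumptions, the fluid alone buys several minutes, which is the mechanism behind the no-backup-pump claim above.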

Cundall’s liquid cooling findings

According to a study by engineering consultants Cundall[i], liquid-cooling technology consistently outperforms conventional air cooling, in terms of both power usage effectiveness (PUE) and water usage effectiveness (WUE).

This, according to the report, is principally due to the much higher operating temperature of the facility water system (FWS) compared to the cooling mediums used in the air-cooled solutions. In all air-cooled cases, considerable energy and water are consumed to arrive at a supply air condition that falls within the required thermal envelope; liquid cooling avoids this. Even in tropical climates, the operating temperature of the FWS is high enough for the hybrid coolers to operate in economizer "free cooling" mode for much of the time. And, under peak ambient conditions, sufficient capacity can be maintained by reverting to "wet" evaporative cooling mode.

A further consistent benefit, the report adds, is the reduction in rack count and data hall area that can be achieved through higher rack power density.

There were consistent benefits found, in terms of energy efficiency and consumption, water usage, and space reduction, as well as OpEx and CapEx, in multiple liquid cooling scenarios — from hybrid to full immersion.

In hyperscale, colocation, and edge computing scenarios, Cundall found that the total cost of cooling information technology equipment (ITE), per kW consumed, was 13% to 21% lower for liquid cooling than for the base case of current air-cooling technology.

In terms of emissions, Cundall states that PUE and total PUE (TUE) are lower for the liquid-cooling options in all tested scenarios. Expressed as kilograms of CO2 per kW of ITE power per year, the reduction was more than 6% for colocation, rising to almost 40% for edge computing scenarios.
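The kgCO2-per-kW-per-year framing can be reproduced as PUE × hours per year × grid carbon intensity, so any PUE reduction passes through proportionally to emissions. The PUE values and grid intensity below are illustrative assumptions, not figures from the Cundall study.

```python
# Annual emissions per kW of ITE:
#   kgCO2/kW/year = PUE * hours per year * grid intensity (kgCO2/kWh).
# PUE and grid-intensity values are illustrative assumptions only.

HOURS_PER_YEAR = 8760

def kg_co2_per_kw_year(pue: float, grid_kg_per_kwh: float) -> float:
    return pue * HOURS_PER_YEAR * grid_kg_per_kwh

air = kg_co2_per_kw_year(1.5, 0.4)      # hypothetical air-cooled facility
liquid = kg_co2_per_kw_year(1.15, 0.4)  # hypothetical liquid-cooled facility
reduction_pct = (air - liquid) / air * 100
print(round(reduction_pct, 1))  # → 23.3
```

Note that at a fixed grid intensity the percentage reduction depends only on the PUE ratio, which is why the study can report it per scenario without fixing a grid mix.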

What does the immediate future hold in terms of liquid cooling?

Combinations of liquid- and air-cooling techniques in hybrid implementations will be vital in providing a transition, especially for legacy installations, to the efficiency- and emissions-conscious cooling that current and future facilities need. Though immersion techniques offer the greatest gains, hybrid cooling offers an improvement over air alone, with OpEx, performance, and management advantages.

Even as the data infrastructure industry institutes initiatives to better understand, manage, and report sustainability efforts, such as the Climate Neutral Data Centre Pact, the Open Compute Project, and 24/7 Carbon-free Energy Compact, more can and must be done to make every aspect of implementation and operation sustainable.

Developments in liquid cooling technologies are a significant step forward that will enable operators and service providers to meet demand while ensuring that sustainability obligations and goals can be met. Initially, hybrid solutions will help legacy operators make the transition to more efficient and effective systems, while more advanced technologies will ensure new facilities are more efficient, even as capacity is built out to meet rising demand.

By working collaboratively with the broad spectrum of vendors and service providers, cooling technology providers can ensure that requirements are met, enabling the digital economy to develop to the benefit of all while contributing toward a livable future.

[i] “Desktop Study Report - Liquid and Air-Cooling Compared,” Cundall, March 2021