For most data centers, cooling used to be straightforward: a computer room air conditioning (CRAC) unit kept the server room so cold that staff often wore sweaters to work no matter the outside temperature. But the nature of data centers has changed significantly in the last decade. As rack densities continue to climb, more applications require high-density cooling, and the thermal design power (TDP) of chips has risen almost 50%, drawing more power and generating more heat.

In 2016, data centers worldwide spent $8 billion on cooling, and, if not kept in check, that figure is expected to reach $20 billion by 2024. The driver is the sheer volume of data that needs somewhere to be stored: an estimated 175 zettabytes by 2025. So what is the best way to cool the data centers storing this mission-critical information?

Air Cooling 

The two most common air-cooling methods are CRACs and computer room air handlers (CRAHs). In these systems, air is used to cool the entire room or individual racks and rows. A CRAC works like a home air conditioner: air flows across cooled refrigerant and is then blown into the room. A CRAH uses chilled water instead, which requires a chiller.
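Air-cooled capacity comes down to the sensible-heat equation Q = ṁ·c_p·ΔT. The sketch below estimates the airflow a CRAC or CRAH must deliver for a given rack load; the air properties are standard room-temperature values, while the 10 kW load and 12 K supply/return split are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope airflow sizing for air cooling: Q = m_dot * c_p * dT.
# Air properties at roughly 25 C; the load and delta-T below are assumptions.
AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K), specific heat of air at constant pressure

M3S_TO_CFM = 2118.88  # cubic feet per minute in one m^3/s


def required_airflow_m3s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry heat_load_w at a
    delta_t_k rise between supply and return air."""
    return heat_load_w / (AIR_DENSITY * AIR_CP * delta_t_k)


if __name__ == "__main__":
    load_w = 10_000.0  # hypothetical 10 kW rack
    dt_k = 12.0        # assumed supply/return temperature split
    flow = required_airflow_m3s(load_w, dt_k)
    print(f"{flow:.2f} m^3/s (~{flow * M3S_TO_CFM:.0f} CFM)")
```

Doubling the rack load at the same ΔT doubles the required airflow, which is why high-density racks quickly outrun what room-level air handling can move.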

Air-cooled systems:

•    Require a specific layout, including raised floors and spaced racks.
•    Have a lot of moving parts, which use space and require maintenance, including compressors or chillers, air handlers, humidity controls, air filters, and backup generators.
•    Require aisle containment, which takes up space that could be used for more equipment. 
•    Are prone to developing hot spots, which threaten sensitive IT equipment.
•    Are not the most efficient heat removal method.
•    Require access to significant power, making them a poor choice for remote or edge data centers.
•    Expose IT equipment to airborne contaminants and to the adverse effects of the air itself, including corrosion and oxidation.
•    Can damage IT equipment as a result of server-fan vibrations.

Liquid Cooling 

The two most common liquid cooling methods are single-phase immersion and liquid-to-chip. Single-phase immersion submerges servers in a dielectric fluid, which absorbs their heat and circulates through a cooling distribution unit (CDU) that rejects the heat and returns cooled fluid to the compartment. Liquid-to-chip cooling, also called direct-to-chip or cold plate cooling, pipes coolant across a cold plate inside the server and uses a chilled water loop to carry the heat outside.

Single-phase liquid immersion systems:

•    Have just three moving parts: a coolant pump, water pump, and cooling tower/dry cooling fan.
•    Can be completely enclosed or sealed within modular structures, since no airflow is required.
•    Are very efficient at removing heat.
•    Reduce data center power usage and cooling costs.
•    Enable reallocation of power to critical IT load within the same power envelope.
•    Enable easy maintenance for IT equipment — lift the lid, remove the server and set it on integrated service rails.
•    Can cool up to 100 kW per rack (theoretically, up to 200 kW when used with a chilled water system).
•    Increase mean time between failures (MTBF).
•    Extend the life of hardware by keeping temperatures consistent and protecting it from outside air.

Liquid Versus Air

While air cooling is tried and tested and has been around for decades, it is becoming a less desirable choice in today's computing environment. Some air-cooled data centers can cool upwards of 30 to 35 kW per rack, but in practice air cooling becomes very inefficient above 15 kW per rack.

Single-phase liquid cooling works on the principle that liquid conducts heat better than air, removing heat from the servers so they keep operating efficiently and safely. It can support densities up to 200 kW per rack and is a low-maintenance solution for data centers.
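The advantage liquid holds over air can be made concrete with the same sensible-heat equation, Q = ρ·V̇·c_p·ΔT. The sketch below compares the volumetric flow of air versus water needed to carry an identical rack load; the fluid properties are approximate room-temperature textbook values, and the 100 kW load and 10 K temperature rise are illustrative assumptions.

```python
# Compare coolant flow needed to remove the same heat load with air vs. water:
# Q = rho * vdot * c_p * dT  =>  vdot = Q / (rho * c_p * dT).
# Properties are approximate room-temperature values; the 100 kW load and
# 10 K temperature rise are illustrative assumptions.
FLUIDS = {
    "air":   {"rho": 1.2,   "cp": 1005.0},  # kg/m^3, J/(kg*K)
    "water": {"rho": 998.0, "cp": 4182.0},
}


def flow_m3s(fluid: str, heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric flow (m^3/s) of the given fluid needed to absorb
    heat_load_w at a delta_t_k temperature rise."""
    p = FLUIDS[fluid]
    return heat_load_w / (p["rho"] * p["cp"] * delta_t_k)


if __name__ == "__main__":
    load_w, dt_k = 100_000.0, 10.0
    air = flow_m3s("air", load_w, dt_k)
    water = flow_m3s("water", load_w, dt_k)
    print(f"air:   {air:.2f} m^3/s")
    print(f"water: {water * 1000:.2f} L/s")
    print(f"water carries the same heat in ~{air / water:.0f}x less volume")
```

Water's volumetric heat capacity is roughly 3,500 times that of air, which is why a small CDU pump loop can do the work of large air handlers at high rack densities.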