Second only to humanity's perseverance and talent for innovation, data has become the world's most precious resource, interwoven with virtually every aspect of our professional and personal lives.

In turn, as the use of digital services soared across global markets during the first and second waves of the pandemic, governments the world over rightly identified data centers as critical infrastructure: as essential to society as the energy, food and agriculture, financial services, and water sectors.

The industry has seen historic surges in eCommerce, streaming and social media, video conferencing and cloud collaboration platforms, telehealth, and gaming. And with AI and big data analytics becoming mainstream across the cloud and enterprise — not to mention 5G, the IoT, augmented and virtual reality, and driverless vehicle technologies right around the corner — the data center business is poised to grow at an even stronger and steadier pace in the near and long term.

Data centers house the IT devices and equipment that deliver these digital services, all of it powered by electricity. Nearly all of the electricity consumed by this IT equipment is ultimately converted into heat, which must then be removed by cooling equipment that itself runs on electricity.
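
As a back-of-the-envelope illustration, the sketch below (all numbers hypothetical) shows how virtually every watt delivered to IT equipment becomes a watt of heat to remove, and how the electricity spent on cooling surfaces as overhead in a facility's power usage effectiveness (PUE):

```python
# Minimal sketch with hypothetical numbers: virtually all IT power becomes
# heat, and the electricity spent removing that heat shows up as cooling
# overhead in the facility's power usage effectiveness (PUE).

it_load_kw = 1_000.0            # power drawn by servers, storage, and network gear
heat_to_remove_kw = it_load_kw  # ~100% of IT power is dissipated as heat

cooling_power_kw = 400.0        # assumed draw of chillers, fans, and pumps
other_overhead_kw = 100.0       # assumed lighting, UPS losses, etc.

total_facility_kw = it_load_kw + cooling_power_kw + other_overhead_kw
pue = total_facility_kw / it_load_kw

print(f"Heat to remove: {heat_to_remove_kw:.0f} kW")
print(f"PUE: {pue:.2f}")        # 1.50 here; everything above 1.0 is overhead
```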

Although reporting is still disparate and varies widely, data centers are estimated to consume approximately 1% of the world's electricity. And while the latest research shows promising progress thanks to efficiency improvements, legacy data centers remain a significant source of greenhouse gas emissions.

New approaches to the design, construction, deployment, and operation of data centers are required to keep pace with rising demand, and for the sake of energy efficiency and sustainability, that extends to how they're cooled.

Traditional Cooling Systems

To maintain their facilities at an ideal temperature and to battle ever-growing densities, many data center owner-operators use direct liquid cooling techniques, whereby a coolant fluid is delivered to the processor and other electronics for heat transfer. While direct liquid cooling has been used for many years, the technology often consumes large amounts of clean water and requires extensive fluid disposal as well as ongoing water treatment. So, especially in regions where water is increasingly scarce, direct liquid cooling is perhaps not the best option.

Free cooling, the use of naturally cool outside air instead of mechanical refrigeration, can dramatically reduce the power (and hence, the cost) associated with cooling data centers, but traditional free cooling systems aren't practical for most facilities. First, in locations that get too hot and/or too humid, traditional free cooling simply doesn't work, or it pushes cold-aisle temperatures above levels most data center tenants will accept. Second, without conditioning, which is expensive and reduces the efficiency of the system, outside air can introduce contaminants or make the facility too humid or too dry, any of which can cause an outage. And third, traditional free cooling systems typically can't meet the cooling needs of high-density computing environments on their own.
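
To make the first two constraints concrete, here is a simplified sketch of the go/no-go decision an airside free cooling (economizer) controller has to make. The thresholds are illustrative assumptions, not prescriptive setpoints; real designs follow published envelopes such as ASHRAE's and add filtration and mixing logic:

```python
# Simplified sketch of an airside-economizer decision. The thresholds are
# illustrative assumptions, not prescriptive setpoints.

MAX_DRY_BULB_C = 24.0   # assumed ceiling for supply (cold-aisle) air
MIN_DRY_BULB_C = 10.0   # below this, outside air is mixed with return air
MAX_DEW_POINT_C = 15.0  # humidity ceiling to avoid condensation risk
MIN_DEW_POINT_C = -9.0  # floor to avoid over-drying and static buildup

def cooling_mode(outside_temp_c: float, outside_dew_point_c: float) -> str:
    """Pick a cooling mode from outside-air conditions."""
    humidity_ok = MIN_DEW_POINT_C <= outside_dew_point_c <= MAX_DEW_POINT_C
    if not humidity_ok:
        return "mechanical"          # untreated air would be too humid or dry
    if outside_temp_c > MAX_DRY_BULB_C:
        return "mechanical"          # too hot for free cooling alone
    if outside_temp_c < MIN_DRY_BULB_C:
        return "free-cooling-mixed"  # temper cold outside air with return air
    return "free-cooling"

print(cooling_mode(18.0, 8.0))   # free-cooling
print(cooling_mode(32.0, 21.0))  # mechanical
```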

The result is that very few data center providers leverage free cooling; most still use traditional chiller plant/forced-air technology, and their customers trade energy efficiency and lower costs for location flexibility and reliability. Air-cooled data centers also consume considerable energy powering the IT system fans that move air. Additionally, there is the issue of stranded power capacity: capacity held in reserve for peak cooling demand, when the fans, pumps, and compressors are working at their highest levels. At some data centers, this reserve can equate to a significant portion of the facility's total power envelope, power that could otherwise be allocated to IT.
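
A quick, hypothetical calculation shows how much of a facility's power envelope that reserve can consume:

```python
# Hypothetical numbers illustrating stranded capacity: the gap between peak
# and typical cooling draw must be held in reserve, so it can never be
# provisioned as IT load.

facility_envelope_kw = 10_000.0  # total utility power available
typical_cooling_kw = 1_500.0     # fans, pumps, compressors on an average day
peak_cooling_kw = 3_000.0        # worst-case hot day at full speed

reserve_kw = peak_cooling_kw - typical_cooling_kw
stranded_fraction = reserve_kw / facility_envelope_kw

print(f"Stranded for peak cooling: {reserve_kw:.0f} kW "
      f"({stranded_fraction:.0%} of the facility envelope)")
# 1500 kW (15%) that could otherwise go to revenue-generating IT load
```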

Especially in light of environmental, social, and governance (ESG) objectives, a cooling system that delivers efficiency at any load, in any climate, and in any location has never been more critical.

Higher Density Without Stranded Capacity

A new approach was born out of the understanding that the data center cooling problem is actually a heat removal problem. So instead of blowing cold air into the data center, this new cooling solution removes the heat. Removing heat at its source takes far less energy than chilling outside air and blowing it into the data center to mix with hot air.

A close-coupled heat removal system allows customers to run racks at different densities in the same pod without worrying about hot spots. It also allows customers to run higher densities without having to spread their racks apart, which strands capacity. It works by capturing and containing the heated exhaust air from the equipment and channeling it through an efficient primary unit, where the heat is absorbed and removed at the source. The heat is then transported to a second unit, where it is rejected into the atmosphere. This not only reduces water flow requirements but also cuts piping costs: less water means less piping and pumping.
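
The water-flow claim follows from the sensible-heat relation Q = m × cp × ΔT (heat load = mass flow × specific heat × temperature rise): for a fixed heat load, a loop that runs at a wider temperature difference needs proportionally less flow. The sketch below uses illustrative numbers, with the ΔT values assumed purely for comparison:

```python
# The water-flow savings follow from the sensible-heat equation
# Q = m_dot * c_p * delta_T: for a fixed heat load Q, doubling the
# temperature rise across the loop halves the required mass flow.
# Numbers below are illustrative assumptions.

C_P_WATER = 4.186  # kJ/(kg*K), specific heat of water

def required_flow_kg_s(heat_load_kw: float, delta_t_k: float) -> float:
    """Mass flow needed to carry heat_load_kw at a given temperature rise."""
    return heat_load_kw / (C_P_WATER * delta_t_k)

heat_load_kw = 50.0  # one high-density rack

for delta_t in (6.0, 12.0):  # conventional vs. close-coupled loop, assumed
    flow = required_flow_kg_s(heat_load_kw, delta_t)
    print(f"dT = {delta_t:>4.1f} K -> {flow:.2f} kg/s (~{flow * 60:.0f} L/min)")
# Higher dT means less flow, hence smaller pipes and less pump power.
```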

Heat is absorbed and transported by the first unit, but it is the second unit that rejects the heat into the atmosphere. The secondary unit is an air-cooled, adiabatic-assisted cooling system comprising a dry fluid cooler with an indirect evaporative cooling mode and an integrated chiller heat-rejection system. Because it is a closed system decoupled from the air handler, it introduces no external environmental risks or temperature and humidity fluctuations.
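
As a rough illustration of how such a staged unit might operate, the sketch below steps from dry operation to adiabatic assist to chiller trim as ambient temperature rises. The stages follow the description above, but the thresholds are assumptions for illustration only:

```python
# Sketch of staged heat rejection for the secondary unit described above.
# Stages and thresholds are assumptions for illustration: dry operation when
# ambient is cool, evaporative assist when warm, chiller only as a trim stage.

DRY_MODE_MAX_C = 20.0        # assumed: dry fluid cooler alone suffices
ADIABATIC_MODE_MAX_C = 32.0  # assumed: evaporative assist extends dry range

def heat_rejection_stage(ambient_c: float) -> str:
    if ambient_c <= DRY_MODE_MAX_C:
        return "dry"               # fans only, no water consumed
    if ambient_c <= ADIABATIC_MODE_MAX_C:
        return "adiabatic-assist"  # evaporation pre-cools air entering the coil
    return "chiller-trim"          # mechanical cooling covers the remainder

for t in (12.0, 26.0, 38.0):
    print(f"{t:>5.1f} C ambient -> {heat_rejection_stage(t)}")
```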

Responsive to Workloads

This dynamic approach to cooling technology supports high, mixed, and variable power densities, enabling companies to evolve without stranding capacity. It can scale vertically or horizontally, supporting 1 kW to 50 kW per rack within the same footprint, with fewer points of failure. Workload densities can scale in place without reconfiguring existing infrastructure, dispersing equipment, or making large-scale investments to augment floors for increasing heat loads.

Because IT loads vary, dynamic cooling systems must respond quickly and easily to optimize in real time. The units are close-coupled with the racks, so the heat-removal system instantly ramps up and down based on server demand, with variable-speed fans and pumps responding in real time to changing loads.
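
The energy payoff of variable-speed operation comes from the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales with roughly the cube of speed. The idealized sketch below shows why cooling that tracks server demand saves so much energy at partial load:

```python
# The payoff of variable-speed fans follows from the fan affinity laws:
# airflow scales linearly with speed, but power scales with its cube.
# Idealized model; real fans and drives deviate somewhat from the cube law.

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-rated fan power at a given fraction of full speed."""
    return speed_fraction ** 3

for speed in (1.0, 0.8, 0.6, 0.4):
    print(f"{speed:.0%} speed -> {fan_power_fraction(speed):.0%} power")
# 60% speed draws only ~22% of full power, so ramping fans down with
# server demand yields outsized energy savings.
```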

From a cooling and electrical perspective, it’s become necessary to redesign traditionally static systems to be dynamic — able to handle varying densities and power draws. An essential element of adaptive data center models is cooling technology that captures and removes heat at its source to provide hyper-scalable and ultra-efficient environments that dynamically adapt to IT loads.

Interestingly enough, the fundamental scientific approach behind conventional cooling systems was developed in the early 1900s. The original challenge was to solve a printing problem for a Brooklyn, New York, lithographing and publishing company that needed to control the humidity in its plant.

Even more interesting is that, until now, the infrastructure designed to support IT demands hasn't changed substantially from those cooling systems designed more than 100 years ago. With the data center industry's recognition as critical infrastructure, and the responsibility that comes with that designation, it's past time to adopt a significant technological advance in cooling.