The duties and responsibilities of data center engineers and facility managers are certainly not getting easier as advancing chip capabilities and applications, AI and machine learning, and other technologies place ever-greater demands on them.

The role of data centers can hardly be overstated. The global economy depends on them as they drive innovation, job growth, and a myriad of other benefits that spring from the computers that store and process data. 

The heat is on, and not only for the people whose job it is to keep data centers running efficiently, effectively, and productively. New technologies and rising demand are generating considerable heat as information technology equipment (ITE) power density climbs past 5 kW per rack, creating tremendous challenges for data centers.

Not only is the industry driving data centers to cope with warmer temperatures, but high-performance computing (HPC) has been pushing the boundaries since multicore chips came into widespread use around 2010 to 2012. In addition, newer, denser equipment restricts airflow.

Data centers are the keystones of the big data boom. An estimated 2.5 quintillion bytes of data are created every day, and the pace is only accelerating with the rapid growth of computers, smartphones, and IoT devices. As power demands increase, air alone is no longer enough to keep data centers running at peak performance.
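
To make the airflow constraint concrete, here is a minimal back-of-the-envelope sketch. The rack powers, the 12 K air temperature rise, and the air properties below are illustrative assumptions, not figures from this article:

    # Rough estimate of the airflow a rack needs: heat removed Q = m_dot * cp * dT,
    # so the required mass flow is m_dot = Q / (cp * dT). Values are illustrative.
    CP_AIR = 1005.0   # specific heat of air, J/(kg*K)
    RHO_AIR = 1.2     # air density, kg/m^3 (roughly sea level, room temperature)
    DELTA_T = 12.0    # assumed air temperature rise across the rack, K

    def airflow_cfm(rack_kw: float) -> float:
        """Volumetric airflow (in CFM) needed to carry away rack_kw of heat."""
        mass_flow = rack_kw * 1000.0 / (CP_AIR * DELTA_T)  # kg/s
        m3_per_s = mass_flow / RHO_AIR
        return m3_per_s * 2118.88                          # m^3/s -> CFM

    for kw in (5, 15, 30):
        print(f"{kw:>2} kW rack: ~{airflow_cfm(kw):,.0f} CFM of air")

Under those assumptions, a 5-kW rack needs roughly 700 CFM, while a 30-kW rack needs more than 4,000 CFM; the same 30 kW carried on water with a 10 K rise needs only about 0.7 L/s, which is why liquid becomes attractive as density climbs.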

As chips generate ever-higher levels of heat and data centers house increasing numbers of cores, liquid cooling is quickly gaining popularity, outgrowing its reputation as a niche solution for specialized computing.

Those who formerly shied away from it now appreciate the many advantages of liquid cooling.

Overcoming Hydrophobia

Beating the heat is paramount in the data center industry, but many operators suffer from hydrophobia, a fear of introducing water into the white space. The facts, however, don’t support the fear.

Indeed, many engineers and facilities managers are coming to see liquid cooling as a valuable enabler of technology. It offers a way forward for operators who thought they had hit the limit of how much power and equipment they could pack into their data centers.

Large information technology companies and chipmakers, such as Intel, are pushing the move to liquid cooling — it’s no longer a case of just a few companies dipping their toes in the water.

Data center operators considering the transition from air to liquid cooling can now count on increased reliability. With nearly half of a data center’s energy use dedicated to cooling, cost is an enormous factor. Fortunately, the costs associated with liquid cooling are coming down, along with the risk of leaks.

Old problems and perceptions, such as the risk of having water around electronics, leaky couplings, and pressure drops, are fast fading as the engineering, manufacturing, and installation of liquid cooling systems and components advance, according to Matt Archibald, director of technical architecture for data center and networking solutions at nVent. Today, manufacturers are developing and implementing liquid cooling systems that meet and exceed data center needs and position operators to add the computing power that fuels growth and profitability.

Types of Liquid Cooling and Coolants

Just as there are a variety of ways to cool an internal combustion engine, there are several effective liquid cooling approaches for data centers. Data center managers should be familiar with the various systems and coolants in order to decide which ones best fit their equipment cooling needs.

Liquid-cooled heat sinks — Heat exchangers that run chilled water to and from the ITE.

Close-coupled cooling systems and computer room air handler (CRAH) systems — Relatively new to data centers, these are designed to perform the heat transfer close to the equipment rack. Within this category are two configurations, open-loop and closed-loop, which route chilled water or refrigerant through coils associated with the equipment racks. CRAH units use fans, coils, and chilled water.

Negative pressure — This design directs liquid to heat-generating chips at a relatively low flow rate under negative pressure, so that in the event of a line break the liquid is drawn away from the electronics instead of flooding them.

Positive pressure — This design pushes fluids at greater than atmospheric pressure, so a leak can cause real problems. These systems require trusted components on reliable equipment, particularly industry-standard, universal, dripless quick disconnects that have undergone rigorous testing.

Water and propylene glycol — In general, two kinds of liquid are passed through coils and tubing to absorb heat from the ITE: water and propylene glycol, the latter typically blended with water. Some systems are also designed to use refrigerants as the heat-absorption medium. (A rough comparison of how much heat these media carry follows this list.)

Nonconductive dielectric fluids and immersion tubs — A relatively new approach that immerses IT hardware in tubs of nonconductive dielectric fluid to dissipate the heat.
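
As a rough illustration of why water-based coolants dominate, the sketch below compares how much heat one liter of each medium carries per degree of temperature rise. The property values are approximate room-temperature figures, and the 30% propylene glycol blend is just one common example, not a recommendation from this article:

    # Approximate heat carried per liter per kelvin for common cooling media.
    # Property values (density, specific heat) are rough textbook figures.
    media = {
        #                               density kg/m^3, specific heat J/(kg*K)
        "air":                          (1.2,    1005.0),
        "water":                        (998.0,  4182.0),
        "30% propylene glycol / water": (1020.0, 3900.0),
    }

    for name, (rho, cp) in media.items():
        joules_per_liter_per_kelvin = rho * cp / 1000.0   # 1 liter = 0.001 m^3
        print(f"{name:<30} ~{joules_per_liter_per_kelvin:,.0f} J per liter per K")

On a per-liter basis, water carries a few thousand times more heat than air for the same temperature rise, which is the underlying reason liquid cooling scales with power density; the glycol blend gives up a little capacity in exchange for freeze protection.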

Reliable Connectors Are Critical

Liquid cooling allows specific data center IT system components to be cooled more effectively with water or other liquids than with air. A simple example is cooling the hot side of a server rack with liquid circulated through pumps and hoses.

Data centers have, by necessity, been extremely diligent about keeping water away from electronics and have therefore resisted implementing liquid cooling systems. But today’s liquid cooling systems deliver real advantages, such as enabling data centers to become more efficient, maintaining and even adding power density without raising costs.

Scheduled maintenance is important in reducing risk, as is the reliability of connectors, hoses, and other components. Liquid cooling has become a go-to solution as engineers learn to minimize the risk of escaping fluid, and reliable connectors are central to that effort. Couplings should last as long as the cooling system, and push-lock hoses compatible with a variety of fluids greatly simplify assembly and maintenance. Effective, efficient connectors support high flow rates, low pressure drop for maximum energy efficiency, resistance to vibration and rotation, and no leakage during connection and disconnection.
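
To see why pressure drop translates directly into energy use, here is a minimal sketch of the pump power consumed by a single cooling loop. The flow rate, pressure drops, and pump efficiency are illustrative assumptions, not figures from this article or from any particular product:

    # Hydraulic pump power: P = Q * dP / efficiency.
    # All inputs are illustrative assumptions for one cooling loop.
    flow_lpm = 40.0         # coolant flow, liters per minute
    pump_efficiency = 0.6   # assumed overall pump efficiency

    def pump_watts(flow_lpm: float, dp_kpa: float, eff: float) -> float:
        """Electrical power (W) to drive the given flow against a pressure drop."""
        q_m3_per_s = flow_lpm / 1000.0 / 60.0        # L/min -> m^3/s
        return q_m3_per_s * dp_kpa * 1000.0 / eff    # Pa * m^3/s = W

    for dp_kpa in (50.0, 150.0):   # low-restriction vs. restrictive loop components
        print(f"{dp_kpa:>5.0f} kPa drop: ~{pump_watts(flow_lpm, dp_kpa, pump_efficiency):.0f} W")

Multiplied across the hundreds of loops in a large facility, the difference between a low-restriction coupling and a restrictive one becomes a meaningful line item on the power bill.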

Standardization

Open computing, led by Intel and others, is resulting in more standardization, which, in turn, drives down costs and simplifies the infrastructure product portfolio, Archibald said.

“An infrastructure manufacturer doesn’t need 10 different manifolds, and we can get by with fewer varieties in pieces and parts,” he said. “We can standardize on one manifold with four universally accepted connectors. This makes it simpler for manufacturers and simpler for data centers, as costs are reduced even for the customers of data centers.”

Reliability is enhanced because everybody can standardize around a good common solution. Parts, therefore, become more interchangeable.

Archibald noted that connectors are critical in liquid cooling systems and that universal, dripless quick disconnects are becoming the standard. Because performance matters, he said, it is wise to select a connector with an advanced internal design and robust functionality that incorporates nonspill valves. A flat-sealing design prevents fluid loss in the vicinity of sensitive electronics and electrical connections.

The result is what everybody in data center management strives to achieve: very efficient cooling through hardware that ensures no spills during connection and disconnection.

Many engineers and facility managers have come to recognize that connectors and related parts are areas where data center operators must not compromise on quality just to save a few dollars. The industry is gaining an appreciation for the importance of investing smartly in hardware that effectively reduces risk while maximizing efficient thermal management.

The Smart Choice

HPC is pushing the boundaries for data centers, which are seeing more and more chips that require water cooling. The advent of multicore chips in 2010-2012 set off the ongoing chase for ever-greater efficiency in thermal management, and HPC drove the industry to find ways to beat the heat beyond what airflow alone could achieve.

Is liquid cooling the best choice going forward? The answer for many is yes, but, for others, it depends. Trends point to certain applications and situations where liquid cooling clearly is advisable. They include:

  • Data centers that are unable to expand due to air-cooling constraints.
  • Banking/financial centers and the insurance industry, as they use more high-performance chips.
  • Data centers facing the choice of building a new facility or expanding within their existing footprints.

CapEx and OpEx Trends

Infrastructure components, such as cooling distribution units (CDUs), manifolds, and connectors, are relatively inexpensive, yet misperceptions about their cost versus value linger. Weighed against the cost of the IT solutions they support, infrastructure accounts for less than 1% of the total spend.

A 2017 report for the U.S. Chamber of Commerce Technology Engagement Center by its senior vice president, Tim Day, and Nam D. Pham, managing partner at ndp|consulting, noted that the largest expenditure to operate a data center is power. Half of that expenditure goes to running ITE, while the balance goes to cooling and power infrastructure. The report stated that a typical data center might spend $215.5 million in initial capital expenditures and then $18.5 million in annual operating expenditures, with power representing 40% of the operating total.
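
Putting the report’s figures together makes the stakes clearer. The split of the power bill between ITE and cooling/power infrastructure below follows the half-and-half description above; everything else is simple arithmetic on the cited numbers:

    # Back-of-the-envelope breakdown using the figures cited from the 2017 report.
    capex = 215_500_000           # initial capital expenditure, USD
    annual_opex = 18_500_000      # annual operating expenditure, USD
    power_share = 0.40            # power as a share of annual operating expenditure

    power_cost = annual_opex * power_share
    ite_power = power_cost * 0.5                 # half of the power bill runs the ITE
    cooling_and_power_infra = power_cost * 0.5   # the balance covers cooling and power infrastructure

    print(f"Initial capital expenditure:  ${capex / 1e6:.1f} million")
    print(f"Annual power bill:            ${power_cost / 1e6:.1f} million")
    print(f"  Running ITE:                ${ite_power / 1e6:.1f} million")
    print(f"  Cooling + power infra:      ${cooling_and_power_infra / 1e6:.1f} million")

On those numbers, the power bill is roughly $7.4 million a year, with about $3.7 million of it tied to cooling and power infrastructure, and that is the pool of spending more efficient cooling can shrink.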

As open computing, led by Intel and others, drives more standardization in universal quick disconnects and other components, many expect continued simplification of product portfolios and an accompanying reduction in costs. If CDU manufacturers don’t need to stock 10 different manifolds and can settle on one model with four universally accepted connectors, it simplifies their lives and drives down costs for their data center customers.

Ultimately, spending a little more on liquid cooling is smart, because it is what keeps the IT equipment running. The alternative, cutting corners to save a few dollars here and there, exposes data center operators to poor workmanship and ineffective, sometimes even risky, cooling solutions.