Data center thermal management technologies continue to evolve to serve the new demands of both large and small (edge) data centers. Colo, cloud, enterprise, and edge facilities are utilizing a variety of cooling options, including chilled water and pumped refrigerant systems as well as aisle-, row-, and rack-level air or liquid cooling. Depending on the density of applications within a data center, more than one method of cooling may be in play. Regardless of the method selected, advanced thermal controls have become the critical “glue” that delivers just the right temperature and airflow to racks to ensure uptime, maximize efficiency, and reduce operating costs.

The increasing use of high-density applications such as artificial intelligence, machine learning, high-performance computing, and data analytics is also creating new cooling demands for existing data centers. This is leading to hybrid cooling solutions, where more than one cooling technology is applied to provide rack-specific or even chip-level cooling.


Delivering solutions to meet trends

Today’s cooling technologies must respond to several data center trends.

  • Data centers are getting larger. It’s not unusual to see projects more than 10 MW in size, which was unheard of just a few years ago. Thermal management must meet the goals of maximizing uptime, reducing costs, and increasing both efficiency and speed to market.

  • Conversely, some data centers are getting smaller. At the other end of the spectrum, the number of edge sites is exploding. The edge is becoming increasingly mission critical, requiring thermal management solutions that ensure availability while also delivering efficiency benefits that cascade across large distributed networks. Because these are often unmanned, or “lights out,” facilities, remote thermal monitoring and control is a key factor in uptime and maintenance.

  • They’re also warmer than ever before. Chances are, you’ll no longer need a sweater or jacket when walking through a data center. ASHRAE’s 2016 thermal guidelines recommend an operating temperature range of 18°C (64.4°F) to 27°C (80.6°F), warmer than past norms. That opens up cooling options and potentially reduces both operational expense and capital expenditures. But it’s imperative to remember that reliability and availability remain at the fore.

  • In some places, data centers are building up instead of out. Real estate costs in high-density population areas such as the San Francisco Bay region, London, Tokyo, Singapore, and Hong Kong have dictated adding height rather than widening the footprint. Building height has a direct bearing on the cooling technology specified. Chilled water solutions lend themselves well to multistory buildings (three stories or more). That means that while chilled water may not be a specifier’s first choice, it may be required because many current pumped refrigerant-based cooling solutions face building height limitations.

If physical conditions allow its application, current pumped refrigerant technology adds value by providing an annual power usage effectiveness (PUE) of 1.05 to 1.20 and by saving millions of gallons of water per MW. Recent control advances can increase annual energy savings for pumped refrigerant units by up to 50%, depending on the application. In low to moderate ambient conditions, these systems run on refrigerant pumps rather than compressors, entering economization mode whenever possible and maximizing economization based on outdoor temperature and load rather than a fixed switchover point. Capacity can be increased by adding units, making this an extremely scalable solution.
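
As a rough, hypothetical sketch of that control behavior, the Python fragment below picks an operating mode from outdoor temperature and IT load and computes PUE from its definition. The switchover thresholds and the overhead figure are illustrative assumptions, not values taken from any specific product.

```python
def cooling_mode(outdoor_temp_c: float, it_load_fraction: float) -> str:
    """Choose an operating mode for a pumped refrigerant economizer unit.

    Thresholds are illustrative assumptions; real units derive switchover
    points from load, refrigerant properties, and site conditions.
    """
    # Low ambient and partial load: refrigerant pumps alone reject the heat.
    if outdoor_temp_c <= 10 and it_load_fraction <= 0.8:
        return "pumped refrigerant (full economization)"
    # Moderate ambient: pumps handle part of the load with compressor assist.
    if outdoor_temp_c <= 20:
        return "mixed mode (pumps plus partial compressor)"
    # High ambient: compressors carry the full load.
    return "compressor (DX) mode"


def pue(it_power_kw: float, facility_overhead_kw: float) -> float:
    """Power usage effectiveness = total facility power / IT power."""
    return (it_power_kw + facility_overhead_kw) / it_power_kw


if __name__ == "__main__":
    print(cooling_mode(outdoor_temp_c=8.0, it_load_fraction=0.6))
    # A hypothetical 1,000 kW IT load with 120 kW of cooling and other
    # overhead yields a PUE of 1.12, inside the 1.05 to 1.20 range above.
    print(round(pue(1000.0, 120.0), 2))
```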

Where building height restrictions rule out pumped refrigerant, chilled water cooling offers the flexibility to serve CRAC units as well as supplemental rear door cooling. However, water-based solutions bring additional costs, such as plumbing/piping, centralized chiller plants, and cooling towers, plus ongoing system maintenance to prevent leaks and corrosion.

Using chilled water may also add to the cost of cooling where water is relatively scarce or expensive, again depending on location. Even in regions where water is plentiful and cheap, the millions of gallons per MW consumed over time can conflict with a facility’s commitment to sustainability. That impact may be somewhat reduced through the application of a modern control system.
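
To give that water figure a rough sense of scale, the short calculation below converts water usage effectiveness (WUE, liters of water per kWh of IT energy) into annual gallons per MW of IT load. The WUE value used is a hypothetical assumption chosen only for illustration; actual figures vary widely with climate and technology.

```python
LITERS_PER_GALLON = 3.785

def annual_gallons_per_mw(wue_l_per_kwh: float, hours_per_year: int = 8760) -> float:
    """Annual water use (gallons) for 1 MW of IT load at a given WUE."""
    it_kwh_per_year = 1_000 * hours_per_year  # 1 MW of IT load, in kWh per year
    return wue_l_per_kwh * it_kwh_per_year / LITERS_PER_GALLON

# Assuming a hypothetical WUE of 1.8 L/kWh for an evaporative plant, one
# megawatt of IT load consumes roughly 4.2 million gallons of water a year.
print(f"{annual_gallons_per_mw(1.8):,.0f} gallons per MW per year")
```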

  • Flexibility is key in large data centers, with non-raised floor environments often employed to reduce construction costs and speed data center deployment. The trade-off is that removing the raised floor also removes the ability to change airflow by moving floor tiles. Pumped refrigerant, chilled water, or evaporative technologies can provide the required cooling levels, and several other flexible methods help achieve them, including:

  1. Aisle containment solutions that improve efficiency by keeping hot and cold air from mixing.

  2. Rack chimneys that mount to the rear of racks to capture heated exhaust and channel it to a ceiling plenum, from which it returns to a CRAC unit for recirculation into the room. The downside is that the chimney ducting fixes the racks in place, potentially limiting future flexibility.

  • Medium-sized or smaller data centers may employ in-row thermal management units that reside alongside equipment racks, matching their small footprint while providing cool air to the front of the racks.

Regardless of the type of cooling chosen, applications can benefit from modern control systems that improve overall system performance and stability by automatically adapting to changing conditions in the data center. Controls can provide visibility into operating conditions 24/7, allowing authorized personnel to monitor remotely and take action using a smartphone app.
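
As a minimal sketch of the kind of automatic adaptation described above (not any vendor’s actual algorithm), the fragment below implements a simple proportional fan-speed loop driven by rack inlet temperature. The setpoint, gain, and speed limits are assumed values for illustration; commercial controllers use far more sophisticated, multi-sensor logic.

```python
def fan_speed_percent(inlet_temp_c: float,
                      setpoint_c: float = 24.0,
                      gain_pct_per_deg: float = 12.0,
                      min_speed: float = 30.0,
                      max_speed: float = 100.0) -> float:
    """Proportional-only control: raise fan speed as the rack inlet
    temperature climbs above the setpoint, within fixed limits.

    All parameters are illustrative assumptions, not vendor defaults.
    """
    error = inlet_temp_c - setpoint_c
    speed = min_speed + gain_pct_per_deg * max(error, 0.0)
    return min(max(speed, min_speed), max_speed)

# A 27°C inlet reading drives the fans to 66% of full speed; at or below
# the 24°C setpoint they idle at the 30% floor.
print(fan_speed_percent(27.0))  # 66.0
print(fan_speed_percent(23.0))  # 30.0
```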


High density requires creativity

Industries and operations such as finance, research, academia, colocation, and petrochemicals require high-density computing to support applications such as facial recognition, advanced data analytics, artificial intelligence, and machine learning. That requires cooling at the rack and chip level, and cooling technologies are in development to answer these needs.

These high-performance computing deployments involve 30- to 60-kW racks housing ultra-high-performance, high-powered servers. Such racks are typically placed in pod environments within large cloud and colocation facilities that may already be cooled overall by chilled water, pumped refrigerant, or evaporative technologies. Air-cooled options include containment, in-row cooling, or rear door heat exchanger modules that replace the racks’ rear doors and may use supplemental fans to draw air through the coils.

It is unlikely an entire facility would require rack- or chip-level cooling. But high-density direct liquid cooling, a radical departure from traditional thermal management approaches, may be appropriate or even necessary in these environments, and it will require changes to the facility as well as adjustments to the surrounding equipment. Even so, liquid cooling would likely reduce the total heat load by only approximately 50%. That means another cooling source, such as rear door cooling, would be required to provide full thermal management.
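
A quick worked example of that hybrid requirement, using a hypothetical rack density at the high end of the range mentioned earlier: if direct liquid cooling captures roughly half of a rack’s heat, the remainder still has to be removed by air.

```python
def residual_air_load_kw(rack_power_kw: float,
                         liquid_capture_fraction: float = 0.5) -> float:
    """Heat (kW) left for air-side cooling after direct liquid cooling."""
    return rack_power_kw * (1.0 - liquid_capture_fraction)

# For a hypothetical 50 kW rack, roughly 25 kW of heat still needs rear door
# cooling or another air-based method, hence the hybrid approach.
print(residual_air_load_kw(50.0))  # 25.0
```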

The fluid used for high-density cooling could be chilled water, a natural consideration for a facility already employing that method of cooling. But the liquid could also be a dielectric fluid (which won’t short out equipment in the event of a leak) or a refrigerant. Within any given rack, however, the cooling technique is likely to be uniform.


Peering past the edge

The edge continues to grow as the need to reduce latency and keep data closer to end users and customers is driven by factors such as high-quality video content, IoT applications, e-commerce, and the coming 5G network. Edge computing is a powerful revenue driver for both business-to-business and business-to-consumer data center applications. Smaller data centers, closets, and converted storage spaces make up this market, and all are becoming more mission critical. Existing building air conditioning is not an option for today’s edge environments.

That’s not to say cooling options cannot fit into building management systems (BMS). Remote monitoring and management via IoT smart device apps make dedicated thermal management more desirable, providing advanced, reliable cooling in these small spaces.

There are a variety of dedicated systems, depending on needs and site limitations. These include wall-mounted systems and units mounted above dropped ceilings, such as mini splits that deliver high efficiency through variable-capacity compressors and fans. The newest edge option is rack-based cooling with heat rejection to the room or ceiling plenum, delivering 3.5 kW of IT cooling. This saves valuable floor space, allowing server expansion when space is at a premium.

Some edge applications offer the ability to use smart, row-based self-contained modular data centers, or “data centers in a box.” These are turnkey solutions that eliminate the need to build out data center spaces, providing up to 10 racks, integrated cooling, uninterruptible power supplies (UPS), power distribution, fire suppression, monitoring, and backup ventilation.


The future... dry or wet, maybe, but cool for certain

As data center demand continues to grow and evolve, a wide variety of cooling options are required to meet the needs of different IT environments. The demand for pumped refrigerant cooling continues, but it has abated somewhat as data centers build up instead of out, opening the door once more for chilled water. This could change as new refrigerants and cooling technologies are developed to address multistory applications.

The edge will continue to grow and expand in response to consumer and business demands and expectations. With 5G beckoning and applications such as smart cities more of a reality every day, development of a reliable IT infrastructure is a must — and that infrastructure will have to include reliable, efficient cooling.

To say the least, it’s an exciting time in our industry. It’s never been more important to work with thermal management manufacturers and specification specialists when designing new or retrofit cooling methods. Keeping a cool head, and a cool facility, is key to future success.