As data center configurations and applications demand more computing power, managers are looking for new ways to address thermal management challenges. Cooling high- and ultra-high-density setups requires equipment and systems that deliver far more capacity. For instance, an enclosure that draws 18 kilowatts (kW) of power, and therefore needs 18 kW of cooling capacity, is technically infeasible to cool with conventional approaches because it would require supplying roughly 2,500 cubic feet per minute (cfm) of cool air per enclosure. Since traditional solutions may not be enough, engineers now have new approaches to consider, built around focused cooling, scalability, flexibility, and efficiency.
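To see where a per-enclosure figure like 2,500 cfm comes from, the standard sensible-heat airflow rule of thumb can be sketched in a few lines of Python. The 3,412 BTU/hr-per-kW conversion and the 1.085 airflow factor are textbook values for sea-level air, and the assumed temperature rise across the enclosure is an illustration, not a figure from this article:

```python
# Sensible-heat airflow estimate: cfm = BTU/hr / (1.085 * delta-T in deg F).
# The 1.085 factor is 60 min/hr * 0.075 lb/ft^3 * 0.24 BTU/(lb*F) for sea-level air.

BTU_PER_HR_PER_KW = 3412        # 1 kW of IT load rejects 3,412 BTU/hr of heat

def required_cfm(it_load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cfm) needed to remove it_load_kw of heat at a delta_t_f rise."""
    heat_btu_hr = it_load_kw * BTU_PER_HR_PER_KW
    return heat_btu_hr / (1.085 * delta_t_f)

# An 18 kW enclosure lands near the 2,500 cfm figure cited above,
# depending on the temperature rise assumed across the equipment:
print(round(required_cfm(18, delta_t_f=23)))   # ~2,460 cfm
print(round(required_cfm(18, delta_t_f=20)))   # ~2,830 cfm
```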


AISLE COOLING

The most common approach to thermal management is aisle cooling, which involves configuring enclosures to create a hot aisle, where exhaust heat ends up, and a cold aisle, where HVAC systems supply cool air. The cool air is pulled into the front of each enclosure and exhausted out of the back to carry away the thermal load, allowing the HVAC system to cool an entire data hall effectively. Separating the cold and hot portions of the system allows for maximum delivery of cooling capacity and maximum removal of waste heat; in essence, the two airstreams never mix.

While aisle cooling is effective at creating a thermal management baseline within the data center, power density can create hot spots that have to be managed aggressively. For these challenges, solutions include ambient air fans and rack-mounted air conditioning. Roof- or rack-mounted fans are a simple, cost-effective way to increase ambient air circulation and better leverage the air in the cold aisle. A common mistake among data center operators is to use floor-mounted fans to push the air upward; this moves the air too fast to be pulled into the enclosure and has virtually no effect on internal temperatures.

In certain climates, data center managers can take advantage of airside free cooling during the winter or cooler seasons to offset energy usage. In the right environment, free cooling lets them bring in cool outside air instead of running their chillers.

On average, it takes 160 cfm per kilowatt to cool a large data center. For a one-megawatt data center, that works out to 160,000 cfm of outside air to achieve the same result as a conventional air conditioning setup. This would require large intake openings, additional ductwork, and equipment to move, filter, and dehumidify the outside air. However, managers who are able to build the supporting infrastructure have shown significant improvements in power usage effectiveness (PUE) during cooler months. And as ASHRAE 90.1 is adopted into code in many states, free cooling will become part of future data center designs to help meet the standard's energy cost budget (ECB) requirements.
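As a back-of-the-envelope illustration of how free cooling shows up in PUE (total facility power divided by IT power), here is a minimal sketch; the overhead figures for chiller hours versus economizer hours are hypothetical values chosen for illustration, not measurements from this article:

```python
# PUE = total facility power / IT power; lower is better, 1.0 is the ideal.

CFM_PER_KW = 160                        # the article's average for a large data center

def outside_air_cfm(it_load_kw: float) -> float:
    """Outside-air requirement for full airside free cooling."""
    return it_load_kw * CFM_PER_KW

def pue(it_load_kw: float, overhead_kw: float) -> float:
    return (it_load_kw + overhead_kw) / it_load_kw

print(outside_air_cfm(1_000))           # 1 MW IT load -> 160,000 cfm

# Hypothetical overheads: 500 kW with chillers running vs. 250 kW on free cooling.
print(pue(1_000, 500))                  # 1.5 on mechanical cooling
print(pue(1_000, 250))                  # 1.25 during economizer hours
```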


IN-ROW COOLING

For ultra-high densities, in-row cooling can provide targeted cooling without overcooling the room. In particular, door-mounted air conditioners are a “plug and play,” energy-efficient solution that requires no external piping. Some managers have also found rear-mounted heat exchangers to be effective localized solutions.

Liquid cooling has the potential to be highly efficient and deployable across a wide range of designs, which accounts for its growing popularity among engineers. It is particularly beneficial because water absorbs and transports heat far more effectively than air. In-row liquid cooling solutions that use open- and closed-loop technology are scalable and efficient, but they do require pipes, pumps, valves, and fittings to deliver the cold fluid that cools the air circulated through bayed enclosures.
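To put a number on that advantage, the sensible heat carried by a fluid stream is Q = rho × cp × flow × delta-T; the sketch below uses standard room-temperature property values (textbook figures, not values from this article) to compare water and air at the same volumetric flow:

```python
# Sensible heat removal: Q = rho * cp * flow * delta_T.
# Property values are textbook figures for roughly 25 deg C at sea level.

WATER = {"rho": 998.0, "cp": 4180.0}    # density kg/m^3, specific heat J/(kg*K)
AIR   = {"rho": 1.2,   "cp": 1005.0}

def heat_removed_kw(fluid: dict, flow_m3_s: float, delta_t_k: float) -> float:
    """Heat (kW) carried away by a fluid stream at the given flow and temperature rise."""
    return fluid["rho"] * fluid["cp"] * flow_m3_s * delta_t_k / 1000.0

flow, dt = 0.001, 10.0                   # 1 L/s of flow, 10 K temperature rise
print(round(heat_removed_kw(WATER, flow, dt), 1))  # ~41.7 kW
print(round(heat_removed_kw(AIR, flow, dt), 3))    # ~0.012 kW, roughly 3,500x less
```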


IMPROVE AIRFLOW/ALTER EQUIPMENT ARRANGEMENT

Since the hot/cold-aisle arrangement is the most common climate solution, airflow must be managed so that mixing does not happen within the cabinet. One of the easiest ways to improve airflow is better cable management, which delivers greater space efficiency within the cabinet. To that end, routing cables so they do not block vents and HVAC components can produce immediate airflow improvements. Many engineers don’t realize that mounting rails also create space within the cabinet that assists airflow, so it’s important to keep cables from blocking this space as well. Racks often include claws or other cable management hardware to route cables efficiently out of the space needed for unrestricted airflow.


GENERAL BEST PRACTICES FOR HIGH- AND ULTRA-HIGH-DENSITY COOLING

Considered separately, these individual cooling solutions may not be enough for high- and ultra-high-density systems. Instead, engineers may find that combining them delivers the thermal management payoff and the proper return on investment.

Using a combined approach requires planning and coordination across the enterprise, including operations, management, HVAC, and design professionals. If questions linger, OEM and infrastructure suppliers can be a valuable resource; chances are they have seen similar high- and ultra-high-density cooling installations and are ready with answers.

While designs that utilize a combined technology approach may have a higher installed cost, they deliver greater scalability and uptime and extend the service life of vital components, all of which improve the data center’s bottom line.