As they add capacity to their data centers and other IT spaces, data center managers are constantly challenged to find new approaches to physical infrastructure that drive down operating and capital costs. Traditional raised-floor designs using perimeter cooling have given way to non-raised-floor data centers or to packaged cooling systems located outside the building. Beyond chilled water, companies are turning to economization solutions such as indirect evaporative and pumped refrigerant cooling.

There are more cooling options than ever before for saving money, operating efficiently, and managing the thermal environment more effectively. Today's customers demand modular systems that can be scaled easily and integrated into existing physical infrastructure and software management systems.

The ASHRAE 2015 thermal guidelines recommend a range of 18°C (64.4°F) to 27°C (80.6°F). Data center temperatures are rising, and we typically see temperature set points between 70°F and 72°F, and sometimes higher. A visit to a data center today doesn't always mean putting on a light jacket before entering, but cooling is as critical as it has ever been.
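As a quick sanity check on those numbers, here is a minimal Python sketch (the set points tested are arbitrary examples, not recommendations) that converts a Fahrenheit set point and tests it against the ASHRAE recommended envelope:

    ASHRAE_RECOMMENDED_C = (18.0, 27.0)   # 2015 recommended inlet range, 64.4-80.6 F

    def f_to_c(temp_f: float) -> float:
        """Convert Fahrenheit to Celsius."""
        return (temp_f - 32.0) * 5.0 / 9.0

    def within_recommended(setpoint_f: float) -> bool:
        """True if a Fahrenheit set point falls inside the ASHRAE recommended range."""
        low, high = ASHRAE_RECOMMENDED_C
        return low <= f_to_c(setpoint_f) <= high

    print(within_recommended(72))   # True: 22.2 C sits comfortably in the range
    print(within_recommended(62))   # False: below the 18 C floor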

 

SUPPORTING HIGH DENSITY AND CAPACITY

There are several cooling options that allow data center managers to expand capacity and density without having to use valuable white space.

A surprising number of data centers still haven't adopted aisle or cabinet containment to provide a more stable thermal environment and increase capacity and efficiency. Uncontained aisles allow hot and cold air to mix, which lowers the temperature of the return air to the existing cooling units and reduces their efficiency. Aisle containment raises return air temperatures and reduces airflow requirements, increasing both the efficiency and the capacity of the cooling equipment. This can be done with perimeter raised-floor cooling or with row-based cooling that puts heat removal closer to the source.
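To see why containment pays off, consider the sensible heat relationship Q = m_dot x cp x delta-T. The sketch below (the airflow and temperature figures are illustrative assumptions) shows how the same airflow removes roughly 50% more heat when containment widens the return-to-supply temperature difference:

    def sensible_cooling_kw(airflow_cfm: float, delta_t_c: float) -> float:
        """Sensible cooling capacity of an airstream: Q = m_dot * cp * delta_T.

        Assumes standard air density (1.2 kg/m^3) and cp of 1.006 kJ/(kg*K).
        """
        m_dot_kg_s = airflow_cfm * 0.000471947 * 1.2   # CFM -> m^3/s -> kg/s
        return m_dot_kg_s * 1.006 * delta_t_c

    # The same 20,000 CFM cooling unit: mixed return air (10 C rise) vs. contained (15 C rise)
    print(round(sensible_cooling_kw(20000, 10)))   # ~114 kW
    print(round(sensible_cooling_kw(20000, 15)))   # ~171 kW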

Another option is to evacuate heat from the top of racks to a ceiling plenum by direct ducting through a “chimney.” Properly placed blanking panels improve airflow management, allowing for substantial increases in rack capacity. We have customers achieving 35 kW per rack using this strategy.

One option that we see gaining in popularity is rear-door cooling using passive (fanless) or fan-assisted chilled water heat exchangers, delivering rack capacities up to 50 kW. This solution is essentially room neutral: cold air moves through the rack equipment, the heat it picks up is removed by the rear-door coils, and the air is exhausted out the back at approximately the same temperature at which it entered the rack. The risk of water leaks is minimal, thanks to swivel-joint door technology. Condensation can be eliminated using a coolant distribution unit (CDU), which acts as an isolating heat exchanger between the building's chilled water source and the circulating cooling water and controls the fluid temperature so it stays above the room's dew point.
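For illustration, here is a minimal sketch of the dew point calculation a CDU control loop might rely on to keep the secondary fluid above condensation risk. The Magnus-formula coefficients, room conditions, and 2°C margin are assumptions for the example, not any vendor's control logic:

    import math

    def dew_point_c(dry_bulb_c: float, rel_humidity_pct: float) -> float:
        """Approximate dew point via the Magnus formula (roughly valid 0-60 C)."""
        a, b = 17.62, 243.12  # Magnus coefficients
        gamma = math.log(rel_humidity_pct / 100.0) + (a * dry_bulb_c) / (b + dry_bulb_c)
        return (b * gamma) / (a - gamma)

    # Hypothetical 24 C data hall at 50% relative humidity
    room_dew_point = dew_point_c(24.0, 50.0)   # about 13 C
    cdu_setpoint = room_dew_point + 2.0        # assumed 2 C safety margin above dew point
    print(round(room_dew_point, 1), round(cdu_setpoint, 1))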

 

CHILLED WATER, INDIRECT EVAPORATIVE OR REFRIGERANT-BASED COOLING

For new construction or retrofits, the choice between chilled water and indirect evaporative or refrigerant-based cooling continues to be debated. All remain viable options, but the decision often comes down to the availability of water and the cost of treating and maintaining the system.

We have seen a lot of customers turning to indirect evaporative free-cooling economizer systems for large deployments; they offer low annual water and energy consumption and may achieve mechanical PUEs of less than 1.2. Efficient heat exchanger design keeps peak power low. Because data center air and ambient air are kept separate in the heat exchanger, these systems provide the additional benefit of not introducing potentially damaging outside air into the data center.
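As a rough illustration of what a mechanical PUE below 1.2 implies, the cooling-only (partial) PUE can be estimated as (IT power + cooling power) / IT power. The load figures below are hypothetical:

    def mechanical_pue(it_power_kw: float, cooling_power_kw: float) -> float:
        """Partial PUE attributable to cooling: (IT + cooling) / IT."""
        return (it_power_kw + cooling_power_kw) / it_power_kw

    # A hypothetical 1 MW IT load supported by 150 kW of annualized cooling power
    print(mechanical_pue(1000.0, 150.0))   # 1.15, below the 1.2 target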

We’re finding more and more customers who have calculated their cost of using water, including treatment and system maintenance. The newest, and perhaps most sustainable, approach eliminates the use of water altogether by utilizing pumped refrigerant economization, consuming up to 50% less energy than legacy systems and providing highly flexible configurations for virtually any data center design.

Advanced controls automatically engage or disengage compressors according to outdoor ambient temperatures and data center loads, maximizing the use of a highly efficient refrigerant pump. This provides the most efficient mix of free-cooling and DX operation throughout the year. The benefit of such a system is that it delivers efficient economization in a simple manner, with high reliability, no water use, and lower maintenance costs.
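The sketch below shows the kind of mode-selection logic involved; the temperature and load thresholds are illustrative assumptions, not any specific product's control sequence:

    def select_cooling_mode(outdoor_temp_c: float, load_fraction: float) -> str:
        """Pick an operating mode for a pumped refrigerant economizer system."""
        if outdoor_temp_c <= 5.0:
            return "pump-only free cooling"   # compressors off
        if outdoor_temp_c <= 18.0 and load_fraction <= 0.8:
            return "mixed mode"               # pump assists, compressors partially loaded
        return "full DX"                      # compressors carry the load

    print(select_cooling_mode(2.0, 0.6))    # pump-only free cooling
    print(select_cooling_mode(12.0, 0.5))   # mixed mode
    print(select_cooling_mode(30.0, 0.9))   # full DX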

 

COOLING THE EDGE

Never underestimate the need to cool small spaces or IT closets beyond the capabilities of a building's existing air conditioning system. With so much riding on remotely located IT equipment, our customers report tremendous challenges in achieving the cooling capacity these increasingly mission-critical environments require. There is no one-size-fits-all cooling solution for edge environments. Some of the small rooms that comprise edge environments were designed for other purposes, such as office space or storage/utility rooms. Others are satellite data centers, designed specifically for edge computing. Given location and space constraints, the top three questions to ask when determining the most effective cooling application are:

  • Where can you locate the cooling equipment?
  • How will you reject the heat?
  • How will you monitor the space and equipment?

We see a lot of customers using ductless mini-split cooling systems designed for commercial use, not data center use. They don't provide humidity control and are not built to withstand years of 7x24 operation, so you can generally expect to replace them every few years.

Geography is also a factor when considering ductless mini-split systems. Some units can only operate at outdoor ambient temperatures down to 0°F, which can be a problem in colder climates, so look for units with lower ambient temperature ratings; the lowest rating for these types of units is -30°F.

Ceiling-mount systems typically deliver better reliability, higher capacities, and temperature and humidity control. They're also easier to connect to a building or infrastructure management system, and they can be located outside the space, with air ducted to where it is needed. Other small-space solutions include rack coolers, in-row cooling, and smaller perimeter or wall systems designed for precision cooling.

 

MAINTAINING CONTROL

Cooling units are only as effective as their controls, so be sure to focus on controls and communications when selecting solutions for any critical environment. Ease of integration with building management system (BMS) software is very important; our research shows that half of all small-space cooling is managed through a BMS.

Cloud-based apps for monitoring remote spaces have become common. The most flexible control systems can be accessed through your BMS, from a desktop behind the firewall, or via cloud apps.

Advanced controls provide the ability to manage at both the cooling unit and supervisory system level with touch-screen interfaces. Unit-level controls monitor hundreds of data points to keep system components operating at their highest efficiency. Supervisory-level control harmonizes all units to optimize capacity across the data center while providing quick access to actionable data, automated system diagnostics, and trending.
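As a simplified stand-in for that supervisory coordination (real systems also weigh fan laws, return temperatures, and unit health), the sketch below splits a data center's cooling load across units in proportion to capacity so each runs at the same part-load point:

    from typing import List

    def balance_load(total_load_kw: float, unit_capacities_kw: List[float]) -> List[float]:
        """Split the cooling load across units in proportion to their capacity."""
        total_capacity = sum(unit_capacities_kw)
        return [total_load_kw * cap / total_capacity for cap in unit_capacities_kw]

    # Four hypothetical 100 kW units sharing a 260 kW load each run at 65 kW
    print(balance_load(260.0, [100.0] * 4))   # [65.0, 65.0, 65.0, 65.0]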

 

ON THE HORIZON

Smart and IoT devices are not going away, and the world continues to find new ways to apply them. Increases in e-commerce. Driverless vehicles. Streaming video that changes the way people entertain themselves at home and interact with each other.

All drive the demand for data centers — starting at the edge, which continues to expand outward as the need for speed grows exponentially — and back to enterprise and colo locations that do the heavy lifting in support of the edge.

All need to remain cool to be effective, and thermal management continues to evolve to serve those needs.