It is an exciting time to be part of the data center industry: new technologies, changing customer demands, and a strategic place in the development of the next generation of communication, commerce, and security. As we move toward this new horizon, there are several new thermal management technologies that promise substantial reductions in energy usage and greater design flexibility with higher levels of IT equipment protection.


Over this past summer, members of my team of engineers toured 15 data centers serving everything from midsized businesses to Fortune 50 technology companies. In each case they were able to identify significant energy-saving opportunities (up to 30%) by asking just a few general questions. Often, these opportunities could be realized by implementing a thermal management best practice. Below are the most common opportunities they came across.

  • Increase the temperatures in your data center. The old standard was 72°F return air (the mixture of air returning from the computers to the cooling unit) at 50% relative humidity. Today, you can push return air temperatures as high as 95°F. It is recommended that this be done in small increments over a few days to avoid unexpected humidity problems and to ensure all the IT equipment continues to function properly; done this way, there is little risk to applications and IT equipment. Enlist your facilities manager or vendor partners to assess the safest way to do this. Remember: for every 1°F increase in temperature, you will save 1.5% to 2.0% of your energy costs.
  • Raise chilled water temperatures. For data centers using chilled water, 45°F was the standard supply temperature from the chiller. Today, it is possible to operate chillers at up to 55°F, reducing energy consumption by 20%. Every degree matters: each 1°F increase in water temperature reduces chiller energy consumption by roughly 2%. Work with your facilities manager, as raising chilled water setpoints can reduce the cooling capacity of your data center cooling units.
  • Match your cooling capacity to your IT load. Your thermal management equipment should have variable capacity components (fans and compressors, if applicable) to adjust cooling capacity up and down with your IT load. While constant-speed fans are common, they cannot adjust to a data center’s actual requirements. A 10 horsepower (hp) fan motor draws about 8.1 kW at 100% speed, but only 5.9 kW at 90% and 2.8 kW at 70%; because fan power scales with the cube of fan speed, the savings grow rapidly as speed is reduced to match the data center’s actual requirements. In addition, the compressor is the largest consumer of power in a DX-based system, so the more you can turn it off (through economization) or reduce its speed (through a variable capacity compressor), the more energy you will save.
  • Use hot or cold aisle containment. Containment prevents the mixing of hot and cold air, which raises the temperature of the return air (the hot air expelled from racks and circulated back to the heat removal equipment). Higher return air temperatures allow heat removal units to operate more efficiently: a 10°F increase in return air temperature can create a 38% increase in unit capacity along with a corresponding increase in efficiency.
  • Upgrade your controls. New controls provide the ability to safely implement and coordinate each of the strategies above. When new controls and variable capacity components are added to these operational tweaks, cooling power consumption in a typical enterprise data center with 500 kW of IT load can potentially drop by more than 50%, from 380 kW to 184 kW, or $171,690 in annual energy savings assuming $0.10/kWh. That can lower the mechanical PUE from 1.76 to 1.37.
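The rules of thumb in the list above can be checked with a few lines of arithmetic. The sketch below is illustrative only: the wattages, loads, and electricity rate are the figures quoted above, and the 24x7 operating assumption is mine. It applies the fan affinity law, under which fan power scales with the cube of fan speed, and recomputes the savings and PUE numbers:

```python
def fan_power_kw(full_speed_kw: float, speed_fraction: float) -> float:
    """Fan affinity law: fan power varies with the cube of fan speed."""
    return full_speed_kw * speed_fraction ** 3

def annual_savings_usd(kw_before: float, kw_after: float,
                       rate_usd_per_kwh: float = 0.10) -> float:
    """Annual cost of the avoided load, assuming 24x7 (8,760 h) operation."""
    return (kw_before - kw_after) * 8760 * rate_usd_per_kwh

if __name__ == "__main__":
    # A 10 hp fan motor drawing ~8.1 kW at full speed:
    for frac in (1.0, 0.9, 0.7):
        print(f"{frac:.0%} speed -> {fan_power_kw(8.1, frac):.1f} kW")
    # 100% -> 8.1 kW, 90% -> 5.9 kW, 70% -> 2.8 kW

    # 500 kW IT load; cooling power drops from 380 kW to 184 kW:
    print(f"annual savings: ${annual_savings_usd(380, 184):,.0f}")
    # $171,696, matching the ~$171,690 quoted above after rounding

    # Mechanical PUE = (IT load + cooling load) / IT load
    print(f"PUE: {(500 + 380) / 500:.2f} -> {(500 + 184) / 500:.2f}")
    # 1.76 -> 1.37
```

The cube law is why a modest speed reduction pays off so disproportionately: cutting fan speed 30% cuts fan power roughly 65%.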


Intelligence is one of the hallmarks of a modern thermal management approach, and intelligence requires sophisticated controls. However, sophisticated does not have to mean expensive. A significant amount of functionality is built into the current generation of thermal management system controls, including multi-unit teamwork control with fan coordination, coordination between external condensers and indoor cooling units, capacity and power usage monitoring, auto-tuning and auto-optimization, economizer control, and custom staging and sequencing.

In unit-level control, auto-tuning constantly monitors key parameters, such as fan speed; if oscillations around the setpoint are detected, the control parameters are adjusted to remove them. This enables the unit to adapt intelligently to new IT systems or other changes in the data center and helps streamline commissioning.
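As a rough illustration of that idea (a hypothetical simplification, not any vendor's actual algorithm), an auto-tuner can count how often the measured value crosses the setpoint over a recent window and back off the controller gain when the loop is hunting:

```python
def count_setpoint_crossings(readings, setpoint):
    """Number of times consecutive readings straddle the setpoint."""
    errors = [r - setpoint for r in readings]
    return sum(1 for a, b in zip(errors, errors[1:]) if a * b < 0)

def auto_tune_gain(gain, readings, setpoint, max_crossings=3, backoff=0.8):
    """Reduce controller gain when the loop oscillates around the setpoint."""
    if count_setpoint_crossings(readings, setpoint) > max_crossings:
        return gain * backoff   # oscillating: damp the response
    return gain                 # stable: leave the tuning alone

# A fan-speed loop hunting around a 75°F return-air setpoint vs. a stable one:
hunting = [74, 76, 74, 77, 73, 76, 74]
stable = [74.8, 74.9, 75.0, 75.0, 75.1, 75.0, 75.0]
print(auto_tune_gain(1.0, hunting, 75.0))  # 0.8: gain backed off
print(auto_tune_gain(1.0, stable, 75.0))   # 1.0: unchanged
```

Real auto-tuners are considerably more elaborate (frequency analysis, relay tuning, multiple coupled parameters), but the principle is the same: detect the oscillation, then damp the loop that causes it.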

On a system level, controls enable machine-to-machine communication between units to prevent them from working at cross-purposes and allow the system to adapt to changes in facility-level demand as efficiently as possible. Four units with variable capacity fans in teamwork mode can operate 56% more efficiently than four fixed-speed units operating autonomously. At very low loads, the controls can place some units in standby mode for further savings. Intelligent controls integrated into the next generation of thermal management solutions (new economizers, custom air-handling units, and free cooling chillers) have proven instrumental in achieving a new threshold of PUEs between 1.05 and 1.2.
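The teamwork advantage follows from the same cube-law behavior of fan power. A simplified comparison (the unit size and staging policy here are assumed for illustration; this is not the benchmark behind the 56% figure) of four variable-speed units sharing a partial load versus fixed-speed units staged on at full speed:

```python
import math

UNIT_FULL_KW = 8.1  # assumed fan power of one unit at 100% speed

def teamwork_power(load_fraction, units=4):
    """All units run and share the airflow equally; power follows the cube law."""
    return units * UNIT_FULL_KW * load_fraction ** 3

def fixed_speed_power(load_fraction, units=4):
    """Enough fixed-speed units run at 100% to cover the required airflow."""
    return math.ceil(load_fraction * units) * UNIT_FULL_KW

load = 0.5  # system needs 50% of design airflow
print(f"teamwork:    {teamwork_power(load):.2f} kW")    # 4 x 8.1 x 0.125 = 4.05 kW
print(f"fixed-speed: {fixed_speed_power(load):.2f} kW") # 2 x 8.1 = 16.20 kW
```

Running all four fans slowly moves the same air for a fraction of the power of two fans running flat-out, which is why coordinated variable-speed operation beats autonomous staging.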

Data center management software, typically part of a data center infrastructure management (DCIM) suite, can offer additional value by using data from thermal management controls to support capacity forecasting and management, trending and analytics, and thermal visualization of the data center.


The more visibility the controls have into equipment operating parameters and conditions in the data center, the better decisions they can make. The new generations of wireless sensors make it simpler to extend visibility beyond return air and supply air temperatures to include multiple server inlet temperatures. When variables such as server inlet temperature, supply air temperature, return air temperature, water temperature, outside temperature, fan speed, and unit energy consumption are all monitored in real-time, the control system has the data it needs to optimize performance within a single unit, across multiple indoor units and between indoor and outdoor units.

When combined with other thermal management technologies, sensors can deliver real efficiency gains. For instance, a colocation facility that recently implemented wireless sensors, variable speed fans, and intelligent controls to enable closed loop control of heat removal reduced the facility’s PUE from 1.47 to 1.30, saving $200,000 annually in the process.


Traditional economizer systems have used outside air or water to minimize the use of compressors or chillers, the largest consumers of power in a cooling system. While outside air provides high efficiency, it can also introduce humidity control issues, often-overlooked maintenance of louvers and dampers, and the potential for smoke, pollution, and airborne contaminants to enter the IT space. Water economizers also improve efficiency, but they require water treatment, possibly water storage, and additional piping and infrastructure.

A new pumped refrigerant economizer has recently been introduced that requires no outside air and no water. It is fully integrated into a variable capacity DX system with advanced controls that automatically switch in and out of full and partial economization mode in minutes based on the IT load and outdoor temperatures. This system has proven to be about 70% more efficient than traditional DX systems and has been shown to help achieve PUEs as low as 1.05, and typically around 1.1 to 1.2.


While standard rooftop air-handling units are simply unable to maintain the thermal management control and redundancy demanded by mission-critical environments, there are a growing number of custom air handlers on the market designed specifically for data centers using evaporative, chilled water, and DX technologies.

Custom air handling units designed for the data center offer the level of customization and flexibility these unique environments need and are typically used in data centers from 5 to 30+ megawatts (MW). The units usually feature intelligent controls that provide advanced protection and enable automatic adjustment of airflow, temperature, and economizer function based on IT load and ambient conditions. These control systems enable the units to work more efficiently and help data center managers achieve an annual mechanical PUE under 1.2. They also usually offer simple integration with building management systems and DCIM solutions.


There is a new generation of air cooled chillers designed specifically for the needs of data centers and mission critical applications. These chillers combine state-of-the-art components (advanced controls, digital scroll compressors, EC fans, microchannel condensers, and electronic expansion valves), built-in economizers, and system optimization software to deliver high efficiency (mechanical PUE as low as 1.08) and high availability. They offer built-in redundancy, fast restart capability, and continuous cooling in the event of water shortages, extreme ambient temperatures, or unstable power supplies. Some models also include adiabatic pre-cooling to improve efficiency and reduce peak electrical load. Units are available up to 400 tons of capacity and are ideal for chilled water data centers up to 6 MW.


These are just a few of the many innovations and new thermal management technologies currently available in the market. In the coming months and years we will see further breakthroughs in efficiency, ease of use and deployment, sustainability, and control. It’s a great time to be in the data center industry as computing and data transform and improve our lives.