The primary objective of the data center manager is to keep IT equipment online and serving the functions it was designed to provide. Traditionally, servers were kept as cool as possible to prevent failure from overheating, regardless of the energy consumed by cooling equipment. That extra consumption carries a real price: at New York City's average electricity rate of $0.19/kWh (the second highest in the nation), overcooling is expensive.

Even with high energy costs and the ability of modern servers to operate at higher temperatures, most New York data center managers continue to drive temperatures as low as possible, keeping the space uniformly well below 75°F. Typically, this means setting supply air temperature setpoints anywhere between 55°F and 65°F, well below ASHRAE's recommended data center rack inlet air temperature and humidity ranges:


              Low End                 High End
Temperature   18°C (64.4°F)           27°C (80.6°F)
Humidity      5.5°C DP (41.9°F DP)    60% RH & 15°C DP (59°F DP)

Table 1. 2008 ASHRAE Thermal Guidelines. ASHRAE TC 9.9 - Thermal Guidelines For Data Processing Environments, 3rd Edition

In 2013, my colleagues at Willdan Energy Solutions and I wrote a white paper assessing the thermodynamic parameters that control the cooling efficiencies at different temperature set points across a range of refrigerants. We found that for every 1°F increase in cooling temperature, cooling efficiency improves by 2% to 4%, which produces the following benefits:

  • Increased cooling equipment capacity
  • Improved cooling equipment efficiency
  • Increased economizer run hours in cooling systems with economizer capabilities, which is especially advantageous in New York City’s climate
  • Reduced supply fan speed for CRAC/CRAHs with VFD supply fans
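As a rough illustration of the 2% to 4% figure, the compounding effect of a multi-degree setpoint increase can be sketched as follows. The baseline cooling load, the function name, and the example numbers are illustrative assumptions, not values from the white paper; only the 2–4% per-°F range and the $0.19/kWh rate come from this article.

```python
def cooling_savings(baseline_kwh_per_yr, delta_f, gain_per_f=0.03,
                    rate_per_kwh=0.19):
    """Estimate annual kWh and cost savings from a setpoint increase.

    baseline_kwh_per_yr: annual cooling energy before the change (assumed)
    delta_f: setpoint increase in degrees Fahrenheit
    gain_per_f: fractional efficiency gain per 1°F (the cited 2-4% range
                corresponds to 0.02-0.04)
    rate_per_kwh: electricity rate; $0.19/kWh is the NYC average cited above
    """
    # Compound the per-degree gain: each additional degree trims the
    # cooling load that remains after the previous degree's savings.
    remaining_fraction = (1 - gain_per_f) ** delta_f
    kwh_saved = baseline_kwh_per_yr * (1 - remaining_fraction)
    return kwh_saved, kwh_saved * rate_per_kwh

# Hypothetical 10,000,000 kWh/yr cooling load, raised 10°F at 3% per °F:
kwh, dollars = cooling_savings(10_000_000, 10)
print(f"{kwh:,.0f} kWh/yr saved, ${dollars:,.0f}/yr")
```

Whether the per-degree gain compounds or adds linearly depends on the equipment; over a few degrees the two models differ only slightly, and either should be checked against measured power draw.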

Additionally, we found that increasing the data center space temperature is most effective when carried out in conjunction with airflow management, providing the racks with uniform inlet temperatures while increasing hot aisle or return air temperatures. In one case, Willdan worked with a data center manager to raise temperature setpoints, increase economizer run hours and setpoints, and implement airflow management. Through these measures, energy consumption of the data center decreased by 3 million kWh/yr and annual energy costs were reduced by $570,000.

It is important to note, however, that data center temperature set points should be raised in small increments to observe the effect on IT infrastructure, mechanical equipment, and overall data center power draw. All key systems and equipment should be measured – especially server inlet temperature. The temperature should not be raised to the point where the server’s internal temperature increases or internal server fans operate at higher speeds, increasing energy consumption.
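The server inlet temperature check described above can be sketched in a few lines, flagging racks whose inlet readings fall outside the ASHRAE recommended range in Table 1. The rack IDs and readings here are hypothetical example data.

```python
# ASHRAE TC 9.9 recommended rack inlet range (Table 1), in Fahrenheit.
ASHRAE_LOW_F = 64.4   # 18°C, recommended low end
ASHRAE_HIGH_F = 80.6  # 27°C, recommended high end

def flag_out_of_range(inlet_temps_f, low=ASHRAE_LOW_F, high=ASHRAE_HIGH_F):
    """Return rack IDs whose inlet temperature falls outside [low, high]."""
    return [rack for rack, t in inlet_temps_f.items() if not low <= t <= high]

# Hypothetical inlet readings (°F) keyed by rack ID:
readings = {"A1": 62.0, "A2": 71.5, "B1": 81.2, "B2": 75.0}
print(flag_out_of_range(readings))  # prints ['A1', 'B1']
```

A rack flagged at the low end is being overcooled; one flagged at the high end signals that setpoints have been raised too far for that rack's airflow.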

In summary, overcooling the space to keep server inlet temperature and/or supply air temperature low is an inefficient, short-term solution that needlessly increases data center operational costs, especially in New York City data centers.