In my last column, I wrote about the fate of the enterprise data center. Only a few years ago, enterprise facilities managers were happy if they kept their data centers “cool” (at 68°F or lower in many cases). However, as IT equipment power densities rapidly increased, they were generally satisfied if most areas were kept cool enough — hopefully with only a few hot spots (over 80°F) — which they typically tried to mitigate by “overcooling” the entire room. In 2007, The Green Grid (TGG) was formed and created the power usage effectiveness (PUE) metric for facility energy efficiency, and suddenly we all seemed to realize that cooling was consuming a large share of the energy going into the data center infrastructure.
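The Green Grid defines PUE as the total energy entering the facility divided by the energy delivered to the IT equipment; a ratio of 1.0 would mean zero overhead, while typical values well above that reflect cooling and power-distribution losses. A minimal sketch of the calculation (the function and variable names are illustrative, not from any TGG tool):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean every kWh goes to IT; higher values indicate
    infrastructure overhead (cooling, power distribution, lighting).
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: a facility drawing 2,000 kWh while the IT load uses 1,000 kWh
# has a PUE of 2.0 -- half the energy goes to overhead, much of it cooling.
print(pue(2000.0, 1000.0))  # → 2.0
```

Seen this way, it becomes obvious why cooling drew so much attention: it is usually the largest single contributor to the gap between PUE and 1.0.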
This revelation led to, and also coincided with, the 2008 release of the second edition of ASHRAE TC9.9’s Thermal Guidelines, which widened the “recommended” temperature range to 64.4°F to 80.6°F, both in response to the improved thermal tolerance of modern IT hardware and to save energy. The bigger news, often overlooked then (and even now), is that these temperatures were to be measured at the air inlet of the IT equipment — not in the room.