It has been almost three years since the ASHRAE 90.4 Energy Standard for Data Centers was finalized and went into effect in 2016, yet even today, many in the data center industry are not fully aware of its existence or its implications. Far more people are familiar with The Green Grid (TGG) power usage effectiveness (PUE) metric, first introduced in 2007, which started the data center industry thinking about the energy efficiency of the physical facility. The original PUE was based on snapshot power measurements (kW), which was one of its loopholes; in 2011, TGG updated the metric to Version 2, based on annualized energy usage (kWh), which reflects a more meaningful efficiency picture across varying operating conditions.

Its purpose was to help data center operators baseline and improve their own facilities. PUE has been criticized by some because it only covers the energy efficiency of the facility (not the IT systems); however, that was clearly its stated purpose. Nonetheless, its underlying simplicity allowed managers to easily calculate (or guess) a facility’s PUE, which drove its widespread adoption. In addition, the PUE metric helped prompt the U.S. EPA to create the Energy Star program for data centers, which became effective in 2010. The program remains a voluntary award program, and there are currently 152 Energy Star certified data centers listed by the EPA.



PUE is considered the de facto metric by the data center industry, and in 2016 it became an ISO standard (ISO/IEC 30134). Yet, despite this, most building departments do not know much about data centers and have never heard of The Green Grid or the PUE metric. Nonetheless, while data centers may be different from office buildings, like other buildings they still need to comply with local and state building codes for safety and, more recently, for energy efficiency. In many areas of the U.S., “ASHRAE 90.1 Energy Standard for Buildings Except Low-Rise Residential Buildings” is referenced and incorporated as part of state or local building codes. Data centers were previously more or less exempt from the 90.1 standard; however, as of 2010, they were included. The data center industry complained that the standard was too prescriptive, and it was updated in 2012 to try to address this issue. In 2016, “90.4 Energy Standard for Data Centers” was introduced, and 90.1-2016 subsequently transferred its energy performance requirements for data centers to the newly issued 90.4 standard.

The other aspect of PUE is that, technically speaking, it is not a design metric. It is meant to measure, baseline, and continuously improve and optimize operating energy efficiency. Nonetheless, PUE has been used as a reference for building design goals before construction. It is also sometimes referenced in colocation contractual SLA performance terms or in energy cost schedules. In contrast, the ASHRAE 90.4 standard is primarily a design standard, meant to be used when submitting plans for approval to build a new data center facility. It also covers facility capacity upgrades of 10% or greater, which could complicate some facility upgrades.

In contrast, the energy calculation methodology of the 90.4-2016 standard is far more complex than the PUE metric. However, one of the issues with the PUE metric is that it has no geographic adjustment factor. Since cooling system energy typically represents a significant percentage of facility energy usage, identically constructed data centers would each have a different PUE if one were located in Miami while the other was in Montana. Still, PUE provided, and still provides, a simple uniform number that makes it easy to understand and monitor efficiency, regardless of location.
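For reference, annualized PUE (per the 2011 Version 2 update) is simply the total facility energy divided by the IT equipment energy over the same period. A minimal sketch, with illustrative numbers of my own choosing:

```python
def annualized_pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = annual total facility energy / annual IT equipment energy.

    A PUE of 1.0 would mean every kWh delivered to the facility
    reaches the IT load (no cooling or power-chain overhead).
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical site: 10,000 MWh total, 6,250 MWh delivered to IT in a year.
print(round(annualized_pue(10_000_000, 6_250_000), 2))  # 1.6
```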

The 90.4-2016 standard separated the electrical power chain losses from the cooling system energy efficiency calculations. While primarily focused on cooling performance, the 90.4-2016 standard also details and limits the total maximum electrical losses through the entire power chain, from the utility handoff, through the UPS and distribution system, and ending at the cabinet power strips feeding the IT equipment. Moreover, this is very strictly prescribed by “Electrical Efficiency Compliance Paths,” along with calculations, and detailed in a table with specific limits on energy losses for varying levels of redundancy: N, N+1, 2N, and 2(N+1) for the UPS and distribution losses at various operating load levels.

ASHRAE typically revises and updates its standards every three to four years and publishes proposed revisions for public comment. In March, three proposed addendums (f, g, and h) were released concurrently and posted for a 30-day public review. The first, addendum “f,” focuses on UPS efficiency and is described as intended “to better align with current vintages of UPS technology in terms of performance and industry evolution.” The original and proposed Maximum Design Electrical Loss Component (Design ELC) tables list UPS efficiency/loss at 100%, 50%, and 25% loads (per system or module, depending on the system design: N, N+1, 2N, etc.). For systems with ITE loads greater than 100 kW, the proposed revision substantially decreases the maximum allowable UPS losses: from 9% to 6.5% at 100% load, from 10% to 8% at 50% load, and from 15% down to 11% at 25% load. Newer UPS units are more efficient across a wider range of load levels, and many can achieve 93.5% efficiency (a 6.5% loss factor) at full load. However, it is more difficult to deliver 89% efficiency (an 11% loss factor) when operating at only 25% load.
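Using the proposed addendum “f” limits quoted above (maximum UPS losses of 6.5%, 8%, and 11% at 100%, 50%, and 25% load for systems above 100 kW of ITE load), a design check might look like the following sketch. The function name and structure are my own illustration, not part of the standard:

```python
# Proposed maximum design UPS losses from addendum "f" (ITE load > 100 kW),
# keyed by load fraction, as quoted in the article.
PROPOSED_MAX_UPS_LOSS = {1.00: 0.065, 0.50: 0.080, 0.25: 0.110}

def ups_complies(efficiency_at_load: dict) -> bool:
    """efficiency_at_load maps load fraction -> UPS efficiency (0..1).

    Loss = 1 - efficiency; the loss must not exceed the proposed
    limit at every listed load point.
    """
    return all(
        round(1.0 - efficiency_at_load[load], 4) <= limit
        for load, limit in PROPOSED_MAX_UPS_LOSS.items()
    )

# A modern UPS: 93.5% at full load, 93% at half load, 89% at quarter load.
print(ups_complies({1.00: 0.935, 0.50: 0.93, 0.25: 0.89}))  # True
# An older unit that falls short, especially at low load:
print(ups_complies({1.00: 0.93, 0.50: 0.92, 0.25: 0.86}))  # False
```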

It is the cooling system calculation section, known as the mechanical load component (MLC), that includes location as a factor for meeting the cooling system energy compliance. It incorporates a table with 18 U.S. climate zones listed in ASHRAE Standard 169, each with an individual maximum annualized MLC compliance factor. In the proposed addendum “g” revision, for data centers with greater than 300 kW of ITE load, the maximum MLC compliance factor is substantially decreased for each climate zone (requiring less cooling system energy), which raises the requirements considerably. In some zones, such as 4B, 5B, and 6B, the new maximum MLC would be reduced by as much as 50% to 60%.
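The annualized MLC itself is the ratio of annual mechanical (cooling, fan, pump) energy to annual ITE energy, compared against the zone’s maximum. A rough sketch of such a check, where the climate-zone limits are placeholder values of my own rather than figures from the standard’s table:

```python
# Illustrative placeholder limits only -- the real table in 90.4 lists a
# maximum annualized MLC for each of the 18 climate zones in Standard 169.
MAX_MLC_BY_ZONE = {"4B": 0.35, "5B": 0.33, "6B": 0.30}  # hypothetical values

def mlc_complies(zone: str, mechanical_kwh: float, ite_kwh: float) -> bool:
    """Annualized MLC = annual mechanical energy / annual ITE energy;
    compliance requires it not to exceed the zone's maximum."""
    mlc = mechanical_kwh / ite_kwh
    return mlc <= MAX_MLC_BY_ZONE[zone]

# Hypothetical site in zone 5B: 1,500 MWh mechanical vs. 5,000 MWh ITE
# gives an annualized MLC of 0.30.
print(mlc_complies("5B", 1_500_000, 5_000_000))  # True
```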

The last item, addendum “h,” has less impact on data centers since it focuses on wiring closets. However, it adds another cooling efficiency factor that could be subject to scrutiny and would need to meet mandatory compliance requirements.

Ironically, the ASHRAE Thermal Guidelines for Data Processing Environments, widely considered an industry bible by the majority of data center operators, is not legally recognized by the governmental agencies responsible for overseeing and enforcing building codes for the design and construction of buildings. It has undergone four revisions since its inception in 2004, when the original edition specified a very tight recommended environmental envelope for ITE. It was the third and fourth editions that promoted and endorsed cooling energy efficiency by introducing expanded allowable IT intake temperature ranges and broadened humidity ranges, which effectively negated the need for energy-intensive tight humidity control.


The Bottom Line

I have been a longstanding advocate of data center energy efficiency and wrote and spoke about it well before PUE was introduced. When 90.4 originally came out in 2016, I wrote that it was about to “move your cheese.” The proposed addendums, which tighten the efficiency requirements and will take effect in 2020, may move it a bit further. But is this really necessary? Clearly the designers of the newest facilities, especially the colocation providers and hyperscalers, are highly self-motivated to focus on energy efficiency. The massive shift toward colocation and cloud service providers has directly or indirectly made energy efficiency a competitive mandate and part of the justification for lowering TCO.

Nonetheless, many older data centers were designed with availability as the highest priority; efficiency was not given the same consideration as in modern designs. Moreover, prior to the PUE metric, many organizations that owned their own data centers were not very aware of their facilities’ energy efficiency. In some cases, the managers never saw, or were not responsible for, energy costs. However, while fewer enterprise organizations are building their own new sites, many older sites are still operational. As a consultant, I perform data center energy efficiency assessments and have seen older facilities that are still in good condition but unfortunately may have a PUE of 2 to 3, primarily due to the age of their electrical and cooling infrastructure.

While it is easy to simply recommend equipment upgrades, these are costly, and the payback can be hard to justify economically. In addition, it is very difficult or impossible to replace key, but inefficient, components without shutting down or disrupting the facility. Critical elements, such as large chillers, cannot easily be upgraded, especially if there is limited or no redundancy. Even today I have found that, in most instances, a significant amount of cooling system energy can be saved in data centers through low-cost or no-cost fixes to basic airflow issues.

Fixing the basic low-hanging fruit, such as installing blanking plates and adjusting or relocating floor grilles, is non-disruptive and within the capabilities of most in-house staff. This can solve most cooling issues, which in turn allows raising temperatures to save energy. More importantly, these older sites typically have little or no visibility into how the facility infrastructure consumes energy, other than seeing the total on the monthly utility bills. So if you are considering major equipment upgrades to an existing facility, now would be a good time to review ASHRAE 90.4-2016, as well as the pending addendums, to see if they apply to your project. And consider purchasing a DCIM system or granular energy metering and monitoring system to continuously optimize cooling system efficiency before investing in expensive forklift upgrades.