The introduction of ASHRAE TC 9.9’s 2011 Expanded Thermal Guidelines brought a new awareness that new IT equipment can be operated successfully in the expanded “Allowable” ranges, far beyond the previous 2004-2008 ASHRAE “Recommended” environmental envelopes.

This raises the question: is there any new IT equipment (excluding tape) that is rated and temperature limited only to the “A1” “Allowable” range of 64.4-89.6°F (15-32°C)? In effect, virtually all new IT equipment is A2 compliant by implication, since the major manufacturers have specified it at 50-95°F (10-35°C) and 20-80% RH for the last several years (perhaps ASHRAE should consider printing up some “A2 Ready” stickers to place next to the Intel or AMD and Energy Star stickers).

Even the descriptions provided in the 2011 whitepaper introducing and defining the expanded equipment classes are a little ambiguous about the differences between A1 and A2 when it comes to servers. Classes A1-A4 are all generally categorized as applicable to the “Data Center”; A1 equipment is described as “Enterprise servers, storage products” and its Environmental Control is listed as “Tightly controlled”. In comparison, Class A2 IT equipment is described as “Volume servers, storage products, personal computers, workstations” and its Environmental Control is listed as “Some control” (A3 and A4 are listed exactly the same as A2, but with even wider temperature and humidity ranges).
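
To make the distinction concrete, here is a minimal sketch in Python, assuming only the dry-bulb ranges cited above (A1: 15-32°C, A2: 10-35°C) and the simplified 20-80% RH figure; the real ASHRAE envelopes also bound dew point, which this illustration ignores. It simply reports which of the two allowable classes a measured cold-aisle condition falls into.

```python
# Illustrative only: dry-bulb limits taken from the A1/A2 allowable ranges
# cited above (A1: 15-32 C, A2: 10-35 C). The real ASHRAE envelopes also
# bound dew point, which this simplified sketch ignores.
ALLOWABLE_DRY_BULB_C = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
}

def allowable_classes(dry_bulb_c, rh_percent, rh_range=(20.0, 80.0)):
    """Return the classes (of the two above) whose simplified allowable
    envelope contains the measured inlet condition."""
    rh_ok = rh_range[0] <= rh_percent <= rh_range[1]
    return [
        cls for cls, (lo, hi) in ALLOWABLE_DRY_BULB_C.items()
        if rh_ok and lo <= dry_bulb_c <= hi
    ]

# A 33 C (91.4 F) inlet at 60% RH is outside A1 but still within A2.
print(allowable_classes(33.0, 60.0))   # -> ['A2']
print(allowable_classes(25.0, 50.0))   # -> ['A1', 'A2']
```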

It is important to remember one of the original reasons why data centers have had “Tightly Controlled Environmental Conditions”. In the early days, paper was a “critical component”, in the form of punch cards, paper tape, and printouts. Tight humidity control in particular helped prevent paper jams (the near equivalent of the blue screen). While most data centers today use fewer punch cards to enter data, the mindset of tight humidity control is still embedded in the collective psyche.

The creation of the ASHRAE Technical Committee “TC 9.9” included IT equipment manufacturers (but no paper manufacturers), who helped raise the “Recommended” operating range in 2008 up to 80.6°F (27°C), still with a relatively tight humidity range; yet hardly anyone in the mainstream data center community ran out and changed the temperature much (if at all) from the traditional 68°F (20°C). Then, between the energy crisis and the ever-increasing cost of energy, came The Green Grid, which created the PUE metric in 2008 and finally got many to consider energy efficiency, even if it was only lip service initially.

Then came the mavericks at Google, Yahoo, and more recently Facebook, who did not feel the need to comply with any standards, ASHRAE or otherwise, nor buy “industry standard” cooling equipment. They ignored convention and proved that IT equipment would not have a meltdown beyond 68°F (20°C), could even survive 80% humidity from “adiabatic cooling”, and would still operate using the same air that we humans breathe: outside air, aka direct “Free Cooling”.

Moreover, last week in early March, The Green Grid (TGG) released whitepaper #46, “Updated Air-Side Free Cooling Maps: The Impact of ASHRAE 2011 Allowable Ranges”, which reflects the updated ASHRAE classes A2-A4 and the potential number of hours per year that air-side economizers can be used. The projections are startling; for example, the original 2009 US version of the TGG maps showed that Florida could expect 1,000 hours per year of economizer operation, while the 2012 version shows 5,000 hours for A2 equipment (based on 95°F/35°C) and 8,000 hours for A3 equipment (104°F/40°C), as and when such equipment becomes available.

The Green Grid whitepaper summarizes it as:

“The new Class A2 maps below show that 75 percent of North America is covered by the 8500+ hours per year color. In Europe, the A2 Allowable range results in 99 percent of locations being able to use free cooling all year. The only locations in Europe that cannot use 100 percent free cooling are a small area in northwestern Spain (too hot), a small area in southwestern Ireland (too humid), and a small area in Sicily. In Japan, 14 percent of locations can use free air cooling every hour of the year if data center operators allow temperatures and humidity in the A2 Allowable range.”
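
As a rough illustration of the arithmetic behind such maps: the hour counts come from tallying how many hours of local weather data fall within a given allowable envelope, and 8,760 hours make a full year. The sketch below is a deliberately simplified, hypothetical version that considers dry-bulb temperature only (the actual TGG methodology also accounts for humidity); it also shows how 8,000 and 8,500 qualifying hours work out to roughly 91 and 97 percent of the year.

```python
# Simplified, hypothetical version of the bin counting behind air-side
# free-cooling estimates: count the hourly dry-bulb readings at or below a
# class limit. The real TGG/ASHRAE maps also account for humidity.
HOURS_PER_YEAR = 8760

CLASS_DRY_BULB_LIMIT_C = {"A2": 35.0, "A3": 40.0}

def economizer_hours(hourly_dry_bulb_c, limit_c):
    """Count the hours in which outside air alone stays within the limit."""
    return sum(1 for t in hourly_dry_bulb_c if t <= limit_c)

def percent_of_year(hours):
    return 100.0 * hours / HOURS_PER_YEAR

# Example with made-up readings: a year that never exceeds 30 C qualifies
# for every hour under either limit.
mild_year = [30.0] * HOURS_PER_YEAR
print(economizer_hours(mild_year, CLASS_DRY_BULB_LIMIT_C["A2"]))  # -> 8760

# And the hour counts quoted in the article translate to percentages:
print(round(percent_of_year(8000)))   # -> 91
print(round(percent_of_year(8500)))   # -> 97
```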

I previously interviewed HP about their new G8 servers regarding the expanded ASHRAE classes, and they stated that while the new server was not offered as A3 equipment, it was fully compliant with A1 and A2. Here was their response:

“The A1 and A2 classes in 2011 are the same as in Class 1 and 2 in previous ASHRAE releases so yes, HP ProLiant Gen8 would meet those requirements.  What ASHRAE added was A3 and A4 to provide datacenter users expanded guidelines with respect to their datacenter environment.  In general most IT equipment is still designed to operate at a max of 35°C so is accounted for in A1 and A2 of the guideline.”

Just as a reminder for those who only follow the ASHRAE guidelines: the telco industry (which “invented” and has always operated based on the proverbial “five 9s”) has always used a different standard, NEBS, which has a much wider temperature and humidity range, effectively what ASHRAE now considers A4. IT manufacturers have been offering NEBS-rated versions of standard IT equipment for decades (telco buys a lot of IT gear), so it can be done; it is a question of price and, in most cases, primarily more airflow.

In fact, when I originally asked HP if the G8 was designed to go beyond A2, they responded:

“HP endeavors to deliver the highest quality of service to our customers so we closely monitor customer needs.   For certain Hyperscale and Telco customers, we currently support our equipment in higher temperature deployments.  For example, select models of the HP ProLiant DL360, DL380 and HP BladeSystem c7000 products are NEBS Level 3 certified and have a short term operating temperature limit of 55 degrees C.

However, as you know, raising the temperature in the data center, while it has the potential to reduce total cost of ownership, must be done very carefully to avoid unintended consequences. For example, raising the ambient temperature reduces chiller power, but increases IT self cooling power (fans) and semiconductor leakage power. There is no consensus in the industry on the best operating temperature but we continue to aggressively investigate the various alternatives. Today, the vast majority of our customers maintain the 35C degree environment to ensure business continuity.”
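
HP’s caution points at a real optimization problem: total facility power is the sum of cooling power, which falls as the setpoint rises, and server fan plus semiconductor leakage power, which rise. The sketch below is purely illustrative, with made-up coefficients that are not based on any HP or ASHRAE data; it only shows why the “best” operating temperature is the minimum of a curve rather than simply “as warm as possible”.

```python
# Purely illustrative model with made-up coefficients: total facility power
# as a function of the supply-air setpoint. Chiller/economizer power falls
# as the setpoint rises, while server fan power (roughly cubic in fan speed)
# and semiconductor leakage rise. None of these numbers come from HP or ASHRAE.
def total_power_kw(setpoint_c, it_load_kw=1000.0):
    cooling = it_load_kw * max(0.05, 0.45 - 0.012 * (setpoint_c - 18))   # shrinks when warmer
    fans    = it_load_kw * 0.05 * (1 + ((setpoint_c - 18) / 10.0) ** 3)  # grows steeply when hot
    leakage = it_load_kw * 0.01 * (1 + 0.03 * (setpoint_c - 18))         # mild increase
    return it_load_kw + cooling + fans + leakage

# Sweep candidate setpoints and report the one with the lowest total power;
# with these fictional coefficients the optimum lands in the high 20s C,
# not at the hottest allowable setting.
best = min(range(18, 41), key=total_power_kw)
print(best, round(total_power_kw(best)))
```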

 

While some industry stalwarts may remain dismissive of the non-conforming designs built by the social media and internet search cadre, as well as of the wider operating conditions discussed by ASHRAE, the TGG Free Cooling Update also cited Deutsche Bank, which built a data center in the NY metro area in early 2011. The original design projected nearly 100% free cooling based on the local bin weather data for the area.

I had the opportunity to write about this project originally and have now revisited it. I recently spoke with Andrew Stokes, chief scientist at Deutsche Bank, to see what the actual operating results were. He shared that despite the near-record heat wave that NYC experienced last summer (many days in excess of 100°F/38°C), the design could have run mechanical-free 97% of the year (the actual mechanical runtime for this first year was higher, due to some teething problems that have since been ironed out).

Interestingly, the new TGG A2 map shows the NY metro area as offering 8,000-8,500 hours (91-97%) of free cooling. Stokes also imparted that they allowed the conditions in the cold aisle to rise to 85°F and 80% RH. In addition, he noted that they had observed no higher IT equipment failure rates after approximately one year of operation. They are running all off-the-shelf standard IT equipment: blade servers and network gear, with cabinets at 20 kW or more. So kudos to Andrew Stokes and Deutsche Bank for building the project, and even more so for sharing the information.

The Bottom Line

So while you won’t really find any marked-down closeout sales for servers listed as “A1 only”, start rethinking your operating conditions using standard IT equipment environmental specifications. While I don’t suggest that we simply raise the operating conditions of “traditional” data centers to the A2 limits, it is time the industry began to give serious thought to designing new sites that incorporate some form of economization. Ultimately it will still be up to the facility operators and the IT teams.

The choice of what environmental limits they feel “comfortable” with will control the decision, but hopefully ASHRAE’s newly expanded guidelines will help them make a reasonable and rational trade-off between the energy savings projected by the updated TGG Free Cooling maps and a sensible approach to operational reliability.