As I write this in early July, we are about to move into the dog days of summer, anticipating the usual issues that seem to appear when cooling systems are pushed to their limits, and sometimes beyond. Of course, in the data center world "failure is not an option" is not just a phrase, it is a mandate, and if your site has a failure that causes an outage it may even make headlines, as has been the case for some high-profile sites.

I sit pondering different data centers and their various cooling system types as they face maximum cooling requirements from ever-rising heat loads and higher densities, while operating under worst-case ambient conditions. Yet they all still need to deliver good annualized energy efficiency across the widely varying heat loads and the broad temperature and humidity ranges they encounter over the entire year, not just during the peaks of summer.

In my last column I pontificated on what would become the new de facto safe operating environmental conditions going forward, in light of the release of the 2011 ASHRAE TC 9.9 expanded thermal guidelines. Besides offering wider environmental options, the guidelines also included x-factor tables that could be used to project the reliability risk of using free cooling. These charts were based on allowing air intake temperatures to exceed the recommended temperatures and rise to the limits of the allowable range over a certain number of days during the course of a year.
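For readers who want to see how those x-factor tables translate into a projection, here is a minimal sketch of the time-at-temperature weighting approach. The temperature bins and x-factor values below are hypothetical placeholders, not ASHRAE's published figures; substitute the actual TC 9.9 tables for any real planning exercise.

```python
# Sketch: time-weighted relative failure rate ("x-factor") projection.
# Bins and x-factor values are illustrative placeholders only; use the
# published ASHRAE TC 9.9 tables for real reliability planning.

# (inlet temperature bin in °F, hypothetical relative failure rate)
X_FACTORS = [
    ((59, 68), 0.87),   # cooler than the traditional setpoint
    ((68, 77), 1.00),   # baseline: continuous operation near 68°F
    ((77, 86), 1.13),   # warm end of the allowable range
    ((86, 95), 1.34),   # excursions toward the A2 limit
]

def weighted_x_factor(hours_per_bin):
    """hours_per_bin: hours per year spent in each temperature bin."""
    total_hours = sum(hours_per_bin)
    weighted = sum(h * x for h, (_bin, x) in zip(hours_per_bin, X_FACTORS))
    return weighted / total_hours

# Hypothetical year: mostly baseline hours, some warm excursions.
print(weighted_x_factor([1000, 6000, 1400, 360]))  # ~1.02, i.e., ~2% added relative risk
```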

Of course, not all data centers operate under the same operational or design goals or have the same redundancy levels, but historically virtually all have tried to operate within the recommended ASHRAE Class 1 (now A1) equipment guidelines of 65°F to 80°F. More likely, most have simply pinned the temperature setting at 68°F and held humidity at a tightly controlled 50 percent relative humidity (RH).

Over the last few years, we have used and heard the term free cooling with increasing frequency. Of course, the correct term is economizer system, which can be part of a water-cooled evaporative system, an air-cooled system, or even a combination of both. In particular, the direct fresh-air economizer is making those ultra-low power usage effectiveness (PUE) headline-generating numbers possible. It is primarily associated with internet search and social media organizations, whose service-level agreements (SLAs) are not really defined (how much did you pay for your last search or social media page?) but are nonetheless real.
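As a quick refresher on what actually drives those headline PUE numbers, here is a minimal sketch of the calculation. The annual energy figures are hypothetical, chosen only to show how heavy economizer use pulls the ratio down.

```python
# Sketch: power usage effectiveness (PUE) = total facility energy / IT energy.
# All annual kWh figures below are hypothetical, for illustration only.

it_energy_kwh     = 8_760_000   # 1 MW of IT load running all year
cooling_kwh       = 1_100_000   # economizer-assisted cooling
power_losses_kwh  =   350_000   # UPS losses, distribution, lighting, etc.

total_facility_kwh = it_energy_kwh + cooling_kwh + power_losses_kwh
pue = total_facility_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")       # ~1.17 with heavy economizer use
```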

Of course, your own organization may have very well-defined and tight SLA requirements, especially if you are a large enterprise, a financial firm, or a colo whose customers expect a very stable environment. Alternately, you could be somewhere in between, offering an IaaS or SaaS platform or other cloud-based services, in which case your SLA is really based on the consistent availability of service delivery. Your servers could be running anywhere from 0°F to 150°F, and as long as your service offerings continue to be available, your customers could not care less about ASHRAE's recommendations.

So when cooling system designers look at historic weather conditions at a selected location (or have been asked to help evaluate the site), they must balance the need to meet the worst-case outside conditions (highest temperatures and humidity) against the more typical average conditions, to deliver a design and equipment specification list that is both cost effective (CapEx) and provides a good overall energy-efficient solution (OpEx). Yet they are still forced to over-design the system so that it can support the data center at full load and hold internal temperatures to 68°F on those very rare (but not unknown or unforeseen) extremely hot and/or humid days, even in the event of a cooling component failure.

So is it really possible to have a low-calorie, free-lunch diet: take advantage of designs that rely primarily on fresh-air cooling, accept the wide temperature and humidity swings, and still have the safe operating conditions that a traditional closed-loop cooling system offers?

The simple answer is yes, with some caveats; it is clearly technically possible, and actually practical, to use direct outside-air free cooling as the primary cooling methodology, supplemented with adiabatic cooling. In fact, The Green Grid published a 2012 updated set of maps based on the ASHRAE 2011 expanded thermal guidelines, which showed that even when staying within the A2 allowable environmental parameters (50°F to 95°F, 20 to 80 percent RH; virtually all modern IT equipment except tape is rated A2), 75 percent of U.S. sites could operate on air-side economizers for 97 percent of the year. Moreover, 99 percent of European locations would be able to operate on free cooling all year long.
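To get a feel for how those Green Grid percentages are derived, here is a minimal sketch that scores a year of hourly weather observations against the A2 allowable envelope. The data format and the simple dry-bulb/RH test are my simplifying assumptions; the real guidelines also involve dew-point limits, which are omitted here.

```python
# Sketch: fraction of the year a site could run on outside air alone,
# judged against the ASHRAE A2 allowable envelope (50-95°F, 20-80% RH).
# Assumes hourly (dry_bulb_f, rh_percent) observations are already in hand;
# real analyses also apply dew-point limits, omitted for brevity.

def a2_free_cooling_fraction(hourly_obs):
    """hourly_obs: iterable of (dry_bulb_f, rh_percent) tuples, one per hour."""
    obs = list(hourly_obs)
    ok = sum(1 for t, rh in obs if 50 <= t <= 95 and 20 <= rh <= 80)
    return ok / len(obs)

# Hypothetical mild-climate site, 8,760 hourly readings.
sample = [(72, 45)] * 8500 + [(98, 30)] * 160 + [(45, 85)] * 100
print(f"{a2_free_cooling_fraction(sample):.1%} of hours in the A2 envelope")  # 97.0%
```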

However, if you are more conservative in your equipment intake temperature ranges, you will need some form of secondary/backup mechanical cooling for those days that are just too warm for direct outside air alone (even when used in conjunction with adiabatic cooling). This has a double advantage: it allows you to maximize the number of safe free-cooling days (those directly within the ASHRAE recommended envelope) and to trim the incoming air temperature or humidity with a minimal amount of mechanical cooling energy, while still being able to revert completely to full mechanical cooling when environmental conditions require it. Also, not everyone is (or ever will be) comfortable with the use of direct adiabatic cooling of internal air, especially after 50 years of having CRACs begin dehumidifying whenever the internal humidity crept past 55 percent RH.
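The control logic behind that hybrid approach can be summarized as a simple mode selector. This is a conceptual sketch with hypothetical thresholds, not any vendor's actual sequence of operations; a real controller would also weigh dew point, filter status, and equipment staging.

```python
# Sketch: economizer mode selection for a hybrid free-cooling design.
# Thresholds are hypothetical placeholders, not a vendor control sequence.

def select_cooling_mode(outside_f, outside_rh, supply_setpoint_f=75):
    if outside_f <= supply_setpoint_f and 20 <= outside_rh <= 80:
        return "free_cooling"       # 100% outside air, fans only
    if outside_f <= supply_setpoint_f + 10:
        return "trim"               # outside air plus partial mechanical/adiabatic assist
    return "full_mechanical"        # revert to closed-loop DX or chilled water

# A cool morning, a warm afternoon, and a heat-wave peak:
for conditions in [(58, 45), (82, 50), (97, 60)]:
    print(conditions, "->", select_cooling_mode(*conditions))
```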

And of course, even adiabatic cooling is not free, since it uses water, which is not just a recurring OpEx cost. It should now be part of your overall operational efficiency calculations via the water usage effectiveness (WUE) metric, introduced by The Green Grid in 2011. (WUE applies to any water usage, such as the traditional evaporative cooling towers that help make water-cooled chillers more energy efficient than air-cooled systems.)
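The WUE calculation itself is straightforward: liters of site water per kilowatt-hour of IT energy. Here is a minimal sketch with hypothetical annual figures.

```python
# Sketch: water usage effectiveness (WUE) = annual site water use (liters)
# divided by annual IT energy (kWh). Figures below are hypothetical.

annual_water_liters = 5_000_000   # adiabatic/evaporative make-up water
it_energy_kwh       = 8_760_000   # 1 MW of IT load running all year

wue = annual_water_liters / it_energy_kwh
print(f"WUE = {wue:.2f} L/kWh")   # ~0.57 liters per IT kWh
```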

THE BOTTOM LINE

However, while this will clearly allow you to have your cake and eat it too from an energy-efficiency perspective, this dessert comes with a price. You still have the CapEx cost of a full mechanical cooling system (including the electrical system capacity to support it, with utility feeder and gen-set up-sizing), in addition to the cost of the air-side economizer system and the integrated control system. Nonetheless, this still offers a huge opportunity for cooling system energy savings, plus the added advantage that, in the event of a failure of the mechanical system, the air-side economizer can still ventilate the hot-aisle exhaust to prevent or mitigate the severe over-temperatures that would have occurred during a major failure in a traditional re-circulating, closed-loop cooling system.

I believe that as more mainstream data centers begin using this methodology, it will become a more common design for sites whose locations can make effective use of ambient conditions. It is not just for headline-making social media organizations like Facebook and its new data centers; even financial stalwarts like Deutsche Bank have begun to test these designs, and other well-known and respected firms are following suit: HP's EcoPod, a high-density, self-contained modular data center, uses this combination. These are no longer lab experiments or proof-of-concept sites; they are in production today handling live data.

So unlike most diets, which are just passing fads that do not really work in the long run, I think this combination of a low-calorie free lunch, even if you need to buy the higher-calorie dessert, offers data center operators the ideal balance: a low-energy diet with all the safety of a traditional balanced meal plan.