I would like to introduce myself and my first “Hot Aisle Insight” column here at Mission Critical. I have read Mission Critical for many years and occasionally contributed articles. I am honored to have been asked to join as a regular columnist, in addition to writing my blog on the website. I hope to cover the trends and technology of infrastructure design, as well as developments in the IT equipment itself (which can obviously impact the design of the data center power and cooling infrastructure). Lest we forget, supporting the computing hardware is the ultimate “mission” of the mission critical data center. I hope to keep it topical and technically interesting, yet with a bit of skepticism and my personal commentary and opinion. I already posted my 2012 predictions on my blog, but I thought that I would expand on what I foresee for 2012.


I hear the sound of tectonic plates shifting in the computing world. In the last months of 2011, we saw some interesting alliances formed. Imagine vendor-based groups forming alliances with customer-based organizations. Can you picture mavericks from social media collaborating with the belt-and-suspenders, big-business crowd? What’s next, cloud computing everywhere, yet the actual computing hardware never to be seen again by mere mortals? All hardware to be hidden away in faraway hyperscale data centers whose operators will also become carbon-credit arbitrage traders?

As we move from “traditional” computing (a shifting term in itself) to virtualized and cloud computing, the old rules and norms seem to be dissolving rapidly.

What am I ranting about? In early November, The Green Grid (TGG)—IT and data center vendors’ leading voice for advancing resource efficiency in data centers and business computing ecosystems—and the Open Data Center Alliance (ODCA)—the leading end-user driven cloud requirements consortium—announced a strategic collaboration at Cloud Computing Expo 2011 West.

On what are they going to focus their first collaboration efforts? The carbon produced by cloud computing. What, you did not realize that cloud computing uses real energy and produces carbon just like “real” computing?

The ODCA and TGG joint effort brings together the leading customer voice on cloud computing and the global authority on resource-efficient data centers and business computing ecosystems. The ODCA, a group of more than 300 companies that represent over $100 billion in annual IT spending, recently published the first customer-driven requirements for the cloud with the release of its initial usage models.

TGG, which was launched in 2007 mainly by the major manufacturers of data center infrastructure equipment and computing hardware, is now a global consortium focused on driving resource efficiency in business computing by developing meaningful, user-centric metrics to help IT and facilities better manage their resources. Its first efforts resulted in the introduction of the power usage effectiveness (PUE) metric for data center physical infrastructure, which has since evolved into PUE version 2, a globally accepted metric. In December 2010, TGG introduced the carbon usage effectiveness (CUE) metric, which is again based on the physical data center.

How cloud computing corresponds to actual data center power usage is the key question at hand, and the initial focus of the collaboration. In an email interview, Mark Monroe, executive director of TGG, commented, “The alliance between ODCA and The Green Grid will result in user-centric work focused on the efficiency of cloud computing in real world application scenarios. The strengths of the two organizations, when combined, cover the full spectrum of efficiency and operational excellence in the emerging field of cloud computing.”

ODCA was founded in 2010 by major global business customers, and it is highly focused on cloud computing. The ODCA claims to represent $100 billion in IT purchasing power, which could bring new meaning to “collective bargaining.” As we all know, money talks, especially in today’s economy.

So what about the tectonic plates? Well, surprisingly, in late October the big-business, financially conservative ODCA also announced that it was collaborating with the Open Compute Project (OCP). OCP was formed as an offshoot of Facebook’s innovative, but maverick, Prineville, OR, data center design, in which Facebook entirely re-invented the center’s power and cooling infrastructure and even built its own unique, non-standard (e.g., 1.5U) servers and racks, using 277 Vac as primary power and 48 Vdc for rack-level battery backup. Moreover, Digital Realty Trust, a major co-location provider, joined OCP and is offering to build OCP-compliant suites, or even data centers, for its clients.

And not to be overlooked, the 2011 update of the ASHRAE TC 9.9 Thermal Guidelines is a potential game changer, with the stated goal of eliminating the use of mechanical cooling whenever and wherever possible, primarily through the wide-ranging use of airside economizers, with allowable equipment air inlet temperatures of up to 113°F (not a typo: Class A4). It is nothing less than an open challenge to the legacy thinking that holds the data center sacred as a bastion of tightly controlled environmental conditions, potentially rendering “precision cooling” an archaic term.

Clearly, not everyone will suddenly rush to run 95°F or more in the cold aisle (will that term become an oxymoron?) and virtually abandon humidity control (think 8 to 95 percent RH). However, it may cause many to re-evaluate the need to tightly control environmental conditions in the data center. Others will still keep the temperature at a “traditional” 68°F to 70°F and 50 percent RH (complete with “battling” CRACs trying to hold the humidity within ±5 percent), wasting huge amounts of energy to support the perception that the reliability of IT equipment will be impacted if the temperature even approaches 77°F (the 2004 recommended limit) or the humidity fluctuates.

In fact, the new ASHRAE guideline has gone so far as to put forth what once would have been considered pure heresy: the “X” factor, which introduces a scenario of assuming and accepting a certain amount of IT equipment failure as an expected part of allowing far broader environmental conditions in the data center.


And 2011 also brought forth many new metrics. Even PUE is now PUE version 2; while the acronym still stands for power usage effectiveness, it now relates to annualized energy. TGG added more metrics besides CUE, such as pPUE, WUE, ERE, and DCcE, with still more to come, as well as the Maturity Model.
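For readers who have not worked with these metrics, the underlying arithmetic is straightforward. Here is a minimal sketch in Python using hypothetical annual figures (the variable names and numbers are illustrative, not from any real facility): PUE version 2 divides total annualized facility energy by IT equipment energy, and CUE divides total carbon emissions by that same IT energy.

```python
# Sketch of the core TGG metric arithmetic (hypothetical annual figures).
# PUE v2 is based on annualized energy, not an instantaneous power snapshot.

total_facility_energy_kwh = 10_000_000  # all energy entering the facility
it_equipment_energy_kwh = 6_250_000     # energy delivered to the IT equipment
total_co2_kg = 5_000_000                # carbon emissions from all energy sources

# PUE: how much total energy the facility uses per unit of IT energy (ideal = 1.0)
pue = total_facility_energy_kwh / it_equipment_energy_kwh

# CUE: kilograms of CO2 emitted per kWh of IT equipment energy
cue = total_co2_kg / it_equipment_energy_kwh

print(f"PUE = {pue:.2f}")            # 1.60
print(f"CUE = {cue:.2f} kgCO2/kWh")  # 0.80
```

A PUE approaching 1.0 means nearly all energy goes to the IT equipment itself rather than to cooling and power-delivery overhead, which is exactly what the airside-economizer designs discussed above aim for.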

And to help measure and track all those metrics, look to data center infrastructure management (DCIM) systems. DCIM, as a term and category, only came into being in 2010, began to emerge in 2011, and will explode in 2012. DCIM sales will skyrocket as data center facilities and IT managers look for ways to share information and manage for a common goal—energy efficiency and optimization by collectively coordinating the use of limited resources (constrained CapEx, OpEx, and energy resources). Picture facilities and IT all together singing “Kumbaya,” assuming your imagination can stretch that far.

Moreover, while one part of the industry moves toward ever larger, super-sized mega centers, others think in smaller modular terms in the form of containerized data centers, which offer “near perfect” PUE numbers approaching 1.0 (if you believe the marketing department hype) as well as rapid deployment and flexible growth. In addition to the modular data centers and containers from the likes of IBM, HP, and Dell, power and cooling modules are being offered by the infrastructure manufacturers as well.


So the typical data center full of “standard” CRACs and racks may give way to the next generation of hyperscale computing, driven by social media and search, housed in mega-sized data centers, in rows of modular containers in a parking lot, or both, with many utilizing free air cooling (imagine servers that can tolerate the same outside air as humans). These new designs may look radically different from today’s hot-aisle/cold-aisle data centers, which could make our current data centers seem as out of date as the old “legacy” mainframe glass house looks to us today.

The formerly conservative ASHRAE is now openly advocating free cooling, and ODCA members are using their purchasing clout to influence equipment manufacturers, making serious long-term commitments, and bringing sustainability thinking to cloud-based services. They are all well aware that there will still have to be real computing hardware running reliably in data centers somewhere (using real energy, with a related carbon footprint). Nonetheless, while some changes in the name of efficiency can be good, it is important to remember that everything has a price, and that ultimately there is no such thing as a carbon-free lunch.