Mission Critical is now celebrating its fifth anniversary. However, before you assume the editors didn’t catch the extra zero in the title, “50th Birthday,” let me note that the rate of change in the industry has been accelerating, especially over the past five years. The number of changes that now occur in a single year appears to be on the order of a decade’s worth when compared to the earlier, relatively stable, 60-year history of the data center.

However, before I get into the originally planned thread of this column, I also want to point out the sad irony that Superstorm Sandy hit the New York metro area and turned parts of NYC and some of the surrounding areas into swampland on the eve of the fifth anniversary of Mission Critical, whose cover bears the tagline “Data Centers and Emergency Backup Solutions.” Sandy shut down power in the lower half of Manhattan for five days, including at the New York Stock Exchange (NYSE), which remained closed for an unprecedented two days and required emergency back-up generator power to finally re-open and operate for the balance of the week.

Clearly, the storm created an emergency far greater than losing utility power for a few hours. Many underlying critical infrastructure systems that were previously taken for granted as secure and reliable, even under prior adverse events, failed.

There are many lessons to be learned (or re-learned) from this storm, and Sandy is far from the only recent major weather- or geologically-related event to have impacted data centers, critical infrastructure, and the general population. Katrina in the U.S., Japan’s tsunami, and the volcanic eruption in Iceland all come to mind.

The severe impact of Sandy on New York City, in particular, should be a wake-up call, especially because of the concentration of data centers in an area of the country thought to be relatively immune to catastrophic system damage from weather, and one that had been further hardened after the September 11, 2001, terror attacks. The disaster is a stark reminder of the core requirement of the data center, availability, and of one of its key supporting pillars, autonomy, even in the most severe emergency conditions. In some cases, so much of the general infrastructure was damaged that even fuel availability and delivery to back-up generators became a severe problem for data centers, general businesses, and the civilian population. I am sure that many blogs, columns, and white papers will be written and published about this in the coming months, in Mission Critical and other industry publications.

Getting back to the main topic of this anniversary column: what has happened in the data center industry since Mission Critical was founded five years ago? 

The Green Grid was formed that same year and became the unifying voice for data center energy efficiency by developing the power usage effectiveness (PUE) metric, many subsequent metrics, and a steady stream of white papers. The Green Grid grew quickly and became a globally recognized organization.
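For anyone less familiar with the metric, PUE is simply the ratio of total facility energy to the energy actually delivered to the IT equipment, so a perfect score is 1.0. Here is a minimal sketch of the arithmetic; the kilowatt figures are hypothetical and only illustrate the calculation.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power.

    A value of 1.0 would mean every watt entering the facility reaches the
    IT equipment; anything above 1.0 reflects cooling, power distribution,
    lighting, and other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be a positive value")
    return total_facility_kw / it_equipment_kw


# Hypothetical example: 1,500 kW entering the facility, 1,000 kW at the racks.
print(f"PUE = {pue(1500.0, 1000.0):.2f}")  # PUE = 1.50
```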

ASHRAE’s environmental recommendations have changed three times since 2007 (the TC 9.9 thermal guidelines were revised twice within three years), and in its 2011 Expanded Thermal Guidelines, ASHRAE boldly stated a new goal: ending the need for mechanical-based cooling wherever possible in new data centers in order to save energy.

Virtualization took hold and broadly expanded, and while it did help consolidate server hardware, overall growth in the demand for computing resources meant the total IT load in data centers was not reduced. In fact, these loads continued to grow, and heat levels rose as more blade servers and 1U servers began to populate the racks. Moreover, since it was no longer necessary to purchase an actual server to “create” a server, “server sprawl” became a management problem. Nonetheless, virtualization continues to offer many advantages, even as it is deployed across more hardware and systems.

Energy Star for Servers was born in 2009 and drove improvements in hardware efficiency. The Energy Star program was then expanded in 2010, making it possible to certify an entire data center.

Perhaps in response to the ever-expanding size and scale of data centers, data center infrastructure management (DCIM) software and hardware seemed to come out of nowhere. While the definition of DCIM remains confusing, DCIM grew from zero to a multi-billion-dollar category and continues to grow as vendors, owners, and operators try to optimize the management and efficiency of the entire data center.

Stepping outside of the facilities environment and into the daylight, let’s not forget the “cloud,” a nebulous term at best, describing almost anything related to delivering computing services “magically” everywhere. The hype surrounding the cloud, which is still in its relative infancy, unfortunately gives the perception that we no longer need physical data centers. Coupled with the explosion of mobile computing devices and services, it reinforces the general perception that data no longer needs to be stored locally but is, of course, always available (there’s that word again) from anywhere, which only further adds to the need for expanded wireless data carrier capacity and bandwidth as mobile computing becomes ubiquitous.

Internet search and social media exploded over the past five years, and Google, Yahoo, and Facebook became part of most of our personal and business lives. Facebook designed its first data center in 2010 and built it in 2011, breaking virtually all the conventional rules and standards for the building itself, the power and cooling infrastructure systems, and the IT hardware, all in an effort to achieve, via free cooling, a PUE as close as possible to 1.0, seen as the “holy grail” of energy efficiency. And while Google and Yahoo had previously released only snippets of general information about their unique and proprietary designs, Facebook created the Open Compute Project, published complete technical details about the project, and invited other companies to share and contribute. The word “adiabatic” also crept into our lexicon.

The Open Data Center Alliance (ODCA) was also created in 2010, but unlike the non-conformists at Facebook, ODCA members came primarily from the upper echelons of the ultra-conservative financial and manufacturing world. The original primary focus of the ODCA was the standardization of the cloud as a viable resource for mainstream big business. Yet in late 2011, the ODCA formed a working relationship with the Open Compute Project to liaise and cross-share information regarding facility-side and hardware developments.

More recently, in late September, the New York Times took a swipe at the data center industry for its poor use of energy. The article was highly biased and cited outdated or unrelated examples. However, underlying it all was the unspoken issue of the inherent inefficiency of redundant systems. Four days later, Congress wrote an open letter to the DOE and EPA demanding an explanation of how they “allowed” this to happen and what was going to be done to “fix” the problem. While the full repercussions of the Congressional letter have yet to be seen, the industry is waiting for the other shoe to drop.

THE BOTTOM LINE

While there is still some contention about how much global warming will impact the world, it is no longer a matter of “if.” Planning based on 100-year flood zones may no longer be considered conservative enough. The evaluation of any potential data center or other critical infrastructure site is no longer a cut-and-dried exercise. Geographic diversity for replicated or back-up sites is no longer a simple question. In fact, I just checked the U.S. Government Flood Insurance site for New York City, and as of November 1, 2012, the next update of the flood zone map, which had been scheduled for January 2013, was marked “on hold.”

By no means is this meant to be a comprehensive list of the technical developments, trends, and new practices of the last five years. The data center industry has evolved in many ways and continues to grow in size, shape, and scale as it strives to improve itself across the board, including in energy efficiency. Yet we must never forget the basic reason for the data center: to ensure that mission critical loads are always available, regardless of what happens. Given the recent and more frequent catastrophic weather-related events (and not just in the NY metro area), we all need to review and perhaps re-evaluate our basic assumptions.

So, with that said, on the fifth anniversary of Mission Critical, hats off to its founding editor, Kevin Heslin, and its publisher, BNP Media, as well as my fellow columnists and the many other contributors who will continue to deliver the latest developments, opinions, and advice for our industry.