The Green Data Center Conference 2012, hosted by the Global Strategic Management Institute (GSMI), took place January 31 through February 2 in San Diego, California.  As in prior years, one of the centerpieces of the event was a tour of the San Diego Supercomputer Center at UC San Diego; this year the event was held at the Sheraton Mission Valley, a much more central location, but GSMI still offered tours of the center.

In addition to leading one of the pre-event workshops and giving a presentation on data center site selection, KC Mares of Megawatt Consulting delivered the keynote, which set the tone and direction for the entire conference, and a great direction it was.  KC's vast knowledge of data center design, functionality and sustainability is communicated from experience, not just from an understanding of the theories that have been postulated.

Christine Page, Yahoo!'s Global Director of Energy & Sustainability Strategy, gave a great presentation that leveraged, but didn't rehash, the famed Chicken Coop presentation.  Most notable were a number of great thoughts about achieving sustainability.  Citing William McDonough's book, "Cradle to Cradle," Christine postulated that we should "design more good instead of less bad."  Christine also shared a number of fascinating anecdotes, including one about how the data center humidity standard came into being.

In the era of early mainframes, punch cards were used for programming.  Because the punch cards were made of paper, it was determined that humidity would interfere with their use in the computers, so a standard keeping data center relative humidity within 45% – 55% was put into place.  Modern-day manufacturers now recommend a far broader range of 0% – 90%.  Though not earth-shattering, Christine also connected the dots on why cloud computing is so green: it creates economies of scale, allows diversity and aggregation of loads from a variety of users, fosters better server utilization and greater flexibility, provides greater reliability with less backup by individual physical servers and, finally, ends the division between IT and facilities.

Kevin Donovan of the National Renewable Energy Lab (NREL) in Colorado gave a remarkable presentation about their net-zero carbon footprint project, a design-build that incorporated a 2.6 MW photovoltaic array.  Focused on efficiencies in every aspect of design and utilization, they currently run a 9.8:1 blade-to-virtualization ratio and are seeking to achieve a 20:1 ratio.  At every opportunity they are using energy-efficient equipment, consolidating more servers, right-sizing their IT infrastructure and measuring everything.  This last piece allows them to track and manage their data center energy consumption, a practice driven by making energy consumption part of the total cost of ownership.  As part of the design process, Kevin suggested standardizing all of the equipment supporting the data center.  Of course, this is not a new idea, but at NREL a very strict discipline enforcing this practice was applied.  One last statistic that Kevin cited was that in 2009, worldwide data center energy usage was greater than the energy used by the entire nation of Sweden.
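
Folding energy into TCO is simple to model.  Here is a minimal sketch of the idea, not NREL's actual model; the purchase price, power draw, PUE and utility rate below are assumed placeholder values.

```python
# Minimal sketch of folding facility energy into server TCO.
# All inputs are assumed placeholders, not NREL figures.
def server_tco(purchase_price, avg_watts, pue, rate_per_kwh, years):
    """Capital cost plus the facility energy cost attributable to the server."""
    it_kwh_per_year = avg_watts / 1000 * 8760      # IT-side energy per year
    facility_kwh_per_year = it_kwh_per_year * pue  # add cooling/distribution overhead
    energy_cost = facility_kwh_per_year * rate_per_kwh * years
    return purchase_price + energy_cost

# Example: a $6,000 server drawing 350 W in a PUE 1.5 facility at $0.08/kWh over 4 years
print(f"${server_tco(6000, 350, 1.5, 0.08, 4):,.0f}")   # ≈ $7,472
```

Even with those modest assumptions, energy adds roughly a quarter on top of the purchase price, which is exactly why measuring everything pays off.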

I have seen a number of presentations about the benefits of liquid cooling, as it takes far less energy to move liquid to cool an environment than it does to move air with fans, even fans with VFDs.  David Filas, Data Center Engineer with Trinity Health, highlighted a glycol-based liquid cooling system that Trinity installed using a rear-door heat exchanger on a 15 kW rack, which returns 69-degree air to the aisle.  They're also implementing a program to use more efficient servers and storage arrays to reduce overall energy consumption and heat generation.  This liquid cooling design is an absolutely brilliant solution that is readily available and produces a very low PUE in real-life situations.
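
To put the air-versus-liquid argument in rough numbers, here is a minimal sketch of the heat-removal math behind a rear-door heat exchanger; the glycol properties and the 10 °C coolant temperature rise are my own assumptions, not figures from the presentation.

```python
# Rough heat-removal math for a rear-door heat exchanger on a 15 kW rack.
# The coolant properties and temperature rise below are assumed values.
heat_load_kw = 15.0         # rack heat load to remove
delta_t_c = 10.0            # coolant temperature rise across the door (assumed)
cp_kj_per_kg_k = 3.8        # specific heat of a ~30% glycol mix (assumed)
density_kg_per_l = 1.03     # density of the glycol mix (assumed)

mass_flow_kg_s = heat_load_kw / (cp_kj_per_kg_k * delta_t_c)   # Q = m_dot * cp * dT
volume_flow_lpm = mass_flow_kg_s / density_kg_per_l * 60

print(f"{mass_flow_kg_s:.2f} kg/s ≈ {volume_flow_lpm:.0f} L/min of coolant")
# ≈ 0.39 kg/s, or roughly 23 L/min -- a small pump, versus a couple of thousand CFM of fan-driven air
```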

One of the most interesting case studies presented was given by Mark Dereberry of Harley-Davidson.  Mark started with a brief IT history of this fabled company, which began its existence in 1903.  I hope that I got this right:

1970 – HD initiates a centralized mainframe;

1980s – HD implements a decentralized compute platform using the AS/400 and begins phasing out the mainframe platform;

1990s – HD implements a centralized client-server platform and the mainframe is decommissioned in 1996;

2000s – HD creates a hybrid model using standardized blade servers with VMware;

20XX – HD begins a consolidation project to build a world-class data center in Milwaukee, WI.


One of the main goals of the consolidation was to create maintenance cost efficiencies.  The consolidation was, of necessity, tied to the 12-month construction timeline, and HD worked with IBM to migrate the data for a seamless transition.  One aspect of the consolidation that I think was particularly well-conceived was power measurement: Mark decided to track power consumption prior to the consolidation.  Of course, this isn't novel, but it points to the meticulousness of thought that was brought to the situation.  In addition to the power measurement, as one would expect, a manifest of all the IT gear was prepared.  With the stakeholders in mind at every stage, the final cost of the conversion was $336,000, which yielded an annual savings of $171,000 and a power savings of 64%.
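
Those two figures imply a simple payback of roughly two years; a quick back-of-the-envelope check:

```python
# Simple (undiscounted) payback on the reported consolidation figures.
conversion_cost = 336_000     # one-time cost of the conversion
annual_savings = 171_000      # reported annual savings

payback_years = conversion_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")   # ≈ 2.0 years
```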

HD learned a number of lessons along the way.  They discovered that they had 2,600 applications running on 1,400 servers and that some environments (e.g., SAP) must run across multiple servers.  Avoiding the reuse of legacy servers and, wherever possible, maximizing server density helped improve efficiencies.  They leveraged outside expertise wherever in-house expertise was lacking, though as the project matured, the in-house team was able to take on an ever-increasing number of tasks.  By exploiting economies of scale they maximized savings, and the end result was a 60% reduction in their server landscape along with a $1,000,000 reduction in annual maintenance costs.  Although the resulting PUE after the consolidation is 2.77 (astonishingly high), the number of servers per kW increased from 1.79 before the consolidation to 7.88 after it.
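
It's worth spelling out what that density gain means per server.  The sketch below uses the reported figures, with my own simplifying assumption that the 2.77 PUE applies both before and after:

```python
# Rough per-server facility power, before and after the consolidation.
# Density figures are the reported ones; applying PUE 2.77 to both cases is an assumption.
def facility_kw_per_server(servers_per_it_kw, pue):
    """Total facility power (IT plus cooling/distribution) attributable to one server."""
    return (1.0 / servers_per_it_kw) * pue

before = facility_kw_per_server(1.79, 2.77)
after = facility_kw_per_server(7.88, 2.77)
print(f"before ≈ {before:.2f} kW/server, after ≈ {after:.2f} kW/server")
# ≈ 1.55 kW/server vs ≈ 0.35 kW/server: a ~77% drop, even with the poor PUE unchanged
```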

The last presentation I was able to see was given by KC Mares.  It was noteworthy not just for its content, but for the range of information that KC covered.  Rather than simply review site selection criteria, KC used the project he's working on in Reno, Nevada as a case study to show the site selection data set being applied.  KC underscored that site selection criteria are changing constantly, and that the business case has real implications for how the different aspects of the criteria matrix are weighted.

Rather than just discuss the importance of power, KC explored the need to understand all of the components that combine to create the total cost of power, including the demand charge, service charge, capacity charge, power surcharges and sales tax.  Unless one fully understands that entire cost model, pricing is really meaningless.  Furthermore, applying the data center's PUE to that cost model to arrive at the cost of usable IT power is the real underlying importance of that metric.  Finally, in order to determine the overall quality of a utility, one must evaluate its portfolio, its level of NERC CIP compliance, its future capacity and its history of rate increases.  I was also startled to learn that there isn't one utility in the US today that is in full compliance with the NERC CIP regulations and standards.
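
To illustrate the point, here is a minimal sketch of such a cost model; every rate and quantity below is an assumed placeholder, not a figure from KC's presentation.

```python
# Minimal total-cost-of-power sketch: blend all charges, then divide by usable IT energy.
# All inputs are assumed placeholder values.
it_energy_kwh = 500_000       # monthly IT-side (usable) energy
pue = 1.4                     # facility PUE (assumed)
energy_rate = 0.065           # $/kWh energy charge
surcharge_rate = 0.003        # $/kWh in riders and surcharges
demand_charge = 9.50          # $/kW of billed peak demand
peak_kw = 1_000               # billed facility peak demand (kW)
service_charge = 500          # flat monthly service charge ($)
capacity_charge = 2_000       # monthly capacity charge ($)
sales_tax = 0.0775            # sales tax rate

facility_kwh = it_energy_kwh * pue
bill = (facility_kwh * (energy_rate + surcharge_rate)
        + peak_kw * demand_charge
        + service_charge + capacity_charge) * (1 + sales_tax)

print(f"blended cost per usable IT kWh: ${bill / it_energy_kwh:.3f}")
# ≈ $0.128/kWh -- roughly double the nominal $0.065 energy rate once everything is included
```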

KC cited a number of other facts in his presentation:

He postulated that the Northeast power outage of a few years ago might have been caused by a variant of the Stuxnet virus.  Given the looming threat to control systems, this is not at all out of the realm of possibility;

Contrary to a prediction made by the Energy Information Administration in 1970 that power rates would decline, energy rates have steadily increased;

That not all electricity is created equal – diversity is good for keeping prices low, but in some instances it can be damaging to the environment;

That, surprisingly, the most seismically active area in the US is in the Southeast, not the Pacific Rim; and

That lightning takes down more data centers in the US than any other natural disaster, which might lead one to rethink Phoenix as the ultimate disaster recovery zone.