At the beginning of 2010, it seemed that investment in data center energy efficiency had stalled because of the economy; however, it now appears that becoming more efficient may actually be a major factor pulling the economy out of its slump. This year, PUE has become the de facto standard by which data center efficiency is judged. With this widely accepted standard in place, the competitive nature of our industry’s most creative companies is causing them to vie for the lowest PUEs and smallest carbon footprints. In recent months, we have seen repeated claims of design PUEs ranging from 1.3 down to less than 1.1 as many of the industry’s largest operations chase the bragging rights for having the most energy-efficient data center in the world.
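For readers new to the metric, PUE is simply total facility power divided by IT equipment power, so a value of 1.0 would mean every watt drawn by the building reaches the IT load. A minimal sketch of the arithmetic (the function name and the kilowatt figures are illustrative only, not drawn from any specific facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal; lower is better, and values below
    1.1 are among the most aggressive claims in the industry.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical facility drawing 1,300 kW in total to support a
# 1,000 kW IT load lands at the upper end of the claims cited above:
print(round(pue(1300.0, 1000.0), 2))  # 1.3
```

Everything above the IT load in the numerator (cooling, power distribution losses, lighting) is the overhead that the economization and control strategies discussed below are designed to squeeze out.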
We are seeing designs incorporating air- and water-side economizers, adiabatic coils, containers/modular solutions, control strategies tying active HVAC capacity to CPU usage, and solar and wind solutions, to name just a few. We are even seeing combinations of the above as companies try to squeeze every last wasted watt out of the process.
So what is the end goal? Is it sufficient to just make your particular operation more efficient by plucking low-hanging fruit, or are we as an industry willing to go the extra mile? Can we integrate our IT performance needs of reliability, redundancy, and resiliency with the facility’s operational and economic needs to eliminate energy waste?
To keep the creative juices flowing, and to up the ante among our industry’s competitive vendors, I would like to suggest that the ultimate goal is a completely self-sustaining data center requiring no carbon-based fuel. I would further suggest that this goal can be accomplished with known technology readily available on the open market.
In 2008, I outlined in Mission Critical how this could be done. Perhaps this idea’s time has come.