Energy Secretary Steven Chu recently announced that the Department of Energy (DOE) has awarded $47 million to 14 projects to improve energy efficiency in the information technology (IT) and communication technology industries. The federal funds for these projects will be matched by more than $70 million in private industry funding, for a total project value of more than $115 million.

The American Recovery and Reinvestment Act provides funding for research, development, and demonstration projects in three categories: equipment and software, power supply chain, and cooling. This breadth is meant to ensure a balanced approach to achieving government goals. The anticipated savings from the projects demonstrate just how much energy can be saved in our facilities, and that the Environmental Protection Agency (EPA)'s target of a 50 percent reduction in data center energy use is realistic.

The largest award went to Yahoo! to help design a next-generation data center with passive cooling. The integrated building design, including the building's shape and orientation and the alignment of the servers within it, allows the data center to use outside ambient air for cooling more than 99 percent of the year. The relatively low initial construction cost, compatibility with current server and network models, and efficient use of power and water are all key features that make this a highly replicable design innovation for the data center industry.

Yahoo!'s director of IT Facilities, Scott Noteboom, describes the facility as capable of an operating PUE of 1.08 at a construction cost of about $5 million per megawatt of critical power. Yahoo! plans to take advantage of the favorable climate in upstate New York and the cool prevailing winds coming off Lake Erie to achieve both low electricity costs and low capital costs for its HVAC equipment. Yahoo! uses outside air 100 percent of the time, unless the outside air is contaminated for some reason, and continuously operates the facility with server inlet temperatures of 80°F and even higher during unusually hot days when air temperatures rise to as high as 90°F. Noteboom explains, "We are utilizing a custom evaporative cooling solution to decrease temperatures during summer months. Along those lines, we stretch humidification levels higher than ASHRAE recommendations, but well within gear warranty ranges and our own test ranges to avoid condensation on or in the servers." Noteboom envisions many energy-efficiency advances in Yahoo!'s future network operations and expects that over the next several years the company could achieve PUEs of 1.03, construction costs of $3.5 million per megawatt, and shorter construction schedules. These are numbers that were unimaginable just a couple of years ago … even for an Internet search engine like Yahoo!
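For readers less familiar with the metric, PUE (power usage effectiveness) is simply total facility power divided by IT equipment power, so a PUE of 1.08 means only 8 percent of the facility's power goes to cooling, power distribution losses, and other overhead. A minimal illustration (the wattage figures below are hypothetical, chosen only to match a 1.08 ratio):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical figures: 1,080 kW total facility load supporting 1,000 kW of IT gear.
print(round(pue(1080, 1000), 2))  # 1.08 -> only 8% overhead beyond the IT load
```

By comparison, a conventional data center running at a PUE of 2.0 would burn a full kilowatt of overhead for every kilowatt delivered to the servers.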

Yahoo!'s utility compute architecture, geographic replication of functionality, and automated data center system recovery allow the company to push its operating envelope in ways that may shed some light on how the rest of us might operate our facilities in "parallel computing" environments sometime in the future. And this is just one of the exciting advances we should expect to see more of. Please read on about the rest.

Three Technical Categories

DOE reviewed many great ideas, and many of those not selected are sure to be developed without DOE funding. So, congratulations are due to the winners listed below. The specifics and final details of each award will depend on contract negotiations between the grantees and DOE, but most of us looking on can agree that the projects look very promising at this point.

Here are descriptions of what DOE has agreed to fund, and what you might expect to see in your facilities at some time in the not-too-distant future.

Equipment & Software Projects
  • $1.6 million to IBM’s T.J. Watson Research Center, Yorktown Heights, NY, to develop and field-test software tools that use real-time data to optimize HVAC and outside-air use, reducing cooling energy consumption by 10 percent.
  • $9.3 million to SeaMicro, Santa Clara, CA, to field-test new server systems built from hundreds of low-power, tiny, interconnected central processing units (CPUs) that use power efficiently in all operating modes and reduce computing energy by 75 percent compared to conventional servers.
  • $300,000 to Alcatel-Lucent/Bell Labs, Murray Hill, NJ, to develop and simulate methods that synchronize telecom network energy demand with real-time network traffic activity, reducing energy requirements of worldwide networks.
  • $300,000 to the California Institute of Technology, Pasadena, CA, to develop a method for managing energy consumption based on customer demand across servers and data centers according to preferred energy use goals.

Power Supply Chain Projects

  • $2.4 million to Lineage Power Corporation, Plano, TX, to develop a new and more efficient ac-to-dc power rectifier that will allow some rectifiers to operate at lower levels so that the remaining rectifiers can operate at peak efficiency.
  • $222,000 to BAE Systems, Rockville, MD, to develop a model for real-time optimal control (RTOC) algorithms designed to shift network power consumption up or down based upon the need for services.
  • $5 million to Power Assure, Santa Clara, CA, to demonstrate software that manages the power-state of servers by turning servers “on” and “off” as needed to save up to 50 percent of server energy use.
  • $7.4 million to Hewlett-Packard Company, Palo Alto, CA, to test high-efficiency power and cooling distribution systems that can interface with renewable energy sources such as solar, wind, and geothermal power.
  • $2.8 million to Columbia University, New York, NY, to develop “on-chip” technology that makes power conversion more efficient for central processing units (CPUs), increasing server energy efficiency by at least 10 percent.
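The Power Assure project above, matching the number of powered-on servers to demand, can be sketched in a few lines. This is a toy capacity model under our own assumptions (the function name, headroom factor, and throughput figures are hypothetical, not Power Assure's):

```python
import math

def servers_needed(request_rate: float, capacity_per_server: float,
                   headroom: float = 0.2) -> int:
    """Toy capacity model: keep just enough servers powered on to carry
    the current load plus a safety headroom; the rest can be powered down."""
    return max(1, math.ceil(request_rate * (1 + headroom) / capacity_per_server))

# Hypothetical fleet of 100 servers, each able to handle 500 requests/second.
fleet = 100
needed = servers_needed(request_rate=12_000, capacity_per_server=500)
print(needed, fleet - needed)  # 29 servers stay on, 71 can sleep
```

At off-peak loads the savings compound: the same fleet serving 2,000 requests/second would need only 5 servers awake, which is how demand-following power-state control can approach the 50 percent figure cited in the award.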

Cooling

  • $2.3 million to IBM’s T.J. Watson Research Center to combine advanced metals and liquid-cooled heat sinks that transfer heat from the data center to ambient air available for room or water heating elsewhere.
  • $584,000 to Federspiel Controls, El Cerrito, CA, to integrate variable speed fans, adjustable server fan inlets, and wireless temperature sensors to continuously adjust the volume and targets for cooled air according to temperature.
  • $1.8 million to Alcatel-Lucent, Murray Hill, NJ, to further test and develop advanced chip-level cooling systems that supply liquid refrigerant to micro-channel heat exchangers, removing heat more effectively by bringing refrigerant closer to heat sources and cutting cooling energy by 90 percent compared to current methods.
  • $2.8 million to Edison Materials Technology Center, Dayton, OH, to develop a liquid-cooling technology for ultra-high density computing, powerful enough for high-performance computing and cost effective enough for the enterprise.
  • $9.9 million to Yahoo!, Sunnyvale, CA, to develop “next generation” passive-cooling methods by designing and engineering a key internet data center.
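The Federspiel Controls project in the list above continuously adjusts cooled-air volume to measured temperatures. A minimal sketch of that idea is a proportional fan-speed curve driven by a server inlet sensor; the setpoint, deadband, and minimum speed below are our own illustrative assumptions, not Federspiel's control algorithm:

```python
def fan_speed_pct(inlet_temp_f: float, setpoint_f: float = 80.0,
                  band_f: float = 10.0, min_pct: float = 30.0) -> float:
    """Toy proportional control: run fans at min_pct at or below the setpoint,
    then ramp linearly to 100% as the inlet temperature climbs through the band."""
    error = inlet_temp_f - setpoint_f
    if error <= 0:
        return min_pct
    return min(100.0, min_pct + (100.0 - min_pct) * error / band_f)

for temp in (75, 80, 85, 90):
    print(temp, fan_speed_pct(temp))
```

Because fan power scales roughly with the cube of fan speed, even modest speed reductions during cool periods yield large energy savings, which is why variable-speed fans keep appearing in these awards.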
I wish the best of luck to each and every project manager as you step forward and lead us into a new paradigm of high-performance computing with optimal capital and operating costs! This is truly becoming the most exciting era of data center achievements in the history of computing.


Critical Facilities Round Table

On November 6, 2009, the Critical Facilities Round Table (CFRT) gathered at Fortune Data Center in San Jose to witness cutting-edge operating solutions, including unique airflow controls and supply air temperatures elevated to ASHRAE limits, and to see new designs involving containerized IT and MEP assets for flexible, energy-efficient data center operations.

CFRT is a non-profit organization based in Silicon Valley that is dedicated to the open sharing of information and solutions amongst our members, who are critical facilities owners and operators. Please visit our Web site at www.cfroundtable.org or contact us at 415-748-0515 for more information.