At the beginning of 2010, it seemed that investment in data center energy efficiency had stalled because of the economy; however, it now appears that becoming more efficient may actually be a major factor pulling the economy out of its slump. This year, PUE has become the de facto standard by which data center efficiency is judged. With this widely accepted standard in place, the competitive nature of our industry’s most creative companies is causing them to vie for the lowest PUE and carbon footprints. In recent months, we have seen repeated claims of design PUEs from 1.3 down to less than 1.1 as many of the industry’s largest operations chase the bragging rights for having the most energy-efficient data center in the world.
We are seeing designs incorporating air- and water-side economizers, adiabatic coils, containers/modular solutions, control strategies tying active HVAC capacity to CPU usage, and solar and wind solutions to name just a few. We are even seeing combinations of the above as companies try to squeeze every last wasted watt out of the process.
So what is the end goal? Is it sufficient to just make your particular operation more efficient by plucking low-hanging fruit or are we as an industry willing to go the extra mile? Can we integrate our IT performance needs of reliability, redundancy, and resiliency with the facility operational and economic needs to eliminate energy waste?
To keep the creative juices flowing, and to up the ante on the competitive nature of our industry vendors, I would like to suggest that the ultimate goal is a completely self-sustaining data center requiring no carbon-based fuel. I would further suggest that this goal can be accomplished with known technology readily available on the open market.
In 2008, I outlined how this could be done in Mission Critical. Perhaps this idea’s time has come.
- Server cabinets are now burning 12, 18, 25 kilowatts (kW), and more of power
- Total power increases because we are slow to dispose of old technology
- Cooling systems cannot match the added demands
- Companies are moving from two data centers to triangulating between three
- The cost of energy continues to rise, and water resources are becoming more scarce
Raised-floor distribution. Most basic data centers use cooling delivered via an underfloor plenum from computer room air conditioners (CRACs), which take warm air in the computer room, cool it, and discharge cold air back into the plenum. The cold air is released into the computer racks via floor cutouts and perforated floor tiles.
This basic cooling process is constrained by raised-floor height, underfloor obstructions (cables, conduit, piping, etc.), and the capacity of the CRACs themselves. It is generally accepted that the upper limit of cooling from this method is in the range of 8 to 12 kW per rack, or 120 to 150 watts/square foot (W/sq ft) of data center space.
Over 15 years ago, I led a live test of data center cooling that resulted in the conclusion that as a data center approaches 100 W/sq ft, the square footage of the CRACs begins to approach the square footage of the electronic equipment. Deeper raised floors have pushed this to 150 W/sq ft, but eventually the space occupied by the cooling system will exceed the space occupied by the computing equipment it is cooling.
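A back-of-envelope airflow calculation shows why these density limits exist. The sketch below uses the common sea-level sensible-heat relation for air (Q ≈ 1.08 × CFM × ΔT°F); the 20°F supply/return split is an assumed, illustrative figure.

```python
def rack_airflow_cfm(watts, delta_t_f=20.0):
    """Airflow (in CFM) needed to remove `watts` of heat from a rack,
    given a supply/return temperature split of `delta_t_f` degrees F.
    From the sensible-heat relation Q = 1.08 * CFM * dT (sea-level air),
    i.e. CFM ~= 3.16 * W / dT."""
    return 3.16 * watts / delta_t_f

# A 12 kW rack at a 20 F split needs roughly 1,900 CFM -- more than
# a few perforated tiles can reliably deliver.
for kw in (8, 12, 25):
    print(f"{kw} kW rack: ~{rack_airflow_cfm(kw * 1000):,.0f} CFM")
```

At 25 kW per rack the required airflow is simply beyond what an underfloor plenum can push through floor cutouts, which is what drives the move to spot cooling described below.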
Cabinet-based systems. With newer technology pushing 20+ kW per rack and 150+ W/sq ft, the industry has turned to spot cooling of high-density racks. First, fans were added to move more air through the racks. This increased the rack-cooling capabilities; however, it also added heat load from the fan motors to the room and electrical load to the UPS, thereby reducing the overall data center capacity.
Next, the industry turned to cabinet-mounted cooling systems. These entail mounting a water-cooled or refrigerant-cooled coil on the front, back, sides, top, and/or bottom of the rack, with fans recirculating the rack air between the coil and the electronics. These coils remove electronics-generated heat from the room; however, the designs place the electronics at risk due to the proximity of water-based systems in and around the racks as well as the added piping, valves, fittings, and liquids throughout the data center. Further, due to the limits of routing pipes through the underfloor of existing data centers, it is not always a practical solution. In new data centers this design demands higher raised floors (over 30 in.), which cannot be accomplished in most multi-story buildings due to floor-height limits.
Water-cooled processors. Lastly, the industry is turning to direct water cooling of the processors. Although water cooling provides maximum heat removal in the most efficient manner possible, it also poses the highest risk due to the presence of pressurized hoses, connections, and manifolds within the rack and in close proximity to the electronics. We have all seen what happens to cabling in the cabinets. Adding a multitude of hoses to these cramped conditions is not necessarily an improvement for data center processing.
Like the water-cooled coil, this application also poses the challenge of routing pipes through existing underfloor obstructions. Furthermore, due to the small inside diameter of the hoses connected to the heat sinks, the cooling water must be pure and free of contamination, or flow blockages may produce equipment failures. In most facilities, chilled-water systems are not generally known for their water purity.
None of these cooling solutions addresses power-failure situations. High-density processors will go into thermal shock when cooling stops during a power failure. Automatically opening the cabinet doors is not a solution; if it were, why have doors at all? Without adding the entire cooling process to the data center’s UPS system, cooling will be interrupted until the cooling systems restart after power restoration or a generator start. Adding a mechanical-systems UPS to the facility, however, would significantly raise the operation’s PUE.
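To see why, recall that PUE is total facility power divided by IT power. A minimal sketch (all loads and the UPS efficiency below are assumed, round numbers for illustration) shows how carrying the cooling plant on a UPS adds conversion losses on top of the cooling load itself:

```python
def pue(it_kw, cooling_kw, other_kw):
    """PUE = total facility power / IT power (The Green Grid definition)."""
    return (it_kw + cooling_kw + other_kw) / it_kw

it, cooling, other = 1000.0, 400.0, 150.0  # assumed loads, kW
ups_eff = 0.92  # assumed double-conversion UPS efficiency

print(f"cooling fed from utility: PUE = {pue(it, cooling, other):.2f}")
# Feeding the cooling plant through the UPS inflates the cooling draw
# by the conversion losses, raising PUE:
print(f"cooling fed from UPS:     PUE = {pue(it, cooling / ups_eff, other):.2f}")
```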
To address the shortcomings of today’s designs and operations, imagine a fully integrated process that simultaneously provides power and cooling and where the facility’s energy could be 100 percent from a storable renewable resource.
There is energy in the air we breathe. The winds move ships across the seas and carry moisture and particulates around the globe. Air/wind is a powerful source that we need to harness. It is also storable in the form of compressed air, which can be employed to provide cooling and power. If we re-evaluate the tools of previous generations and update their application we can use this knowledge to solve today’s challenges.
Wind turbines are becoming more widely accepted; witness the existence of wind farms. Today’s modern units produce electricity from winds as low as 12 mph, with rotors several hundred feet across; yet despite their size they are not imposing, operate in near-total silence, and produce zero pollutants.
For over half a century, vortex cooling has been used in industrial facilities for cooling critical processes. These devices are readily available, are currently used to cool electronic cabinets in dirty environments, and are capable of producing sub-zero spot cooling temperatures. They offer an advantage in that they can be thermostatically controlled and are available in models producing up to 5,000 British thermal units per hour. For higher capacities, multiple units can be used. Vortex tubes are available from several manufacturers and are generally 6 to 10 in. tall and 1 to 2 in. in diameter.
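Converting that rating into rack terms is straightforward: 1 kW of heat is about 3,412 Btu/hr, so a single 5,000-Btu/hr cooler handles roughly 1.5 kW. A short sketch (the rack sizes are assumed for illustration):

```python
import math

BTUH_PER_KW = 3412.14  # 1 kW of heat = ~3,412 Btu/hr

def coolers_needed(rack_kw, cooler_btuh=5000):
    """Number of vortex cabinet coolers required to absorb a rack's heat load."""
    return math.ceil(rack_kw * BTUH_PER_KW / cooler_btuh)

for kw in (2, 12, 25):
    print(f"{kw} kW rack: {coolers_needed(kw)} cooler(s)")
```

A 12 kW rack would need about nine such units, which is why the multiple-unit option matters at high density.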
Here is how they work:
A vortex tube is used to create a vortex from compressed air and separate it into hot and cold airstreams. The vortex tube’s cylindrical generator (no moving parts) causes the input compressed air to rotate as it is forced down the inner walls of the hot (longer) end of the vortex tube. At the end of the hot tube, a small portion of air exits through a needle valve as hot air exhaust. The remaining air is forced back through the center of the incoming airstream at a slower speed. The heat in the slower moving air is transferred to the faster moving incoming air. This super-cooled air flows through the center of the generator and exits through the cold air exhaust port.
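The temperature split described above follows from a simple first-law balance. Treating the tube as adiabatic with constant specific heat, the inlet enthalpy must equal the weighted sum of the two exit streams, so the hot-end rise is tied to the cold fraction. A sketch (the 70 percent cold fraction and 50°F drop are assumed, illustrative figures, not manufacturer data):

```python
def hot_rise(cold_fraction, cold_drop):
    """First-law balance on an adiabatic vortex tube with constant cp:
    T_in = f * T_cold + (1 - f) * T_hot, so the hot-end temperature rise
    is f / (1 - f) times the cold-end drop."""
    return cold_fraction / (1.0 - cold_fraction) * cold_drop

# If 70% of the inlet air leaves the cold end 50 F below the inlet,
# the remaining 30% must leave the hot end well above it:
print(f"hot-end rise: ~{hot_rise(0.70, 50.0):.0f} F")
```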
The Self-Sustaining Data Center
Imagine the following:
A wind turbine compresses air, filling onsite storage tanks. The tanks can be made as large, and rated to whatever pressure, as necessary. From the tanks, we run a compressed-air pipe to the computer cabinets (or two pipes for redundancy). At the cabinet, we install a vortex cabinet cooler for general cabinet cooling. For a high-density cabinet, we can run compressed air, in lieu of water, through the heat sinks in the servers.
In both cases, we take the heated exhaust air from the servers/vortex coolers and either route it through a server-based, on-board micro-turbine generator that in turn powers one (or both) of the dual power supplies, or collect it to drive a cabinet-mounted turbine generator.
For general building power, the exterior of the facility is covered with solar panels, with sufficient battery storage to operate through several nights.
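Sizing that battery bank is a simple product of load, hours, and nights, derated for usable depth of discharge. The figures below (building load, night length, discharge limit) are assumed for illustration only:

```python
def battery_kwh(building_kw, nights, hours_per_night=14.0, usable_dod=0.8):
    """Battery capacity (kWh) to carry the general building load through
    `nights` consecutive sunless nights, derated so the bank is never
    drawn below its usable depth of discharge."""
    return building_kw * hours_per_night * nights / usable_dod

print(f"{battery_kwh(100, 2):,.0f} kWh")  # 100 kW building load, two nights
```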
What we have imagined is a thoroughly reliable and redundant combined power and cooling process that requires no power from a utility or carbon-based generator. We can have one, two, or a dozen wind turbines filling the compressed-air tanks, which provide energy storage for long-term needs (minutes, hours, days). We need only one air-supply line, thus eliminating 50 percent (the return line) of the mechanical cooling piping in the data center.
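How much energy can such tanks hold? A common first-order figure is the ideal isothermal work of expansion, W = pV·ln(p/p0); real recovery is lower once compression losses and heat exchange are counted. The tank volume and pressure below are assumed, illustrative figures:

```python
import math

def tank_energy_kwh(volume_m3, pressure_bar, ambient_bar=1.0):
    """Ideal isothermal work stored in a compressed-air tank,
    W = p * V * ln(p / p0). Treat this as an upper bound; real
    round-trip recovery is substantially lower."""
    p, p0 = pressure_bar * 1e5, ambient_bar * 1e5  # Pa
    return p * volume_m3 * math.log(p / p0) / 3.6e6  # J -> kWh

# e.g. a 50 m^3 tank bank charged to 200 bar:
print(f"~{tank_energy_kwh(50, 200):,.0f} kWh")
```

At those assumed figures the bank stores on the order of 1,500 kWh ideal, which frames the "minutes, hours, days" storage claim in concrete terms.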
We have eliminated all water issues (desert cooling is not a problem). Air pipes can even be run overhead (reduces underfloor clutter), and underfloor airflow is no longer an issue.
With the on-board generator we have eliminated 50 to 100 percent of the UPS and power distribution needs. We also eliminate 50 to 100 percent of our battery requirements.
No wind or space for wind turbines? In a pinch, mechanical compressors can always be utilized. These can be permanently installed or rented as needed from your local rental store.
With this concept there is no utility power input; thus the facility’s purchased-energy PUE and its carbon footprint are both zero. So long as the wind blows and the sun shines, it is 100 percent sustainable. I challenge you all to develop similar solutions that meet or exceed this ultimate “Holy Grail” of data center efficiency.