Dennis Cronin


The difficulty of removing the ever-increasing heat loads in the data center continues to challenge even the most diligent engineers, manufacturers, and end users. Skyrocketing energy costs and diminishing energy reliability make continuous operations more difficult and expensive. While processing densities have rapidly increased, data centers have proliferated, and energy costs have gone through the roof, we have seen little true innovation in addressing data center power and cooling problems.
The problem:
  • Server cabinets now draw 12, 18, 25 kilowatts (kW), and more, of power
  • Old technology is decommissioned slowly, so total site power and cooling loads keep climbing
  • Cooling systems cannot match the added heat load
  • Companies are moving from two data centers to triangulating operations among three
  • The price of oil flirted with $100 per barrel in January; expect electricity and gas prices to follow
I’ve outlined a tiered solution that is simply an aggregation of time-tested tools and systems that are readily available. I’ve also described a radically different solution that addresses the shortcomings of the familiar solutions.



The Tiered Solution:

Raised-Floor Distribution
Cabinet-Based Systems
Water-Cooled Processors

Raised-Floor Distribution. Most basic data centers use cooling delivered via the under-floor plenum from computer room air conditioners (CRACs), which draw in the warm air in the computer room, cool it, and discharge the cold air into the plenum. The cold air is released into the computer racks via floor cutouts and perforated floor tiles.

Raised-floor height restrictions, under-floor obstructions, and CRAC unit capacities limit the effectiveness of this basic cooling process. It is generally accepted that the upper limit of cooling from this method is in the range of 8 to 12 kW per rack, or 120 to 150 watts per square foot (W/sf).

More than 15 years ago, I led a live test of data center cooling that concluded that as a data center approaches 100 W/sf, the footprint of the CRACs begins to equal the footprint of the electronic equipment. Deeper raised floors have pushed this limit to 150 W/sf.
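
A rough back-of-the-envelope sketch in Python shows how the per-rack and per-square-foot figures relate. The 80 square feet of gross floor area per cabinet (aisles, CRAC units, and power gear included) is an assumed, illustrative number, not a measured one:

    # Relate a per-rack load (kW) to a whole-room density (W/sf).
    # Assumption (illustrative only): each cabinet "owns" about 80 sf of
    # gross raised floor once aisles, CRAC units, and power gear are counted.
    GROSS_SF_PER_CABINET = 80

    def watts_per_sf(rack_kw, sf_per_cabinet=GROSS_SF_PER_CABINET):
        """Convert a per-rack load in kW to a whole-room density in W/sf."""
        return rack_kw * 1000 / sf_per_cabinet

    for rack_kw in (8, 10, 12, 20):
        print(f"{rack_kw:>2} kW per rack -> about {watts_per_sf(rack_kw):.0f} W/sf")
    # Under this assumption, 8 to 12 kW per rack lands near the 100 to 150 W/sf
    # ceiling cited above; a 20-kW rack blows well past it.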

Cabinet-Based Systems. With newer technology pushing past 20 kW per rack and 150 W/sf, the industry has turned to spot cooling of high-density racks. First, manufacturers added fans to move more air through the racks. This increased rack cooling but also introduced heat load from the fan motors, thereby reducing overall data center capacity.

Next, the industry turned to cabinet-mounted cooling systems. These entail mounting a water- or Freon-cooled coil on the front, back, sides, top, and/or bottom of the rack, with fans that re-circulate rack air between the coil and the electronics. This removes the heat generated by the electronics from the room; however, these designs place the electronics at great risk from water in the racks as well as from the added piping, valves, fittings, and liquids throughout the data center. Further, routing pipes through the under-floor space of existing data centers is not always practical. In new data centers, this design requires higher raised floors (over 24 in.), which many facilities cannot accommodate due to floor-to-floor height limitations in most multi-story buildings.

Water-Cooled Processors. Lastly, the industry is turning to direct water cooling of the processors. Water cooling removes the most heat in the most efficient manner, but the presence of pressurized hoses, connections, and manifolds within the rack and in close proximity to the electronics poses the highest risk to the servers. We have all seen what happens to cabling in the cabinets; adding a multitude of hoses to these cramped conditions is not necessarily an improvement for data center processing.
Routing pipes through existing under-floor obstructions can make this solution impractical. Furthermore, the small inside diameter of the hoses connected to the heat sinks means that the cooling water must be pure and free of any contamination to avoid flow blockages, and most chilled water systems are not known for their water purity.


Power Failure

High-density processors go into thermal shock when cooling systems fail. If automatically opening the cabinet doors on failure were a real solution, we could eliminate the doors altogether. And if we add the entire cooling process to the data center's battery backup, we significantly increase the loads on the battery backup systems.
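
To put "significantly increase" in rough numbers, assume, purely for illustration, that the mechanical cooling plant draws somewhere between 0.3 and 0.5 kW for every kW of IT load (actual ratios vary widely from site to site). A short Python sketch:

    # How much the battery-backed load grows if cooling rides on the UPS.
    # Assumption (illustrative only): cooling draws 0.3 to 0.5 kW per kW of IT load.
    it_load_kw = 1000  # example: 1 MW of IT load already on battery backup

    for cooling_ratio in (0.3, 0.4, 0.5):
        cooling_kw = it_load_kw * cooling_ratio
        growth = cooling_kw / it_load_kw
        print(f"cooling at {cooling_ratio:.0%} of IT load adds {cooling_kw:.0f} kW "
              f"-> battery-backed load grows by {growth:.0%}")
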
A Radical New Idea

To address the shortcomings of today's designs and operations, imagine a process that simultaneously provides cooling and power. Further imagine an operation where the facility's energy could come from a 100 percent storable, renewable resource.

The air we breathe stores energy. The winds move ships across the seas and carry moisture and particulates around the globe. Air, or wind, is a powerful energy source that we need to harness, and compressed air is a familiar energy-storage technique that can be employed to provide both cooling and power.
I recently traveled to Europe for a factory witness test and was amazed by the prolific use of large 1- to 2-megawatt (MW) wind turbines. Despite rotor spans of several hundred feet, they were not imposing; they operated in near silence and produced zero pollutants. I just learned that a Canadian firm is producing a design for a shorter and more powerful wind turbine.

For many years I have wanted to use vortex cooling inside data centers. Vortex-cooling devices, which are readily available, have cooled electronic cabinets in dirty industrial environments for decades and are capable of producing sub-zero spot-cooling temperatures. These devices can be thermostatically controlled and are available in models producing up to 5,000 Btu/hr; multiple units can be used for higher capacities. Several manufacturers produce vortex tubes that are generally 6 to 10 in. long and 1 to 2 in. in diameter.
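
As a quick capacity check (my arithmetic, not a manufacturer's rating), converting rack load into Btu/hr shows how many of those 5,000 Btu/hr units a dense rack would need:

    import math

    # Number of 5,000 Btu/hr vortex coolers needed to absorb a rack's heat load.
    BTU_PER_HR_PER_KW = 3412  # 1 kW of electrical load = ~3,412 Btu/hr of heat

    def vortex_units_needed(rack_kw, unit_btu_hr=5000):
        heat_btu_hr = rack_kw * BTU_PER_HR_PER_KW
        return math.ceil(heat_btu_hr / unit_btu_hr)

    for rack_kw in (5, 12, 20, 25):
        print(f"{rack_kw:>2}-kW rack -> {vortex_units_needed(rack_kw)} units at 5,000 Btu/hr each")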

A vortex tube cools electronics by separating a stream of compressed air into hot and cold air streams. The vortex tube's cylindrical generator (no moving parts) causes the incoming compressed air to rotate as it is forced down the inner walls of the hot (longer) end of the tube. A small portion of this air exits through a needle valve as hot-air exhaust. The remaining air is forced back through the center of the incoming air stream at a slower speed, and the heat in this slower-moving air is transferred to the faster-moving incoming air. The now-chilled air flows through the center of the generator and exits through the cold-air exhaust port.




Five Steps of Innovation

Imagine using a wind turbine to fill on-site storage tanks with compressed air. The tanks can be as large as needed and filled to whatever pressure is necessary. A pipe (or two, for redundancy) would transport compressed air to computer cabinets designed to be cooled with a vortex cabinet cooler. For high-density situations, the compressed air could run through the heat sinks in the servers in lieu of water.
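
Vortex coolers are hungry for air. Commercial cabinet coolers are typically rated on the order of 15 scfm of roughly 100-psig air per 1,000 Btu/hr of cooling; treat that figure as an assumption to be checked against manufacturer data. A rough Python sketch of what a dense cabinet would demand:

    # Compressed-air demand of vortex cooling for one cabinet.
    # Assumption (check against manufacturer data): ~15 scfm of ~100-psig air
    # per 1,000 Btu/hr of vortex cooling.
    SCFM_PER_1000_BTU_HR = 15
    BTU_PER_HR_PER_KW = 3412

    def cabinet_scfm(rack_kw, scfm_per_1000_btu=SCFM_PER_1000_BTU_HR):
        heat_kbtu_hr = rack_kw * BTU_PER_HR_PER_KW / 1000
        return heat_kbtu_hr * scfm_per_1000_btu

    for rack_kw in (12, 20, 25):
        print(f"{rack_kw}-kW cabinet -> roughly {cabinet_scfm(rack_kw):.0f} scfm of compressed air")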

In either case, the heated exhaust air from the servers or vortex coolers is then put to work: it is either routed through a server-based, on-board micro-turbine generator that in turn powers one (or both) of the dual power supplies, or collected to turn a cabinet-mounted turbine generator.

What we have imagined is a thoroughly reliable and redundant combined power and cooling process that does not require power from a utility or carbon-based generator.
 
The wind turbines could supply one, two, or a dozen compressed air tanks. The tanks provide energy storage for long-term needs (minutes, hours, days). We need only a single air-supply line, so we have eliminated 50 percent of the piping in the data center (there is no return line).
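
A simple free-air calculation (my illustrative numbers, not a design) shows how tank volume, charge pressure, and draw translate into run time:

    # Run time available from a compressed-air storage tank (isothermal, ideal gas).
    # Assumptions (illustrative only): a 2,000-cu-ft tank charged to 250 psig,
    # usable down to a 110-psig cooler inlet pressure, feeding a 600-scfm draw.
    ATM_PSIA = 14.7

    def usable_free_air_scf(tank_cu_ft, charge_psig, min_psig):
        """Free-air equivalent released between the charge and minimum pressures."""
        return tank_cu_ft * (charge_psig - min_psig) / ATM_PSIA

    def run_time_minutes(tank_cu_ft, charge_psig, min_psig, draw_scfm):
        return usable_free_air_scf(tank_cu_ft, charge_psig, min_psig) / draw_scfm

    minutes = run_time_minutes(tank_cu_ft=2000, charge_psig=250, min_psig=110, draw_scfm=600)
    print(f"Roughly {minutes:.0f} minutes of run time under these assumptions")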

We have eliminated all water issues (desert cooling is not a problem). Air pipes can even be run overhead, reducing under-floor clutter.

Under-floor airflow is no longer a significant issue.

With the on-board generator we have eliminated 50 to 100 percent of the UPS and power distribution needs, and we have also eliminated 50 to 100 percent of our battery requirements.

No wind or space for wind turbines? In a pinch, mechanical compressors can always be utilized. These can be permanently installed or rented as needed from your local rental store.

Your job: Point out the shortcomings of these ideas or let us know what barriers prevent use of these tools in our data centers.

SIDEBAR: Win a Mission Critical coffee mug

The enormous creativity of our industry means that we face radical changes in the way we design, build, operate, and maintain mission critical facilities. Making this change will demand strong debate about what is right, wrong, and best practice.

We have created “Cronin’s Workshop” to foster this debate. Each month our columnist Dennis Cronin will describe a problem and pose a solution. We want to hear what you have to say about the problems and solutions he poses. In the subsequent issue we will publish the best positive and negative opinions received from readers.

Next issue we will discuss the correct applications of “free cooling”:
  • Why it has failed so miserably in the past
  • Risks in applying it to a data center operation
  • How it can succeed now.