Banks and other financial institutions keep track of all their “cold cash” using data centers filled with racks of computer servers. Unfortunately, it takes a lot of that cold cash to prevent those racks from overheating.
If the information technology (IT) equipment itself uses 50 kilowatts (kW), for example, then it may require two times as much electricity — or 100 kW — to fully support and cool the IT equipment. But a Connecticut-based company has invented a way to slash data center cooling costs by employing advanced variable-speed compressor and fan motor technologies.
“With traditional mechanical cooling on an 88°F day, it may take 300 kW or more to cool 350 kW of IT load,” says David Robinson, senior vice president of strategic development for Inertech, LLC, Danbury, CT. “But our solution does it with just 7 kW. Plus, our design has proven that by optimizing cooling requirements, customers avoid huge expenses, can save millions of gallons of water, and can massively reduce their carbon footprint. This new approach is a game changer in data center cooling.”
A new model for total system energy savings
Traditional cooling methods typically employ an underfloor air distribution system using fans to push cold air up into the server racks and through the entire room, which wastes a lot of energy and water.
Figure 1. An innovative cooling design earned this facility a PUE ranging from 1.02 to 1.05.
One study shows the cost of electricity to operate a server farm over its four-year life span now exceeds the cost of buying the servers in the first place. Moreover, the water lost from a data center’s cooling tower during summer can easily exceed 100 gallons per minute, which adds up to more than 13 million gallons over the three summer months.
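The 13-million-gallon figure follows directly from the quoted loss rate; a quick sketch in Python (assuming a constant 100 gal/min loss over a 92-day summer) shows the arithmetic:

```python
# Back-of-envelope check of the cooling-tower water loss quoted above.
# Assumption: a constant 100 gal/min loss across a 92-day summer
# (June + July + August = 92 days).
GALLONS_PER_MINUTE = 100
SUMMER_DAYS = 92

minutes = SUMMER_DAYS * 24 * 60             # 132,480 minutes of summer
gallons_lost = GALLONS_PER_MINUTE * minutes

print(f"{gallons_lost:,} gallons")  # → 13,248,000 gallons
```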
“That’s why the industry developed the hot aisle/cold aisle containment model,” says Robinson. “In this design, the racks are arranged in opposite rows facing each other. Cold air is fed into the rack inlet on the ‘cold aisle’ side. Hot air is exhausted out the back into the ‘hot aisle.’ Containment structures channel the cold air directly through the racks instead of circulating randomly around the entire room.”
“Our cooling solution is part of a whole-building, total system approach,” continues Robinson. “It employs some type of hot-aisle containment, a close-coupled compact cooling distribution unit that enables one side to take over if another side fails, and an external heat rejection unit. It’s integrated with real-time monitoring and data acquisition to ensure optimum performance and minimal maintenance. Compared to traditional systems, our system approach can cut cooling energy consumption by up to 90% and water usage by up to 80%, depending on the local environmental conditions.”
Driving data center cooling costs down
Aiming to create the most efficient data center cooling systems on Earth, Inertech found a technology partner in Danfoss.
In 2012 and again in 2013, Inertech, with its construction management partner Skanska, built and commissioned two super data centers for TELUS in Canada using Inertech’s eComb solution with its eOPTI-TRAX® cooling distribution unit in a traditional aisle environment. In this system, the rear sides of the servers face each other as they exhaust hot air into a contained hot-aisle plenum. As the air exits the electronics into the hot aisle, a series of “heat sync” heat exchangers absorbs the electronics’ heat at its source and transports it to the atmosphere using Inertech’s patented cycle. The warm air then passes through a secondary set of overhead heat exchangers, which can provide backup or run in tandem with the heat syncs for greater heat-transfer efficiency. Inertech’s cycle facilitates multiple levels of cooling redundancy at the electronics level. This patented method can cool a single server with 0.3 W of electricity instead of the 90 W typically required by traditional systems.
“We realized we could create a design that absorbed heat without needing any form of compression at up to 74°F wetbulb conditions — and handle varying loads, control it, and use relatively little power in the process of this control,” says Gerry McDonnell, Inertech’s co-founder.
“This is a revolutionary approach to data center cooling,” McDonnell emphasizes. “It uses free cooling 100% of the year without introducing outside air, and all of the security risks associated with that, into a mission critical facility. By the nature of its design, our solution is always optimized to the atmosphere with efficiency as the by-product of that optimization. In extreme hot weather, we use Danfoss Turbocor oil-free variable-speed compressors to trim the load proportional to changes in the atmosphere. This ensures mission-critical precision cooling.”
PUE and water conservation
All the expert design, engineering, and implementation are paying huge dividends for the Delaware financial center and for TELUS. The industry’s system of measurement is power usage effectiveness (PUE), which is total facility power consumption divided by IT equipment power consumption. A PUE of 1 would mean 100% of the facility’s power is used to operate IT equipment and none for lighting, cooling, or heating — an impossible situation. That’s why it’s remarkable that the Inertech cooling solution has been able to achieve a design-day mechanical PUE ranging from 1.02 to 1.05 for TELUS. A RACK-PAX™ system, which cools hot exhaust air as it exits the racks, is expected to do the same for the Delaware project.
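PUE itself is a one-line formula. As an illustration, the 350 kW IT load and 7 kW cooling figures quoted earlier land right at the bottom of the reported range (this simplification treats cooling as the only non-IT load):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

# 350 kW of IT load plus 7 kW of cooling, ignoring all other overhead:
print(round(pue(total_facility_kw=350.0 + 7.0, it_kw=350.0), 2))  # → 1.02
```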
“Mechanical PUEs in the industry these days are typically in the range of 1.5 to 3,” says Earl Keisling, Inertech’s co-founder and CEO. “Part-load PUE zooms up to 5.0 to 6.0. That brings the mechanical energy overhead for the typical facility to 80% to 90%. We can drop that down to 3% or less. This solution is 10 to 25 times more efficient than a chiller plant, and six to 11 times more efficient than evaporative cooling and air cooling. It’s not just about saving electricity. Those efficiencies can free up electrical capacity in sites that are maxing out their power capacity.”
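The overhead percentages Keisling cites are simply PUE minus 1, expressed relative to the IT load; a minimal sketch (the 1.8 and 1.03 inputs are assumed here to match his 80% and 3% figures):

```python
def mechanical_overhead_pct(pue: float) -> float:
    """Non-IT (mechanical) energy as a percentage of the IT load."""
    return (pue - 1.0) * 100.0

print(round(mechanical_overhead_pct(1.8)))   # → 80  ("typical facility")
print(round(mechanical_overhead_pct(1.03)))  # → 3   ("3% or less")
```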
Reducing energy and water use
With this solution, a customer spending $20 million annually on energy can cut that cost to $2 million without compromising reliability. Worldwide, that level of efficiency would reduce the estimated $2.7 billion cost of powering today’s data centers to less than $500 million. The savings are not just in cost but also in carbon footprint reductions. A facility with input power of 2,000 kW and a PUE of 2.9 can slash carbon emissions from 223 tons to just 80 tons by dropping to a PUE of 1.04.
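The carbon numbers are internally consistent if emissions are assumed to scale linearly with facility input power; a sketch using only the figures quoted above:

```python
# Assumption: CO2 emissions scale linearly with facility input power.
old_total_kw = 2000.0          # facility input power at the old PUE
old_pue, new_pue = 2.9, 1.04
old_emissions_tons = 223.0

it_kw = old_total_kw / old_pue        # ~690 kW of IT load (unchanged)
new_total_kw = it_kw * new_pue        # ~717 kW after the efficiency upgrade
new_emissions_tons = old_emissions_tons * new_total_kw / old_total_kw

print(round(new_emissions_tons))  # → 80 tons
```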
There are other significant savings. With 39% of water in the United States used to cool buildings and power-production facilities, the potential water savings are immense. Using Inertech’s 80% water savings, a data center using 13 million gallons a year, for example, could cut annual water consumption to 2.6 million gallons. With more than 500,000 data centers worldwide, that level of conservation could save trillions of gallons of water.
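Those water figures also check out arithmetically. A sketch follows; note that the worldwide extrapolation assumes every one of the 500,000 data centers uses roughly 13 million gallons a year, so it is an illustrative upper bound rather than a measured total:

```python
annual_use_gal = 13_000_000   # example data center's annual water use
savings_fraction = 0.80       # Inertech's quoted water savings
sites_worldwide = 500_000     # assumed uniform usage: an upper bound

remaining_gal = annual_use_gal * (1 - savings_fraction)
saved_per_site_gal = annual_use_gal * savings_fraction
worldwide_saved_gal = saved_per_site_gal * sites_worldwide

print(f"{remaining_gal / 1e6:.1f} million gallons remain")         # → 2.6
print(f"{worldwide_saved_gal / 1e12:.1f} trillion gallons saved")  # → 5.2
```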
“The bottom-line benefits for mission critical data centers are incredibly attractive,” concludes Robinson. “It’s not just huge energy and maintenance savings; it enables capital cost reductions on the chiller plant, cooling towers, backup generators, piping, etc. And our solution can be implemented for retrofit applications and for new data centers using a modular, phased-construction approach that can grow as data demand grows, which reduces total cost of ownership by 30% to 40% compared to traditional installations. Thanks to Danfoss, we’ve created a solution that delivers impressive energy, water, and operational savings, plus a faster return on investment that mission critical data centers really appreciate.”