As anyone in our business is all too aware, it takes an enormous amount of electricity to cool a data center. According to Global Market Insights, cooling systems represent, on average, 40% of total data center energy consumption, and then of course there is the collateral cost to the environment.
In utilizing approximately 3% of the global electricity supply, data centers account for about 2% of the world’s greenhouse gas emissions, according to Yole Développement, a market research and strategy consulting company. While 2% may not seem like much, it makes the data center industry’s carbon footprint equal to that of the airline business. Stream a video or audio file, conduct a search, send an email to a friend or colleague, access a customer relationship management (CRM) application, or binge-watch the latest must-see over-the-top (OTT) television series, and to some degree you’re tapping into the finite resources that make the internet work.
One way to increase the energy efficiency of a data center and reduce its carbon footprint is choosing where to build it. Witness what Facebook and Google have demonstrated by siting two of their facilities in Sweden and Finland, respectively, where naturally low year-round temperatures require less mechanical cooling capacity. Facebook’s data center resides in Luleå, only 70 miles south of the Arctic Circle. Google’s Hamina data center, the first of its kind in the world, uses seawater from the Gulf of Finland for its cooling system, reducing energy use.
But what if your data center isn’t located in Scandinavia, but in Phoenix, where summer temperatures can spike to well over 100°F? Hot climates pose a particularly difficult cooling challenge, significantly impacting operational expenditures. And then there’s the challenge of accommodating high, mixed, and variable-density environments, which many data center models fail to do, leading to stranded power and space capacity, as well as energy and water inefficiencies.
Data centers address cooling in many different ways. Microsoft’s self-described “moonshot,” Project Natick, is an underwater data center off the coast of the Orkney Islands in Scotland. While only in early trials, the subsea data center has roughly the same dimensions as a 40-ft long ISO shipping container. Because the world’s oceans at depth are consistently cold, the project’s R&D engineers believe they could one day provide free and ready access to a virtually unlimited cooling resource.
More commonly, many data center owner-operators opt for direct liquid cooling techniques to maintain their facilities at an ideal temperature. Direct liquid cooling, which has been around for many years, encompasses a wide range of techniques in which a coolant fluid is delivered to the processor and other electronics for heat transfer. From an environmental perspective, however, the issue is that this practice often consumes an extraordinary amount of clean water and requires extensive fluid disposal and ongoing water treatment.
This is especially a problem in developed countries such as the United States, where data centers account for a meaningful share of industry’s clean-water consumption. A study by the U.S. Department of Energy (DOE) and the Lawrence Berkeley National Laboratory found that five years ago, data centers across the country consumed approximately 165 billion gallons of water for cooling and power. By next year, the DOE predicts that annual water use in the nation’s data centers could increase to 174 billion gallons. While these numbers might seem abstract, they underscore that these resources are not limitless.
While air-cooled data centers are generally considered efficient, they too consume considerable energy to power the facility and the IT system fans that move air. And here again there is the issue of stranded power capacity, held in reserve for peak cooling requirements when the fans, pumps, and compressors are laboring at their highest levels. At some facilities, this reserved capacity can represent as much as a third of the site’s total power envelope, capacity that could otherwise be reallocated to IT.
The data center cooling challenge is actually a heat removal challenge
From a cooling and electrical perspective, it has become necessary to redesign traditionally static systems to be dynamic, able to handle varying densities as well as variable power draws. An essential element of an adaptive data center model, cooling technology that captures and removes heat at its source, rather than pushing cold air into the data hall, provides a hyper-scalable and ultra-efficient environment that dynamically adapts to IT loads. Simply put, the data center cooling challenge is actually a heat removal challenge. Such a purpose-built cooling system significantly improves the efficiency of existing infrastructure and can accommodate both new data centers and retrofitted facilities.
Combining innovation with simplicity, this cooling system is based on three aspects of thermodynamics: heat absorption, heat transportation, and heat rejection.
First, the cooling system captures and contains the heated air exhausted from the equipment racked in the cabinets, channeling it through an extremely efficient primary unit, where the heat is absorbed and removed at the source. This technology requires significantly less power, only 1% of the IT load compared to the 10% consumed by a typical computer room air conditioning (CRAC) unit, making it one of the most energy-efficient methods of heat capture and rejection available today.
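As a rough illustration of that overhead difference, the sketch below compares cooling power for a hypothetical 1MW data hall. Only the 1% and 10% fractions come from the text; the load figure and the annualization are assumptions.

```python
HOURS_PER_YEAR = 8760

def cooling_overhead_kw(it_load_kw: float, overhead_fraction: float) -> float:
    """Power drawn by heat-capture equipment, as a fraction of the IT load."""
    return it_load_kw * overhead_fraction

it_load_kw = 1000.0  # hypothetical 1 MW data hall

crac_kw = cooling_overhead_kw(it_load_kw, 0.10)       # typical CRAC: ~10% of IT load
at_source_kw = cooling_overhead_kw(it_load_kw, 0.01)  # at-source capture: ~1%

# Energy avoided if the hall runs at this load around the clock
saved_mwh = (crac_kw - at_source_kw) * HOURS_PER_YEAR / 1000.0

print(f"CRAC unit power:       {crac_kw:.0f} kW")
print(f"At-source capture:     {at_source_kw:.0f} kW")
print(f"Annual energy avoided: {saved_mwh:.0f} MWh")
```

Even at this modest scale, the difference compounds to hundreds of megawatt-hours per year.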
The heat is then transported to another unit, where it is rejected into the atmosphere. This not only reduces the water flow requirements by 50%, but also results in a 75% reduction in piping costs: less water means less piping and pumping.
While the primary unit absorbs and transports the heat, it is the secondary unit that rejects it into the atmosphere. The secondary unit is an air-cooled, adiabatic-assisted cooling system comprising a dry fluid cooler with an indirect evaporative cooling mode and an integrated chiller heat rejection system. Because it is a closed system decoupled from the air handler, there are no external environmental risks or temperature and humidity fluctuations. Even on the hottest day of the year, it offers a peak mechanical power usage effectiveness (PUE) of 1.15, up to 40% less than traditional cooling systems. Moreover, the unit is factory-built and incrementally scalable in 750kW to 1.5MW blocks.
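To see where a figure like “up to 40% less” can come from, consider the cooling power implied by a partial (mechanical) PUE, defined as (IT power + mechanical power) / IT power. In the sketch below, only the 1.15 figure comes from the text; the 1.25 legacy baseline and the 1MW load are assumptions chosen for illustration.

```python
def mechanical_overhead_kw(it_load_kw: float, mechanical_pue: float) -> float:
    """Cooling power implied by a partial (mechanical) PUE = (IT + cooling) / IT."""
    return it_load_kw * (mechanical_pue - 1.0)

it_kw = 1000.0                                   # hypothetical 1 MW IT load
new_kw = mechanical_overhead_kw(it_kw, 1.15)     # adiabatic-assisted system at peak
legacy_kw = mechanical_overhead_kw(it_kw, 1.25)  # assumed traditional baseline

reduction = (legacy_kw - new_kw) / legacy_kw

print(f"New system cooling power: {new_kw:.0f} kW")
print(f"Legacy cooling power:     {legacy_kw:.0f} kW")
print(f"Cooling power reduction:  {reduction:.0%}")
```

The key point is that small moves in mechanical PUE translate into large relative cuts in cooling power, because only the overhead above 1.0 is actually spent on cooling.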
Considering that capacity demand for hyperscalers, platform and cloud providers, and even enterprises dependent on high-density computing can fluctuate quarter to quarter or even day to day, these last two points cannot be overemphasized.
Given the cyclicality of their revenue streams, the launch of new products and services, or the expansion of their businesses into new markets, demand forecasting and related business planning can prove challenging for these organizations. Faced with these uncertainties, incrementally scalable cooling technology allows companies to deploy what they need, where and when they need it. Prefabricated cooling components facilitate easy, fast, and efficient deployment to ensure speed to market. By taking the installation of the most critical equipment out of the field and into the factory, prefabrication accelerates project timelines and reduces cost compared to on-site assembly.
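The incremental, block-based scaling described above can be sketched as a simple capacity-planning helper. The 750kW and 1.5MW block sizes come from the text; the demand figures are hypothetical.

```python
import math

BLOCK_KW = 1500  # one factory-built 1.5 MW cooling block (750 kW blocks also exist)

def blocks_needed(target_it_kw: float, block_kw: int = BLOCK_KW) -> int:
    """Prefabricated cooling blocks required to cover a target IT load."""
    return math.ceil(target_it_kw / block_kw)

# Capacity can be added as demand grows instead of being built out on day one:
for demand_kw in (600, 2000, 4800):
    n = blocks_needed(demand_kw)
    print(f"{demand_kw} kW demand -> {n} block(s), {n * BLOCK_KW} kW installed")
```

Deploying in blocks keeps the gap between installed and used capacity bounded by one block size, rather than by a multi-year demand forecast.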
This dynamic approach to cooling supports high, mixed, and variable power densities, enabling companies to evolve without stranding capacity. It can scale vertically or horizontally, supporting 1kW to 50kW per rack within the same footprint and resulting in fewer points of failure. Workload densities can scale in place without reconfiguring existing infrastructure, dispersing equipment, or making large-scale investments to augment floors for increasing heat loads.
And because IT loads can vary, a dynamic cooling system responds quickly and optimizes in real time. The units are close-coupled with the racks, so the heat removal system instantly ramps up and down based on server demand. Variable-speed fans and pumps respond in real time to changing loads, keeping the system responsive to today’s and tomorrow’s needs.
Future-proof cooling technology for a sustainable planet
According to a study by the International Energy Agency, hyperscalers already account for 20% of the world’s data center electricity usage, but by next year, they may well draw nearly half of it. Cisco estimates that in two years, hyperscale data centers will account for 55% of all data center traffic, 65% of data stored in data centers, and 69% of all data center processing power.
Given Gartner’s estimate that more than 20 billion Internet of Things (IoT) devices will be deployed next year, we can be certain that data generation will only continue to accelerate. Factor in the growing use of other new and emerging technologies, such as consumer and industrial artificial intelligence (AI) and machine learning (ML) applications, autonomous vehicles, drones, and augmented and virtual reality (AR/VR), and it’s clear that hyperscalers, cloud, and platform providers will have to be able to secure scalable capacity — where and when they need it — that is capable of serving potentially hundreds of millions of users.
Even as they experience exponential growth, as good stewards of the planet, these organizations will seek to enhance sustainability by reducing the energy, water, and space needed to operate their physical data center environments, while striving to improve cost-efficiencies. In many cases, big tech companies such as Google, Amazon, Microsoft, and IBM have built their data centers in proximity to low-cost, renewable energy sources. However, the distributed nature of hyperscalers also calls for regional solutions, where workloads can be localized to reduce latency and provide an improved user experience.
Once again, unpredictable usage and growth models require a new breed of adaptable data centers that enable these organizations to deploy infrastructure quickly as needed, and reconfigure seamlessly if necessary.
Especially in light of these sustainability objectives, a cooling system that delivers efficiency at any load, in any climate, and in any location has never been more critical. Future-proof cooling technology can significantly lower energy and water usage, reducing environmental impact while decreasing the total cost of ownership (TCO).