Welcome to 2017, and as predicted in my “Show Me the Money” prognosis, the digital ecosystem continues to expand exponentially. This, of course, requires ever more physical data centers which are being planned and built at scales measured in millions of square feet of whitespace and with project budgets bursting through the billion dollar mark.
January saw Digital Realty announcing that it will double down this year with massive expansion plans, other major colocation providers making similarly bullish statements, and Verizon selling its data centers to Equinix. Almost every week brings more such announcements; they are now on a financial scale where we can watch the stock market tickers for news of acquisitions, divestitures, mergers, and the overall game of musical chairs among the major and up-and-coming players.
Putting my Hot Aisle Insight financial crystal ball away for now, we can get back to some of the more basic topics of the Hot Aisle Insight column: cooling and energy efficiency. These new data center projects have power demands approaching, or expected to reach, the gigawatt range.
Of course, the data center industry in particular has long been a target for Greenpeace, which still positions itself as the self-appointed grand overseer of the industry. But for now, even Greenpeace seems to grudgingly acknowledge that the data center industry has improved. A new 102-page Greenpeace report published in January 2017, “Clicking Clean: Who Is Winning the Race to Build a Green Internet?” (http://bit.ly/2ifCXl0), finds that Apple, Google, and Facebook are leading the charge to build a renewably powered internet. The report focuses more on renewable energy sources, which is usually a good thing (unless you clear-cut hundreds of acres of trees for a solar farm). While I will not debate why the data center industry has any greater responsibility than any other industry, it is good to see some form of positive acknowledgment from Greenpeace.
Setting aside Greenpeace’s “compliments,” things have clearly improved in those areas since 2007, when The Green Grid (TGG) was founded and created the power usage effectiveness (PUE) metric, but there is always room for improvement within our industry. Since then, TGG has produced many more sustainability-related metrics, such as carbon usage effectiveness (CUE), water usage effectiveness (WUE), and, most recently, the performance indicator (PI). ASHRAE also delivered Standard 90.4, its energy efficiency standard for data centers, in 2016.
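As a quick refresher, these metrics are all simple ratios against IT energy consumed. A minimal sketch, using made-up illustrative numbers (not measurements from any actual facility):

```python
# Illustrative calculations of The Green Grid's efficiency ratios.
# All input figures below are hypothetical examples.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon usage effectiveness: total CO2 emissions (kg) / IT energy (kWh)."""
    return total_co2_kg / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water usage effectiveness: site water usage (L) / IT energy (kWh)."""
    return site_water_liters / it_kwh

# A facility drawing 1,200 kWh in total for every 1,000 kWh of IT load:
print(pue(1200.0, 1000.0))  # 1.2
```

The closer PUE gets to 1.0, the less energy is spent on cooling and power distribution overhead, which is exactly the progress the hyper-scalers have made.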
So now we have plenty of metrics and standards. Moreover, the overall effective use of energy seems to be getting better, despite the ever-increasing gigawatt-level demands of massive new data centers, as reported in the June 2016 Lawrence Berkeley National Laboratory (LBNL) report LBNL-1005775, “United States Data Center Energy Usage Report” (http://bit.ly/2k4lbU0).
This 65-page report details historic energy usage and projected trends covering 2000–2020. It cites multiple aspects of energy efficiency improvements, such as IT hardware performance gains coupled with higher utilization rates due to virtualization, as well as facility cooling systems. The report summarizes:
“The combination of these efficiency trends has resulted in a relatively steady U.S. data center electricity demand over the past 5 years, with little growth expected for the remainder of this decade.”
According to the report, the efficiency of the hyper-scale data centers plays a significant role in helping to lower the projected future energy usage compared to the 2010 efficiency trend (Figure 1).
Some, like Google and Facebook, have already reduced the facility overhead to nearly zero, with PUEs in the 1.0x range. That effectively leaves optimizing the energy used by the IT equipment itself, which has seen continued performance-per-watt improvements for servers, storage, and network equipment. While advances continue with each new generation, ever-growing computing demands still outpace the performance gains. As a result, more data centers are still being built, all requiring massive amounts of power. Not only is this a huge energy cost; the total waste heat also contributes to climate change, no matter how low the data center’s PUE is. This is true whether the waste heat is dumped in a river, lake, or ocean instead of the air. Even if the data center is powered from renewable energy, it is still all eventually turned into waste heat.
As the data center industry continues to evolve and mature, more and more of the hyper-scalers may begin to reconsider the economics of energy recovery. They are not dependent on standard servers from the major OEM manufacturers and have been designing and building their own custom low-cost servers for some time. Intel even creates different proprietary versions of its CPUs for each of the internet giants, since they now purchase more processors than major branded vendors such as Dell, HP, and Lenovo. The internet giants do not need to meet any standard form factors and are free to revise designs constantly. Even the open source groups, such as the Open Compute Project (OCP) and Open19 (formed by LinkedIn last year), frequently change designs. While the IT hardware designs are driven to lower cost and maximize performance, hardware is only part of the total cost of operations (TCO). Another very significant element of TCO is energy cost. Like any other industry, whether making toasters, cars, or widgets, the digital economy has its unit economics: the TCO of a data center can be summarized as “cost-per-click” (transactions per kWh).
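To see why energy cost matters so much in that “cost-per-click” framing, consider a back-of-the-envelope sketch. Every number here is an assumption chosen purely for illustration (utility rate, PUE, and throughput will vary widely by site and workload):

```python
# Hypothetical "cost-per-click" arithmetic: energy cost per transaction.
# All input values are assumptions for illustration only.

energy_price_per_kwh = 0.07       # $/kWh, assumed utility rate
pue_factor = 1.1                  # facility overhead multiplier (PUE)
transactions_per_it_kwh = 50_000  # assumed workload throughput per IT kWh

# Each kWh consumed by the IT gear actually costs PUE x the utility rate
# at the meter, since cooling and distribution overhead scale with IT load.
cost_per_transaction = (energy_price_per_kwh * pue_factor) / transactions_per_it_kwh
print(f"${cost_per_transaction:.8f} per transaction")
```

Lowering either the PUE or the price paid per kWh (including by recovering and reselling waste heat energy) directly lowers the cost-per-click, which is why TCO, not altruism, is the lever here.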
So what is the ultimate answer to this issue? Energy recovery and reuse. Clearly, air cooled IT hardware is well established, mass produced, and simple to install and upgrade. However, the exhaust heat produced is hot enough to affect people working in the facility, and beyond a certain power density airflow management becomes a significant issue, while higher intake temperatures can impact IT equipment reliability. Nonetheless, this has obviously not stopped its wide-scale use and massive deployment.
Even if, for the sake of efficiency, the “free cooling” temperature range is extended and the IT intake is allowed to go to 100°F, the exhaust airflow can reach 130°F (or higher). This air temperature is still too low to be useful beyond heating the relatively small offices of a typical large multi-megawatt facility. In contrast, liquid cooled servers can produce fluid temperatures of 150°F or higher (without impacting ambient room temperatures or IT equipment reliability). This would allow the heat energy to be harvested far more easily and on a greater scale.
There has been a lot of interest, activity, and a host of new liquid cooled IT systems (and related support systems) since I last wrote about liquid cooling (http://bit.ly/1mt7ZOQ). This past year, I had the opportunity to be part of a Green Grid workgroup that wrote the Liquid Cooling Technology Update white paper, which was originally released in November for TGG members, but will also be made available for public download this March.
The white paper discusses the numerous benefits and types of the various systems and provides a solid foundation for anyone considering liquid cooling (LC). However, there is a general misconception that LC systems (IT and facility) are more expensive than air cooled data centers. In most cases, this is because, until recently, the majority of LC implementations have been in the high-performance computing arena, often for cooling higher density racks (at 20 kW or more). More vendors are now offering complete LC racks of high-performance servers that are close in cost to comparably performing air cooled systems, but are much denser and take less space and power. If produced at higher volumes, costs could drop to be comparable to, or even lower than, air cooled IT systems.
Energy recovery and reuse have been a long-range goal for data centers. There have been some successful small-scale projects over the last few years, such as heating nearby swimming pools or keeping walkways from freezing. However, these were primarily designed as proofs of concept rather than for economic justification. In some other cases, waste heat was fed to district heating systems, mostly in colder countries like Finland and Sweden. In most cases, these were air cooled systems in which only a small fraction of the heat was harvested and repurposed.
While large scale LC-based data centers with energy recovery have not yet been built, to be cost effective they would need to be paired with an application or location that could directly use the waste heat of the output fluid at 130°F to 150°F. Alternatively, there are some heat pump technologies that could potentially take the input heat at that temperature and then only need to add some additional energy to raise the temperature on the output side of the heat pump (to 200°F or higher). This could be applied to some manufacturing processes, or even to drive a low-temperature expansion turbine to produce electricity. The electricity could then be used by the data center or fed back to the grid.
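The appeal of the heat pump approach is that the electrical input is only a fraction of the heat delivered; the rest comes free from the waste stream. A rough sketch of that arithmetic, where the coefficient of performance (COP) of 3 is an assumed value for illustration (the achievable COP depends heavily on the actual temperature lift):

```python
# Rough heat-pump upgrade arithmetic. The COP value below is an assumption
# for illustration; real figures depend on the temperature lift and equipment.
# A heat pump delivering q_out of upgraded heat needs W = q_out / COP of
# electrical input; the remainder is absorbed from the waste-heat stream.

def heat_pump_input_kw(q_out_kw: float, cop: float) -> float:
    """Electrical input required to deliver q_out_kw at the elevated temperature."""
    return q_out_kw / cop

def waste_heat_absorbed_kw(q_out_kw: float, cop: float) -> float:
    """Heat drawn from the 130-150F waste stream for the same delivery."""
    return q_out_kw - heat_pump_input_kw(q_out_kw, cop)

# Assume a COP of 3 when lifting ~150F return fluid toward ~200F output:
q_out = 1000.0  # kW of upgraded heat delivered
print(heat_pump_input_kw(q_out, 3.0))      # ~333 kW of purchased electricity
print(waste_heat_absorbed_kw(q_out, 3.0))  # ~667 kW recovered from waste heat
```

In this sketch, two-thirds of the delivered heat is energy the data center was going to throw away anyway, which is what could make the economics interesting at gigawatt scale.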
While the basic processes and technologies exist, considerable investment will be needed to make this viable on an industrial scale. For it to make commercial financial sense, the conversion process must be efficient enough that the cost savings from recovered energy provide a payback on the capital investment within a reasonable period.
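The payback test itself is simple arithmetic; the hard part is getting the inputs right. A minimal sketch, with both figures invented purely for illustration:

```python
# Simple payback period: years = capital cost / annual energy savings.
# Both figures below are hypothetical, for illustration only.

def simple_payback_years(capex: float, annual_savings: float) -> float:
    """Years to recover the capital investment from annual savings."""
    return capex / annual_savings

# Suppose recovery equipment costs $5M and recovered energy offsets $1.25M/yr:
print(simple_payback_years(5_000_000, 1_250_000))  # 4.0 years
```

A fuller analysis would of course discount future savings and account for maintenance, but even this simple ratio shows the threshold: the recovered-energy revenue must be large relative to the added capital cost.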
Nevertheless, given the magnitude of the energy being drawn by these new massive-scale facilities, this could ultimately become cost effective (and far more sustainable). Over the long term, the potential for large scale energy recovery, and the resultant reduction of TCO, could drive the economics of a data center designed to support liquid cooled IT systems.
THE BOTTOM LINE
It has taken almost 10 years for low PUEs to become part of every new data center design goal. Now renewable energy sources are also becoming part of the considerations when a new site is reviewed. Nonetheless, while many organizations have social responsibility policies, for sustainability and energy efficiency to be even more widely adopted, they cannot be an end in themselves. They must be justified economically, just like any other business decision and investment.
Exascale and zettascale operators (hyper-scale seems old now), such as cloud service providers, as well as search and social media companies (which are free to run any type of hardware), are in the best position to develop and operate a data center with a significant energy recovery capability based on LC. Just as Facebook was willing to build a “free cooling” data center in an extreme location (Luleå, Sweden, less than 70 miles south of the Arctic Circle), a data center designed for energy recovery is not so farfetched!
Wholesale colocation providers in particular have become hyper-focused on TCO, since they understand this helps make their offerings more economically competitive for their customers. Data Center Alley in Virginia, like other areas of the country, has grown because of an abundance of relatively low cost energy and favorable tax incentives, both of which lower TCO. Therefore, I would like to think that in the next few years mega-scale data center projects will include energy recovery systems for the same reason, and I believe liquid cooled IT systems will make them cost effective.
While colocation and cloud providers previously competed with each other, in some cases they have become mutually symbiotic via the creation of the “connected campus,” a growing trend in which cloud providers are situated within a wholesale colocation campus. This concept could be extended to make campus-based or centralized energy recovery systems more feasible and cost effective and help drive adoption. One would hope that government incentives (federal, state, or local) could also be created to help drive development, just as they have for solar and wind projects. This would help offset the need to build more power generation and reduce the fossil fuels burned by existing power plants.
Not only could cost effective energy recovery become the “next big thing” and part of the next generation of “green” data centers, it would be driven by TCO economics, and not just by the desire to generate PR announcements about social responsibility to appease Greenpeace.