Energy efficient, renewable, sustainable, carbon neutral, and of course the catch-all, going green — all make for great sound bites, but how do we really quantify how well we are doing in the age of gigawatt scale data centers? While the power usage effectiveness (PUE) metric from The Green Grid debuted in 2007 as a relatively simple way to quantify the facility site energy efficiency, it was only the first step. However, the water usage effectiveness (WUE) and carbon usage effectiveness (CUE) sustainability metrics made data centers look beyond the walls of the facility to the source of the energy generation, and then we all discovered what Kermit the Frog knew when he sang, “It’s Not Easy Being Green.”
Nonetheless, the hyperscale giants claim they are all committed to the same target of using 100% renewable energy. These claims were examined in detail by the 2017 Greenpeace report, “Clicking Clean.”
As it turns out, some sustainability leaders, such as Google, state that they reached the goal of purchasing 100% renewable energy in 2017, having purchased more than 7 billion kilowatt-hours of electricity, roughly as much as the state of Rhode Island uses. Google details these efforts in its recently released report, “Moving toward 24x7 Carbon-Free Energy at Google Data Centers: Progress and Insights.”
However, even Google admits that this is not the same as operating on renewable energy 100% of the time. What is the difference? In essence, Google uses a variety of power purchase agreements (PPAs), which means that over the course of the year the total kWh it consumed equaled the renewable energy purchased via those PPAs. This is unquestionably an admirable milestone. Nonetheless, as Google acknowledges, it is not the same as actually running continuously on renewable power sources through every hour of a 24-hour cycle.
The Google report provides various graphs plotting the number of hours of carbon-free energy (CFE) over the course of a year, relative to carbon-based energy sources. In the end, Google contractually achieves the 100% CFE purchase via PPAs.
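The gap between the two accounting methods is easy to see in a toy calculation. The sketch below (all numbers are invented for illustration, not taken from the Google report) models a flat data center load against a solar-shaped supply whose daily total exactly equals the load. The volumetric match is 100%, yet the hour-by-hour match is far lower, because the midday surplus cannot offset the nighttime hours:

```python
# Toy illustration: annual (volumetric) renewable matching vs. hourly
# 24x7 matching. All figures are invented for illustration only.

load = [100.0] * 24  # data center draws a flat 100 MWh every hour

# A solar-like profile: zero at night, peaking at midday, scaled so that
# total daily generation equals total daily load (a "100% annual match").
raw = [max(0.0, 1 - ((h - 12) / 6) ** 2) for h in range(24)]
scale = sum(load) / sum(raw)
solar = [r * scale for r in raw]

# Volumetric match: total supply over total demand for the period.
annual_match = sum(solar) / sum(load)

# Hourly match: only supply delivered in the same hour as the demand counts.
hourly_match = sum(min(l, s) for l, s in zip(load, solar)) / sum(load)

print(f"Volumetric match:      {annual_match:.0%}")  # 100%
print(f"Hour-by-hour CFE match: {hourly_match:.0%}")
```

The hourly figure lands well below 100%, which is exactly the shortfall the Google report quantifies with its CFE charts.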
The immediate solution that comes to mind is energy storage, given all the recent improvements in the performance and capacity of lithium-ion and other battery technologies. In reality, however, this is not as simple or cost effective as it would seem. While Google (and the other internet giants) have the resources, the Google report indicates that even a large-scale investment in on-site storage capacity would not guarantee that the site could avoid drawing on some amount of non-CFE sources.
In fact, they provide performance charts for some of their major locations. One such example is their North Carolina data center, which uses regionally generated solar energy. While midday electricity use was, on average, matched by solar-sourced energy, overall in 2017 only 67% of this data center’s electricity use was matched on an hourly basis with carbon-free sources (a figure that, they note, includes carbon-free nuclear generation in North Carolina). In fact, they state, “In the absence of long-duration energy storage, a single source of renewable energy is generally not sufficient to provide a 24x7, 100% match with a data center’s load. Even in a region where our wind PPAs can produce up to three times as much power as our data center requires, there are also breezeless hours or days when our load is matched with scarcely any carbon-free power.”
Like Google, Microsoft also employs and promotes the PPA as a practical mechanism for purchasing renewable energy, stating, “In less than a decade, renewable energy created from corporate PPAs went from zero to more than 13 gigawatts in the U.S. alone.” Moreover, Microsoft is actively making it easier for others to buy and sell renewable energy, and to reduce the risks of intermittent sources, by partnering with the insurer Allianz and other organizations on what they call a volume firming agreement (VFA); Microsoft, in addition to co-developing the VFA, will become its first adopter. This was announced in an October Microsoft blog post, “Buying renewable energy should be easy — here’s one way to make it less complex.”
In the end, while these are mostly financial transactions, they clearly benefit everyone by lowering the cost of renewable energy. They also act as a catalyst: rising demand increases the renewable generation capacity being added, which in turn becomes available to other data centers. Indirectly, this helps drive broader adoption of lower cost renewable energy for everyone, since in some areas renewable energy is already cheaper than fossil fuel based generation.
Like many of the other hyperscale operators (Amazon, Apple, Facebook, Microsoft, etc.), Google is a business, and the cost of energy is a major element of its operating expense, regardless of how it is generated. So they closely monitor and unceasingly strive to optimize their facility energy efficiency (with PUEs well under 1.2), as well as their IT energy efficiency. Until recently, however, purchasing renewable energy was more expensive, so promising to go 100% renewable was a costly commitment that most other businesses (data center or otherwise) were in no hurry to make.
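For reference, PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment, so a PUE of 1.2 means 20% overhead for cooling, power distribution, and lighting. A minimal sketch, with invented example figures:

```python
# The Green Grid's PUE metric: total facility energy / IT equipment energy.
# A PUE of 1.0 would mean zero cooling and power-distribution overhead.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Invented example figures: a legacy enterprise room vs. a hyperscale site.
print(pue(2_000_000, 1_000_000))  # 2.0  (overhead equals the IT load itself)
print(pue(1_150_000, 1_000_000))  # 1.15 ("well under 1.2")
```

The same annual kWh of IT work thus costs a legacy site nearly twice the energy of a well-tuned hyperscale facility.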
RENEWABLE ENERGY — TWO CENTS PER KWH
However, by committing to large-scale projects, Google and others bet that over the long term renewable sources such as solar and wind could cost less than fossil fuel energy sources, and that bet is proving to be true. While neither Google nor Microsoft discloses how much (or how little) they are paying for renewable energy under their PPAs or VFAs, the U.S. Department of Energy Lawrence Berkeley National Lab Wind Market Report (August 2018) shows that wind power PPA prices have dropped from approximately 7 cents per kWh in 2009 to around 2 cents per kWh moving into 2018.
Nevertheless, in some cases the optimal sites for solar and wind generation do not coincide with large-scale local demand for the energy, which complicates the generation vs. transmission capacity and related delivery cost equations.
Moreover, existing local power plants (coal, and even newer natural gas plants) had, and still have, a vested interest at stake: they lose annualized energy revenue (kWh) while still being called upon to meet peak demand (kW) when the intermittent renewable sources fall short of real-time power demand. As a result, in many cases, power plant operators and utilities initially reacted by trying to impede or delay the spread of renewable energy, rather than adapting and integrating renewables into their generation portfolios.
THE BOTTOM LINE
So as we enter the last year of this decade, think about how we can all help make our existing data centers “greener.” I am writing this article on the heels of teaching a U.S. Department of Energy Data Center Energy Practitioner (DCEP) training class, which is primarily focused on optimizing the efficiency of existing data centers. The DCEP program examines a wide range of “traditional” but energy inefficient practices, such as trying to overcome IT hot-spots by lowering cold aisle temperatures to 60°F or less and trying to rigidly maintain 50% RH, while overlooking the importance of the basics of proper airflow management, which are relatively easy to correct. The range of opportunities to improve existing older data centers is quite extensive, since they tend to be less efficient than the new sites.
And while Amazon, Google, and others recently announced they are already beginning to use AI and ML to continuously optimize their cooling system energy efficiency, you don’t need to go that far. You will be amazed by how much energy can be saved just by adding blanking plates to your racks, then slowly raising the supply temperature a few degrees, and then reducing your cooling unit fan speeds by just 20%. However, to make meaningful and ongoing improvements, make sure you have some level of real-time energy metering (not just the monthly utility bill), or a DCIM system, so that you can verify or model whether, and by how much, the changes you make have a positive impact.
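That last suggestion is less modest than it sounds. Per the fan affinity laws, fan power varies roughly with the cube of fan speed, so a 20% speed reduction cuts fan energy by nearly half. A simplified sketch (idealized physics, ignoring motor and VFD efficiency curves, which reduce the real-world savings somewhat):

```python
# Fan affinity law: power scales approximately with the cube of fan speed.
# Idealized sketch; actual savings depend on motor/VFD efficiency curves.

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed power drawn at a given fraction of full speed."""
    return speed_fraction ** 3

savings = 1 - fan_power_fraction(0.80)  # a 20% speed reduction
print(f"Approximate fan energy savings: {savings:.0%}")  # ~49%
```

This is why variable-speed fan control, enabled by good airflow management, is one of the highest-return efficiency measures in an existing data center.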
Finally, think about the “Negawatt”: the power that you did not need to consume. Start by holistically examining and optimizing every aspect of the computing environment, beginning with the IT equipment and the software (yes, software, which can be optimized for energy efficiency, or can simply demand massive amounts of processing power just to make it work). This is one of the reasons the hyperscalers and cloud service providers are so efficient. Hopefully, it may also be a catalyst for enterprise organizations to increase cooperation between their IT and facilities departments. And of course, let’s not forget hunting down the dreaded zombie servers hiding in the racks, and the power, cooling, and space that they waste.
And finally, Best Wishes for a Happy Holiday Season, as well as a Greener and Happy New Year!