Google has consistently logged exceptional energy-efficiency ratings over the past several years. A quarterly report last October indicated the company had a trailing-twelve-month (TTM) energy-weighted average power usage effectiveness (PUE) rating of 1.21 across six of its company-built facilities, with one facility achieving a TTM rating of 1.15. On top of that, Google data centers have been widely hailed as cutting edge in terms of low carbon footprints and overall “green” design and operation. It’s an exceptional achievement on three crucial fronts. But Jimmy Clidaras, principal data center engineer and director of data center research and development, says that although Google operates with a “do no evil” philosophy with regard to the environment, the company’s green operation is largely a result of its desire to come up with the lowest total-cost-of-ownership solution possible.
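
For reference, PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, and a fleet-wide figure like the 1.21 above is energy-weighted rather than a simple mean of per-facility ratings. A minimal sketch in Python, using invented facility numbers rather than Google’s actual data:

    # Minimal sketch of PUE and an energy-weighted fleet average; the
    # per-facility numbers are invented for illustration, not Google's data.

    def pue(total_facility_kwh, it_equipment_kwh):
        """PUE = total facility energy / IT equipment energy (always >= 1.0)."""
        return total_facility_kwh / it_equipment_kwh

    # (total facility kWh, IT kWh) over a trailing-twelve-month window
    facilities = [
        (60_000_000, 50_000_000),   # PUE 1.20
        (23_000_000, 20_000_000),   # PUE 1.15
        (49_000_000, 40_000_000),   # PUE 1.225
    ]

    print(round(pue(*facilities[1]), 3))   # 1.15 for the best single facility

    # Energy-weighted average: sum the energies first, then take the ratio,
    # so larger facilities count for more than a simple mean of PUE values.
    fleet_pue = sum(t for t, _ in facilities) / sum(i for _, i in facilities)
    print(round(fleet_pue, 3))             # 1.2 for these made-up figures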

“Basically, our metric for measuring performance is dollars per queries served,” Clidaras notes. “And when you start with those kinds of business-imperative metrics, you’re driven to efficiency from the get-go. So the power efficiency we’ve achieved in our data centers is very much a combination of our real-world business needs and our internal company philosophy. The things you read about carbon neutrality and green initiatives were on our minds as we began to think about building our own data centers, but to be frank, we were pleased at how well those seemingly differing viewpoints complemented each other once we began to put our ideas into action.”

An Innovative Mid-step

Today, Google’s data centers are noteworthy for a wide array of efficiency innovations. Every aspect of the centers has yielded efficiency gains for Google: the electricity the servers use, how power is managed inside a facility, and (in Belgium) even the use of water from an industrial canal, treated in an on-site purification facility before it is evaporated in the cooling towers. But it wasn’t always that way. In 2000, Google wasn’t doing anything different from the vast majority of dot-com companies looking to make their mark in cyberspace. “Before 2005, we’d moved up to leasing as much co-location space as we could,” Clidaras says. “It was cheap at the time, because the initial bubble had burst.”

But unlike the vast majority of dot-coms, Google was a huge international hit. The company was growing by leaps and bounds, and try as they might, its data center executives seemed perpetually unable to keep up with its ever-expanding capacity requirements. “We’d taken the next logical step,” Clidaras says. “We were even looking at purchasing or taking out long-term leases on entire buildings when we could get them cheaply.”

But even that solution wasn’t going to work in the long run. By 2003-2004, with the economy going like gangbusters, affordable facilities were getting harder and harder to find. “At the same time, we were continuing to grow at an exponential rate,” Clidaras says.

It was a perfect storm: The rental and co-location markets were not feasible solutions any more. Clidaras explains, “We knew if we weren’t going to stand in the way of [Google’s] growth, we were going to have to design and build our own data centers.”

But designing and building multimillion-dollar data centers takes time: the one resource Google didn’t have, given its explosive growth.

“The idea of a container-based data center had been bouncing around Google internally before I joined the company in 2004,” Clidaras notes. “We’d looked at a wide range of options and more traditional approaches, but nothing seemed to fit the unique circumstances we were in at the time.” The company already had two full-time employees working with data center design and construction consultants DLB Associates, and the basic container design elements had already been decided upon. According to Clidaras, the overall design was “holistic” in nature. “We didn’t have data center people per se on the program,” he notes. “Will Whitted brought an industrial engineering background. Bill Hamburgen came from HP research labs. My background is in aerospace. So we really didn’t have a lot of HVAC or data center people around with a lot of legacy thinking.”

“The initial challenge from Google was to ‘Google-ize’ data center design, specifically the power and cooling systems,” explains Dan Dyer, co-program manager for DLB Associates. “However, once we had started we realized that this was too narrow a focus, and we actually needed to redefine the entire method/delivery system of designing and constructing data centers.”

Dyer says a “boundary-less,” unconstrained week-long brainstorming workshop was held with various internal and external people to come up with all sorts of ideas to potentially pursue. “Some of the ideas were way out there, but all ideas were considered with no preconceived notions,” he adds.

The team approached the container concept from a physics perspective with an eye on what was possible. DLB Associates played a crucial role by providing logistical “reality checks,” aiding in facility design and container interface, and generally keeping the Google team grounded. “It’s great to dream up all these concepts, but you need somebody to tell you about the unforeseen details that can sink you,” Clidaras says. “Things like building or fire codes… There’s a very real threat of putting a lot of effort into designing shipping containers, and at the end of the day you miss a provision for a system that a local authority may mandate. As containers become more commonplace in the data center industry, these concerns should ease. But back in 2004, we simply didn’t know what to expect.”

All the hard work paid off. By Christmas 2004, a prototype container was ready for experimentation and evaluation. The first Google-designed and -built data center was able to handle live data by September the following year.

The adaptive design, construction, and commissioning approach that was used allowed for radical changes to be made well into construction, says Neil Chauhan, co-program manager for DLB Associates. “For example, significantly changing the size of the footprint of the data center mid-construction to better suit Google’s needs within months of being online was a huge departure from normal construction industry operating parameters. Other challenges included meeting schedule and efficiency goals that significantly outperformed the data center industry. One of the more difficult challenges involved how to address the design of something never seen before with the various construction code officials who were charged with approving the compliance of the construction under traditional building codes.”

The container design uses a standard 1AAA shipping container modified heavily for Google’s needs. Each can handle 1,160 servers, requires approximately 250 kilowatts of total power, and has a container power density of approximately 780 watts per square foot, with a cold-aisle temperature of 27 degrees Celsius.
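
Those figures imply a couple of useful derived numbers. A quick back-of-the-envelope check (the per-server and floor-area values below are arithmetic from the published specs, not figures supplied by Google):

    # Back-of-the-envelope check of the published container figures; the
    # derived values are simple arithmetic, not Google-provided numbers.
    servers_per_container = 1_160
    container_power_w = 250_000            # ~250 kW total
    power_density_w_per_sqft = 780

    watts_per_server = container_power_w / servers_per_container
    implied_floor_area_sqft = container_power_w / power_density_w_per_sqft

    print(round(watts_per_server))         # ~216 W per server
    print(round(implied_floor_area_sqft))  # ~321 sq ft, roughly the 40 ft x 8 ft
                                           # footprint of a 1AAA container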

Taking Every Advantage

The container solution gave Google the breathing room it needed to take its data center concept to the next level. And again, Clidaras says the effort was a team approach with a focus on the fundamentals. They weren’t out to reinvent the wheel. Recognizing that data center design evolves only incrementally from one generation to the next, the team opted instead to concentrate on the physics at hand. “That was the guideline as we established our best practices,” Clidaras says. “And they ended up being very simple things, like using efficient transformers, measuring your power, minimizing the UPS losses, doing intelligent things with cooling, not mixing hot and cold air, and raising the thermostat whenever you can.”

Another much-publicized innovation is the use of an onboard battery as the uninterruptible power supply (UPS) for each individual server. “It’s a very efficient solution,” Clidaras notes. “We are more than 99.9 percent efficient because of the way it’s done. But a very efficient standard available UPS today can achieve 98 percent. So if I’m getting a PUE of 1.2, all things being equal and the only difference is I have my UPS versus your high-efficiency UPS, you’ll get a PUE of 1.2-something. We discovered the best practice is a high-efficiency uninterruptible power supply, instead of having these legacy-style UPS battery rooms, where power essentially comes in as alternating current, is rectified to direct current, and feeds all the batteries before being inverted back to AC. Our UPS is a little bit of a game-changer, though, and not simply in terms of performance and efficiency. It’s also about cost. Ours is a more cost-effective solution that wastes very little energy.”
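
The arithmetic behind that “1.2 versus 1.2-something” comparison is easy to reproduce. A rough sketch, assuming PUE is total facility power divided by IT power and that the UPS loss counts as facility overhead (the 1.2 baseline comes from the quote; the rest is illustrative):

    # Rough arithmetic behind the "1.2 versus 1.2-something" comparison above.
    # Assumes PUE = total facility power / IT power, with the UPS loss counted
    # as facility overhead; the 1.2 baseline is from the quote, the rest is
    # illustrative.
    it_power_kw = 1_000.0          # arbitrary IT load for the example
    baseline_pue = 1.2
    eta_onboard = 0.999            # ~99.9% efficient onboard battery
    eta_conventional = 0.98        # ~98% efficient high-efficiency central UPS

    # Facility overhead that is not the UPS, under the baseline design
    non_ups_overhead_kw = baseline_pue * it_power_kw - it_power_kw / eta_onboard

    def pue_with_ups(eta):
        return (it_power_kw / eta + non_ups_overhead_kw) / it_power_kw

    print(round(pue_with_ups(eta_onboard), 3))       # 1.2
    print(round(pue_with_ups(eta_conventional), 3))  # ~1.219, i.e. "1.2-something"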

In spite of the publicity Google has earned for its ultra-efficient PUE ratings, Clidaras is adamant the company isn’t performing alchemy. “The things we do can be mimicked by anybody,” he insists. “There are no secrets there. And we’re happy to share what we’ve learned. These best practices are just that: you can go out and implement them in a Tier 3 or Tier 4 data center and achieve a PUE rating better than 2. We think anybody can achieve something along the lines of 1.5, or even 1.3, if you measure, manage, and implement them properly. The remaining trick is how you implement those practices. The how is clearly important to your business because it determines the cost point at which you’re able to achieve these things.”

Flexibility and taking advantage of geography and environment helped in the never-ending quest for lower PUE ratings as well. As planning for a Belgian data center began, the Google team looked at 10 years of climate data, along with 50-year extremes, and determined the data center could run effectively and efficiently with no refrigeration. “We figured out chillers would only run a very small number of hours each year, and the downside of not having them was that we’d see temperatures fluctuate a little bit. Still, we had the ability to respond to those temperature changes by either moving traffic around or sequestering the people working inside the data center, because the IT equipment can tolerate higher temperatures better than humans can.”
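
That kind of screening boils down to counting the hours in a long climate record when outside conditions would exceed what evaporative cooling alone can handle. A sketch of the idea, assuming hourly wet-bulb temperatures and a threshold chosen purely for illustration (not Google’s actual design criteria):

    # Sketch of the climate screening described above: count the hours in a
    # record of hourly wet-bulb temperatures when free cooling alone would not
    # keep up. The 21 C threshold and the synthetic data are assumptions for
    # illustration, not Google's design criteria.

    from typing import Iterable

    def hours_needing_chillers(hourly_wet_bulb_c: Iterable[float],
                               free_cooling_limit_c: float = 21.0) -> int:
        """Hours when the wet-bulb temperature exceeds the limit at which
        cooling towers alone can hold the target cold-aisle temperature."""
        return sum(1 for t in hourly_wet_bulb_c if t > free_cooling_limit_c)

    # Example with one synthetic year of readings (8,760 hours):
    import random
    random.seed(0)
    synthetic_year = [random.gauss(10.0, 5.0) for _ in range(8_760)]
    print(hours_needing_chillers(synthetic_year))  # hours above the limit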

A second part of the cooling solution in Belgium was to take industrial wastewater from a canal and clean it in an on-site purification facility just enough to use in the center’s cooling towers. This efficient use of water is in keeping with Google’s efforts around the globe. Two of the company’s facilities currently run on 100 percent recycled water, and the company has announced it is planning for recycled water to provide 80 percent of its total data center water consumption by 2010.

Today, Google data center operations are quite large, according to Clidaras. “Per our current PUE reporting, there are nine data centers that satisfy the inclusion criteria of six months in service and produce a minimum of five megawatts of IT power,” he reports. “Data center A was previously identified as the container-based data center and has been in service since late 2005, with a critical capacity of 10 megawatts, and houses approximately 40,000 servers.” If you do some math, you can see these nine data centers probably house a total of hundreds of thousands of servers.
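
That “some math” is straightforward. A rough sketch, using only data center A’s published figures and assuming, purely for illustration, that its watts-per-server ratio is representative of the other reporting facilities:

    # The rough arithmetic behind "hundreds of thousands of servers". Only
    # data center A's figures come from the article; extrapolating its
    # watts-per-server ratio to the rest of the fleet is an assumption.
    dc_a_capacity_w = 10_000_000   # 10 MW critical capacity
    dc_a_servers = 40_000
    watts_per_server = dc_a_capacity_w / dc_a_servers    # 250 W per server

    reporting_data_centers = 9
    min_it_power_w = 5_000_000     # inclusion threshold: at least 5 MW of IT power

    lower_bound = reporting_data_centers * min_it_power_w / watts_per_server
    print(int(lower_bound))        # 180000: a floor consistent with
                                   # "hundreds of thousands" fleet-wide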

In terms of reliability, Clidaras says Google focuses most on the service level, not always on individual data center performance. “For example, it’s critical to not have any customer-facing outages of search or Gmail, but that can be accomplished by having redundancy across groups of servers or data centers, and not always within a single building,” Clidaras explains. “Google data centers are not designed to a specific tier level. There are varying levels of redundancy across the data center depending on the criticality of the part. For example, our UPS solution of having the onboard battery is non-redundant: if one battery fails, so does the server it supports (when power is lost). But the criticality of losing a single server is not high; therefore it’s tolerated. So Google’s UPS solution for individual servers is only at Tier 1, but other elements of the power distribution (and cooling) system are at Tier 2, 3, or 4. The goal is to achieve a customer-facing service level that is exceptional at all times.”

Google's Best Practices

Google says obtaining low PUE ratings is possible for any data center following these simple best practices:

Measure PUE. Know your data center’s efficiency performance by measuring energy consumption and monitoring PUE frequently (a minimal monitoring sketch follows this list).

Manage air flow. Good air flow management is fundamental to efficient data center operation. Start by minimizing the mixing of hot and cold air and eliminating hot spots.

Adjust the thermostat. Raising the cold aisle temperature will minimize chiller energy use. Don’t try to run at 70°F in the cold aisle; try to run at 80°F. Virtually all equipment manufacturers allow this.

Use free cooling. Water- or air-side economizers can greatly improve energy efficiency.

Optimize power distribution. Whenever possible, use high-efficiency transformers and UPS systems.

Buy efficient servers. Specify high-efficiency servers and data storage systems. The Climate Savers Computing Initiative offers resources to identify power-efficient servers.
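
As a companion to the “measure PUE” item above, here is a minimal sketch of what frequent PUE monitoring amounts to: accumulate paired facility-meter and IT-meter energy readings and keep a running, energy-weighted ratio. The class and meter readings are illustrative, not a description of Google’s tooling.

    # Minimal sketch of frequent PUE monitoring: accumulate paired facility-
    # and IT-meter energy readings and report a running, energy-weighted PUE.
    # Class and sample values are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class MeterSample:
        facility_kwh: float   # energy into the whole facility over the interval
        it_kwh: float         # energy delivered to IT equipment over the interval

    class PueMonitor:
        def __init__(self) -> None:
            self.total_facility_kwh = 0.0
            self.total_it_kwh = 0.0

        def record(self, sample: MeterSample) -> float:
            """Add one interval's readings and return the cumulative PUE."""
            self.total_facility_kwh += sample.facility_kwh
            self.total_it_kwh += sample.it_kwh
            return self.total_facility_kwh / self.total_it_kwh

    monitor = PueMonitor()
    for s in (MeterSample(1_250.0, 1_000.0), MeterSample(1_180.0, 1_000.0)):
        print(round(monitor.record(s), 3))   # 1.25, then 1.215 cumulative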