Until recently, few businesses examined how effectively they used energy within their data centers. These costs were often buried within larger corporate or departmental bills and were typically viewed as the cost of doing business.
Companies have shifted from benign neglect of data center energy use because of four specific factors:
- Increasing demand for computing capability
- The accelerating rise in high-density computing
- The significant energy used within data centers, which the EPA estimates as almost 2 percent of the United States’ annual electricity consumption
- Increases in energy costs
Operational inefficiency is a major contributor to this substantial increase in energy costs. In fact, Gartner Research VP Ralph Kummer estimates that “traditional data centers typically waste more than 60 percent of the energy they use to cool equipment.” This suboptimal performance drives the need for demonstrable energy efficiency within the data center, a goal that requires a systematic approach on the part of data center providers.
Measuring Efficiency

A standard of comparative measurement is an essential requirement for assessing the efficacy of green data center initiatives and providing accurate benchmarking for industry-wide initiatives aimed at increasing energy efficiency. These data are also critical for customers who want to make educated decisions about potential data center providers and their facilities and who need to demonstrate how their data center supports the overall corporate green strategy.
Through the development of this standard unit, data center efficiency goals can be clearly articulated and, more importantly, competing facilities can be compared on an apples-to-apples basis. This standard would be analogous to the miles-per-gallon estimates provided by the auto industry.
The adoption and use of such a standard would provide customers with a valuable evaluation tool when comparing two or more competing facilities. This ability to compare providers would let a prospective customer evaluate the energy architecture of a facility and also assess the operational cost implications of choosing one facility over another. This is not a trivial consideration when the length of the average data center lease (five to ten years) is factored into the equation.
Both Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCIE) provide a clear answer to the primary issue surrounding energy efficiency within the data center: how much power is devoted to driving the actual computing/IT components (servers, for example) versus the ancillary support elements such as cooling and lighting.
The components of both the PUE and DCIE calculations are identical, except that they are calculated as inverse equations. Each looks at the relationship between total facility power (TFP) and IT equipment power (IEP). TFP is measured at the utility meter for the data center space and includes all of the components required to support the IT load, including:
- Power components, including uninterruptible power supply (UPS) systems and power distribution units (PDUs)
- Cooling elements, such as computer room air conditioners (CRACs) and chillers
- Other infrastructure components, such as lighting.
PUE can range from 1 to infinity, and values approaching 1 indicate better efficiency. DCIE ranges from 0 to 100 percent, and efficiency in this system improves as the percentage moves closer to 100 percent. Of the two, consensus is building around the use of PUE as the measurement standard for data center efficiency. Beginning earlier this year, Digital Realty Trust began to publish this figure for all of its new facilities worldwide. This provides an added level of assurance to customers that these data centers are optimized for efficient energy utilization.
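As a concrete illustration, both metrics can be computed from the same two meter readings. This is a minimal sketch; the function names and the example figures (1,500 kW at the meter supporting a 1,000 kW IT load) are illustrative assumptions, not numbers from the article:

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_power_kw / it_equipment_power_kw

def dcie(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Data Center Infrastructure Efficiency: IT power as a percentage of total."""
    return 100.0 * it_equipment_power_kw / total_facility_power_kw

# A hypothetical facility drawing 1,500 kW at the meter for a 1,000 kW IT load:
print(pue(1500, 1000))   # 1.5  (lower is better, 1.0 is the ideal)
print(dcie(1500, 1000))  # roughly 66.7 percent (higher is better)
```

Because the two metrics are inverses of one another, a facility's DCIE is always 100/PUE; publishing either figure conveys the same information.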
Fundamental Design

The operational efficiency of a data center is a function of effective power utilization and heat removal. Ensuring that a data center is designed to maximize its heat removal capability requires an unobstructed pathway from the cool air source to the intakes of the servers. This cool air pathway must then be coupled with a similar path for the flow of server-generated hot air to the return ducts of the data center CRAC/H units. The overarching goal in developing an energy-efficient data center is to remove the obstacles to effective airflow and cooling capability within the facility itself. Among the elements required to achieve this objective are:
Hot Aisle/Cold Aisle - The hot aisle/cold aisle configuration is an effective way to balance the hot and cold air input and output within a facility. Using this design allows the hot aisles to act as heat exhausts and the cooling system to supply cold air only to the designated cold aisles.
Operating Temperature - Operating a data center at the proper temperature can also dramatically decrease power consumption and corresponding electrical bills. ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) guidelines specify a data center operating temperature of 72°F (22°C) as opposed to 68°F (20°C). Although four degrees seems like a small variance, when multiplied across a 24/7, 365-day operating environment, the decrease in power usage can deliver cost savings ranging from thousands to tens of thousands of dollars annually.
Building Management System (BMS) - The use of a building management system (BMS) provides the insight and control of data center operations required to deliver energy efficiency.
Focus on Flooring - The proper distribution of the perforated tiles of the raised floor is a simple yet effective way to reduce the heat in a data center and the load on cooling components. Ensuring that the floor is properly sealed and that the perforated tiles are not blocked or covered by equipment increases the overall flow of cool air throughout the data center.
An Energy-Efficient Provider

Selecting the right provider is an important, albeit commonly overlooked, element of operating an energy-efficient data center. Identifying a provider that can help optimize the power usage within a data center requires that they explain their overall approach and commitment to energy efficiency. Key issues are:
Do they provide the tools necessary to efficiently manage energy usage?
The components of an energy management toolset can vary between providers but must include power metering. This should be a standard feature of any move-in-ready facility. By offering customers metered power, a provider bills the customer only for the power that they use, as opposed to a fixed amount. Prospective customers should also inquire about a provider’s plans for adding graphical energy-usage tracking capability. This capability lets customers view their data center energy usage over time and identify key trends that can then be optimized to enhance efficiency and reduce costs. At Digital Realty Trust this capability will be implemented within our CFM system in the third quarter of 2008.
How do they purchase power?
In a data center environment it is important to remember that the provider is the “energy utility.” The provider is the one who will invoice for contracted power, so it is important to understand what they are doing to maximize their purchasing effectiveness. For example, the price of power typically rises and falls throughout the year due to fluctuating usage (high levels of air conditioner utilization in the summer, for example). Does the provider attempt to take advantage of these fluctuations by purchasing power during non-peak times to lock in lower rates? By using this buying strategy, a provider can procure power at a lower average rate for the year and pass those savings along to its data center customers.
Summary

Due to the continued acceleration of corporate computing requirements, the power needs of data centers will only continue to escalate over time. This trend has given rise to the need for increased data center energy efficiency through the achievement of three specific goals:
- Minimizing the overall power needs of the data center
- Maximizing the proportion of total power used by the IT equipment itself
- Minimizing the amount of power required by non-IT equipment