Figure 1. Infrared illustration of a server room showing heat dissipation. Thermal image courtesy of Electronics Environment Corp

Greenpeace, the environmental campaign group, recently estimated that the world’s data centers would use just under 2,000 billion kilowatt hours (kWh) of electricity by 2020, more than the energy currently consumed by France, Germany, Canada, and Brazil combined. Data center power consumption has two approximately equal sources: the data processing equipment itself and the cooling systems required to remove the heat it generates. Data processing energy consumption is projected to rise steadily for the foreseeable future, while the power per square foot in individual data centers will increase with computing power and server density. Design heat density in large data centers is currently 200 to 400 watts per square foot (W/sq ft), with higher-density systems reaching 1,000 W/sq ft or more. The greatest opportunity to save power in data centers therefore lies in their cooling systems, which account for approximately 50 percent of data center power.

ASHRAE’s 2008 Environmental Guidelines for Datacom Equipment were recently updated to incorporate wider temperature and humidity limits in response to current, more tolerant server designs, with reduced energy consumption as a direct result. ASHRAE Standard 90.1 was also recently updated to include economizer provisions aimed at lowering energy costs in commercial buildings, especially data centers. Cooling the overall data center space has been the traditional approach for many years, but this design philosophy is now being challenged by the changing face of the industry. Managing the heat removal path from the servers to the outside air has become an increasingly critical part of data center design. Heat densities vary widely from rack to rack, and the distribution of the heat load within a data center is subject to dynamic change, especially in co-location sites. In-room cooling solutions must therefore lend themselves to rapid change, at minimal cost to the user, in response to system upgrades.

Three main drivers influence thermal management and the in-room cooling equipment selection process:

  • The operational requirement to provide a controlled ambient temperature within the server racks in addition to the white space
     
  • The capital costs associated with new build and upgraded infrastructure development
     
  • The ongoing energy costs and environmental impact of running the facility


Table 1. Annual hours below 75F available for free cooling in various North American cities


The data center thermal management problem is here to stay and will only become more severe. Computer room air conditioning (CRAC) units have been the norm in data center cooling for many years, but they may not satisfy the three critical requirements listed above in a modern, energy-conscious data processing environment.

The CRAC Unit Approach

Traditional CRAC systems were designed to maintain an average room air temperature of 72F and 50 percent relative humidity. Chilled-water CRAC units require a supply of chilled water/glycol at approximately 45F in order to provide their maximum cooling capacity and an optimum discharge air temperature of around 60F. It is important to note that a significant energy premium, around 33 percent, is paid to produce chilled water/glycol at 45F rather than at 65F.
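
To make the link between water temperature and supply air temperature concrete, the short sketch below applies a fixed coil approach of roughly 15F, consistent with the 45F-water/60F-air pairing above; the approach value and the function itself are illustrative assumptions, not data for any particular CRAC unit.

# Rough illustration (assumption): estimate CRAC discharge air temperature
# from the chilled water supply temperature, using an assumed constant coil
# approach of ~15F, consistent with the 45F water / 60F discharge air figures above.
COIL_APPROACH_F = 15.0  # assumed air-off-coil temperature minus entering water temperature

def crac_discharge_air_f(chilled_water_supply_f):
    """Approximate discharge air temperature for a chilled-water CRAC coil."""
    return chilled_water_supply_f + COIL_APPROACH_F

for water_f in (45.0, 55.0, 65.0):
    print(f"{water_f:.0f}F water -> ~{crac_discharge_air_f(water_f):.0f}F discharge air")

With these assumptions, 65F water yields roughly 80F supply air, which is the feasibility problem discussed later in this article.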

Figure 3. Diagram illustrates airflow in a facility cooled by a chilled-water CRAC system


When mainframe computers were the norm, they usually required underfloor air from CRAC units at around 60F. The typical blend of hot exhaust air from the mainframe and recirculated room air provided an average return air temperature to the CRAC units of 72F. The latest CRAC development, now offered by most manufacturers, is the use of direct-drive electronically commutated (EC) plug fans located below the CRAC units. According to the Emerson white paper “Using EC Plug Fans to Improve Energy Efficiency of Chilled Water Cooling Systems in Large Data Centers,” these fans can show an improvement of 20 to 30 percent in fan power over belt-drive centrifugal fans at full speed, and more at reduced speed. However, the fan power in chilled-water CRAC units is typically less than 20 percent of the energy consumed by the chiller serving them, so a 30 percent fan-power saving in chilled-water CRAC units amounts to only 5 or 6 percent of the total chilled-water cooling system’s power. It is important to keep this scale of energy savings in mind when comparing the operating efficiency of different data center cooling systems.
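
As a back-of-envelope check of that arithmetic, the snippet below simply multiplies the two fractions quoted above; the exact values will vary from site to site.

fan_share_of_system_power = 0.20   # fan power as a fraction of total chilled-water system power (upper bound cited above)
fan_power_reduction = 0.30         # EC plug fan improvement over belt-drive fans at full speed (upper bound cited above)

system_saving = fan_share_of_system_power * fan_power_reduction
print(f"Whole-system saving from a {fan_power_reduction:.0%} fan improvement: ~{system_saving:.0%}")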

Industry-standard servers rely on their own internal fans to circulate room air through the racks and, unlike mainframes, do not require a raised floor for this purpose. The air temperature rise across some high-density servers can reach 60F, so room air entering at 75F can be returned to the room at up to 135F. The predictable emergence of data center hot spots has demanded a cooling solution, but simply increasing underfloor airflow has limited value and requires more fan power. Aisle containment systems can improve the flow of air through racks, but CRAC systems are still left with the traditional and inefficient sequence of exhausting hot air into the white space and then processing it within the room before transferring the heat out of the building via a chilled water loop. This can now be viewed as a costly process that is not well suited to current requirements. It can therefore reasonably be concluded that CRAC-based cooling alone cannot effectively deal with today’s evolving data center thermal management issues, even if energy consumption could be totally disregarded.
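
The temperature rise quoted above follows from the standard sensible-heat approximation for air, q (Btu/h) ≈ 1.08 × CFM × ΔT (F). The sketch below applies it with an assumed rack load and server airflow chosen only to roughly reproduce the 60F rise; neither value comes from this article.

# Sensible-heat approximation for air at roughly sea-level density:
#   q [Btu/h] ≈ 1.08 × CFM × ΔT [F]
BTU_PER_HR_PER_KW = 3412.0

def air_temperature_rise_f(load_kw, airflow_cfm):
    """Approximate air temperature rise across a rack for a given heat load and airflow."""
    return (load_kw * BTU_PER_HR_PER_KW) / (1.08 * airflow_cfm)

entering_air_f = 75.0
load_kw = 20.0        # assumed rack heat load
airflow_cfm = 1100.0  # assumed total server airflow through the rack
rise = air_temperature_rise_f(load_kw, airflow_cfm)
print(f"Exhaust air: ~{entering_air_f + rise:.0f}F ({rise:.0f}F rise)")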

Rack Cooling

The most logical approach to data center thermal management is to remove the heat directly at its source and channel it to the outside atmosphere without the intermediate stage of using in-room CRAC units or other air handlers. Three types of rack coolers can achieve this goal:

Figure 4. Diagram illustrates airflow in a facility cooled by an active rear-door rack cooling system


  • In-row rack coolers can be installed between server racks according to the heat load but are best applied to low- to medium-density applications. The current cooling limit is approximately 30 kW per 12-in.-wide in-row cooler (not per rack). High-density sites require additional in-row coolers, aisle containment systems, and supplementary CRAC units, all of which occupy data center floor space and restrict future modification. The chilled-water cooling coil of a standard 12-in.-wide in-row cooler typically has a double-pass airflow design, which requires more fan power than a single-pass airflow pattern. Every two 12-in. in-row coolers occupy the floor space of a standard server rack.
     
  • Passive rear-door rack coolers fit industry-standard 48U racks and offer nominal cooling capacities up to a maximum of 20 to 25 kW per rack via a built-in chilled-water coil. While this system fared well in a 2007 Silicon Valley Leadership Group case study of Sun Microsystems’ energy-efficient modular cooling systems, the cooling capacity is ultimately limited by the airflow provided by the servers, which is often obstructed by rack cabling and channel ways. At the same time, server manufacturers are moving to increase electrical efficiency by reducing server fan power and airflow. As a result, relying on server fans alone for effective rack cooling is often not feasible when passive rear-door coolers are employed.
     
  • Active rear-door rack coolers, which incorporate EC (speed-controlled) fans, are the latest development in high-efficiency server rack cooling. This design combines a chilled-water coil, a row of EC fans, and a PLC controller to maximize sensible cooling using 65F chilled water. The hot exhaust air from the servers is cooled to within 7F of the chilled water temperature, resulting in air returning to the room at 75F and an overall heat-neutral effect on the data center. Additional efficiencies can be achieved when the integrated PLC modulates the chilled water flow to find the optimal combination of fan power and chilled water usage for each individual rack. Active rack coolers work independently while providing live interactive feedback to a central monitoring system, controlling the overall room environment regardless of the load diversity from rack to rack. A rough capacity and floor-space comparison of the three approaches is sketched after this list.
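
The sketch below compares the three approaches for a hypothetical row of racks, using only the approximate per-unit capacities quoted above (about 30 kW per 12-in. in-row cooler, 20 to 25 kW per passive rear door, and up to roughly 45 kW per active rear door, the figure discussed later in this article). The row size, rack load, and sizing rule are illustrative assumptions, not a design method.

import math

# Hypothetical row used only to illustrate the arithmetic.
racks_in_row = 10
load_per_rack_kw = 20.0
row_load_kw = racks_in_row * load_per_rack_kw

# In-row: ~30 kW per 12-in. cooler; every two coolers occupy one rack footprint.
in_row_units = math.ceil(row_load_kw / 30.0)
extra_footprints = in_row_units / 2.0

# Rear-door coolers mount on the racks themselves, so they add no floor space
# but are limited by their per-rack capacity.
passive_ok = load_per_rack_kw <= 25.0   # passive rear door, ~20-25 kW per rack
active_ok = load_per_rack_kw <= 45.0    # active rear door, up to ~45 kW per rack

print(f"In-row coolers needed: {in_row_units} (~{extra_footprints:.1f} extra rack footprints)")
print(f"Passive rear doors sufficient at {load_per_rack_kw:.0f} kW/rack: {passive_ok}")
print(f"Active rear doors sufficient at {load_per_rack_kw:.0f} kW/rack: {active_ok}")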


Is Chilled Water Cold Enough?

What is often overlooked is how to maximize the efficiency of all these systems by improving the efficiency of the chillers themselves. Chilled water is the most common means of removing heat from data centers today, so reducing chiller energy consumption, and then exploiting that reduction inside the data center, becomes of paramount importance. The cooling plant design must be considered in unison with the method of thermal management inside the white space so that savings for the facility as a whole can be maximized. A traditional CRAC system typically uses entering chilled water/glycol at 45F, returning to the chiller at 55F, per the ASHRAE 90.1 standard design conditions. While the HVAC industry continues to push for more stringent efficiency standards, simply raising the design chilled water temperature from 45F to 65F yields an approximate energy saving of 33 percent with any chiller in any location.
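
As a rough illustration of that 33 percent figure (not a selection tool for any specific chiller), the sketch below applies a commonly cited rule of thumb of roughly 1.5 to 2 percent improvement in chiller efficiency per degree F of warmer chilled water; the baseline kW/ton and the per-degree factor are assumptions.

BASELINE_KW_PER_TON = 0.60   # assumed chiller specific power at 45F chilled water supply
GAIN_PER_DEG_F = 0.017       # assumed ~1.7% efficiency gain per F of warmer supply water

def kw_per_ton(supply_water_f, baseline_f=45.0):
    """Approximate chiller specific power at a given chilled water supply temperature."""
    return BASELINE_KW_PER_TON * (1.0 - GAIN_PER_DEG_F * (supply_water_f - baseline_f))

low, high = kw_per_ton(45.0), kw_per_ton(65.0)
print(f"45F: {low:.2f} kW/ton   65F: {high:.2f} kW/ton   saving: ~{1.0 - high / low:.0%}")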

Additional chilled water energy savings can be achieved by utilizing any form of water-side economizer (free-cooling system), such as a cooling tower economizer loop with a water-cooled chiller or a dry-cooler economizer with an air-cooled chiller. A chiller supplying elevated chilled water at 65F would typically see a return water temperature of around 75F, assuming an industry-standard 10F rise. Thirty-year historical weather data published by ASHRAE and national weather bureaus for different North American cities illustrate the number of hours in which partial or 100 percent free cooling can be achieved (see Table 1). For example, utilizing 65F water in lieu of 45F in Chicago will yield an annual energy saving of approximately 33 percent by lowering the specific energy consumption (kW/ton) of any chiller. Furthermore, adding a water-side economizer to the same system in Chicago will save approximately 60 percent of energy consumption during free-cooling hours, for a total annual saving of around 93 percent (3,885 hours of 100 percent free cooling using only fan power in an air-cooled chiller, plus 4,051 hours of partial free cooling). These energy savings are both dramatic and compelling, utilizing proven technology with easily verified results. Weather bin data for several other cities are also shown in Table 1.

Note that it is not feasible to supply 65F chilled water to CRAC units, which would then deliver underfloor air at around 80F, already beyond any acceptable space temperature before it has even collected the heat load from the servers. Active rack coolers, however, are designed to operate with chilled water at up to 65F and can effectively cool racks up to 45 kW while maintaining the room temperature at 75F. One hundred percent of the server heat can be captured at the source and transferred to a chiller outside the building, with a neutral effect on the data center space. Quite simply, using the highest possible chilled water temperature for rack cooling maximizes chiller efficiency, and the saving grows substantially when water-side economizers (free-cooling systems) are used with the chillers.
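
A generic bin-weighting helper like the one below shows how the free-cooling hours in Table 1 roll up into an annual figure. The per-bin saving fractions here are placeholders chosen only to demonstrate the arithmetic; real values depend on the specific chiller, economizer design, and bin-level weather data, and are not taken from this article.

HOURS_PER_YEAR = 8760

# (hours in bin, assumed fractional saving versus a 45F chilled-water baseline)
# The Chicago hours are from Table 1; the saving fractions are placeholders only.
bins = [
    (3885, 0.95),                            # full free cooling: compressors mostly off
    (4051, 0.50),                            # partial free cooling
    (HOURS_PER_YEAR - 3885 - 4051, 0.33),    # remaining hours: elevated-water saving only
]

annual_saving = sum(hours * saving for hours, saving in bins) / HOURS_PER_YEAR
print(f"Bin-weighted annual saving: ~{annual_saving:.0%}")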

It is clear from the recent and continuing growth in rack cooling systems that thermal management design in data centers is undergoing a major change from space cooling to server cooling. This change is driven by the requirement for more effective thermal management within the space and by the growing demand for energy conservation. While it appears likely that rack coolers will become the method of choice for server-based data center cooling, it is most important that any decision regarding rack cooling be made with a full understanding and study of the chilled water system that will be used to cool the racks. Particular attention should be paid to future density requirements, both for the facility as a whole and for the smaller subsections that might be used for high-density cooling as it becomes more prevalent. By raising the design chilled water temperature and optimizing the efficiency of the rack cooling system, a data center owner can maximize data center efficiency and still be positioned for flexible expansion as server-cooling requirements evolve over time.