Data centers today concentrate ever more computing power into a single rack. However, "virtualization" has not repealed the laws of physics, and that concentrated computing power still demands a great deal of energy and cooling.

The plain truth about all computers is that they turn every watt of power directly into heat. In the mid-to-late '90s, power per rack ranged from 500 to 1,000 watts or slightly higher. Today a typical 1U server draws 250-500 watts, and when 40 of them are stacked in a standard 42U rack they can draw 10-20 kilowatts (kW), requiring 3-6 tons of cooling per rack.
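As a rough check on those figures: one watt of IT load produces about 3.41 BTU per hour of heat, and one ton of cooling removes 12,000 BTU per hour. A minimal sketch of the conversion, using the server counts and wattages above as example inputs:

# Rough rack heat-load arithmetic (example values from the discussion above).
BTU_PER_WATT_HR = 3.412      # 1 watt of load becomes ~3.412 BTU/hr of heat
BTU_PER_TON = 12_000         # 1 ton of cooling removes 12,000 BTU/hr

def rack_cooling_tons(servers, watts_per_server):
    """Tons of cooling needed to remove the heat from a rack of 1U servers."""
    rack_watts = servers * watts_per_server
    return rack_watts * BTU_PER_WATT_HR / BTU_PER_TON

print(rack_cooling_tons(40, 250))   # ~2.8 tons for a 10 kW rack
print(rack_cooling_tons(40, 500))   # ~5.7 tons for a 20 kW rack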

Blade servers provide even greater space savings, but as a result have even higher power and cooling requirements: they can support dozens of multi-core processors in a chassis only 8-10U high. In addition, virtualization is rapidly becoming the latest computing trend, and virtualized environments usually involve new high-performance, high-density servers. It is true that the server hardware itself takes less energy overall, since there are usually fewer servers. In practice, however, the concentration of high-density servers in a smaller space has created real deployment problems.

Figure 1. The raised floor originated in the days of the mainframe; perforated tiles and other changes made it an integral part of the cooling system for server-based data centers.

The downsides to virtualization include high per-rack power and cooling requirements. Though virtualizing the environment may reduce overall power requirements, many existing power distribution systems cannot provide 20-30 kW per rack.

If properly implemented, virtualized environments use less space and power by running fewer, denser servers, so it follows that they need less cooling. Virtualization should therefore be more energy efficient overall and presumably greener.

This is where the virtualization efficiency conundrum first manifests itself. Data centers built only five years ago were not designed for the greater-than-10-kW-per-rack densities encountered in virtualized environments, so their cooling systems cannot efficiently remove that much heat from such a compact area.

If all the racks were configured at 20 kW per rack, the average power and cooling load could exceed 500 watts per square foot (W/sq ft). Even some recently built Tier IV data centers are limited to 100-150 W/sq ft average for this reason. As a result, many high-density projects have had to spread the servers across half-empty racks to avoid overheating.
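To see where such floor-loading numbers come from, here is a minimal sketch of the density arithmetic. The square-feet-per-rack figures are assumptions for illustration; gross space per rack varies with aisle widths and support equipment.

# Illustrative floor-density arithmetic; sq-ft-per-rack values are assumptions.
def watts_per_sq_ft(rack_kw, sq_ft_per_rack):
    """Average floor loading implied by a uniform rack power density."""
    return rack_kw * 1000.0 / sq_ft_per_rack

print(watts_per_sq_ft(20, 40))   # 500 W/sq ft at 20 kW per rack
print(watts_per_sq_ft(20, 30))   # ~667 W/sq ft with tighter rack spacing
print(watts_per_sq_ft(10, 80))   # 125 W/sq ft when the same servers are spread across half-empty racks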

Traditional Raised-Floor Cooling

The "classic" data center harkens back to the days of the mainframe, which included a raised floor. The raised floor was used to distribute cold air from the computer room air conditioner (CRAC), and it also contained power and communications cabling. These mainframes were very large but averaged only 25-50 W/sq ft. Originally the rows of equipment faced the same way to make the data center look neat and organized. In many cases, the cold air entered the bottom of the equipment cabinets and the hot air exited the top. The floor generally had no perforated tiles. This actually was a relatively efficient method of cooling.

With the introduction of rack-mounted servers, average power levels began to rise, and the same-facing cabinet alignment became a problem: hot air now exited the back of one row of racks into the front of the next row. This led to the adoption of the hot-aisle/cold-aisle layout in the mid-to-late '90s. CRAC units remained mainly at the perimeter of the data center, but the floor tiles in the cold aisles now had vents or perforations (see figure 1). These changes helped the cooling systems keep up with the rising heat load.

Even with ever-deeper raised floors, this "time-tested and proven" methodology is effective only up to a certain power level. Beyond that point, the blower motors in the perimeter CRACs must use much more energy to push air at higher velocities and pressures to deliver adequate cold air to the perforated tiles.

As a result of the poor cooling path efficiency at such high heat loads, the power used to cool high-density server "farms" can exceed the power used by the servers themselves. In fact, in some cases, for every dollar spent to power the servers, $2 or more is spent on cooling.

At one time, a raised floor was considered the only way to cool a "real" data center. Now some newer cooling systems do not require a raised floor at all; instead, they move the cooling close to the racks. This not only improves cooling performance, it also improves cooling efficiency. These new systems can be used with existing raised-floor or non-raised-floor designs, either as a complete solution or as an adjunct to an overtaxed cooling system.

Figure 2. Newer techniques include hot and cold aisle containment to bring cooling closer to heat loads.

Several alternatives and enhancements have been made to this well-entrenched but aging standard. Cooling manufacturers have developed systems that shorten the distance the air has to travel between the racks and the cooling unit. Some systems are in-row, and others are overhead. Each significantly increases the ability to cool racks of up to 20 kW. In addition, if the supporting infrastructure is available, these new systems should significantly lower cooling costs, since they use much less power to move air to and from the racks and also minimize the mixing of hot and cold air (see figure 2).

Hot-aisle containment is a refinement on in-row cooling in which the hot aisle is sealed, ensuring that all the heat is efficiently extracted directly into the cooling system over a short distance.

Cooling coils within a fully sealed rack can cool up to 30 kW in a single rack, the highest cooling density available. Some major server and cooling vendors offer these configurations as part of a complete blade-server support solution, and some server manufacturers have even offered their own "fully enclosed" racks with a built-in cooling coil that totally contains the airflow within the cabinet. This represents one of the most effective high-density cooling solutions for air-cooled servers.

Today all servers use air to transfer heat out of the chassis, but several manufacturers are exploring building or modifying servers to use fluid-based cooling. Instead of fans that push air through the server chassis, these systems pump liquid that carries heat away from CPUs, power supplies, and other heat-producing components. Some of this new cooling technology can be added or retrofitted to existing data centers to improve the overall cooling, or it can be used only for specific "islands" that need additional high-density cooling.

IT and facilities staff often do not see things the same way, yet they must work together. IT personnel almost always call facilities to address any cooling system problem. The facilities department is usually concerned primarily with meeting the overall cooling requirements of the room; it just wants to provide enough raw cooling capacity to meet the entire heat load, usually without regard to different levels of rack density. Facilities staff's first response to a cooling problem is to add more of the same type of CRAC already installed (if there is space), which may partially address the problem, but not very efficiently. Clearly this facilities-vs-IT mentality can no longer work; there needs to be some mutual understanding of the underlying issues so that both sides can cooperate and optimize the cooling systems to meet the rising high-density heat load with a more efficient solution.

The Bottom Line

There is no one best solution to the cooling and efficiency challenge; however, by carefully assessing existing conditions, facilities and IT personnel can identify a variety of solutions and optimization techniques that can substantially improve cooling performance in a data center. Some solutions cost nothing to implement, while others involve only a nominal expense (see sidebar).

Whether in a 500 or 5,000 sq ft facility, many solutions can improve the energy efficiency of a legacy or virtualized data center. Employing these techniques will also increase uptime, since critical equipment will receive more cooling and cooling systems will not have to work as hard to provide the same output.


Sidebar: 12 Simple Solutions for Optimizing Cooling

Clearly the raised floor is not going to suddenly disappear from data center designs. Several low- or no-cost techniques can improve the cooling efficiency of data centers dealing with high-density servers.

1. Blanking panels are by far the simplest, most cost-effective, and most misunderstood item that can improve cooling efficiency. By ensuring that warm air from the rear of the rack cannot flow back into the front of the rack through open rack spaces, blanking panels immediately improve the efficiency of the cooling system.

2. Cabling under the floor blocks and disrupts the cold airflow. Many larger data centers have 1-2 feet under the floor just for cabling. Cables should be run and tightly bundled so that they have minimal impact on the airflow.

3. The size, shape, position, and direction of floor vents and the flow rating of perforated tiles have a great impact on how much cool air is delivered to where it is needed most. Carefully evaluating tile placement and airflow in relation to the highest-power racks is one of the best ways to maximize cooling system efficiency, delivering cool air where it is needed and minimizing waste.

4. Cables normally enter the racks through holes cut into the floor tiles. This opening "wastes" cold air by allowing it to enter the back of the rack. More significantly, it lowers the static air pressure under the floor, which reduces the cold airflow available to the vented tiles in front of the rack where it is needed. Every floor tile opening for cables should be fitted with an air containment device, typically a "brush"-style grommet collar that allows cables to enter but blocks the airflow.

5. Cold-aisle containment is best described as a system of panels that spans the top of the cold aisle from the top edge of the racks. It can also be fitted with side doors to contain the cold air even further. This blocks the warm air from the hot aisle from mixing with the cold air.

6. It has always been the "rule" to use 68° to 70°F as the set point for maintaining the "correct" temperature in a data center. While each manufacturer is different, most servers will operate fine at 75°F at the intake, so long as there is adequate airflow.

7. Most CRACs maintain humidity by adding moisture and reheating the air. The typical target set point is 50 percent relative humidity, with high-low limits of 60 and 40 percent. Simply widening those limits to 75 and 25 percent will save substantial energy.

8. In many installations, the CRACs are not in communication with one another. Each unit simply bases its temperature and humidity control on what it senses in the (warm) return air. It is therefore possible, and even common, for one CRAC to be trying to cool or humidify the air while another is trying to dehumidify and/or reheat it. The cooling system contractor can add a master control system, or at least change the units' set points, to avoid or minimize the conflict.

9. A thermal survey may provide surprising results, revealing hot spots and overcooled areas that room-level readings do not show.

10. Location and climate can have a significant impact on cooling efficiency. A modern, large, multi-megawatt, dedicated Tier IV data center is designed to be energy efficient. It typically uses large water-chiller systems with built-in economizer functions (see below). These make it possible to shut down the compressors during the winter months and use only the low exterior ambient air temperature to provide chilled water to the internal CRACs.

11. Economizers may not be an option in data centers located in high-rise office buildings or office parks, and in many such installations the data center is limited in its ability to use efficient high-density cooling. Sometimes the IT department has no say in the design, so the size and shape of the data center may not be ideal for rack and cooling layouts. When an organization is considering a new office location, the building's ability to meet the requirements of the data center should also be considered.

12. Most smaller and older installations used a single type of CRAC cooling technology, usually a cooling coil cooled by a compressor within the unit. The compressor needed to run year-round to cool the data center. A second cooling coil, connected by lines filled with water and antifreeze to an outside coil, represented a significant improvement to this basic system. When the outside temperature was low (i.e., 50°F or less), "free cooling" was achieved, since the compressor could be used less or stopped entirely (below 35°F), as sketched below.
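A minimal sketch of that economizer changeover logic, using the temperature thresholds mentioned above as illustrative values; actual changeover points depend on the specific chiller or CRAC system:

# Illustrative economizer changeover logic; the 35°F and 50°F thresholds come
# from the discussion above, but real changeover points are system specific.
def cooling_mode(outdoor_temp_f):
    """Pick a cooling mode for a CRAC with a 'free cooling' economizer coil."""
    if outdoor_temp_f <= 35:
        return "free cooling only (compressor off)"
    if outdoor_temp_f <= 50:
        return "partial free cooling (compressor runs less)"
    return "compressor cooling (economizer coil inactive)"

for temp in (20, 45, 80):
    print(temp, "F ->", cooling_mode(temp))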