In recent years, power and cooling demands (along with energy costs) have risen sharply in data centers. The cost of cooling alone can constitute up to 50 percent of total data center energy cost. Improving efficiency by adopting best practices can significantly reduce cooling costs; it also ensures that the major portion of total available power can be used for IT equipment. In most cases, facilities managers must balance providing an acceptable environment for all servers against keeping cooling costs low and energy efficiency high.
Separate Air Streams

According to ASHRAE thermal guidelines, 68 to 77 degrees F is the acceptable inlet air temperature range for Class I servers. The cooling system must ensure that inlet air temperatures remain within this range for all servers in the data center while remaining energy efficient and cost effective.
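A simple compliance check against this guideline can be sketched in a few lines of Python. This is a hypothetical helper, not part of any ASHRAE tooling; the function name and the sample temperatures are illustrative.

```python
# ASHRAE Class I recommended inlet air temperature range cited above (deg F).
ASHRAE_MIN_F = 68.0
ASHRAE_MAX_F = 77.0

def out_of_range(inlet_temps_f):
    """Return the inlet temperatures (deg F) that fall outside the range."""
    return [t for t in inlet_temps_f if not ASHRAE_MIN_F <= t <= ASHRAE_MAX_F]

# Example: one rack inlet at 82 F violates the guideline.
print(out_of_range([70.5, 74.0, 82.0, 76.9]))  # -> [82.0]
```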
Air is the main carrier of heat and moisture in the data center, so properly managing the cold and hot air streams is the most important factor in optimizing the performance of the cooling system. Unintentional mixing of hot and cold air streams contaminates the cold supply air and can raise server inlet temperatures to unacceptable levels, which can lead to hot spots. Such mixing also lowers the temperature of the return air, which can degrade the performance of the air conditioning units and prevent them from operating at their highest possible cooling capacities. Short-circuiting of cold air back to the air conditioning units without passing through the servers can exacerbate the problem. In raised-floor data centers with a supply plenum and perforated tiles, air leaks through cable cutouts and gaps in the raised flooring can deprive the servers of sufficient cold air; these leaks can also dilute the hot return air.
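The effect of mixing can be estimated with a flow-weighted energy balance. As a rough sketch (assuming both streams have the same density and specific heat, which is a reasonable approximation over this temperature range), the mixed-air temperature is the flow-weighted average of the two stream temperatures; the function and flow figures below are illustrative, not from the article.

```python
def mixed_air_temp_f(flow_hot_cfm, t_hot_f, flow_cold_cfm, t_cold_f):
    """Flow-weighted mixed-air temperature (deg F), assuming equal air
    density and specific heat for both streams."""
    total_flow = flow_hot_cfm + flow_cold_cfm
    return (flow_hot_cfm * t_hot_f + flow_cold_cfm * t_cold_f) / total_flow

# Example: 1,000 CFM of 95 F exhaust leaking into 4,000 CFM of 68 F supply
# raises the effective server inlet temperature to about 73.4 F.
print(round(mixed_air_temp_f(1000, 95.0, 4000, 68.0), 1))  # -> 73.4
```

Even a modest leak of hot exhaust can push the supply air several degrees closer to the top of the acceptable range, which is why the separation techniques below matter.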
Several techniques improve data center cooling performance by completely separating hot and cold air streams: properly placing air conditioning units with the locations of heat loads in mind, and distributing heat loads throughout the room based on the location of the air conditioning units, the mode of cold air supply, the layout of servers within the racks, the placement of racks in the room, and the mode of removing hot air from the room. Optimizing all of these factors can yield an energy-efficient data center that also maintains acceptable temperature ranges for all servers.
Finding the best cooling solution for a given data center is not easy. Most often, the best solution is situation specific, since the contributing factors are complex and often, by nature, mutually competitive. A trial-and-error approach to optimizing all parameters requires tedious measurement of multiple variables along with corresponding modifications to the data center layout. Such efforts are not only labor-intensive and expensive but sometimes impossible. Moreover, trial-and-error approaches seldom provide insight into the root cause of poor cooling performance or ways to mitigate cooling problems such as hot spots. Comprehensive cooling audits of data centers through computational fluid dynamics (CFD) simulation provide an attractive and cost-effective alternative.
Computer Simulation

The science of computational fluid dynamics deals with computer simulation of fluid flow, heat transfer, and other similar transport processes. Today, CFD technology is commonly employed in several industry sectors, including aerospace, automotive, chemical, biomedical, semiconductor, and sports, to improve and optimize designs or processes in a cost-effective manner. CFD is especially helpful for visualizing fluid flow patterns and heat distribution in complex and intricate situations. Visualization helps users gain better insight into how a process operates, which ultimately leads to better process (and product) design. In some cases, it is impossible to obtain in-depth process information and insight through physical testing or experimentation; in those cases, CFD analysis becomes a very effective tool for optimizing design and process. It saves time and resources that would otherwise be spent on lengthy prototype testing and expensive trial-and-error iterations.
Two decades ago, CFD technology was available only to advanced engineering institutions and professionals; now it is accessible to data center professionals, who use it to help optimize the cooling performance of data centers. Recent advances in computing power and numerical algorithms have put easy-to-use, affordable CFD analysis tools within reach of data center professionals. Data center managers can employ these tools to obtain comprehensive cooling audit reports that provide detailed performance analysis and insight into cooling operation.
Case Study

A CFD simulation study was performed on a small data center to demonstrate how air management affects cooling performance. The data center was approximately 850 square feet (sf) with a heat load of 130 watts/sf of room footprint. The area was equipped with two CRAC units, each with 30 tons of cooling capacity. The average heat load of each rack was about 4 kilowatts.
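The figures above can be sanity-checked with some quick arithmetic, using the standard conversion of 1 ton of refrigeration to roughly 3.517 kW. The redundancy-ratio interpretation at the end is an observation, not a claim from the study.

```python
TON_TO_KW = 3.517  # 1 ton of refrigeration is approximately 3.517 kW

area_sf = 850          # room footprint from the case study
load_w_per_sf = 130    # heat load density from the case study

heat_load_kw = area_sf * load_w_per_sf / 1000   # total IT heat load
crac_capacity_kw = 2 * 30 * TON_TO_KW           # two 30-ton CRAC units

print(heat_load_kw)                              # -> 110.5
print(round(crac_capacity_kw, 1))                # -> 211.0
print(round(crac_capacity_kw / heat_load_kw, 2)) # roughly 1.9x the load
```

In other words, the installed cooling capacity (about 211 kW) comfortably exceeds the roughly 110 kW heat load, so the hot spots described below stem from air management, not from a shortage of cooling capacity.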
In the base case scenario, hot air moved freely in the data center room and was captured by the CRAC units through "room return" (see figure 1a). In the modified scenario, a ceiling plenum return directed hot air to ducted CRAC units, thus taking advantage of buoyancy (hot air naturally rises toward the ceiling). In addition, flow barriers were placed at each end of the cold aisles as well as on top of the racks to prevent infiltration of hot air into the cold aisles (see figure 1b).
CFD simulations using software from ANSYS, Inc. helped visualize how cold and hot air streams flowed in the data center room. As shown in figure 2a, in the base case scenario the hot air stream returning to the CRAC units takes a spiraling path and passes through several cabinets on its way, creating an unacceptable environment for racks that are close to the CRAC units. Figure 3a shows the resulting thermal map of inlet air temperature and its distribution over the inlets of all server racks. The maximum inlet temperature is 82 degrees F.
CFD analysis of the modified data center layout captured how better air management practices mitigate the problems encountered in the base case. Figure 2b shows the path of hot air with the ceiling plenum return. Unlike the base case, the rising hot air is captured directly behind the racks and directed up into the ceiling plenum return. This modified path avoids any mixing and recirculation of air through multiple servers. As a result, the inlet air temperature at the servers was lowered. Figure 3b shows rack thermal maps for the modified case; it demonstrates how servers that experienced inlet air temperatures of 82 degrees F in the base case now receive much colder air, with inlet temperatures of 74 degrees F.
Figure 4 compares the effect of the modified layout on the inlet air temperatures of various servers; all servers in the data center meet the ASHRAE thermal guideline for acceptable inlet air temperature, between 68 and 77 degrees F. CFD simulations indicated that the maximum temperature in the data center room was lowered from 95 to 87 degrees F, resulting in more comfortable working conditions. This improvement in cooling performance was achieved without adding new CRAC units or increasing the supply of cold air: alternatives that might be intuitive and easy to implement but that are energy-intensive and expensive.
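The kind of before-and-after compliance comparison shown in figure 4 can be sketched as follows. The per-rack temperatures here are illustrative placeholders chosen to mirror the trend described in the study, not the actual CFD results.

```python
ASHRAE_RANGE_F = (68.0, 77.0)  # recommended inlet range cited in the article

def compliant_fraction(temps_f, lo=ASHRAE_RANGE_F[0], hi=ASHRAE_RANGE_F[1]):
    """Fraction of rack inlets whose temperature falls within [lo, hi]."""
    ok = sum(1 for t in temps_f if lo <= t <= hi)
    return ok / len(temps_f)

base_case = [72, 75, 78, 80, 82, 76]   # hypothetical base-case inlets (F)
modified  = [70, 72, 73, 74, 74, 73]   # hypothetical modified-layout inlets (F)

print(compliant_fraction(base_case))   # -> 0.5
print(compliant_fraction(modified))    # -> 1.0
```

Tracking this fraction per rack row makes it easy to confirm that a layout change like the ceiling plenum return brings every server into the acceptable range, rather than just lowering the average.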