In recent years, power and cooling demands (along with energy costs) have risen sharply in data centers. The cost of cooling alone can constitute up to 50 percent of total data center energy cost. Improving efficiency by adopting best practices can significantly reduce cooling costs; it also ensures that the major portion of total available power can be used for IT equipment. In most cases, facilities managers must balance providing an acceptable environment for all servers against keeping cooling costs low while maximizing energy efficiency.
Separate Air Streams

According to ASHRAE thermal guidelines, 68 to 77 degrees F is the acceptable inlet air temperature range for Class I servers. The cooling system must ensure that inlet air temperatures remain within this range for all servers in the data center while providing energy-efficient and cost-effective cooling.
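A simple way to act on this guideline is to compare measured server inlet temperatures against the recommended band. The sketch below does exactly that; the rack names and readings are hypothetical examples, not data from the article.

```python
# ASHRAE Class I recommended inlet air temperature range (degrees F),
# as cited in the text.
ASHRAE_MIN_F = 68.0
ASHRAE_MAX_F = 77.0

def out_of_range(inlet_temps_f):
    """Return the readings that fall outside the recommended range."""
    return {name: t for name, t in inlet_temps_f.items()
            if not (ASHRAE_MIN_F <= t <= ASHRAE_MAX_F)}

# Hypothetical sensor readings at three rack inlets:
readings = {"rack-A1": 71.5, "rack-A2": 76.0, "rack-B4": 82.3}
print(out_of_range(readings))  # {'rack-B4': 82.3}
```

Flagged racks such as the 82.3-degree example point to the hot spots discussed below.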
Air is the main carrier of heat and moisture in the data center. Therefore, properly managing the cold and hot air streams is the most important factor in optimizing the performance of the cooling system. Unintentionally mixing hot and cold air streams contaminates the cold supply air and can cause unacceptably high supply temperatures at the servers, which can lead to hot spots. Furthermore, such unintentional mixing can lower the temperature of the return air, which can reduce the performance of air conditioning units, preventing them from operating at their highest possible cooling capacities. Short-circuiting of the cold air back to the air conditioning units without it passing through the servers can exacerbate the problem. In the case of raised-floor data centers with supply plenum and perforated tile arrangements, air leaks through cable cutouts and raised flooring can deprive the servers of sufficient cold air. These leaks also can dilute the hot return air.
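The effect of mixing can be quantified with a simple flow-weighted energy balance. The sketch below assumes equal air density and specific heat for both streams (a reasonable simplification at these temperatures); the flow rates and temperatures are illustrative, not from the article.

```python
def mixed_temp_f(cold_cfm, cold_t_f, hot_cfm, hot_t_f):
    """Flow-weighted mixed-air temperature, assuming equal density
    and specific heat for the two streams."""
    return (cold_cfm * cold_t_f + hot_cfm * hot_t_f) / (cold_cfm + hot_cfm)

# Example: 20 percent hot-air recirculation (200 of 1,000 cfm) into a
# 70-degree-F cold aisle, with 95-degree-F hot-aisle air:
print(round(mixed_temp_f(800, 70.0, 200, 95.0), 1))  # 75.0
```

Even a modest 20 percent recirculation raises the server inlet temperature by 5 degrees F in this example, pushing it toward the upper edge of the ASHRAE range.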
Finding the best cooling solution for a given data center is not easy. Most often, the best solution is situation specific, since the contributing factors are complex and often, by nature, mutually competitive. A trial-and-error approach to optimizing all parameters requires tedious measurements of multiple parameters along with appropriate modifications to the data center layout. Such efforts are not only labor intensive and expensive but sometimes even impossible. Moreover, trial-and-error approaches seldom provide insight into the root cause of poor cooling performance or ways to mitigate cooling problems such as hot spots. Comprehensive cooling audits of data centers through computational fluid dynamics (CFD) simulation provide an attractive and cost-effective option.
Computer Simulation

The science of computational fluid dynamics deals with computer simulation of fluid flow, heat transfer, and other similar transport processes. Today, CFD technology is commonly employed in several industry sectors, including aerospace, automotive, chemical, biomedical, semiconductor, and sports, to improve and optimize designs or processes in a cost-effective manner. CFD especially helps in visualizing fluid flow patterns and heat distribution in complex and intricate situations. Visualization helps users gain better insight into how a process operates; ultimately, it leads to better process (and product) design. In some cases, it is impossible to obtain in-depth process information and insight through physical testing or experimentation; in those cases, CFD analysis becomes a very effective tool for optimizing design and process. It saves the time and resources that lengthy prototype testing and expensive trial-and-error iterations would otherwise consume.
Case Study

A CFD simulation study was performed on a small data center to demonstrate how air management in a data center affects cooling performance. The data center was approximately 850 square feet (sf) with a heat load of 130 watts/sf of room footprint. The area was equipped with two CRAC units, each with 30 tons of cooling capacity. The average heat load of each rack was about 4 kilowatts.
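A quick sanity check shows how the case-study figures relate: total heat load versus installed cooling capacity. The sketch below uses only the numbers stated above plus the standard refrigeration-ton conversion (1 ton of cooling is about 3.517 kW).

```python
# Case-study figures from the text:
AREA_SF = 850            # room footprint, square feet
LOAD_W_PER_SF = 130      # heat load per square foot, watts
KW_PER_TON = 3.517       # standard refrigeration-ton conversion

heat_load_kw = AREA_SF * LOAD_W_PER_SF / 1000   # total IT heat load, kW
heat_load_tons = heat_load_kw / KW_PER_TON      # same load in cooling tons
installed_tons = 2 * 30                         # two 30-ton CRAC units

print(round(heat_load_kw, 1), round(heat_load_tons, 1), installed_tons)
# 110.5 31.4 60
```

The roughly 110.5 kW (about 31.4 tons) load could nearly be carried by a single 30-ton CRAC unit, so the pair provides substantial headroom and effective redundancy.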
In a base case scenario, hot air moved freely in the data center room and was captured by the CRAC units through "room return" (see figure 1a). In the modified scenario, a ceiling plenum return directed hot air to ducted CRAC units, thus taking advantage of buoyancy (hot air naturally rises toward the ceiling). In addition, flow barriers were placed at each end of the cold aisles as well as on top of the racks to prevent any infiltration of hot air into the cold aisles (see figure 1b).
CFD analysis of the modified data center layout helped capture how better air management practices mitigate the problems encountered in the base case. Figure 2b shows the path of hot air with the ceiling plenum return. Unlike the base case, the rising hot air is captured directly behind the racks and directed up into the ceiling plenum return. This modified path for hot air avoids any mixing and recirculation of air through multiple servers. As a result, the inlet air temperature at the servers was lowered. Figure 3b shows rack thermal maps for the modified case; it demonstrates how servers that experienced inlet air temperatures of 82 degrees F in the base case receive much colder air, with 74-degree-F inlet temperatures.