The increased adoption of cloud computing and the advent of 5G network connectivity are continuing to drive demand for data centers. 

As rack power densities rapidly increase within limited floor space, cooling efficiency is top of mind for companies in the industry. Cooling systems are very energy-intensive, often using as much (or more) energy as the IT equipment they serve, and they are expected to run continuously with the same effectiveness. This creates a need for optimization: choosing the most appropriate cooling system and implementing layout modifications to maintain performance standards while reducing energy consumption.

Role of Simulation in Data Center Cooling

This optimization can be done by using computational fluid dynamics (CFD) simulation, which enables engineers to virtually test different data center designs, gain insights into airflow patterns, and discover “hot spots.”

The technology can be used to test different enclosure and cooling configurations, such as cold-aisle containment, hot-aisle containment, overhead supply, underfloor supply, and rack-centered cooling solutions. Simulation is also used by design teams in testing innovative cooling strategies, such as liquid cooling and cold plates.

The following case study project is openly available and can be copied and used as a template for other simulations. Its goal was to improve the effectiveness of a cooling rack and reduce the cooling cost of a typical data center design. The CFD analysis was set up to investigate and calculate the flow conditions, such as intake temperatures for the server racks — the most important parameter for cooling effectiveness.

Project

Temperature management is vital to keep equipment running, and while cooling systems tend to be very energy-intensive, a good design could significantly reduce their energy consumption.

Moreover, in order for IT equipment to operate reliably, an adequate air intake temperature range must be reached and maintained at all times. The operational life span and safety of hardware are at stake when the air intake temperature is too high. On the other side of the spectrum, the energy cost significantly rises when the air intake temperature is too low.

ASHRAE’s 2008 Thermal Guidelines set the recommended air intake temperature lower limit at 64°F; values below this threshold result in significant financial waste. The recommended upper limit is set at 80°F; any air intake temperature beyond it has an adverse impact on the reliability of the equipment. The allowable range, however, is 59°F to 90°F.

The rack cooling index (RCI) measures compliance with these ASHRAE guidelines, expressed as a percentage.

The first index, RCIHI, indicates whether temperatures exceed the maximum recommended value. A value of 100% means that no air intake temperature is above the maximum recommended. A value below 100% means that some equipment experiences intake temperatures higher than the maximum recommended.

The second index, RCILO, indicates whether temperatures dip below the minimum recommended value. A value of 100% means that no air intake temperature is below the minimum recommended. A value below 100% means that some equipment experiences intake temperatures lower than the minimum recommended.
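As a rough illustration, both indices can be computed directly from the measured rack intake temperatures. The minimal sketch below follows the commonly cited RCI definition (Herrlin, 2005) and uses the ASHRAE 2008 recommended (64°F to 80°F) and allowable (59°F to 90°F) limits discussed above; the function names, variable names, and example temperatures are illustrative and not part of the original study.

# Illustrative sketch of the rack cooling index (RCI) calculation,
# following the commonly cited definition (Herrlin, 2005).
# Limits are the ASHRAE 2008 recommended/allowable intake ranges in °F.

T_MIN_REC, T_MAX_REC = 64.0, 80.0   # recommended intake range
T_MIN_ALL, T_MAX_ALL = 59.0, 90.0   # allowable intake range

def rci_hi(intake_temps):
    """RCI_HI: 100% means no intake temperature exceeds the max recommended."""
    n = len(intake_temps)
    over = sum(t - T_MAX_REC for t in intake_temps if t > T_MAX_REC)
    return (1.0 - over / ((T_MAX_ALL - T_MAX_REC) * n)) * 100.0

def rci_lo(intake_temps):
    """RCI_LO: 100% means no intake temperature falls below the min recommended."""
    n = len(intake_temps)
    under = sum(T_MIN_REC - t for t in intake_temps if t < T_MIN_REC)
    return (1.0 - under / ((T_MIN_REC - T_MIN_ALL) * n)) * 100.0

# Hypothetical intake temperatures for a handful of racks (°F)
temps = [68.0, 72.5, 81.0, 77.0, 63.0]
print(f"RCI_HI = {rci_hi(temps):.1f}%, RCI_LO = {rci_lo(temps):.1f}%")

In the case study, the same calculation would simply be applied to the 52 rack intake temperatures produced by each CFD run.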

For this case study, an engineer ran a CFD analysis to predict the efficiency of a data center using the temperature results at the server rack air intake to calculate RCIHI and RCILO.

The CAD model of the data center includes four rows of 13 server racks each, for a total of 52 server racks. This means that 52 values of air intake temperature will be used.

Design Parameters

The key design parameters evaluated were supply temperature, supply airflow rate, RCI, and cooling cost function.

Within the project, 16 combinations of supply temperature and supply airflow rate were tested for impact on cooling effectiveness and cost. The supply temperature ranged from 55°F to 70°F, and the supply airflow rate from 80% to 140% of the total rack airflow rate. The results helped the engineers involved in the project identify the best combination of the two parameters to optimize the overall cooling system configuration, allowing optimum data center power consumption and reliable operation of the IT hardware.
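For orientation, the sweep amounts to a small design-of-experiments grid. The sketch below enumerates such a grid; the four levels per parameter are placeholders, since the article only states the overall ranges (55°F to 70°F supply temperature, 80% to 140% of total rack airflow), not the exact values tested.

from itertools import product

# Placeholder levels for illustration only; the study's exact grid values
# are not listed in the article.
supply_temps_f = [55, 60, 65, 70]          # supply temperature, °F
airflow_fractions = [0.80, 1.00, 1.20, 1.40]  # fraction of total rack airflow

configurations = list(product(supply_temps_f, airflow_fractions))
print(len(configurations))  # 16 combinations, matching the study

# Each combination corresponds to one CFD run; the 52 resulting rack intake
# temperatures from each run would feed the rci_hi/rci_lo functions above.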

Results

One of the 16 simulations ran at 120% of the total rack airflow rate (68,016 m³/h) with a 61°F supply air temperature.

Figure 1. Server intake temperature distribution. Photo courtesy of SimScale

Figure 1 shows the server intake temperature distribution, highlighting the differences across the racks.

Velocity streamlines showing the airflow path and temperature evolution. Photo courtesy of SimScale

This clearly illustrates the challenge of data center cooling: the cooling is not uniform, with hotter spots present in the lower and center racks of each row and intake temperatures varying from 60°F to 90°F.

With the obtained results, the mean of the rack intake temperature values can be extracted for each of the 16 configurations to determine the best-performing one. Using the formulas for the RCIHI and RCILO coefficients, which reflect the operational conditions of the system, a value at or above 96% indicates a good design.

RCI coefficients: based on RCIHI and RCILO, a value at or above 96% is a sign of a good design. Photo courtesy of SimScale

With that in mind, we can see that when it comes to RCIHI, all operational conditions produce satisfactory results, with the exception of the low-volume air supply. When we look at the RCILO coefficients, only the configurations with a 69°F supply temperature satisfy our requirements. Combining the two coefficients, we are left with two possible configurations: 69°F/120% and 69°F/140%.

Figure 2. Using the cost of $33,150 for the 55°F/100% configuration as an arbitrary reference, switching to 69°F/120% would save 11% of the cooling costs.

As a final step, we calculated how much energy we can save by choosing one of the two cooling system designs. Using the cost of $33,150 for the 55°F/100% configuration as an arbitrary reference, we can conclude that switching to 69°F/120% would allow us to save 11% of the cooling costs (see Figure 2).
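As a quick check of the arithmetic, an 11% reduction on the $33,150 baseline works out to roughly $3,650 in savings, bringing the cooling cost of the 69°F/120% configuration to about $29,500.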

Conclusion

Innovative technologies and strategies are required to cool data centers, especially with the new generation of high-performance processors. In tackling the challenge of thermal management, CFD simulation, together with an iterative design approach, plays a crucial role in identifying potential issues and optimization opportunities for higher-performing, more cost-effective, and more energy-efficient data centers.