Data center designs continue to evolve, and, recently, more facilities are being built with slab floors and overhead cabling, often in line with Open Compute Project (OCP) recommendations. Rather than install expensive, inflexible ducting to supply cooling from overhead diffuser vents, engineers are seeing efficiency gains by flooding the room with cold supply air from perimeter cooling units, CRAC/CRAH galleries, or other cooling sources (rooftop cooling units, fan walls, etc.).
Hot-aisle containment (HAC) then separates the cold supply air from the hot exhaust air, and a plenum ceiling returns the exhaust air back to the cooling units. As such, this design is also gaining popularity due to its simplicity and flexibility.
An optimized containment system is designed to provide a complete cooling solution with a sleek supporting structure that serves as the infrastructure carrier for the busway, cable tray, and fiber. Such a system should be completely ground-supported by a simple, flat, slab floor.
The goal of any containment system is to improve the intake air temperatures and deliver cooling efficiently to the IT equipment, thereby creating an environment where changes can be made that will lower operating costs and increase cooling capacity. Ideally, the containment system should easily accomplish this while allowing both existing and new facilities, including large, hyperscale data centers, to build and scale their infrastructure quickly and efficiently.
Traditional methods for supporting data center infrastructures, such as containment, power distribution, and cable routing, can be costly and time-consuming. They require multiple trades working on top of each other to accomplish their work. An optimized containment structure provides a simple platform for rapid deployment of infrastructure support and aisle containment. For example, all cable pathways and the busways can be installed at the same time as the containment, allowing the electricians to energize the busway when needed, such as when the IT equipment gets installed or as the IT footprint expands.
The containment system should also give the end user the ability to deploy small, standardized, replicable pods. This helps to limit the amount of upfront capital spent compared with building out entire data halls by providing all the infrastructure necessary, while allowing for almost limitless scaling should the situation require it.
When selecting a containment solution, the seal or leakage performance (typically a percentage) of the system is essential. It’s often stated that leakage is the nemesis of all containment systems. Users should reasonably expect a containment solution to have no more than approximately 2% leakage. This reduces and practically eliminates both bypass air and hot recirculation air that raise server inlet temperatures on IT equipment — the result being superior efficiency of the cooling system.
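To make the leakage percentage concrete, here is a minimal sketch of how it can be computed from measured airflows. The function name and the example airflow figures are illustrative assumptions, not from the article; only the 2% threshold comes from the text above.

```python
def leakage_fraction(supply_cfm, delivered_cfm):
    """Fraction of cold supply air that never reaches the IT equipment
    (bypass leakage). supply_cfm is the total airflow produced by the
    cooling units; delivered_cfm is what actually passes through the racks."""
    return (supply_cfm - delivered_cfm) / supply_cfm

# Hypothetical example: 49,000 CFM of a 50,000 CFM supply reaches the racks,
# so 2% leaks — the upper bound the article suggests users should accept.
leak = leakage_fraction(50_000, 49_000)
print(f"{leak:.0%}")
```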
There’s another important element to this design: the plenum ceiling return. The ceiling and grid system chosen should have minimal leakage to reduce and even eliminate bypass air, where cold supply air enters the plenum ceiling return instead of contributing to the cooling of the IT equipment.
Maximize energy efficiency and sustainability
We’ve mentioned the importance of maximizing energy efficiency and sustainability. Flooding the data center with cold supply air for the IT equipment and containing the hot aisles so that hot exhaust air returns to the cooling units (or is rejected by some other method) is a simple, easy, and flexible design. All new data centers should consider this for future deployments.
Another benefit of this (and most HAC designs) is that it’s easy to achieve airflow and cooling optimization. In a perfect world, we would simply match our total cooling capacity (supply airflow) to our IT load (demand airflow) and increase cooling unit set points as high as possible. However, there’s inherent leakage in any design, including within the IT racks. The goal is to minimize the leakage as much as possible, which is why the containment and ceiling structure is crucial.
The lower the overall leakage, the less cold supply air is needed. Therefore, to maximize energy efficiency, we want to use as little cold supply air as possible while still maintaining positive pressure from the cold aisle(s) to the hot aisle(s). When this is achieved, there will be consistent supply temperatures across the server inlets on all racks throughout the data center.
Because HAC is used, the data center is essentially one large cold aisle, so the total cold supply airflow should be only slightly higher than the total demand airflow (10%-15% should be the goal). This percentage is easily attainable if leakage is kept to a minimum by using a quality containment and ceiling solution, along with good airflow management practices, such as installing blanking panels and sealing the rack rails.
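The sizing rule above can be sketched numerically. This example uses the common sensible-heat approximation CFM ≈ 3.16 × watts / ΔT(°F) to estimate demand airflow from IT load; that formula, the 20°F delta-T default, and the 100 kW pod are illustrative assumptions — only the 10%-15% supply margin comes from the article.

```python
def required_supply_cfm(it_load_kw, delta_t_f=20.0, margin=0.10):
    """Estimate total cold supply airflow for a given IT load.

    demand_cfm uses the common sensible-heat relation
    CFM ~ 3.16 * watts / delta_T(degF); the supply margin
    (10%-15% per the article) covers containment and rack leakage.
    """
    demand_cfm = 3.16 * (it_load_kw * 1000) / delta_t_f
    return demand_cfm * (1 + margin)

# Hypothetical 100 kW pod, 20 degF delta-T, 10% leakage margin:
print(round(required_supply_cfm(100)))
```

Tightening leakage lets the margin sit at the low end of the 10%-15% range, which translates directly into less fan energy for the same IT load.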
To drive further efficiencies, operators can raise the cooling set points while maintaining server inlet temperatures at or below ASHRAE-recommended specifications for cooling IT equipment (80.6°F/27°C). This also results in higher equipment reliability and a longer mean time between failures (MTBF).
It's been said that the best energy saved is the energy not consumed, and that’s especially true in the data center industry — even more so as we continue to progress toward our goal to become more sustainable by lowering our carbon footprint.
The data center industry is constantly evolving, and so should our designs. Energy efficiency should continue to be a top concern for data center operators, both now and in the future. Data center designers and owners should carefully evaluate all options rather than just relying on or selecting from old projects. Doing things because that’s the way they have been done no longer works for the mission critical industry.
Further, flooding the data center with cold supply air and utilizing a containment system, regardless of the cooling system, results in a simple, flexible design that’s both energy efficient and sustainable.