Are There Gophers In Your Data Center?
I have been pontificating about cooling system energy efficiency and water usage lately. In my last column, I discovered that a single hole on a golf course can use 2.8 million gallons of water a year just to stay “green.” Since I am not a golfer, my impression of golf courses is based on the 1980s comedy Caddyshack. The film was set at a golf course plagued by a clever gopher that liked to dig holes, despite the best (or worst) efforts of the groundskeeper. This crafty creature ultimately cost the club money and customers. While gophers are not usually a problem in most data centers, it turns out that a hole in the raised floor for cabling can be quite costly as well.
So let’s examine the issue of raised floors and cable openings, since it seems the world will continue to use and build traditional raised floor data centers, despite all the paradigm shifts in data center design from Google, Facebook, Open Compute, Yahoo’s Chicken Coop, etc. The classic raised floor data center with underfloor cabling may be slowly fading, but it is far from gone. Here we are, approximately 20 years after the hot aisle/cold aisle concept was introduced, and yet many basic airflow issues continue to plague these data centers.
The classic raised floor design serves two primary purposes: a supply air plenum and a place to hide the power and network cables. At face value the design is relatively straightforward: downflow cooling units (CRAC/CRAH) blow cold supply air into the underfloor plenum, which distributes it through perforated tiles or floor grates in the cold aisle, where it is available to be drawn into the front intakes of the IT equipment in the cabinets. The hot exhaust air from the IT equipment in the back of the cabinets then blows into the hot aisle and (perhaps magically) finds its way back to the return of the cooling units.
If only it were that simple. In actual practice, a myriad of issues get in the way of this design’s simple concept, especially when applied to higher density cabinets. These generally fall into two categories: wasted cold “bypass” airflow and hot “recirculated” airflow. Let’s first define bypass air: any cold supply airflow that does not reach the intake of the IT equipment. Bypass airflow escapes through any opening in the raised floor; cable cutouts, miscellaneous leakage areas, gaps along the perimeter where the floor meets the walls, and openings under PDU cabinets or other equipment are common examples.
Data center managers have begun to pay more attention to this and are trying to address it wherever possible. Proper sealing of the gaps where the raised floor meets the walls is a good start. The other area, and the worst offender, is the cable cutout under every cabinet. These openings range in size from a small 4- x 4-in. notch at the edge of a tile to half or even a whole tile! If left open, a substantial portion of the supply air becomes bypass air. This causes several problems, including lower static pressure, which reduces the airflow delivered where it belongs, through the perforated tiles or grates, and wastes fan energy. In addition, when the bypass air mixes with the warmed IT exhaust air, it lowers the return air temperature to the cooling unit, reducing its cooling capacity and energy efficiency. To address bypass air, the brushed-style cable grommet was developed over a decade ago. However, only more recently has it moved toward more widespread use, and many data centers still have not addressed this issue.
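The return-air mixing effect is easy to see with a back-of-the-envelope calculation. This is a minimal sketch, and the supply temperature, exhaust temperature, and bypass fraction below are illustrative assumptions, not figures from the column or the whitepaper:

```python
def return_air_temp(supply_f: float, exhaust_f: float, bypass_fraction: float) -> float:
    """Flow-weighted mixed return temperature when a fraction of the cold
    supply air bypasses the IT equipment and mixes with the hot exhaust."""
    return bypass_fraction * supply_f + (1 - bypass_fraction) * exhaust_f

# Assumed example: 65°F supply, 95°F IT exhaust, 30% of supply bypassing
mixed = return_air_temp(65.0, 95.0, 0.30)
print(f"Return air temperature: {mixed:.1f}°F")  # 86.0°F instead of 95°F
```

With those assumed numbers, nearly a third of the coil’s potential return-air temperature differential is lost to mixing, which is exactly the capacity and efficiency penalty the column describes.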
As for recirculation, it occurs when the warm IT exhaust air re-enters the IT equipment (either the same server or any other server), which typically results in “hot spots.” This is a more complex problem to solve, but the first line of defense is installing blanking plates in the racks to minimize back-to-front recirculation within the same cabinet. On a broader scale, aisle containment systems prevent over-the-top, end-of-aisle, and aisle-to-aisle recirculation, as well as bypass air, but they are more costly and more difficult to retrofit in existing facilities.
This past July an ASHRAE white paper reviewed this issue (Plenum-Leakage Bypass Airflow in Raised-Floor Data Centers by James R. Fink, P.E.). While the white paper discussed bypass air and related issues as a general problem, it cited cable openings as the majority cause of floor-related bypass airflow. To quantify the issue, a specially constructed test fixture allowed accurate measurements of leakage. In addition, to simulate real-world conditions, the author used seven test conditions that varied the number of network and power cables, as well as their positioning in the collar.
The overall finding of the paper was that without any form of bypass air control, 50% or more of the underfloor supply air is typically wasted through those cable cutouts. It also took the relatively unusual step of analyzing and comparing different brands of cable grommets with brush collars. While the brush collars appeared visually similar, the study showed a huge variation between the best- and worst-performing devices. In order to make accurate comparisons, the author created a sealed test chamber with a controlled static pressure of 0.05 in. w.c. (12.5 Pa) to simulate typical underfloor pressure. In practice this pressure varies, and higher pressures are increasingly used to push greater airflow through perforated floor tiles and grates to meet the challenge of higher density racks. In those cases, the waste from cable cutouts, and the savings from a brush collar grommet, are even greater.
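To get a feel for the magnitudes involved, leakage through an unsealed cutout can be roughly estimated with the standard orifice equation, Q = Cd · A · sqrt(2·ΔP/ρ). Only the 12.5 Pa plenum pressure comes from the whitepaper; the discharge coefficient, air density, and opening sizes below are assumptions for illustration, not measured values:

```python
import math

def leakage_cfm(area_m2: float, dp_pa: float = 12.5,
                cd: float = 0.6, rho: float = 1.2) -> float:
    """Approximate volumetric leakage (CFM) through a raised-floor opening,
    using the orifice equation Q = Cd * A * sqrt(2 * dP / rho)."""
    q_m3s = cd * area_m2 * math.sqrt(2 * dp_pa / rho)
    return q_m3s * 2118.88  # convert m^3/s to cubic feet per minute

# A small 4 x 4 in. notch vs. a half tile (assumed 12 x 24 in. opening)
notch = leakage_cfm((4 * 0.0254) ** 2)
half_tile = leakage_cfm((12 * 0.0254) * (24 * 0.0254))
print(f"4x4 in. notch: ~{notch:.0f} CFM, half tile: ~{half_tile:.0f} CFM")
```

Even under these rough assumptions, a single half-tile opening leaks on the order of a thousand CFM, several servers’ worth of cooling airflow dumped straight into the hot aisle.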
THE BOTTOM LINE
So how much is each of those cable openings costing? According to the report, an astounding $480 per year (compared to a raw opening without any grommet). The whitepaper used a cost of $0.13 per kWh (averaged over 10 years) as the basis for calculating projected savings.
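At the paper’s $0.13/kWh rate, the $480 annual figure can be back-converted into an implied continuous load per opening. This is my own quick sanity check on the quoted numbers, not a calculation from the whitepaper:

```python
ANNUAL_COST = 480.0      # $/year per unsealed opening (from the report)
RATE = 0.13              # $/kWh, 10-year average used in the whitepaper
HOURS_PER_YEAR = 8760

kwh_per_year = ANNUAL_COST / RATE
avg_kw = kwh_per_year / HOURS_PER_YEAR
print(f"{kwh_per_year:.0f} kWh/yr, ~{avg_kw:.2f} kW of continuous "
      f"waste per opening")  # ~3692 kWh/yr, ~0.42 kW
```

In other words, every unsealed cutout is roughly equivalent to leaving a small space heater running around the clock.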
The paper stated, “Installation of grommets to seal cable cut-out holes is nothing short of an outstanding investment. The relative performance differential among several popular tested grommets is significant and worthy of consideration.” Moreover, it noted that the vastly differing performance of various brands had a huge impact on projected savings: “Between the best and the worst-performing grommets, there is a significant difference in ten-year savings. In the hypothetical 1MW data center with 200 equipment racks and one grommet per rack this difference is nominally $72,000.” It summarized the highly detailed results by declaring “… given the almost negligible cost of grommets relative to obtainable savings, there is little reason not to choose the best-performing grommet.”
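The $72,000 fleet-level figure also decomposes cleanly to a per-rack number, which makes the grommet-selection stakes concrete. Again, a quick check of the quoted figures rather than anything from the paper itself:

```python
fleet_diff = 72_000      # 10-year savings difference quoted in the whitepaper
racks = 200              # hypothetical 1MW data center, one grommet per rack
years = 10

per_rack_10yr = fleet_diff / racks      # dollars per rack over ten years
per_rack_year = per_rack_10yr / years   # dollars per rack per year
print(f"${per_rack_10yr:.0f}/rack over {years} years "
      f"(${per_rack_year:.0f}/rack/year) from grommet choice alone")
```

That $36 per rack per year is the spread between the best and worst grommets; the savings versus a raw, unsealed opening are an order of magnitude larger.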
Many methods to save energy and improve cooling performance have been developed over the last decade. Some are simple and cost nothing to implement, such as raising the supply air temperature, while others require some cost and effort and need economic justification. In today’s highly competitive, efficiency-driven data center market, an obvious but overlooked problem that can be easily addressed with a quick ROI is a rare find. The savings cited in the ASHRAE whitepaper are very clear. Moreover, brush-style grommets are easily installed, operationally non-intrusive, and can be implemented over time as resources permit. So if you have not already done so, start sealing those cable cutout openings with the best-performing grommets, and if some new, odd-looking holes turn up, better check for gophers.