It never ceases to amaze me how many vendors tout a “one size fits all” solution.  Even within a single sector or company, the needs are varied.  Across hundreds of data centers the needs are even more varied.  If we look at typical power per cabinet, the international average is 5.5-6.0 kW.  This is certainly not what we would deem high density.  Many colocation and hosted facilities are designed to these lower numbers as well.  If you need more power, you simply rent more space.  If you talk to any colo provider, they will be the first to tell you that managing power across all customers is a full-time job, as no two cages have the same power requirements.
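The “rent more space” trade-off is easy to see with a little arithmetic. Here is a rough sketch with illustrative numbers (the 5.5 kW design density is the average cited above; the 200 kW IT load is a made-up example):

```python
import math

# Illustrative numbers: a colo hall designed at the "international
# average" density of 5.5 kW per cabinet.
design_density_kw = 5.5      # kW of power (and heat) per cabinet
it_load_kw = 200.0           # hypothetical total IT load to house

# At a fixed design density, more power simply means more cabinets,
# and therefore more floor space to rent.
cabinets_needed = math.ceil(it_load_kw / design_density_kw)
print(cabinets_needed)  # 37 cabinets at 5.5 kW each
```

Double the design density and the cabinet count (and rented footprint) roughly halves, which is exactly why the high-density question matters in the first place.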

When you increase power density, you need to increase cooling capacity as well.  Cooling capacity comes at a cost and shouldn’t be implemented across the board if it isn’t needed.  There are a plethora of cooling options for data centers these days.  For those lucky enough, “free” air cooling can provide significant savings.  I say “free” because the system certainly isn’t, and neither are the filters; the only free part is the outside air.  This solution won’t work everywhere.
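The link between power density and cooling capacity is direct: essentially every watt delivered to a cabinet comes back out as heat. A minimal sketch of the conversion, using standard unit factors (cabinet densities here are illustrative, not from any particular facility):

```python
# Every kW of IT power becomes heat the cooling plant must remove.
# Standard conversions: 1 kW of heat ~= 3,412 BTU/hr, and
# 1 ton of refrigeration = 12,000 BTU/hr.
BTU_PER_KW = 3412
BTU_PER_TON = 12000

def cooling_tons(cabinet_kw: float) -> float:
    """Approximate tons of refrigeration needed for one cabinet."""
    return cabinet_kw * BTU_PER_KW / BTU_PER_TON

# Compare an average-density cabinet to higher-density ones.
for kw in (5.5, 10.0, 20.0):
    print(f"{kw:5.1f} kW cabinet -> {cooling_tons(kw):.2f} tons")
```

A 5.5 kW cabinet needs roughly a ton and a half of cooling; a 20 kW cabinet needs well over five. Buying that extra capacity for an entire hall when only a few rows run hot is exactly the across-the-board cost the paragraph above warns against.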

The next option is close-coupled cooling, or as some call it, in-row cooling. This can be a great solution for high density areas, but the floor space required (typically every other cabinet) can be a prohibitive factor in existing spaces. Other units work overhead, forcing cold air down and drawing in air from the hot aisle in the rear.  The downside to some of the close-coupled units is the noise.

Another solution gaining in popularity is the rear door heat exchanger.  These work much like a radiator.  As the equipment fans push hot air through the doors, the heat is absorbed by the coil in the door and carried to a cooling distribution unit, where it is rejected.  Most of these systems, as well as the close-coupled solutions above, require some type of chilled water piping or other refrigerant lines.  These lines are nicely hidden under a raised floor, if the data center has one.  The advantage of the rear door heat exchanger is that it doesn’t require the footprint of the other options.

Good ole CRACs and CRAHs (computer room air conditioners and air handlers) have been around for a long time and have the benefit of being the most mature technology.  The majority of data centers still design using CRACs, in part because of their maturity, and in part because the M&E firms understand them well.  Of course, there are some firms that design the same exact data center over and over; I’m not talking about those guys.

Whatever solution you choose, or mixture of solutions, remember: you may not need high density everywhere.  In some cases, simply scattering your hottest equipment around will keep you from needing any supplemental cooling.  A good CFD analysis of YOUR space and YOUR requirements will ensure you get the most bang for your buck.  Remember, you may very well end up with a few solutions.  It is also important to evaluate how well a solution is working before you just add more because it is “what you have.”