The most important question about data center design is, “Just what do I need to consider when designing and implementing a high-density data center?” The key issues, in no particular order, continue to be power availability, heat removal capacity, cost factors (TCO, ROI, etc.), security (both physical and electronic), and green concerns. These issues are all vitally important and need to be addressed in any facility design.

For the past few years, the focus has been on high-density cooling solutions at either the enclosure or row level. These solutions are available in a variety of configurations such as active air, supplemental row cooling, and close-coupled and enclosure-based liquid-cooling systems. This wide range of products can support a variety of installations. Each has its own strengths and weaknesses, and each matches up differently to individual site criteria.

These solutions offer a good starting point for cooling data centers, but additional products must now be considered to complement these heat-removal components. No matter the industry, the same criteria will need to be considered. This new world of IT facilities is being driven by a small but critical set of factors:
  • The number of servers and storage devices being installed
  • Adequate power supply and heat removal
  • Process automation and virtualization at any level and in any industry
  • A network that takes full advantage of new systems
  • “Keeping up with the Joneses”
  • The move from convenience to “Mission Critical”
  • And the fact that the world is truly 24 by 7 by 365.
Today, high-density installations require that additional thought be given to the other systems found in the facility, including power distribution, security, and cable routing. Today’s data center must be viewed as a whole and not as a sum of its various subsystems. The trend is to think of the space as an ecosystem: a closed environment where all components must function together to maintain a desired level of performance. These components must operate with minimal impact on one another, and any changes or additions must be able to be incorporated quickly and effectively, again without significant disturbance to existing components. Figure 1 shows this stylized facility.



Figure 1 – Components of the data center

Legend
A - Floor Systems (and underfloor components)
B - Room Power Distribution
C - Room Climate Control
D - Cable Routing Systems (overhead or underfloor)
E - Equipment Enclosures
F - In-Row High Density Cooling
G - In-Row Power Distribution


Thinking About Design

The need to build new data centers or expand existing sites will only continue to grow. One Rittal customer recently said, “We cannot build data centers fast enough.” Almost all end users clearly stated that high-density installations will be a key part of any new construction. The rush to build means that planning for a future data center must account for a number of variables, any of which can undermine the best design intentions. For some applications, it may be practical to expand an existing site, floor space permitting. But even this simplest of approaches may still limit future growth. So most customers consider at least a partial deployment of high-density devices with their associated installation requirements.

Installations can be divided into two very generic categories: updating an existing facility or building a new site. And while each has some project-specific criteria, there are several key, common considerations. Table 1 provides the starting checklist.

Once these general criteria are reviewed, the end user must develop a specific plan of action, one designed to meet a defined set of goals. These user-defined goals should include a desired level of growth to be supported. Other objectives may include providing improved or enhanced levels of service to a customer base, bringing new products or services to market, staying within a specific budget, or simply handling all the new devices and applications being deployed.

With these goals established, specific values can be assigned, providing benchmarks against which progress can be measured and success judged. Targets should be realistic and reflect the importance of the concerns listed above. Flexibility must be built in, because change is inevitable.

Of the categories listed in Table 1, space availability, power availability (including UPS capacity), and cooling capacity are the three most critical. Of course, there must be sufficient space to support existing and proposed hardware. This includes not only the IT components but also supporting infrastructure, e.g., electrical distribution panels, CRAC/CRAH units, etc. If there is insufficient floor space to expand, consolidation of components may be the only solution. But power and cooling requirements, both for the room and per enclosure, will rise with consolidation, driving the installation to a high-density configuration.
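To see why consolidation pushes a site toward high density, consider a rough back-of-the-envelope calculation. The sketch below is purely illustrative; the server count, per-server draw, and enclosure footprint are assumptions, not recommendations or measured values.

    # Hypothetical consolidation scenario: server count, per-server draw, and
    # footprint are illustrative assumptions, not measured or recommended values.
    SERVER_COUNT = 960              # total servers to be housed
    WATTS_PER_SERVER = 400          # assumed average draw per server (W)
    ENCLOSURE_FOOTPRINT_SQFT = 10   # assumed floor area per enclosure, incl. clearance

    def density_per_enclosure(enclosures: int) -> tuple[float, float]:
        """Return (kW per enclosure, W per square foot) for a given enclosure count."""
        total_kw = SERVER_COUNT * WATTS_PER_SERVER / 1000
        kw_per_enclosure = total_kw / enclosures
        watts_per_sqft = kw_per_enclosure * 1000 / ENCLOSURE_FOOTPRINT_SQFT
        return kw_per_enclosure, watts_per_sqft

    for enclosures in (96, 48, 24):  # progressively tighter consolidation
        kw, wsf = density_per_enclosure(enclosures)
        print(f"{enclosures} enclosures: {kw:.1f} kW per enclosure, {wsf:.0f} W/sq ft")

Under these assumptions, the same 384 kW IT load spread over 96 enclosures works out to roughly 4 kW per enclosure; consolidated into 24 enclosures, it becomes roughly 16 kW per enclosure, well into high-density territory.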

Even with enough room, it may be practical to designate an area within the facility to support high-density loads, segregated from the remainder of the installation. This area would include additional power feeds with greater circuit capacity, plus chilled-water or refrigerant piping for high-density cooling solutions to support the increased loads. It may even prove feasible to reduce overall floor space usage by consolidating IT loads, thus saving on building costs, floor space costs, and ancillary costs for lighting, power, room cooling, etc.

For most sites, enclosures, or at the least large-footprint open frames, are recommended for installing IT hardware. While it may be tempting to install a smaller product, the largest product footprint (frame or enclosure) is a better choice. These frames provide sufficient volume to accommodate larger components, whether deeper server or storage chassis or high-cable-volume products such as multi-slot switches using high-performance copper (Category 6 and higher) or fiber-optic cable. The larger volume will provide sufficient room to support the components, in-cabinet power distribution products (power strips, PDUs), connectivity cables and related patch panels, and associated ancillary hardware. And the bigger footprint just might save aggravation in the long run, as it may not be necessary to add depth extenders, remove doors, or split up components to support future applications.

The design must also provide sufficient overall power capacity, spare capacity to handle future growth, provisions for system redundancy, and enough UPS and generator capacity to support not only critical IT loads but also site infrastructure components such as chillers, air handlers, and related hardware. The circuits feeding each enclosure footprint must supply sufficient power to support high-density loads. This may require an upgrade from 30-amp to 60-amp circuits and from single-phase to three-phase supply.
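The difference such an upgrade makes can be estimated with a simple calculation. The sketch below assumes 208 V distribution and the common 80 percent continuous-load derating; actual usable capacity depends on local electrical codes and the site’s distribution voltage.

    import math

    # Rough usable power per enclosure branch circuit. The 208 V distribution
    # voltage and 80% continuous-load derating are assumptions for illustration.
    VOLTAGE = 208      # line-to-line voltage (V)
    DERATING = 0.8     # typical continuous-load derating

    def single_phase_kw(amps: float) -> float:
        return VOLTAGE * amps * DERATING / 1000

    def three_phase_kw(amps: float) -> float:
        return math.sqrt(3) * VOLTAGE * amps * DERATING / 1000

    print(f"30 A single-phase: {single_phase_kw(30):.1f} kW")  # ~5 kW
    print(f"60 A single-phase: {single_phase_kw(60):.1f} kW")  # ~10 kW
    print(f"60 A three-phase:  {three_phase_kw(60):.1f} kW")   # ~17 kW

Under these assumptions, moving from a 30-amp single-phase circuit to a 60-amp three-phase circuit raises the usable power per enclosure from roughly 5 kW to roughly 17 kW.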

Planning for cooling must take the same approach. If there is enough power to meet the server loads, there also must be enough cooling to remove all the heat. Again, guidelines should be set for overall room capacity (still using a watts-per-square-foot model) as well as for each enclosure footprint, where the same model applies but at much higher loads (in some cases in excess of 2,500 watts per square foot).
The design must also provide redundancy and scalability as well as web-based monitoring and control capabilities. And as with power, the entire installation must be considered. Beyond the in-room CRAC/CRAH units, chillers, economizers, cooling towers, and all related hardware have to be part of the final design to ensure sufficient capacity is available for both normal and emergency conditions.
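To translate an electrical load into a heat-removal requirement, standard rules of thumb can be applied: roughly 3.412 BTU/hr per watt, 12,000 BTU/hr per ton of cooling, and about 1.08 BTU/hr per CFM per degree Fahrenheit of air temperature rise. The sketch below uses an assumed 25 kW enclosure and a 20 °F air temperature rise purely for illustration.

    # Rough heat-removal sizing for a single high-density enclosure.
    # The 25 kW load and 20 degF air temperature rise are illustrative assumptions.
    WATTS_TO_BTU_HR = 3.412
    BTU_HR_PER_TON = 12_000
    AIRFLOW_FACTOR = 1.08        # BTU/hr per CFM per degF (sea-level air)

    def cooling_requirements(it_load_watts: float, delta_t_f: float = 20.0):
        heat_btu_hr = it_load_watts * WATTS_TO_BTU_HR
        tons = heat_btu_hr / BTU_HR_PER_TON
        cfm = heat_btu_hr / (AIRFLOW_FACTOR * delta_t_f)
        return heat_btu_hr, tons, cfm

    heat, tons, cfm = cooling_requirements(25_000)
    print(f"{heat:,.0f} BTU/hr | {tons:.1f} tons of cooling | {cfm:,.0f} CFM at a 20 degF rise")

A 25 kW enclosure therefore rejects roughly 85,000 BTU/hr, on the order of 7 tons of cooling, and requires several thousand CFM of airflow, which is why close-coupled or liquid-based solutions become attractive at these densities.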

Figure 2. A simple in-row data center solution

Legend
A - In-Row Power Distribution (UPS and Battery Back-up)
B - In-Row Cooling (Closed Loop, Close Coupled)
C - Equipment Enclosures


Product Selection

With guidelines, parameters, and installation goals set, all that remains is to specify and install the selected products. All products selected must be compatible with one another as well as with the end-user devices they support. For enclosure- or open-frame-based installations, a common row depth and height should be maintained. In-row power or cooling products should install seamlessly into existing rows or be included as part of a solution for new construction. Room for expansion, as well as the ability to handle moves, adds, and changes, should be built into the installation, allowing an end user to rapidly deploy upgrades and new products while minimizing detrimental impact on installed, operational systems. Figure 2 shows an example of comprehensive high-density rows.


High-Density Review

Planning, setting of project goals, communications, ongoing reviews, and product selection are all vital to a successful high-density installation. While no two installations will be identical, they all share many common elements that must be considered and addressed. While not all-inclusive, Table 2 lists some tasks that should be a part of any project, whether it involves an existing facility or new construction.

It has been previously stated: the focus must shift from “How?” to “Why?” The end-user community should no longer ask whether high-density installations are feasible or practical but should instead focus on the benefits to be gained. As more systems are brought online, more practical experience on component installation, system operations, cost savings, and environmental impact will be gathered, all to the benefit of the IT community.




Table 2. Some tasks that should be a part of any project

  1. Establish communications among all parties: engineers, end users, facilities staff
  2. Conduct regular and ongoing project status meetings
  3. Determine realistic cost and lead time values
  4. Set aside time AND money for unforeseen circumstances
  5. Clearly document existing facility environment
  6. Establish scalability and redundancy requirements
  7. Develop and publish test and certification procedures for specific components as required
  8. Develop a comprehensive labeling and documentation program
  9. Have maintenance contracts in place prior to system startup
  10. And finally, TRUST BUT VERIFY: check everything, review all documentation, and know what you are getting