Capacity Planning In Colocation Data Centers
Having a properly master planned site is key.
Data centers are more critical to the operation of our businesses than they have ever been, and that criticality increases every day. In fact, they are critical not just to our businesses but to our daily way of life. It is also well understood that power densities in data centers continue to increase on a scale never seen before. For these reasons, properly designing, building, and controlling both the electrical distribution and the HVAC cooling systems serving the data center white space is more critical than ever. The following is intended as a high-level discussion of capacity planning in colocation data centers.
Multi-tenant colocation data centers face unique challenges in the data center marketplace. They must accommodate a wide variety of clientele in a very cost-competitive market, which creates a constant tension between managing cost and maintaining availability. The colocation providers considered here are retail providers rather than wholesale providers. Wholesale providers try to lease large amounts of space, or entire facilities, to single tenants, whereas retail providers “carve up” the space to accommodate a variety and multitude of tenants.
Capacity planning for colocation data centers is market and scale dependent but can be addressed using some simple principles. Most colocation providers want to balance available stock with the ability to quickly build out existing space. There is also a need to remain flexible to client needs and be able to adapt to specific client power densities and IT deployment requirements. A popular design option is a modular approach to both the mechanical and electrical infrastructure topology.
Multi-tenant colocation data centers operate in a number of different ways, with each company operating in whichever way best fits its business model. Some of the most common lease types for colocation providers are complete data halls, individual caged sections within a data hall, and individual racks. Some providers will host individual servers; however, for the purposes of space planning, we will consider the preceding methods and not drill down to individual servers.
The size of the “block” for a multi-tenant colocation data center can change based on many factors. It can be based on uninterruptible power supply (UPS) selection and deployment, HVAC selection and deployment, and unit substation selection and deployment amongst other factors. The electrical utility needs to be engaged to determine the available capacity and voltages at the start of the design to determine the overall power available for IT and mechanical cooling loads. The configuration and number of incoming feeders will also help determine the redundancy available to the site. Some sites are single utility while others opt for multiple feeders from different substations to allow for increased system reliability.
From an electrical standpoint, using watts per square foot over a given footprint is a common metric to start with when designing the overall capacity design of a data center. Assigning an average kW per cabinet is another method that is helpful in determining overall system capacity. This will help to determine the overall facility IT power requirement and by applying the anticipated worst case power usage effectiveness (PUE) value, the overall site electrical requirements can be determined. From this starting point, individual data halls or “blocks” of IT load can be established within the facility.
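The sizing logic described above can be sketched in a few lines. This is a minimal illustration, not a design tool: the 40,000 sq ft footprint, 150 W/sq ft density, and 1.5 worst-case PUE used in the example are assumed values, not figures from the text.

```python
# Sketch of sizing a facility from a watts-per-square-foot target and a
# worst-case PUE. All numeric inputs below are illustrative assumptions.

def site_power_requirement(white_space_sqft: float,
                           watts_per_sqft: float,
                           worst_case_pue: float) -> dict:
    """Return the IT load and total site draw in megawatts."""
    it_load_mw = white_space_sqft * watts_per_sqft / 1_000_000
    # The utility feed must cover IT load plus cooling and other overhead,
    # which is what multiplying by the worst-case PUE approximates.
    site_mw = it_load_mw * worst_case_pue
    return {"it_load_mw": it_load_mw, "site_mw": site_mw}

# Example: 40,000 sq ft of white space at 150 W/sq ft, worst-case PUE 1.5
req = site_power_requirement(40_000, 150, 1.5)
print(req)  # {'it_load_mw': 6.0, 'site_mw': 9.0}
```

From the resulting site figure, the facility can then be divided into the individual data halls or "blocks" the text describes.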
These blocks are commonly made up of electrical power distribution equipment, UPS systems, IT distribution, standby power generation, and mechanical cooling solutions. One approach is to combine mechanical and electrical power distribution on a single unit substation and then provide redundancy between substations. For example, a 2.5 MW generator may supply a low voltage bus that carries 1 MW of UPS systems and 1 MW of power for mechanical cooling and other non-critical loads such as lighting and office spaces. This would be considered a “1 MW” block because it supports 1 MW of critical load. These blocks can then be deployed in different topologies to achieve the redundancy and risk avoidance the colocation provider is trying to sell. Another common method is to provide dedicated critical IT power busses with UPS systems and dedicated generators, with the mechanical and other non-critical systems supported by a separate “back of house” system. The redundancy would still need to be determined, but the block size would depend on the size of the generator and the associated low voltage power distribution system.
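A simple sanity check on the shared-substation block above is whether the combined bus load fits on the generator. The 2.5 MW generator and 1 MW critical figures mirror the example in the text; the split of the remaining load between mechanical and non-critical systems is an assumption for illustration.

```python
# Illustrative check that a shared unit-substation "block" stays within
# its generator rating. The 2.5 MW / 1 MW figures follow the example in
# the text; the mechanical/non-critical split is assumed.

def block_fits(generator_mw: float, critical_it_mw: float,
               mechanical_mw: float, non_critical_mw: float) -> bool:
    """True if the combined bus load fits on the generator."""
    total = critical_it_mw + mechanical_mw + non_critical_mw
    return total <= generator_mw

# A "1 MW" block: named for the critical load it supports
print(block_fits(generator_mw=2.5, critical_it_mw=1.0,
                 mechanical_mw=0.8, non_critical_mw=0.2))  # True
```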
By utilizing IT blocks, the owner can deploy over time and build in smaller power increments to reduce upfront cost until a tenant is acquired. The facility may be master planned for 24 MW based on the available utility power but it may not be practical for the owner to procure and build 24 MW of IT infrastructure and associated mechanical support. By building the shell of the facility and planning for expansion, the deployment of smaller IT power increments will allow the owner to deploy power as the tenants require it without having to maintain and operate equipment that is not necessary in the early stages of the development.
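Phased deployment against a master plan can be sketched as follows. The 24 MW master plan figure comes from the text; the 2 MW block increment and the 5 MW of committed tenant load are illustrative assumptions.

```python
import math

# Sketch of phased block deployment against a master-planned capacity.
# The 24 MW master plan follows the text; block size and committed
# tenant demand are assumed for illustration.

MASTER_PLAN_MW = 24
BLOCK_MW = 2  # IT power increment built per phase (assumed)

def blocks_needed(committed_tenant_mw: float) -> int:
    """Whole blocks required to cover current tenant commitments."""
    return math.ceil(committed_tenant_mw / BLOCK_MW)

# 5 MW of signed leases -> build 3 blocks (6 MW) now,
# leaving 18 MW of master-planned capacity for later phases
built = blocks_needed(5)
print(built, MASTER_PLAN_MW - built * BLOCK_MW)  # 3 18
```

Only the built blocks need to be procured, maintained, and operated in the early stages of the development, which is the cost advantage the paragraph above describes.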
The IT blocks will then need to be adapted to each individual client lease. Most colocation providers assign a base kW rating to an overall footprint, such as 1.5 MW across 10,000 sq ft of white space. This equates to an average load of 150 W/sq ft, or 4.5 kW per cabinet (using 30 sq ft of total space per cabinet). This average works for some customers but may be too high for some and too low for others. The cooling and power distribution within the overall white space floor plate can then be adjusted to serve each client.
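The average-density arithmetic above can be made explicit. The 1.5 MW lease, 10,000 sq ft floor plate, and 30 sq ft per cabinet come directly from the example in the text.

```python
# Worked version of the arithmetic in the text: 1.5 MW spread over
# 10,000 sq ft of white space, at ~30 sq ft of total space per cabinet.

def average_density(lease_kw: float, floor_sqft: float,
                    sqft_per_cabinet: float = 30) -> tuple:
    """Return (watts per sq ft, kW per cabinet) for a lease footprint."""
    watts_per_sqft = lease_kw * 1000 / floor_sqft
    cabinets = floor_sqft / sqft_per_cabinet
    kw_per_cabinet = lease_kw / cabinets
    return watts_per_sqft, kw_per_cabinet

w_sqft, kw_cab = average_density(1500, 10_000)
print(w_sqft, kw_cab)  # 150.0 W/sq ft and ~4.5 kW per cabinet
```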
Another way to manage capacity is to “mock-up” or simulate different IT deployments. In the case of a colocation data center, they may have clients that deploy typical types of IT equipment in common configurations. This may be as simple as a “storage,” “transport,” “compute,” and “high-performance compute,” with each deployment having a specific rack count and kW/rack power value assigned. This can then be used to fit into the provider’s space. The provider can then determine the best place in the data center to accommodate the client loads in relation to where they have rented space. The high and low power loads can be interspersed in some instances to allow for an average load across the space to avoid “hot spots” as it relates to the cooling system.
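The mock-up approach lends itself to a simple profile table. The four profile names below come from the text; the rack counts and kW/rack values assigned to each are purely illustrative assumptions.

```python
# Hypothetical deployment "mock-ups": each profile carries an assumed
# rack count and kW/rack. Profile names follow the text; the numbers
# are illustrative, not recommendations.

PROFILES = {
    "storage":                  {"racks": 20, "kw_per_rack": 4.0},
    "transport":                {"racks": 10, "kw_per_rack": 3.0},
    "compute":                  {"racks": 30, "kw_per_rack": 8.0},
    "high-performance compute": {"racks": 8,  "kw_per_rack": 25.0},
}

def deployment_load_kw(profile: str) -> float:
    """Total kW a mock-up deployment draws."""
    p = PROFILES[profile]
    return p["racks"] * p["kw_per_rack"]

for name in PROFILES:
    print(f"{name}: {deployment_load_kw(name):.0f} kW")
```

Totals like these let the provider see where a prospective client's high- and low-density deployments can be interspersed across the floor plate to keep the average load within the cooling system's capability.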
The choice of cooling system can also affect how IT load capacity is planned and equipment is deployed. When power densities reach a certain level, containment (hot or cold aisle) becomes more of a necessity than an optional requirement. Raised floor vs. slab on grade also come into play as the individual cabinet power levels increase. As individual cabinets get powered at 20 kW or above, it may be necessary to provide cabinet level cooling solutions to accommodate these types of loads. A computational fluid dynamics (CFD) model of the airflow should be performed either as typical for the space during initial design or as equipment is added to confirm that the proper amount of cooling can be delivered to the appropriate space. With a raised floor system, this will help to determine the overall height of the raised floor and location of vented tiles to release the cold air into the proper locations. With a slab on grade cooling system, the model will help to determine the hot and cold aisle distances to allow the correct amount of cold air to reach the middle of the rows. The type of containment and ceiling system can also be analyzed using the CFD model and would be appropriate for both raised floor and slab on grade cooling systems.
Coordinate the electrical and mechanical systems such that the electrical system provides an equal level of redundancy to that of the mechanical systems. Colocation data centers range from single source to multiple system designs that include concurrent maintainability and fault tolerance. The key is to make sure that the electrical power distribution system is coordinated to properly power both the critical IT systems and the corresponding mechanical and non-critical support systems in the same level of deployment.
Colocation providers operate under different philosophies, but most consider a few basic principles: speed to market, flexibility, and client requirements. With a properly master planned site, it is possible to plan for capacity upgrades over time and adapt to client requirements as they arise. By allowing space for future blocks, the provider can adapt the distribution to individual clients or combine clients as required within a common data center.