Figure 1. Rendering of the containerized modular data centers concept from Microsoft. Courtesy, IDC Architects part of CH2MHILL.


The Merriam-Webster Dictionary defines modularity as “of, relating to, or based on a module or a modulus; constructed with standardized units or dimensions for flexibility and variety in use.”

The advent of modular design for cooling data centers raises new issues of effectiveness, efficiency, and flexibility and changes the economics of data-center applications.

Figure 2. Containerized cooling plant module from McQuay International


Modularity, when applied to data-center infrastructure cooling, means that the cooling plant is optimized in modules rather than the infrastructure being optimized for its full load. Figures 1 through 5 illustrate several containerized modules used in data-center applications.

Figure 3. Containerized cooling plant module from McQuay International


Internationally, modular cooling solutions have been used extensively for district cooling applications in the Middle East. They have also been used to provide on-site concrete cooling as part of large construction projects in the hot and humid climate of the Persian Gulf region. In such applications, though, the drivers behind modular cooling were somewhat different from those in mission-critical applications.

When used in gas-turbine inlet air-cooling applications, modular systems can be manufactured off-site and then brought online at the power plant. Installing the coils or evaporative coolers in the air-inlet filters of the gas turbines and hooking up the cooling modules typically takes 40 percent less time on site than field fabrication, enabling power-plant operators to bring their turbines online faster.

Figure 4. Example of the IT module in containerized modular data centers design


District cooling plants in the Middle East have also found modular cooling appealing because it offers fast construction turnaround, better control of workmanship, and single-source responsibility.

In North American construction, there have been some niche applications through the years. In some jurisdictions, the building code limits the height of any building to the height of a certain monument, like the Washington Monument in Washington, DC. Housing the mechanical room in the building gives away premium rentable space. Housing it in a penthouse on the roof reduces the height of the rentable floors below to meet the height code. However, a modular cooling plant in a container on the roof is classified as “equipment” that is not part of the building. So owners can save indoor rentable space, maximize their floor height, and have their full chiller plant, all while meeting the city height code. A unique situation!

Figure 5. Modular unit is lifted into place.


In data centers, infrastructure and energy costs can total more than the cost of the servers themselves. For example, figure 6 compares the cost of a fully configured, 500-watt (W), 1U server to its annualized energy costs and associated infrastructure costs. The chart reflects that the price of a 1U server has remained relatively stable. However, combined infrastructure and energy costs exceeded server cost in 2001, and infrastructure cost alone exceeded server cost in 2004. Findings like this have caused a paradigm shift away from strategies that focus on driving down the cost of IT equipment as the primary means to control data-center costs. Instead, energy and infrastructure costs have become the main concern.

Figure 6. Annual amortized cost of a fully configured 1U server in a mission-critical (Tier IV) data center. The combined cost of energy and infrastructure surpassed the server cost in 2001 as reported in Electronics Cooling 2007.


Modular cooling can be installed and brought online quickly, which is a very attractive feature in mission-critical applications.

In the minds of data-center facilities managers, availability remains a much higher priority than saving energy, as shown in figure 7. The enormous cost associated with downtime explains this emphasis on availability. Figure 8 shows estimated service downtime costs in different industries.

These reasons explain why modular cooling plants are especially attractive in mission-critical infrastructure applications. The fact that they can be connected, disconnected, and reconnected very rapidly plays very well with the enhanced reliability requirements of mission-critical applications.

Figure 7. The priority for data center managers remains squarely focused on availability as reported by Emerson Network Power.


Over-sizing is one of the largest drivers of electrical waste, but it is the most difficult for users to understand or address. Over-sizing of power and cooling equipment occurs whenever the design value of the power and cooling system exceeds the IT load. Every data center has an efficiency that varies with the IT load. At lower IT loads, efficiency always declines, reaching zero when there is no IT load. The shape of this curve is remarkably consistent across data centers. An example is shown in figure 9.
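The shape of this curve follows from the fixed overhead of infrastructure: no-load losses are paid regardless of the IT load. The following is a minimal sketch of a fixed-plus-proportional loss model; the function name and the loss fractions are hypothetical illustrations, not figures from the Schneider Electric data.

```python
# Hypothetical model of data-center infrastructure efficiency vs. IT load:
# efficiency = IT power / total facility power, where total facility power
# = IT load + fixed losses + load-proportional losses.

def dc_efficiency(it_load_kw, design_kw, fixed_frac=0.2, prop_frac=0.3):
    """Estimate efficiency (IT power / total power) at a given IT load.

    fixed_frac: no-load losses as a fraction of design capacity
                (transformers, fans, pumps that run regardless of load)
    prop_frac:  losses proportional to the IT load itself
    """
    if it_load_kw == 0:
        return 0.0  # no useful IT work, so efficiency is zero
    fixed_loss = fixed_frac * design_kw   # constant overhead of the design
    prop_loss = prop_frac * it_load_kw    # overhead that scales with load
    return it_load_kw / (it_load_kw + fixed_loss + prop_loss)

# Efficiency at various utilization levels of a 1,000-kW design:
for pct in (0, 10, 30, 50, 100):
    it = pct / 100 * 1000.0
    print(f"{pct:3d}% load -> efficiency {dc_efficiency(it, 1000.0):.2f}")
```

Even this toy model reproduces the reported behavior: efficiency falls steeply at low utilization because the fixed losses dominate, which is exactly the penalty an over-sized plant pays.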

Figure 8. Costs by industry for one hour of downtime in millions of dollars as reported by Vision Solutions.


When the IT load is well below the design value for the data center, efficiency degrades and the data center is over-sized for that IT load. Many data centers operate in this condition, sometimes for years, typically because they are constructed for a hypothetical future IT load that has not yet been installed. To correct this problem, power and cooling equipment should be scaled over time to meet the IT load requirement, as reported by Schneider Electric.

Figure 9. Data center efficiency as a function of IT load comparing modular vs. non-modular designs as reported by Schneider Electric.


Research shows that the typical data center is utilized to only 30 percent of its capacity. While some data centers are utilized to 90 percent of capacity or more, there are also facilities utilized to only 10 percent of capacity. Furthermore, the utilization of a data center varies over its lifetime according to a relatively consistent pattern. Figure 10 is a typical model of the utilization fraction of data-center power infrastructure over its lifetime, as reported by American Power Conversion. With such a profile, it makes more sense to ramp up the infrastructure gradually to support the growth in data-center utilization rather than designing for the fully utilized load from the beginning.

Deferring this investment also has another inherent benefit: the target power density for a future zone in a data center can also be deferred until future IT deployment. So the infrastructure will match the future increased heat load of the state-of-the-art equipment being installed rather than being designed based on obsolete heat loads at the beginning of the project.

Figure 10. Utilization fraction of data center power infrastructure over lifetime


In its report to Congress on Public Law 109-431, the U.S. Environmental Protection Agency (EPA) reported some interesting growth projections. According to the report, the sector consumed about 61 billion kilowatt-hours (kWh) in 2006, an estimated 1.5 percent of total U.S. electricity consumption, at a cost of $4.5 billion. Of that, the federal sector consumed approximately 6 billion kWh at a cost of $450 million. The sector was projected to grow to 100 billion kWh in 2011, around 2.5 percent of total U.S. electricity consumption ($7.4 billion). This trend of increasing power consumption is due to the ever-increasing heat loads of IT equipment, as shown in figure 11.
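A quick arithmetic check shows these figures are internally consistent, using only the numbers stated above:

```python
# Sanity check of the EPA report figures cited in the text.
kwh_2006 = 61e9       # data-center consumption in 2006 (kWh)
cost_2006 = 4.5e9     # reported cost (dollars)

price = cost_2006 / kwh_2006
print(f"Implied average price: {price * 100:.1f} cents/kWh")  # ~7.4 cents

# Applying the same implied price to the projected 2011 consumption
# recovers the report's projected cost.
kwh_2011 = 100e9
print(f"Projected 2011 cost: ${kwh_2011 * price / 1e9:.1f} billion")  # ~$7.4B
```

The implied price of roughly 7.4 cents/kWh also matches the federal figures ($450 million for 6 billion kWh is 7.5 cents/kWh), so the report's projection simply holds the electricity price roughly constant while consumption grows.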

This ever-increasing energy demand and the associated environmental effects are by no means limited to the U.S. A recent study indicated that global data-center electricity consumption is almost 0.5 percent of world production and that the average data center consumes energy equivalent to 25,000 households. Figure 12 illustrates the carbon emissions from data centers in comparison to whole countries, as reported by McKinsey & Company.

Initial cost is another basis for comparing containerized modular designs and site-built facilities. To quantify this, figures 13 and 14 show the initial cost of containerized vs. site-built cooling plants based on actual data. The model indicates that containerized modular design is more economical on a first-cost basis than site-built central plants up to about 4,700 tons.

The same also goes for energy cost. Since each module can be optimized individually, some containerized installations can be optimized down to 0.4 kW/ton, putting them among best-in-class chiller plants, as reported in HPAC in 2006.
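For context, the 0.4-kW/ton figure can be converted to a coefficient of performance (COP) using the standard definition of a refrigeration ton as 3.517 kW of heat removal; the 0.6-kW/ton comparison value below is an illustrative assumption for a more ordinary plant, not a figure from the HPAC report.

```python
# Convert a chiller-plant efficiency in kW of electrical input per ton
# of cooling into a coefficient of performance (COP).
KW_PER_TON_THERMAL = 3.517  # 1 ton of refrigeration = 3.517 kW of heat removal

def cop_from_kw_per_ton(kw_per_ton):
    """COP = cooling delivered / electrical input."""
    return KW_PER_TON_THERMAL / kw_per_ton

print(f"0.4 kW/ton -> COP {cop_from_kw_per_ton(0.4):.1f}")  # ~8.8
print(f"0.6 kW/ton -> COP {cop_from_kw_per_ton(0.6):.1f}")  # ~5.9
```

In other words, a 0.4-kW/ton plant delivers nearly nine units of cooling per unit of electricity, which is why such installations rank among the best in class.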

Figure 11. ASHRAE projected density loads for IT equipment as reported in ASHRAE’s Datacom Equipment Power Trends and Cooling Applications.


Financing a modular cooling plant is typically more advantageous to the owner than financing a traditional site-built plant, though this depends on the cost of money. A modular cooling plant, accounted for as a piece of equipment, does not incur property tax, nor does its provider issue progress billings; typically, the entire piece of equipment is billed when shipped rather than progress-billed like a traditional site-built plant. Also, because it is accounted for as equipment rather than a “building,” an owner can typically depreciate the cost of the entire modular plant, not just the chillers and other equipment, over a shorter period (seven-year depreciation for cooling “equipment” vs. 30 years for a cooling “plant”). So with a modular design, capital and operating costs can be deferred until needed.
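The depreciation advantage can be illustrated with a simple straight-line calculation; the $3 million plant cost is a hypothetical figure chosen only for illustration, while the 7- and 30-year periods come from the comparison above.

```python
# Straight-line annual depreciation for the same plant cost under the
# two classifications discussed in the text. The cost is hypothetical.
plant_cost = 3_000_000  # dollars (illustrative)

annual_equipment = plant_cost / 7   # modular plant classed as "equipment"
annual_building = plant_cost / 30   # site-built plant classed as a "building"

print(f"Equipment (7 yr):  ${annual_equipment:,.0f}/yr")
print(f"Building (30 yr):  ${annual_building:,.0f}/yr")
```

Under these assumptions the equipment classification lets the owner write off the plant more than four times faster each year, which is the tax-timing benefit the equipment classification provides.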

There are also financing, loan, and lease advantages. Again, since a modular plant is accounted for as a piece of equipment rather than a building, as a site-built plant would be, it is treated differently. Financing or leasing a piece of equipment can command a lower interest rate because the financing entity can more easily quantify its scope and the liabilities associated with its boundaries and, more importantly, can more easily exercise a lien on it in case of delinquency. This is much more complicated with a building, which is why the finance charges are typically higher.

The same also goes for insurance premiums. Because most of the assembly work in a modular design is under one source and is done in a controlled factory environment, that insurance liability is no longer within the scope of the owner/contractor. Once the modular plant is installed on site, the insurance premium is again lower because it is equipment-type insurance rather than building-type insurance.

Figure 12. Data centers carbon emissions vs. countries (Mt CO2 p.a.)


With the widespread adoption of cloud computing and software-as-a-service (SaaS), virtualization seems to be the way of the future. Virtualization refers to a concept in which access to a single underlying piece of hardware, like a server, is coordinated so that multiple guest operating systems can share that single piece of hardware, with no guest operating system being aware that it is actually sharing anything at all, as defined by B. Golden in Virtualization for Dummies. Virtualization can occur at the application, desktop, storage, or server level.

Figure 13. Total cooling plant cost comparison.


IT data centers are increasingly looking to virtualization technologies and techniques to consolidate servers and storage resources across many environments to boost resource utilization and contain costs. The primary focus of enabling virtualization across different IT resources is to boost overall effectiveness while improving application service delivery (performance, availability, responsiveness, security) to sustain business growth in an economical and environmentally friendly manner. Some of the main challenges facing data centers in this setup are managing power, cooling, floor space, and environmental issues (PCEE) and scaling existing and new applications stably and cost-effectively, as described by G. Schulz in The Green and Virtual Data Center. A modular design that matches the IT side with the cooling-infrastructure side plays extremely well in such circumstances. Rebates or incentives may also be available for such measures.

Figure 14. Cooling plant cost comparison per ton of cooling.


Like everything else, containerized modular cooling design has its limitations. There are cases where going with the traditional site-built concept makes more sense. One example is the expansion of an existing data center housed in a tight urban location. In this case, the chiller plant may be housed in the basement, and it may make more sense to add an additional chiller or replace the existing ones rather than scrapping the entire plant. Generally, a containerized modular cooling system also requires outdoor space (unless it is an indoor skid installation), so if there is not enough outdoor space, a traditional indoor cooling plant may be the most viable option.

Placing the containers on the roof of a building may also be a structural challenge if the roof cannot withstand the added weight of the containerized cooling modules.

As the economic analysis has also shown, the law of diminishing returns comes into play when the size of the cooling plant exceeds certain limits. Plants larger than 5,000 tons (17.6 MW) that do not require phasing may not be the best candidates for such modular infrastructure design either.