Figure 1. A deceptively simple facility hosts eight data center containers.

Since the first data center, designers, builders, and owner/operators have been dealing with boxes (units) as a way of containing a system or a process made up of multiple modules having specific purposes. The containerized data center, the latest incarnation of the basic unit, should come as no surprise.

Over the last decade, IT departments and data center designers have accused hardware manufacturers of building hardware that data centers could not support. Of course, a multitude of solutions (and vendors) appeared to solve this problem. Higher voltage to the rack, dc power distribution, in-row cooling (top, bottom, sides and rear of the rack), centralized cooling fans, and more all came about in the last decade as solutions to the challenges of higher-density computing equipment.

Now manufacturers are taking a proactive approach to providing an engineered solution that integrates power and cooling with the hardware's environmental requirements. The approach results in a new unit, the containerized or modular data center, where the container or module (the terms are used interchangeably) is manufactured to specific performance criteria to fully support the hardware's environmental needs.

The containers are intended to eliminate the need to adjust perforated floor tiles or under-floor cable dams for airflow. The containers come complete with balanced branch power circuits, ready for easy-to-deploy hardware and software to be put in place. No more months of server deployments.

Just order the box, have it delivered in 6-12 weeks, make the main power, cooling and communications connections, and go live.  On day two, when the new servers show up on the loading dock, one work order can get them delivered to the container, mounted in an open rack, and turned on. No more waiting for facilities to run those redundant circuits you forgot (or didn’t know) to order. On paper, it could not be any closer to Plug & Play than this.

The reality, of course, is a little messier.

At least seven manufacturers have already offered containers as standard products, and more are following, creating a burgeoning data center containerization industry (DCCI).

The containers simplify much of what is time consuming about data center construction or refresh.

History and Perception

Early DCCI offerings are hampered by the perception that the DCCI solution is a disaster recovery (DR) solution or an answer to the unique needs of Internet-class data centers. The DR association may have been driven home by the extensive marketing campaigns of early vendors. With catchy names like APC’s “Data Center on Wheels,” this single-function use was reinforced for years, with few ever considering or marketing DCCI for anything else.

Then Microsoft and Google began looking for a solution to their unique needs. In many articles and interviews Microsoft shared much about the challenges and solutions they faced building, populating, and starting up hundreds of thousands of square feet of data centers per year along with deploying tens of thousands of servers per month.

Theirs was a logistics nightmare. How big of an army of technicians would they need? How many years after construction would it take to populate the data center? Even shipping fully populated racks would take too long. They needed a better way.

As a result, the big Internet-class owners saw the benefits of pre-populated computer racks and took the next logical step by simply changing the basic unit from cabinets to containers.

Given the economic times and the need for creative solutions, DCCI is migrating from the DR application to more general use.

Three Basic Designs

Data centers are no longer corporate show pieces. They are now functional industrial tools that drive a company’s profitability. For many, the data center no longer just drives the factory floor, it is the factory floor. These facilities need a data center that is reliable, rugged, secure, and readily expandable as well as economical to build and operate.

The vendor-manufactured/populated containers or modular data centers currently come in two basic sizes:
  • a 20-ft long container
  • a 40-ft long container

Some vendors offer additional infrastructure support containers that house UPS, chillers and/or emergency generators to complete the package. These support containers are usually OEM or can be secured from any one of the numerous vendors that have been packaging this support equipment for years.

Within the containers, there are as many power and cooling solutions as there are in more traditional data centers. The containers, however, are engineered to specific and repeatable performance criteria. Each vendor has its own unique approach to power and cooling.

Manufacturers tend to quote a specific power usage effectiveness (PUE) figure for their containers as a way to describe energy use. These PUEs range from 1.25 to 1.6, considerably better than typical traditional data centers. Though these efficiencies are appealing, each vendor defines its PUE differently, so the figures are not directly comparable.
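The PUE metric itself is simple: total facility power divided by the power delivered to the IT equipment. The sketch below illustrates how the quoted range maps to cooling and distribution overhead; the wattage figures are hypothetical, not vendor measurements.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt entering the facility reaches
    the IT load; anything above 1.0 is cooling/distribution overhead.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical container: 500 kW of IT load plus 125 kW of overhead
# lands at the low end of the quoted 1.25-1.6 range.
print(pue(500 + 125, 500))  # 1.25

# The same IT load with 300 kW of overhead hits the high end.
print(pue(500 + 300, 500))  # 1.6
```

Because vendors may draw the "total facility" boundary differently (e.g., including or excluding an external chiller plant), two containers quoting the same PUE can have different real-world energy use.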

View inside a single Sun container.

DCCI Building Design

Just as in more familiar facilities, DCCIs still need security and accessibility in poor weather, and they must provide high levels of reliability. Three basic DCCI designs meet these requirements.

The terminal design, the least-cost concept, offers the simplicity and flexibility of a trucking terminal. It is simply a one-story building (see Figure 1) with bay after bay of loading docks. Containers can be installed on one or both sides of the terminal, and the terminal’s capacity is a function of its length.

Outside, the containers are placed on a series of rails that allow alignment with the overhead-type terminal doors and provide a means of securing the containers in place. An inexpensive shed or fabric roof could be installed over the containers for shade from the sun and extra weatherproofing.

For support infrastructure, the terminal is the most flexible of the designs. It can be set up for a centralized power/cooling plant, or each container could have dedicated UPS, chiller, and/or generator containers stacked on top of it. With the support equipment stacked this way, the IT department could keep the primary online systems in one container with full back-up, while the adjacent container is populated with test and development equipment or archived data protected with minimal power conditioning and no cooling redundancy or generator backup. If properly thought out, installing only what is needed today can improve the data center’s PUE by double-digit percentages, and expansion takes only a few weeks.

The terminal building can be a simple concrete-block structure, or it can be made more elaborate or sturdier to protect against hurricanes, tornadoes, or earthquakes as needed.

The furniture warehouse design is planned around 50-ft tall, industrial-grade shelving and narrow aisles. It is an industrial version of what you typically see at a big-box home store, where pallets of product are stacked on shelves that reach the roof. The DCCI version uses supersized, high-strength shelves that accept 60,000-pound containers. Scissor lifts slide each container onto its shelf, much the way baggage containers are handled when loading jumbo jets.

Each level of shelving has concrete catwalks to provide access to every container. An elevator services each level. A service platform between each container provides access to the rear doors of the container as well as the connections for power, chilled water, and communications. 

The open shelving provides plenty of opportunities to run the power/communications conduits as well as the chilled water piping as needed.

The mail slot (single side) or hotel (both sides) design offers the greatest density and simplifies accessibility. Simply put, it is a stacked terminal design that is all indoors. Each container is inserted at one end of the shelf and slid to a service platform at the other end of the shelf. This allows for the tightest nesting of the containers and creates one continuous service platform on each level.

But why stop at a single-sided service platform? The typical interstate hotel has long hallways with rooms on both sides. The same concept can be applied to DCCI containers.


In early and current advertising, HP describes its 40-ft POD as having capacity equivalent to a traditional 4,000 sq ft data center. Using that figure as a conversion metric, a hypothetical 100-ft by 100-ft by 50-ft (l by w by h) clear-span building on grade with a mail slot design holds containers four high by ten long, yielding 40 containers, or the equivalent of 160,000 sq ft of traditional white space on a 10,000 sq ft footprint. The hotel design yields the equivalent of 320,000 sq ft of white space in a building less than 20,000 sq ft in size. The net result is 150,000 to 300,000 sq ft of savings in foundations, roofing, and structure alone.
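The arithmetic above can be laid out step by step. The only input is HP's advertised equivalence of one 40-ft POD to 4,000 sq ft of traditional white space (a vendor claim, not an independent measurement); everything else follows from the hypothetical building dimensions.

```python
# Vendor claim: one 40-ft HP POD ~ 4,000 sq ft of traditional white space.
POD_EQUIV_SQFT = 4_000

# Hypothetical 100-ft x 100-ft clear-span building on grade.
building_footprint_sqft = 100 * 100

# Mail slot (single-sided): containers stacked 4 high, 10 long.
mail_slot_containers = 4 * 10                                    # 40
mail_slot_white_space = mail_slot_containers * POD_EQUIV_SQFT    # 160,000

# Hotel (double-sided) doubles the count on a <20,000 sq ft footprint.
hotel_white_space = 2 * mail_slot_white_space                    # 320,000

# Net structural savings versus building the equivalent white space
# conventionally (footprints of 10,000 and 20,000 sq ft respectively).
print(mail_slot_white_space - 10_000)   # 150000
print(hotel_white_space - 20_000)       # 300000
```

The 150,000 to 300,000 sq ft savings quoted in the text are exactly these two differences.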

Other facility benefits include:
  • More easily assimilates into urban environments where land is at a premium but power and communications services tend to be more readily available.
  • Lends itself to highly efficient central plant UPS and chiller systems
  • Significantly reduces power distribution and concomitant capital expense (cap ex) charges
  • Significantly reduces chilled water pipe distribution (more cap ex savings)
  • Uses non-welded pipe as all field piping is outside the sealed box (more cap ex savings)
  • Requires less volume for gaseous fire suppression (more cap ex savings plus operating expense savings)
  • Avoids need for pre-action sprinklers (more cap ex savings)


In today’s economy money is tight, so CFOs and CEOs are looking for financial options. For years the industry has sought ways to better match capital investment with actual compute needs. While DCCI is not a perfect solution it can go a long way towards improving both capital and operating budgets.

The containerized solution converts many data center capital costs into removable, re-usable assets classified as personal property, making them eligible for rapid depreciation in line with that of the computer equipment they house.

Equally important is that DCCIs are now an attractive new product for leasing companies. A 3-5 year lease on a fully loaded container would be no different from leasing a mainframe or a car with all the options. Once CIOs recognize this, they will quickly learn that they can almost guarantee a technology refresh at the end of each lease term.

The costs of the containers vary widely depending upon the server technology. It is generally accepted that the containers are running between $550k and $650k without the electronics. These numbers can be expected to firm up as more products hit the marketplace, but as with any technology purchase the final prices can vary widely due to market conditions and the vendor’s desire to land (or keep) an account.

Expanded Internal Designs

Many of the containers on the market have been built around homogeneous server technology, but as independent container suppliers come into the marketplace, more designs accept a mix of technologies, even mainframes. These expanded options will no doubt be attractive to CIOs who want to avoid being locked into a vendor-proprietary solution.

DCCI is evolving rapidly. As it does, enhanced facility designs beyond the three concepts outlined here will emerge. At the same time, internal container designs are coming with increasingly firm performance guarantees.