Above: Container solutions typically have the same footprint.


 

In his “Zinc Whiskers” column in the January/February edition of Mission Critical, Bruce Myatt discussed the recent history of containerized-modular data centers (C-MDC), the major players in the industry, the benefits of standardization, and the industry skeptics who remain unconvinced. Elsewhere in this issue (see p. 32), Ramez Naguib of Harsco Industrial talks about modular cooling plants, the ability to more confidently match capacity to loads, and some of the financial benefits of going C-MDC. Both industry experts discuss the growing number of applications for C-MDC solutions.

Some of the factors that explain the growing popularity of C-MDC include quality control, increased efficiency, superior monitoring, cost, and schedule. It’s apparent that C-MDC solutions will be part of the landscape, at least for the immediate future, and it’s equally obvious that data center owners/operators must know what to look for when considering a C-MDC solution.

Ever since the first data centers were built, the challenge of coordinating multiple trades, facility groups, and IT groups has hampered quality control and speed to delivery. Making matters worse, the trade and facility groups often lacked understanding of the technology, and IT did not understand power and cooling.

Much has changed in the last two decades. Increasingly, IT and facilities have developed a better understanding of today’s technology needs, in many cases evolving into a single group. C-MDC is the next step in this evolution, as IT containers or modules can be delivered complete (single sourced) with power, cooling, communications cabling, hardware, and software, all built in a factory-controlled environment. Gone are the days of split responsibilities, confused integration, and uncertain performance expectations. Factory-built systems have the advantages of repeatability, proven performance metrics, standardization of design sections, and significantly less field labor.
 

Above: Different module shapes and configurations have emerged as the industry gains more experience with C-MDC.

 

With both the cost of and demand for power increasing, energy efficiency is all the rage in data centers today. Manufacturers are providing repeatable, tested performance metrics on their C-MDC products so that IT and management can reliably calculate their PUEs and carbon footprints and predict how those figures will change. By contrast, in many traditional data centers, operators are still figuring out where to measure performance and what system to install to record it.
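
For reference, PUE is simply total facility power divided by IT equipment power. The sketch below shows that arithmetic; the metered values are hypothetical examples, not figures from any vendor.

```python
# PUE = total facility power / IT equipment power.
# All values below are hypothetical examples, not vendor data.

it_load_kw = 400.0        # power delivered to the IT equipment
cooling_kw = 120.0        # mechanical plant serving the container
losses_kw = 30.0          # UPS, distribution, and lighting losses

total_facility_kw = it_load_kw + cooling_kw + losses_kw
pue = total_facility_kw / it_load_kw

print(f"Total facility load: {total_facility_kw:.0f} kW")   # 550 kW
print(f"PUE: {pue:.2f}")                                    # about 1.38
```

The value of factory-tested metrics is that these inputs are metered at known points, so the calculation is repeatable rather than estimated.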

Over the years, the industry has spent millions on monitoring. Often, data center monitoring is not addressed until the design is done, and even then it is usually “value engineered” later to bring the project in on budget. Further, there is always great debate as to what to monitor, how to monitor it, and even whether anyone will ever look at the data. Historically these monitoring systems have run under vendor-proprietary software and only recently have become more cross-vendor friendly. Most C-MDC vendors offer a package complete with hardware, software, and graphic displays that present real-time values for every operating component. Their systems are fully compatible with all the major IT and facilities systems, are IP addressable, and provide live operating metrics like PUE and carbon footprint.

There is much debate about the cost effectiveness of C-MDC. How expensive are they compared to a traditional data center? How does one compare the cost of a C-MDC solution to a more traditional solution?

Some argue that C-MDC more closely matches costs with immediate needs; that is, all the expansion space does not have to be built day one. C-MDC units can be added as needed in the future. So unrealized projected growth doesn’t cost anything. The traditionalists, however, counter that in the end C-MDC costs more, compared to a traditional data center that is rapidly populated.

Technology continues to change. A C-MDC unit rated for 5 kilowatts per cabinet (kW/cabinet) might suffice today until an enterprise needs to upgrade to 15-kW/cabinet units in five years, deferring the purchase of high-powered infrastructure until that time and avoiding five years of cost associated with excess capacity. Predictions about capacity needs in data centers never seem to be adequate. What can be stated about C-MDC costs is that they have come down dramatically as production and competition ramp up.
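
One way to put a number on that deferral is sketched below with assumed figures; the capital cost and discount rate are placeholders, not sourced from any project. It compares spending on the 15-kW/cabinet buildout today against the present value of making the same spend five years later.

```python
# Illustrative only: capital cost and discount rate are assumptions, not sourced.
upgrade_capex = 2_000_000      # hypothetical cost of the 15-kW/cabinet buildout
discount_rate = 0.08           # hypothetical cost of capital
deferral_years = 5

# Present value of making the same spend five years from now.
deferred_pv = upgrade_capex / (1 + discount_rate) ** deferral_years
value_of_deferral = upgrade_capex - deferred_pv

print(f"Spend today (capex):        ${upgrade_capex:,.0f}")
print(f"Same spend in 5 years (PV): ${deferred_pv:,.0f}")
print(f"Value of deferral:          ${value_of_deferral:,.0f}")
```

The operating cost of carrying unused high-density capacity for those five years would add to this figure.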

Schedule is where C-MDC usually shines, particularly when considering total schedule and not just the construction schedule. A traditional data center cycle sequence includes a planning-and-design cycle, construction, cabling, hardware deployment, software, and data burn-in. The planning-and-design cycle of a C-MDC is shortened because standard products have known repeatable requirements. 
 

In addition to the standard 40-ft length, 10- and 20-ft long containers are also available.

 

How much the construction cycle can be reduced is a function of the site conditions and how much of the support infrastructure is also pre-manufactured and delivered to the site as fully functional packages. The maximum schedule savings come at the tail end of the project. In a traditional build, construction has to be completed and commissioned before cabling can start, which in turn must be finished before equipment, software, and data are sequentially deployed. In the world of C-MDC, however, all or most of the white space, commissioning, hardware, software, and data deployment occur at the factory, in parallel with the construction schedule. This will usually cut the full schedule for an “operational” data center by six months or more. At the May 4, 2011, 7x24 Exchange Houston Chapter meeting, Joseba Calvo, executive vice president of AST Containers, presented a video of the full assembly of a C-MDC with power, cooling, and generation. The assembly took just eight hours.
 

OTHER BENEFITS

C-MDC brings with it many other benefits as well. Bill Mazzetti, Jr., senior vice president at Rosendin Electric, said, “Right now the market has excess capacity to deliver new traditional data center designs but that will change in the next 12 months as the market rebounds.” This capacity issue is especially important in tight labor markets.

Construction of the cooling systems now happens outside the box, so the use of welded chilled water piping is of lesser concern. When installed inside buildings, C-MDC units usually go in lower-cost warehouse space; otherwise they sit on open fenced lots, as the units are secure and weathertight. Financially, leasing companies are taking a growing interest, which will help companies with capital constraints. There are also tax benefits where C-MDC units are considered personal property, which permits rapid depreciation and lower real-estate tax values.

For companies that have relatively homogeneous environments or sub-environments, a technology refresh can be done with relative ease. When refreshing a C-MDC environment, the next generation can be ordered up in a new C-MDC unit and dropped in place. This refresh can be a straight swap-out if the software and data are pre-populated, or the new unit can be placed on-site to run in parallel with the old system while any IT bugs with the upgrade are worked out.
 

CONFUSED MARKETING

Ironically, one of the biggest obstacles the industry has to overcome is its own early marketing. APC’s “Data Center on Wheels” was initially promoted as a disaster recovery solution. Other early introductions were described as disposable solutions for mega sites that could afford the loss of “up to 20 percent” of a unit’s processors before replacing the entire C-MDC unit. Then computer vendors tried to market their containers as a total single-source solution and to lock clients into buying more of the vendor’s hardware. This worked for clients where the vendor was the dominant hardware supplier but not for the marketplace in general. These strong marketing campaigns have all focused on a limited-use vision that many have interpreted as limitations of the C-MDC concept. According to Fred Stack, vice president of marketing at Liebert Precision Cooling, a C-MDC manufacturer, the highest demand continues to be for remote locations, but that is noticeably changing.
 

Modularity allows data center owners to make better use of available space. Here, a protected rooftop space is put to good use.

 

The barrier to further penetration of these solutions is the need to convince the enterprise and colocation environments that C-MDCs work for their day-to-day needs as well. Over the last 18 months there has been steady interest in C-MDCs from universities, government, health care, financial services, and other industries, including colocation operations.

Universities and government have been interested in deploying multiple containers to address the often-competing needs of different departments and organizational silos. Health-care facilities like the size of the containers and the simplified specification/construction process because health-care organizations usually have only limited experience building data centers. In addition, a C-MDC can be specified in a way that helps meet the compliance requirements of many health-care organizations.

Additionally, financial firms have been interested in the rapid depreciation, the built-in technology refresh rates, and the availability of a low-risk solution to the high power and cooling requirements of today’s leading-edge technology.
 

 

Colocation operators like the more customized client solutions that are available and the reduced up-front capital costs: clients tend to own the IT C-MDC unit, while the colocation provider supplies the industrial space and infrastructure to support it.

Lastly, all markets are interested in the rapid deployment that C-MDC provides. There are now colocation operators offering C-MDC services coast to coast. One of the oldest is Dock-IT, with operations in Gainesville and Ashburn, VA. I/O data centers recently announced an 800,000-square-foot facility in New Jersey. On the West Coast, Pelio Modular Data Center is kicking off in Santa Clara, and in Chicago the Grand Avenue Data Center is readying several hundred thousand square feet for C-MDC.

The C-MDC industry is rapidly approaching adolescence, with second, third, and fourth design generations coming to market to address the limitations of earlier generations, usually with a zeal to support ever-higher densities.

Steve Holland, senior vice president at the Grand Avenue Data Center, LLC, has a more practical approach. Grand Avenue’s research shows that most clients are still in the 5-kW/cabinet range, so the company has invested in systems that support 5 kW/cabinet as a baseline but are flexible enough to grow with client needs.
 

 

“Why should a client have to pay for 20-kW/cabinet if they are never going to use it?” says Holland.
 

SELLING C-MDC INTERNALLY

The C-suite is hearing about all these ways of spending less CapEx and having OpEx more closely track corporate needs. Yet even if C-MDC is a C-suite idea, C-level executives still expect their managers to put together a convincing argument to move away from the “glass house” and into an industrial, “utilitarian” data center environment.
 

 

C-MDC has many of the benefits of cloud computing services without giving up control of the process, the security, or the possession of the data. C-MDC, in effect, is data center capacity that can be purchased or leased from a supplier as an operating cost or rapidly depreciable asset and scaled up to meet a rising business need. Conversely, in a declining business environment, C-MDC units can be sold or returned to the leasing company. So while C-MDC yields the financial flexibility desired by the C-suite, it also appeals to middle managers who have to maintain data integrity, operational control of the applications, and the reliability of the process.

C-MDC suppliers can say how much it will cost to add capacity for a new initiative. In a worst-case budgeting exercise, most C-MDC vendors will even provide a total turnkey budget, including the infrastructure components.
 

Sidebar: Some 22 Current Vendors

Hardware Vendors

  • Cisco, DELL, HP, IBM, SUN

Independents

  • APC by Schneider, AST, BladeRoom, Bull, Cirrascale (Verari), COLT, Emerson, I/O data centers, Lee Technologies, Liebert, Nextgen Modular, Pacific Voice & Data (MCIE), PDI, SilverLinings, SGI/Rackable, Telehouse, Telenetix, Wipro


Sidebar: What To Look For

Here is a basic checklist of what to know and look at when shopping for a C-MDC unit:

  • Verify the advertised kilowatts per cabinet against the feeder sizing at full load (see the sketch after this checklist)
  • Most vendors rate their capacity assuming single-sourced internal distribution
  • For “A” and “B” sources to the rack, most de-rate their stated electrical capacity by 50 percent
  • Watch the voltage ratings
  • HVAC: the options are too numerous to compare in general, so ask what works for your site
    • Chilled water or refrigerant?
    • Condensation
    • Piping
    • Temperature differentials
    • Location and size of entry points
  • Climate/weather: exterior insulation prevents condensation and heat gain
  • Vendor neutrality: does it support non-standard equipment cabinets?
  • Replacement of internal components without affecting operations
  • Continuing services: does the supplier offer maintenance services?
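
As a worked example of the first three checks, the sketch below back-calculates deliverable kW per cabinet from the feeder rating and then applies the 50 percent de-rating for dual “A” and “B” sources. Every number in it (feeder voltage, amperage, power factor, cabinet count) is a hypothetical placeholder, not a vendor specification.

```python
# Sanity-check an advertised kW/cabinet figure against the feeder sizing.
# All electrical values below are hypothetical placeholders, not vendor data.

def feeder_kw(volts: float, amps: float, power_factor: float = 0.95,
              phases: int = 3) -> float:
    """Usable kW from one feeder at full load (three-phase by default)."""
    if phases == 3:
        return (3 ** 0.5) * volts * amps * power_factor / 1000.0
    return volts * amps * power_factor / 1000.0

# Example: a 480 V, 400 A three-phase feeder serving a 20-cabinet container.
total_kw = feeder_kw(volts=480, amps=400)
cabinets = 20

single_sourced = total_kw / cabinets      # capacity rated on one feeder
dual_sourced = single_sourced / 2         # "A" and "B" sources, 50% de-rated

print(f"Feeder capacity:            {total_kw:.0f} kW")       # ~316 kW
print(f"Single-sourced per cabinet: {single_sourced:.1f} kW")  # ~15.8 kW
print(f"A/B-sourced per cabinet:    {dual_sourced:.1f} kW")    # ~7.9 kW
```

If the advertised figure is higher than the dual-sourced number, the rating almost certainly assumes single-sourced distribution.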


Sidebar: Feature Set Decisions

  • IT Hardware
    • Fewer vendors are restricting what IT hardware they will install
    • Cabinet size (hardware limitations)
  • IT Cabling
    • Pre-installed network cabling and/or switch
    • Cabling capacity
  • Access
    • Staff accessibility
    • Weather-related issues (snow/rain/wind)
  • Maintenance Access
    • HVAC
    • Electrical
    • Similar systems (fire, monitoring, lighting, etc.)