In the past I have written about a myriad of topics and have participated on the speaking circuit discussing “The Cloud has Walls,” “The New Role of the CIO,” and total cost of ownership (TCO). In 2017 I am focusing on third generation (3G) data centers and how they support the Internet of Things (IoT), hyperscale, and data center-as-a-service (DCaaS). As technology has evolved and grown dramatically within the last two years, the 3G data center also needs to adapt, supporting everything from rack-level installations to large hyperscale installations.
In recent publications, I described G1 data centers as the initial colocation facilities that were created around supporting a small cluster of racks. This type of colocation data center dates back to the late ’90s. Then came G2 data centers, which focused on the wholesale product, offering larger installations built to the PDU level, in which tenants then managed their own white space environment. G3 data centers of today need to accommodate both rack-level tenants supporting IoT on a DCaaS level, and have 3 MW+ of inventory to support large hyperscale compute tenants.
TODAY’S HYPERSCALE TENANT
First, hyperscale tenants should not be confused with high performance compute (HPC) tenants supporting exascale processing. The two differ in several ways: the HPC tenant does not typically require UPS or generator backup, whereas the hyperscale tenant does, and their processing needs are very different. Some of the key components of the hyperscale user include (but are not limited to):
Very large central campuses, supporting from 24 MW up to as much as 240 MW of utility power.
Scalability. Hyperscale tenants need to scale in blocks of 1.5 MW to 3 MW at a minimum, deployed within a four- to six-month period (built to rack ready).
Hyperscale tenants require a robust and redundant network.
Hyperscale tenants typically serve their internal users as DCaaS and commonly back that service with an internal SLA.
Some hyperscale tenants utilize a central processing campus in conjunction with a “data center cache” scenario, in which smaller cache data centers are located near population centers.
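As a rough sketch of the scaling arithmetic above (the MW and timeline figures come from this list; the parallel-build parameter is a hypothetical knob, not something specified here):

```python
# Back-of-the-envelope sketch (illustrative only) of the build-out
# arithmetic implied by the figures above: campuses of 24-240 MW,
# deployment blocks of 1.5-3 MW, built to rack ready in 4-6 months.
import math

def blocks_needed(campus_mw: float, block_mw: float) -> int:
    """Number of deployment blocks required to fill a campus, rounded up."""
    return math.ceil(campus_mw / block_mw)

def months_to_fill(campus_mw: float, block_mw: float,
                   months_per_block: float, parallel_builds: int = 1) -> float:
    """Months to fully build out the campus when `parallel_builds`
    blocks can be constructed simultaneously."""
    waves = math.ceil(blocks_needed(campus_mw, block_mw) / parallel_builds)
    return waves * months_per_block

# A 24 MW campus in 3 MW blocks is 8 blocks; at 6 months per block
# with two blocks built in parallel, the campus fills in 24 months.
print(blocks_needed(24, 3))         # 8
print(months_to_fill(24, 3, 6, 2))  # 24
```

Even this toy model shows why speed-to-market matters: at the top end of the range (240 MW in 1.5 MW blocks), sequential construction is simply not an option.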
SO HOW DOES TODAY’S COLO/WHOLESALE PROVIDER SUPPORT HYPERSCALE?
In many cases it’s not feasible to build out 36 MW of data center day one, or even to have 6 MW of built inventory day one to support a “prospective tenant.” Therefore, today’s colo/wholesale provider needs to develop a plan/program that supports hyperscale while still reducing stranded capital. Here are some of the ways the colo/wholesale provider can begin to support a rising hyperscale market.
Develop a speed-to-market supply chain. One of the most competitive aspects of winning and retaining hyperscale tenants is the ability to build quickly. A supply chain program that creates a managed inventory, either internally or through a vendor, can cut construction time by two to three months in many cases.
Utility power. Utilities act slowly. For a hyperscale tenant to seriously consider a prospective provider, that provider’s campus must already have a substantial amount of utility power at the site, meaning at least 24 MW day one. This may be a substantial investment, but it is one that is required to actively market to hyperscale tenants.
Hyperscale program. A planned program and master plan that includes scalability, speed-to-market procurement, and aggressive construction schedules.
Land. A colo/wholesale provider must have land to show master planned expansion capabilities.
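To make the stranded-capital trade-off concrete, here is a minimal, hypothetical model; the quarterly demand numbers and the build-one-block-ahead policy are assumptions for illustration, not figures from this article. It compares a speculative 24 MW day-one build against pacing 3 MW blocks just ahead of contracted demand.

```python
# Illustrative sketch (assumed lease-up numbers, not the author's data):
# stranded capacity under a speculative day-one build versus a
# demand-paced build in 3 MW blocks.

def stranded_mw(built_mw: float, leased_mw: float) -> float:
    """Capacity built but not yet under contract."""
    return max(built_mw - leased_mw, 0)

# Hypothetical MW under contract at each quarter of lease-up.
demand = [0, 3, 6, 12, 18, 24]

# Speculative: 24 MW built day one.
spec = [stranded_mw(24, d) for d in demand]

# Demand-paced: stay exactly one 3 MW block ahead of contracted demand.
paced = [stranded_mw(d + 3, d) for d in demand]

print(spec)   # [24, 21, 18, 12, 6, 0]
print(paced)  # [3, 3, 3, 3, 3, 3]
```

Under these assumed numbers, the speculative build strands up to 24 MW of capital early in the lease-up, while the paced build never strands more than one block, which is exactly why the speed-to-market supply chain above is the enabler for the paced approach.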
DON’T FORGET ABOUT IOT
While many colo/wholesale providers are focused on hyperscale, IoT is becoming more and more prevalent and can be supported by the colo provider as built today. IoT processing typically supports “apps” tied to physical products. As technology integrates IP into new products, the processing footprint can be as small as a two-rack installation and as large as 1.5 MW. Supporting IoT often requires “hands on” DCaaS, which also includes disaster recovery-as-a-service (DRaaS). Managed services are a profitable means to support IoT, and rarely play into a hyperscale scenario. Having a broad network within the portfolio also helps today’s IoT users.
THE FORK IN THE ROAD
Hyperscale requirements and IoT requirements are very different in scale. Supporting them calls for dramatically different capabilities and business processes that do not fit a common program. G1 colocation companies are pretty well set up with broad networks to accommodate IoT, but often have older infrastructure that needs updating. G2 data centers are geared towards bigger installations (including on-site utility power) but need to invest heavily in speculative data center build-outs (which in the past got REITs in trouble). Therefore, for the G1 owner, there is a big shift in offering (and in the infrastructure needed) to support hyperscale tenants. For the G2 owner, the question becomes how to support the smaller enterprise IoT tenants. In 2017 we will begin to see the separation of the two models, or acquisitions that drive towards either solution.