Has there ever been a better time to be an occupant of a mission critical environment? With the evolving wholesale, retail, and cloud landscape in today’s colocation world, occupants can take advantage of the last 20 years of aggressive innovation and best practices in ways they could never achieve in their own data center spaces. And they have more choices than ever about how to organize and operate their technology.

There are many aspects of a technology strategy that shape your overall plan and direction, but arguably none is more important than a thorough understanding of the market economics at play and how those dynamics will support your business case. So before you prepare your annual technology strategy and your next business case, ask yourself a few questions.

Are you in search of the perfect-world solution: extreme uptime and high-performance, proven environments, with a business partner you trust to understand your business needs? Are you really in touch with what’s driving your technology solutions and what goals must be achieved to support the success of your corporate strategy?

This article sets out to compare and contrast factors in today’s colocation and cloud marketplace that may very well shape your technology team’s execution strategy this year.



Globally, over the past two years, cloud service providers (CSPs) have largely replaced colocation providers1 as the largest consumers of multi-tenant wholesale data-center space in core data-center markets.2 In the U.S., absorption in 2016 was on pace with the previous year’s record occupancy gains, though a large portion of this activity dates back several quarters to pre-leasing of yet-to-be-completed projects. Leasing volume showed signs of slowing in the second half of 2016, which sets the sector up for potential oversupply in 2017.

There will be a lot riding on the accuracy of hyperscale CSPs’3 internal forecasts of customer demand, and on how they have provisioned capacity against those projections. With most hyperscale providers leasing space while also building their own facilities, there is a risk that they will pivot away from multi-tenant leasing in the near term if their demand forecasts prove to have been too high. If projections prove accurate, subdued leasing volume will likely result in less absorption than the past two years’ historically high levels. With added scrutiny from investors who are new to evaluating the fundamentals of the asset class and data center real estate investment trusts (REITs), there is a risk that such a slowdown would be perceived as a pullback, potentially affecting access to capital and investment opportunities.

“For enterprise users evaluating their data center requirements in 2017, the evolution of pricing models and contract terms should be a key consideration in any strategic decision. Rental rates may have stabilized, but they are as low as they’ve ever been. More importantly, pricing is no longer simply a function of rent-based space and power; tenants’ ability to incorporate flexible contract options into their data center spend — which could include cloud, additional services, and even the ability to expand or contract their footprint — will prove to be a significant opportunity for occupiers,” said Pat Lynch, managing director, CBRE Data Center Solutions.



In many markets, more speculative data center supply is slated for delivery in 2017 than we have seen for several years. However, several core markets — Silicon Valley, Ashburn/Northern Virginia, and Chicago in particular — are currently extremely supply constrained, with vacancy rates for existing/commissioned capacity ranging from 4% to 6%. While demand from enterprise users is unlikely to match the recent size of CSP deployments, the commissioning of new high-quality facilities will likely help facilitate market activity and increase deal flow in many markets that recently have been supply-constrained or underserved by existing inventory.

In certain markets, the next 12 months will also see an increase in legacy corporate data centers becoming available at or below replacement costs. While these assets don’t pose any imminent supply-side risk to the multi-tenant market, the appetite for them during 2017 could go a long way toward predicting momentum in data center investment trends over the next few years. There is already enormous demand for sale-leaseback opportunities in facilities with in-place tenants and longer lease terms; however, to date, interest in legacy assets in non-strategic locations or with high vacancy has been limited. The effort and cost to re-engineer such highly specialized facilities often prove too high to justify.



One of the biggest wildcards of 2017 is the evolving size and scope of traditional enterprise users’ third-party requirements. For the foreseeable future, corporate demand for computing power and information storage will continue to grow at nearly double-digit rates annually. However, a transformative shift is underway that will have a meaningful impact on demand in the sector.

Historically, enterprise computing and storage needs were satisfied in facilities owned and operated by the user. Today, most enterprise users are shutting down their owned facilities and migrating their requirements to cloud and third-party colocation providers, who can handle considerations like security, compliance standards, and physical proximity and access to network and cloud providers with greater cost efficiency. Typical enterprise demand will likely evolve to require smaller, hybrid solutions that incorporate elements of wholesale and retail data center leasing as well as public and private (on-premise) cloud solutions, and will prove to be a strong, steady growth channel for data center operators going forward.

Demand is also being re-shaped by a growing need among large enterprises and content and cloud providers to locate some IT infrastructure as near as possible to “the edge” of the network, or to end users, in order to reduce latency, better manage data traffic, and provide the best user experiences. As a consequence, interconnectivity is emerging as a key growth engine, and data center operators developing their network/connectivity and cloud services offerings are poised to capture significant demand. Moreover, with the increased adoption of latency-sensitive, data-intensive technologies, running the gamut from mobility (devices, wireless networks, etc.), the Internet of Things (IoT), and content delivery/distribution to future technologies like self-driving cars and augmented/virtual reality applications, well-connected real estate near critical population centers is poised to enjoy above-average demand and pricing.



Despite the perception of slowing demand in the second half of last year, net occupancy gains across major data center markets in the U.S. nearly reached the record highs established in 2015; leasing volume was dominated by a flurry of hyperscale CSP requirements that sometimes exceeded 25 megawatts (MW) each. Additionally, the rapid enterprise adoption of network-dependent technologies — mobile devices and networks, IoT, cloud services, content delivery, etc. — is putting a heightened importance on connectivity and bolstering demand in primary markets.

For last year, net absorption totaled approximately 195 MW across the major data center markets as tracked by CBRE, slightly below the 200+ MW of absorption in 2015. By a significant margin, the largest data center markets in the U.S. continue to grow at the fastest pace, with demand driven largely by latency requirements, access to interconnection points, and proximity to cloud hubs and large population centers.

As Table 1 shows, wholesale rental rates can vary by as much as 50% from geographical market to market, along with the available inventories and vacancy rates in each region. The range within each market reflects the features and benefits offered by each facility, such as location, availability, power density, energy efficiency, future-proofing, flexibility, options to acquire additional space, and related service-level agreements. LEED®, Energy Star, and Uptime Institute (UTI) reliability certifications also influence costs and provide value to users who appreciate industry standards.

More information is available at the CBRE data center web site at https://www.cbre.com/real-estate-services/real-estate-industries/data-center-solutions.



Retail services providers usually offer internet connectivity at many data center locations, often in both major metropolitan areas and geographically diverse “edge” locations. They provide IT features such as network redundancy, bandwidth, and carrier neutrality, as well as application scalability, continuity, and security.

The most competitive retail services providers also offer “Compliance as a Service,” monitoring your operations to ensure that you meet even the most challenging IT regulatory requirements, including IT security. Compliance with SEC, HIPAA, ISO 27001, AICPA, SOC 2, DCC, and PCI requirements is just a sample of what retail services providers support today.

As you can imagine, pricing for retail services can easily double or triple the base rental rate, depending on which services the colocation provider delivers. Rates of $500 to $2,000/rack/month, or more, are not uncommon in a well-established retail services operation.
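To make the pricing dynamic concrete, here is a minimal sketch of a retail colocation cost model. The per-rack base rates mirror the $500 to $2,000/rack/month range cited above; the individual service adders and their names are hypothetical illustrations, not any provider's actual price list.

```python
def monthly_colo_cost(racks, base_rate_per_rack, service_adders=None):
    """Estimate a monthly retail colocation bill.

    base_rate_per_rack: rent-based space and power, in $/rack/month
    service_adders: dict of service name -> $/rack/month surcharge
        (e.g., managed compliance, remote hands, bandwidth commits)
    """
    adders = sum((service_adders or {}).values())
    return racks * (base_rate_per_rack + adders)

# A bare-bones 10-rack deployment at the low end of the quoted range:
basic = monthly_colo_cost(10, 500)  # $5,000/month

# The same footprint with hypothetical managed services layered on,
# which can easily double or triple the effective per-rack rate:
managed = monthly_colo_cost(10, 500, {
    "compliance_monitoring": 400,   # e.g., PCI / SOC 2 reporting
    "remote_hands": 250,
    "bandwidth_commit": 350,
})  # $15,000/month
```

The point of the sketch is that the service stack, not the rent, dominates the bill once managed offerings are added.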



Pricing strategies for cloud services are still in the development phase with as many variables in the mix as in the colocation retail services market. Three different operating models have emerged that allow a cloud user to efficiently utilize the resources needed to achieve their compute and storage goals. They are most commonly referred to by their self-defining names — “public,” “private,” and “hybrid” cloud — and each presents a unique set of challenges to economic analysis. So let’s look at each operating environment and each economic model to compare and better understand the benefits of each.

  • Public cloud is a self-service, on-demand compute and storage resource available to anyone, typically structured in a “pay as you go” model with monthly invoicing for resources provisioned during each billing period, usually metered in hourly increments. Public cloud resources can be increased or decreased at any time, in real time, with the ability to “scale up” resources with (effectively) no limit.

Public cloud pricing models charge users for the number of running Windows and Linux instances, for compute time and data transfer rates, for bandwidth used, and even for idle time, and it all adds up quickly. “Paying as you go,” then, is the only way to survive, but comparing economic models and pricing strategies is a challenge for all but the best cloud analysts.

AWS, IBM, and Microsoft are all becoming more user-friendly and now provide online tools that help you learn to use their clouds more economically on your own. However, each public cloud operator uses its own algorithms to calculate usage, which makes it very difficult to compare costs between clouds. When you are ready to dive into the process, take a look at these websites for starters:

  • Microsoft Azure: http://bit.ly/2jj3JLT

  • Amazon Web Services: http://amzn.to/2xWm3wR

  • IBM Softlayer: https://ibm.co/1N3ROol
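The pay-as-you-go billing described above can be sketched as a simple model. The rates below are hypothetical placeholders, not any provider's actual prices; as noted, each operator uses its own algorithms, which is exactly what makes side-by-side comparison hard.

```python
def public_cloud_monthly_cost(instances, hourly_rate, hours_running,
                              gb_transferred, transfer_rate_per_gb):
    """Sum the two main billing components: instance-hours and data transfer.

    Note that hours_running is billed whether the instance is busy or idle;
    public cloud charges for the reservation, not the utilization.
    """
    compute = instances * hourly_rate * hours_running
    transfer = gb_transferred * transfer_rate_per_gb
    return compute + transfer

# Ten instances left running around the clock for a 30-day month:
always_on = public_cloud_monthly_cost(
    instances=10, hourly_rate=0.10, hours_running=24 * 30,
    gb_transferred=500, transfer_rate_per_gb=0.09)   # $765

# The same fleet shut down outside a 12-hour working window pays half
# the compute portion -- the "managed properly" case in the text:
working_hours = public_cloud_monthly_cost(
    instances=10, hourly_rate=0.10, hours_running=12 * 30,
    gb_transferred=500, transfer_rate_per_gb=0.09)   # $405
```

Even this toy model shows why idle time matters: the bill tracks provisioned hours, not useful work.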

  • Private cloud is a self-service, on-demand compute and storage resource available exclusively to one organization. It is built on dedicated hardware, typically for a fixed monthly cost covering that hardware, but offers no ability to scale beyond it. Private clouds can be built in different ways: in-house/on-premise, or hosted and managed by a third party.

A hosted private cloud built on Open Compute hardware operates within the confines of a user’s compute environment and provides the highest level of control and security. Paired with open source cloud software, it is the most cost-effective solution over time for predictable workloads. However, the private environment greatly limits the speed, capacity, and flexibility of your operations compared to the open environment of a public cloud. Private cloud providers such as Oracle, and communities such as OpenStack, are catching up with advanced middleware, virtualization, and similar tools to make private cloud as user-friendly and efficient as currently possible. See these links to find out more:

  • Oracle Private Cloud: http://bit.ly/2jiZ90g

  • OpenStack: http://bit.ly/21MDOX3

  • Public vs. private cloud “economic models” are fundamentally different because they reflect very different cloud “delivery models.” For the new user, the public cloud is the most expensive environment in which to operate continuously, and very few have the appetite or the budget to run that way. However, if managed properly, the public cloud can be the least expensive way to operate.

In both public and private clouds, we pay money in exchange for the ability to run workloads within the cloud. At the most basic level, it appears that we are purchasing the same thing in either situation, namely computing resources. However, the method by which these resources are allocated and consumed differs between public and private clouds, and that difference has an extraordinary impact on operating and cost efficiency.

It is often said that with public cloud, we pay only for what we use, but that isn’t exactly accurate. More specifically, when we launch a virtual machine we begin paying for it immediately. The amount we pay for that virtual machine is the same hour-by-hour, regardless of how much we actually use the computing resources provided by the particular virtual machine. In essence, we are paying for a reservation (or allocation) of computing capacity and not the usage of that capacity or the performance derived from it.

In a private cloud, we are paying for the full, overall (fixed) capacity of the entire private cloud, regardless of how many virtual machines are provisioned on it, and (similar to public cloud) regardless of how much we utilize those virtual machines. Because the computing, storage, and network hardware are all fully dedicated to the organization using the private cloud, in essence we are paying for the capacity of the private cloud, and for the performance it provides.

The distinction between paying for a reservation or allocation of computing resources in a public cloud vs. paying for specific capacity and its associated performance in a private cloud is an important one. It is this very distinction that enables public cloud providers to maintain gross margins in an otherwise highly commoditized space.
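The reservation-vs.-capacity distinction lends itself to a simple break-even calculation: above what monthly utilization does fixed-cost private capacity undercut hourly public pricing? All numbers below are hypothetical illustrations.

```python
def breakeven_vm_hours(private_monthly_cost, public_rate_per_vm_hour):
    """VM-hours per month above which dedicated private capacity is cheaper
    than paying the public cloud's hourly reservation rate."""
    return private_monthly_cost / public_rate_per_vm_hour

# A hypothetical private cloud costing $20,000/month, fully dedicated,
# vs. an equivalent public cloud VM billed at $0.10/hour:
hours = breakeven_vm_hours(20_000, 0.10)   # 200,000 VM-hours/month

# With 24 * 30 = 720 hours in a month, that works out to roughly 278
# always-on VMs. A steady workload larger than that favors the private
# cloud; a smaller or spikier one favors public reservations.
vm_equivalent = hours / (24 * 30)
```

This is the arithmetic behind the text's claim that the public cloud is the most expensive place to run continuously, yet the cheapest if managed properly.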

  • Hybrid cloud is simply an integrated combination of private and public clouds, with seamless bursting and workload portability between the two, offering more flexibility and the opportunity to operate more efficiently and cost-effectively.

The model that has emerged as the most popular cloud solution is the hybrid cloud, which generally operates within a private cloud hosted by a legacy vendor but pushes work out to a public cloud when needed. This often takes the form of connecting legacy colocation and enterprise data centers with high-speed connections to major public cloud platforms. Data center cloud services providers such as Equinix and Digital Realty are earning an increasing amount of revenue every year by hosting enterprise infrastructure in their data centers with connectivity to AWS, Microsoft Azure, IBM Softlayer, and the Google Cloud Platform.

A good hybrid cloud services provider should offer a multitude of technology-agnostic services to facilitate cost savings, easy provisioning, and application enablement. First, they should help you find a way to push only your short-duration, high-utilization “spiky” workloads to the public cloud to be as cost-effective as possible. This is often most easily achieved with a controlled “flex-spend” budget that lets a user pay as they go, using public or private cloud, against a fixed monthly spend. In general, the cost savings of private cloud over public cloud grow in size, and as a percentage of overall spend, as the user environment scales and migrates from public to hybrid cloud solutions. A services provider should give you a roadmap to efficiently design and deploy your migration to a hybrid cloud, and deliver a fully managed service under your operational expenditure (OPEX) model.
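The flex-spend idea above can be sketched as a toy billing model: steady demand is served by fixed private capacity, and only the spiky overflow bursts to the public cloud. Capacities, rates, and the demand profile below are hypothetical.

```python
def hybrid_monthly_cost(hourly_demand, private_capacity,
                        private_monthly_cost, public_rate_per_vm_hour):
    """Bill fixed private capacity plus metered public-cloud burst.

    hourly_demand: iterable of VM counts needed, one entry per hour
    private_capacity: VMs the dedicated private cloud can run at once
    """
    burst_hours = sum(max(0, need - private_capacity)
                      for need in hourly_demand)
    return private_monthly_cost + burst_hours * public_rate_per_vm_hour

# Demand that sits at 80 VMs but spikes to 150 for four hours a day,
# over a 30-day month:
demand = ([80] * 20 + [150] * 4) * 30
cost = hybrid_monthly_cost(demand, private_capacity=100,
                           private_monthly_cost=15_000,
                           public_rate_per_vm_hour=0.10)
# Burst = 50 VMs * 4 hours * 30 days = 6,000 VM-hours, or $600 on top
# of the $15,000 fixed spend -- far cheaper than sizing the private
# cloud for the peak.
```

The design point is the one the text makes: only the short-duration spikes pay public cloud rates, while the predictable base load rides on fixed private capacity.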



The Critical Facilities RoundTable (CFRT) is a non-profit organization based in Silicon Valley that is dedicated to the discussion and resolution of industry issues regarding mission-critical facilities, their engineering and design, and their maintenance. Please visit our website at www.cfroundtable.org or contact us at 415-748-0515 for more information.



1. Data centers that provide their customers with space, power, cooling, and physical security for server and networking equipment, while connecting them to telecommunications and network service providers.

2. CBRE considers Atlanta, Chicago, Dallas, New York/New Jersey, Northern Virginia, Phoenix, and Silicon Valley to be the core data center markets in the U.S.

3. Hyperscale data centers use a specialized software-based architecture to scale efficiently; expansion is usually just a matter of adding “nodes”— small, inexpensive, off-the-shelf servers.