The data center sector has changed dramatically over the last decade, with hyperscale and platform providers driving the most recent building boom. According to new data published by Synergy Research Group, the number of large data centers operated by hyperscale providers rose by 11% in 2018, reaching 430 by the end of the year; another 132 remain in the pipeline.

What’s driving the surge? The major public cloud service providers increasingly attract the migration of ever-expanding enterprise workloads. The leading social networks have a business model founded on consumers eagerly providing them with troves of invaluable personal data. And the raison d’être of over-the-top providers (OTTs, content providers that distribute streaming media as a standalone product) is filling the bulk of our free time with seemingly infinite libraries of entertainment.

Business at this scale appears to have become not so much an upward trend as an irresistible gravitational force; but with success come challenges. Among the most prominent tests these digital giants face is managing compute, network, and storage capacity effectively and efficiently enough to keep up with surging demand.

Use of hyperscale platforms and services is critical to daily business operations. For example, consider how the multinational enterprise has evolved from siloed approaches to production toward workflows built on new tools and technology platforms that offer increased collaboration. However, with digital transformation comes the on-demand consumer, who will not tolerate even a transient blip in service delivery, an issue that often results from resource and infrastructure constraints.

Equally significant, if not more so, has been the veritable data tsunami that continues to engulf cloud and platform providers every day. To provide a useful reference point: according to IBM, more than 2.5 quintillion bytes of data were created daily in 2016. By estimates provided by IDC, data creation since the dawn of the internet has doubled in size every two years, and by 2025, the amount of data created and copied annually will grow to 163 zettabytes (ZB), where one ZB equals a trillion gigabytes. That equates to ten times the 16.1 ZB of data generated in 2016, just three years ago.
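Those figures can be sanity-checked with a quick back-of-the-envelope calculation. Using only the two data points cited above (16.1 ZB in 2016 and a forecast of 163 ZB by 2025), the implied compound annual growth rate works out to roughly 29% per year:

```python
# Back-of-the-envelope check on the IDC forecast cited above.
zb_2016 = 16.1   # zettabytes created in 2016 (figure from the text)
zb_2025 = 163.0  # forecast annual data creation by 2025 (figure from the text)
years = 2025 - 2016

# Compound annual growth rate implied by those two data points
cagr = (zb_2025 / zb_2016) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 29% per year
```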

As the capacity needs of hyperscale giants, platform and cloud providers, and enterprises using high-density computing continue to swell, they must continually roll out scalable infrastructure that supports their expansion strategically, while simultaneously maintaining the same quality of service expected of them for an ever-larger and more geographically diverse user base.


The unpredictability of future capacity

In the decade leading up to 2016, hyperscalers built their own 100 MW-plus data centers in remote geographies, ideal locations for data center development, complete with relatively affordable green energy, enticing tax incentives, and stable geological and meteorological conditions. These off-the-beaten-path projects took shape in locales such as Prineville, Oregon, and Luleå, Sweden, sites selected as cost-effective havens and desirable core processing locations for data center developers. As demand grew, however, the need quickly arose for data centers closer to major population centers in order to mitigate latency.

Over the subsequent two years, hyperscalers complemented the massive remote build-outs with smaller (but still nothing to sneeze at) 20 to 80 MW availability zones. These were situated at the edge, much closer to the “eyeballs” consuming services in and around major metropolitan areas, setting a new standard for seamless service delivery. Hyperscalers had the stomach for building core facilities in uncrowded areas themselves, but struggled to navigate the tedious regulatory and logistical hurdles presented by major markets. These edge nodes were therefore deployed with multi-tenant data center or wholesale colocation providers. Fast forward to 2019, and hyperscalers continue to expand on both fronts in a just-in-time (JIT) fashion.

Unpredictable usage and growth models have become the norm for many high-growth data center customers. At any given time, they must balance promised availability against the risk of overbuilding. Because future needs are so hard to forecast, some overprovision; conversely, others find themselves unable to get their hands on enough capacity where and when they need it, which serves as a major obstacle to revenue-generating initiatives.

Unfortunately, after the Great Recession, most wholesale colocation providers, who typically need around 18 months to build a new data center, stopped rolling out capacity speculatively. Their stakeholders tightened the purse strings, mandating that they, too, grow only as needed. But there’s a fundamental misalignment between buyer and seller: a build-to-suit project for a wholesale colocation company often takes longer than a year, whereas for hyperscalers and other high-growth entities, the acceptable timeframe shrinks to closer to four months.

When thinking of data center constraints, space, power, and cooling likely come to mind. For hyperscalers, though, the most pressing variable is time. All too often, inventory simply cannot be made available by the time they realize they need it. Consequently, hyperscalers must be extremely shrewd when selecting third-party data center partners that can rise to meet massive and unpredictable requirements.


Power and cooling that support tomorrow's workloads

For a quick demonstration of Moore’s Law in action, look no further than a hyperscale data center. These facilities have become impressively dense over time, and it is not uncommon for deployments to require as much as 50 kW per rack. This stands in stark contrast with the low, single-digit power densities drawn by typical users. As these forward-thinking platforms look to deliver Big Data analytics and Fourth Industrial Revolution technologies such as artificial intelligence (AI), virtual reality (VR), and the Internet of Things (IoT), considerably more compute will be packed into each device and, naturally, additional heat will be generated. It is therefore necessary that hyperscalers, cloud and platform providers, and enterprises with high-density computing select a data center provider that can accommodate tomorrow’s power densities while cost-effectively supporting the workloads of today.
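To put those densities in perspective, the cooling burden can be sketched with standard conversions: every watt of IT load must be rejected as roughly 3.412 BTU/hr of heat, and the airflow needed for air cooling scales with heat load divided by the air-side temperature rise. The 20 °F supply/return delta-T below is an illustrative assumption, not a figure from the text:

```python
# Rough heat-load arithmetic for the rack densities discussed above.
# 1 W of IT load dissipates ~3.412 BTU/hr; required airflow in CFM is
# approximately BTU/hr / (1.08 * delta_T_F) for air cooling.

def rack_cooling(rack_kw: float, delta_t_f: float = 20.0):
    """Return (BTU/hr to reject, approximate CFM of airflow) for one rack."""
    watts = rack_kw * 1000
    btu_hr = watts * 3.412                 # heat to reject
    cfm = btu_hr / (1.08 * delta_t_f)      # airflow at the assumed delta-T
    return btu_hr, cfm

# Compare a typical low-density rack with a dense hyperscale rack
for kw in (5, 50):
    btu, cfm = rack_cooling(kw)
    print(f"{kw:>3} kW rack: {btu:,.0f} BTU/hr, ~{cfm:,.0f} CFM")
```

A 50 kW rack demands roughly ten times the airflow of a 5 kW rack at the same delta-T, which is why dense deployments push providers beyond conventional room-level air cooling.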

Not many providers can support high and low power densities in the same environment. However, as hyperscalers expand their product portfolios and the supporting infrastructure becomes more varied, data center providers will have to design sites that can granularly shift power and cooling densities, in real time, on a rack-by-rack basis.

At the end of the day, over a third of a hyperscaler’s wholesale colocation bill can be attributed to variable power usage beyond the fixed rate. Since significantly more cooling will be required to guarantee equipment safety and performance, it is critical that hyperscalers select data center providers employing cutting-edge cooling technologies that use incrementally less power to cool warmer environments.

In the ubiquitously employed metered gross pricing model, variable power usage is multiplied by a contractually derived power usage effectiveness (PUE) metric, which represents the inherent efficiency of the data center. Thus, the more efficient the data center, the more cost-effective the solution for the hyperscaler. A contractual PUE at or below 1.2 (a PUE of 1.2 equates to a 20% uplift on variable power usage) is ideal for deployments of this scale. A provider cannot sustain operations while contracting at such a low PUE unless it can actually deliver with the utmost efficiency.
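The arithmetic of the metered gross model is straightforward: the customer’s metered usage is grossed up by the contractual PUE before the utility rate is applied. A minimal sketch, where the monthly usage and utility rate are hypothetical figures chosen only for illustration:

```python
# Sketch of the metered gross pricing model described above: variable
# (metered) power usage is multiplied by a contractually derived PUE.

def monthly_power_bill(kwh_used: float, rate_per_kwh: float,
                       contractual_pue: float) -> float:
    """Variable power charge under a metered gross model."""
    return kwh_used * rate_per_kwh * contractual_pue

usage = 1_000_000   # kWh metered at the racks in a month (hypothetical)
rate = 0.07         # $/kWh utility rate (hypothetical)

bill_at_1_2 = monthly_power_bill(usage, rate, 1.2)  # PUE 1.2 -> 20% uplift
bill_at_1_5 = monthly_power_bill(usage, rate, 1.5)  # a less efficient site

print(f"PUE 1.2: ${bill_at_1_2:,.0f}   PUE 1.5: ${bill_at_1_5:,.0f}")
```

At these hypothetical numbers, the 0.3 difference in contractual PUE is worth $21,000 every month, which is why the metric deserves scrutiny at contract time.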

Customers often look for a strategic partner in their vendors, a platitude used to imply that the supplier takes ownership of improving the customer’s bottom line. A data center provider is truly a partner when it looks to more creative solutions, beyond standard chilled water or compression technologies, to drive cooling efficiencies. Whatever lowers energy and water consumption ultimately lowers the hyperscaler’s cost burden. Among the technologies hyperscalers ought to look for in a multi-tenant data center is heat rejection at the source.


Flexibility and scalability in every direction

When data centers were populated primarily by back-office applications, consumption patterns were stable and predictable. Now, in a world that streams practically everything, and where everyone aggressively flocks to the latest SaaS platform, game, device, or movie the minute it launches, compute loads are significantly more dynamic, with complex variations and surges.

Beyond a secure environment with uninterrupted power and cooling, hyperscalers ought to seek wholesale colocation providers that will also take the reins on capacity planning. Strategic providers leverage sophisticated data center infrastructure management (DCIM) technology that delivers a comprehensive understanding of compute patterns and projects future capacity needs. These insights significantly help providers prepare for hyperscalers’ often unforeseen requirements.
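As a toy illustration of the kind of projection such DCIM tooling enables, a trend line fitted to recent power draw can estimate when installed capacity runs out. This is a minimal sketch; real platforms model seasonality and per-rack telemetry, and all of the readings and the capacity figure below are hypothetical:

```python
# Minimal capacity-projection sketch: fit a least-squares trend to recent
# monthly power draw and estimate months until installed capacity is hit.

def months_until_full(monthly_kw: list, capacity_kw: float) -> float:
    """Months of headroom left after the most recent reading."""
    n = len(monthly_kw)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(monthly_kw) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_kw))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    month_at_capacity = (capacity_kw - intercept) / slope
    return month_at_capacity - (n - 1)   # months beyond the last reading

readings = [3200, 3350, 3540, 3700, 3900, 4080]  # kW drawn, last six months
print(f"~{months_until_full(readings, 6000):.0f} months of headroom left")
```

Even a crude linear projection like this makes the lead-time problem concrete: if headroom runs out in under a year while a conventional build takes 18 months, capacity must be ordered well before it is visibly needed.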

Since the need for expansion will likely come in short order — considering the ever-increasing volatility in compute patterns — it’s only prudent that data center providers seize the efficiencies obtainable through prefabrication. By building critical electrical and mechanical rooms in factories off-site, away from active operations, providers can save considerable time when rolling out new infrastructure. Prefabrication also reduces expansion timelines for multi-megawatt deployments from over a year to as little as 16 weeks, right in sync with hyperscale demand. The optimal wholesale provider also establishes a robust supply chain that can deliver rapid infrastructure expansions, and keeps investing in existing sites, adjacent land parcels, and established data center markets.

Looking at hyperscalers’ bimodal approach to data center expansion (giant builds in remote areas contrasted with rapidly deployed edge zones), it’s clear the fundamental challenge is balancing flexibility and scalability while minimizing financial risk. The ideal data center partner has deep financial resources and leadership that does not fear building at scale to ensure availability for the future.

The next-generation hyperscale data center must provide power density profiles and cooling technologies that continually transform to meet spiking capacity demand. Especially as we enter the Fourth Industrial Revolution, the new-era data center provider will not only deliver infrastructure that is future-proofed to accommodate growth, but also offer models that enable hyperscalers to pay for what they use, when they use it.