Beyond its direct costs, underutilization is also inefficient from a growth-management perspective. When data center utilization isn't optimized, enterprises have to build out new capacity sooner than they otherwise would to accommodate growth, at hefty capital cost.

The average data center runs at just 56% of its power capacity, according to a 2015 enterprise data center survey by 451 Research. So, on average, 44% of the power that enterprises are allocated, and pay for, goes unused.

Those are well-known facts. Less well known is what causes underutilization — and what can be done about it.


Overprovisioning As The Rational Response To The Realities Of Data Center Management

Underutilization is caused by overprovisioning — that is, overestimating capacity demand and provisioning based on that overestimate. Yet to date, overprovisioning has been a perfectly rational response to the realities of data center management.


A Lack Of Alignment Between IT And Lines Of Business

At many organizations, the consumers of data center capacity (the lines of business) are far removed from the provider of that capacity (facilities), which in turn is siloed from the procurer of hardware (IT). Lines of business request compute or storage capacity to meet a certain need, IT buys the hardware to fulfill that request, and facilities provisions the data center space and power, all without much communication, collaboration, or shared incentives.

In the past, both IT and facilities were incentivized to do whatever it took to meet the needs of the business. Given the relatively low (and continually declining) cost of IT hardware, it was easier to err on the side of overprovisioning than to risk being unable to meet the business's demand. But as data center capacity becomes increasingly scarce, and power and cooling increasingly expensive, incentives are shifting: data center managers now have to meet the business's needs more efficiently.


A Lack Of Visibility Into Actual Utilization Relative To Capacity

In many data centers, facilities managers simply can't assess true capacity utilization. Even if they wanted to increase average utilization rates, doing so isn't possible without visibility into infrastructure capacity, including critical power and cooling systems, down to the plug level. Newer data centers, of course, come online with the latest DCIM tools, many of which provide capacity visibility. But older data centers have in many cases gone without, given the relatively high cost of bolting a DCIM tool onto the infrastructure of a legacy facility.


A Lack Of Flexibility To Do Anything About It

Even when facilities managers are incentivized to raise the utilization rate and have visibility into actual capacity utilization, many data centers are simply not equipped to deliver new capacity fast enough for them to comfortably run at higher rates of utilization. That requires the ability to 1) predict when new capacity will be needed and 2) deploy that capacity quickly when the time comes.

Facing these realities, most data center managers mitigate the risk of not delivering the capacity the business needs by provisioning according to the hardware’s nameplate rating — that is, its maximum power draw. Even when the data center manager knows that actual utilization will never reach the maximum draw, given the realities of lack of alignment, lack of visibility, and lack of flexibility, nameplate provisioning is really the only way to mitigate risk — not a good way, but often the only way.

But those realities are changing. There’s an increasing awareness of the importance of collaboration between lines of business, IT, and facilities. In new facilities, sophisticated DCIM tools enable visibility and predictive analytics. And new data center models enable timely responsiveness to capacity demands.

At the same time, organizations understand that they're overspending on data center power, cooling, and space. Given that those are the largest operational expenses in a data center, and that they're rising, the pressure is on to be smarter about the data center investment and deliver capacity more efficiently. The ultimate goal is to understand true demand and to maximize the return on capital expense (CAPEX) and operating expense (OPEX) investment in each piece of data center infrastructure.


How The Data Center Can Support Efficient Capacity Planning

When you have rack-level and infrastructure visibility, monitoring, and predictive analytics, and the data center is flexible enough to deliver new capacity on demand, overprovisioning is no longer necessary to mitigate risk. Rather than hedging against worst-case scenarios, data center operators can provision based on actual utilization, keep a close eye on growth, and deploy new capacity as it's needed. There are five prerequisites for a data center to make that possible.

  1. Communication and collaboration. Capacity planning begins with an onboarding process during which the data center manager learns as much as possible about the customer's physical hardware, nameplate assumptions, maximum capacity, historical data, and utilization benchmarks; in general, the methodology the customer uses to determine capacity needs. Often, it's the data center manager's job to help the customer think about those needs differently and set more realistic assumptions.
     
  2. Plug-level monitoring and reporting. The actual-versus-provisioned capacity conversation should be ongoing, with the data center manager comparing actual utilization data to the customer's demand profile. Especially when a charge-back model gives the line of business some skin in the game on operational expenses, showing the customer how low actual usage is relative to what he's paying for can go a long way toward incentivizing higher utilization.

    That kind of dialogue of course depends on the data center operator’s ability to monitor utilization down to the plug level. Monitoring at that level of depth, inexpensively, is essential to the kind of capacity planning we’re talking about. That goes beyond DCIM. The tool has to be able to monitor millions of points at pennies a point. And it has to translate all that information into actionable insight.
     
  3. Predictive analytics. Predictive analytics allows data center operators to be proactive rather than reactive — and gives customers the confidence they need to run at higher levels of utilization. The data center manager and the customer are continuously reassessing actual versus provisioned capacity to optimize utilization.

    Part of that dialogue is risk mitigation. But instead of mitigating risk through nameplate provisioning, data center managers help customers mitigate it by analyzing historical trends, setting thresholds, and creating alerts. When utilization approaches a threshold, an alert is triggered so the data center operator and customer can discuss whether new capacity is needed (a minimal sketch of this follows the list).
     
  4. Plug-and-play infrastructure deployable just in time. Even with plug-level visibility and predictive analytics, enabling higher rates of capacity utilization requires the data center provider to respond quickly when a customer needs new capacity. The supply chain has to be composed of electrical and mechanical system partners equipped to deploy new data center capacity, at any scale, in a fraction of the time traditional providers take. Additionally, the infrastructure has to be plug-and-play; the ability to add new capacity with a simple connection significantly increases the speed at which organizations can bring new applications and systems online.
     
  5. Consumption-based pricing. A data center that enables communication and collaboration, delivers plug-level monitoring, reporting, and predictive analytics, and deploys plug-and-play infrastructure just in time makes a new financial model possible.
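
To make items 2 and 3 concrete before turning to the pricing model, here is a minimal sketch of what plug-level utilization reporting and trend-based threshold alerting might look like. Every reading, capacity figure, and threshold below is hypothetical; a real deployment would pull this data from the monitoring tool rather than from a hard-coded list.

```python
# Minimal sketch of plug-level utilization reporting and threshold-based
# alerting (items 2 and 3). All readings, capacities, and thresholds are
# hypothetical; a real pipeline would pull these from the monitoring tool.
from statistics import mean

# Hypothetical daily power draw for one rack (kW), aggregated from
# plug-level readings.
daily_rack_kw = [3.1, 3.2, 3.4, 3.3, 3.6, 3.8, 3.9, 4.1, 4.2, 4.4]

PROVISIONED_KW = 6.0     # capacity provisioned for this rack (hypothetical)
ALERT_THRESHOLD = 0.80   # alert when draw is projected to reach 80%

def linear_trend(samples):
    """Least-squares slope and intercept over the sample index (days)."""
    xs = list(range(len(samples)))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
             / sum((x - x_bar) ** 2 for x in xs))
    return slope, y_bar - slope * x_bar

# Item 2: report actual vs. provisioned -- the charge-back conversation.
current_kw = daily_rack_kw[-1]
print(f"Current draw: {current_kw:.1f} kW of {PROVISIONED_KW:.1f} kW "
      f"provisioned ({current_kw / PROVISIONED_KW:.0%} utilization)")

# Item 3: project the historical trend and alert before the threshold.
slope, intercept = linear_trend(daily_rack_kw)
threshold_kw = ALERT_THRESHOLD * PROVISIONED_KW
if slope > 0:
    crossing_day = (threshold_kw - intercept) / slope
    days_out = crossing_day - (len(daily_rack_kw) - 1)
    if days_out <= 30:
        print(f"ALERT: projected to reach {ALERT_THRESHOLD:.0%} of "
              f"provisioned capacity in ~{days_out:.0f} days")
```

The shape of the logic is what matters: compare actual draw to provisioned capacity, project the trend, and raise the alert early enough that the capacity conversation happens before the threshold is crossed.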


To see what consumption-based pricing changes, consider the traditional model first. In a traditional model, a customer consuming 300 kW of capacity today and predicting 1 MW of future demand would be forced to commit to deploying the full 1 MW according to a fixed ramp schedule. The problem is that the ramp schedule often doesn't align with the customer's actual capacity needs, and he ends up paying for capacity he's not using. That's no fault of his: given the rate of technological change, it's genuinely hard to predict how much power he'll need in the future. Which applications will be migrated to the cloud? Which will be divested altogether? What new products will the business launch that need capacity? It's extremely hard to predict, yet the traditional model forces the customer to do exactly that.

In the new model, the customer doesn't have to commit to a fixed ramp schedule. He takes, and pays for, capacity only when he needs it. For customers that require contiguous control of capacity, the new model provides flexible control over expansion capacity without forcing them to overcommit. And because the data center operator monitors utilization over time, if utilization doesn't increase as rapidly as the customer initially thought, there's no waste; he doesn't provision the extra capacity until he needs it. He thereby defers not only the operational expense of power and cooling for that extra capacity but also the capital expense of the hardware.
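
A back-of-the-envelope comparison illustrates the difference. The price, ramp schedule, and actual growth below are all hypothetical, and the sketch ignores any per-kW price premium a flexible model might carry; the point is simply that a fixed ramp bills for committed-but-unused capacity while consumption-based pricing tracks actual draw.

```python
# Back-of-the-envelope comparison of a fixed ramp schedule vs.
# consumption-based pricing for the 300 kW -> 1 MW example above.
# The price, ramp steps, and actual growth are all hypothetical.

PRICE_PER_KW_MONTH = 150.0  # hypothetical all-in price, $/kW-month

# Fixed ramp: capacity (kW) the customer must commit to, by contract year.
ramp_commitment_kw = {1: 300, 2: 600, 3: 1000}

# Capacity (kW) the customer actually draws -- growth arrives more slowly
# than the original forecast, as it often does.
actual_draw_kw = {1: 300, 2: 400, 3: 550}

for year in sorted(ramp_commitment_kw):
    committed = ramp_commitment_kw[year]
    used = actual_draw_kw[year]
    fixed_cost = committed * PRICE_PER_KW_MONTH * 12
    consumption_cost = used * PRICE_PER_KW_MONTH * 12
    print(f"Year {year}: fixed ramp ${fixed_cost:,.0f} vs. "
          f"consumption-based ${consumption_cost:,.0f} "
          f"({committed - used} kW committed but unused)")
```

Under these assumptions, the fixed ramp bills the customer for 200 kW of unused capacity in year two and 450 kW in year three; that committed-but-idle capacity is exactly the waste the consumption-based model avoids.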

In short, the new model, made possible by a data center with all five of the capabilities above, provides capacity at a much lower cost than traditional providers while giving customers the flexibility they need to grow.


Bottom Line

As data center operations costs rise, business leaders are more closely scrutinizing how efficiently data center capacity is utilized. Mitigating the risk of capacity constraints is still essential, of course, but it’s no longer the sole focus of capacity management. Data center managers can achieve both — more efficient utilization and risk mitigation — with a data center providing: 1) communication and collaboration; 2) plug-level monitoring and reporting; 3) predictive analytics; 4) plug-and-play infrastructure deployable just in time; and 5) consumption-based pricing.


1. “Corporate Datacenter Trends — The Search for Capacity: Enterprise Facilities Expect Increased Use of Cloud and Colo.” 451 Global Digital Infrastructure Alliance. June 2015.