$9,000 per minute.

That’s what the security research firm Ponemon Institute says the average data center outage now costs, an increase of almost 61% since 2010.

But those are just averages. The reality for different companies varies widely.

A 12-hour outage of Apple’s App Store cost the company $25 million.

A five-hour outage at a Delta Air Lines operations center caused 2,000 canceled flights and an estimated $150 million loss.

A 14-hour outage cost Facebook $90 million.

The industries most vulnerable to data center outages, including banking and finance, government, health care, manufacturing, media and communications, retail, and transportation/utilities, face average downtime costs of more than $5 million per hour.

And those are the big companies with deep pockets that can financially weather such losses.

Average loss estimates for small businesses are lower, ranging from $137 to $427 per minute. But while smaller companies may face smaller losses, those losses can have an even bigger effect on their bottom line.

And revenue loss is not even the biggest risk. According to the Ponemon Institute’s research, the biggest costs are “reputational damage and customer churn,” with revenue losses coming in second.

Preventing Outages and Future-Proofing Data Centers

The top three causes of downtime are:

  1. Power outages (33%)
  2. Network failures (30%)
  3. Software errors (28%)

Now, you cannot prevent lightning strikes or other unexpected weather events, like the one that crippled the Texas power grid this past winter. But you can deploy redundancies, through partners and sustainable energy systems, to minimize or eliminate the risks posed by power outages.

With the risks that foreign and domestic malware and ransomware pose to the public electrical grid, the high likelihood of rising electricity prices, and increasingly erratic weather events driven by climate change, an internal backup energy system that is highly robust, redundant, and, if possible, geographically diversified is essential to avoid the No. 1 driver of data center failures.

The same approach should extend beyond power: cooling systems, data, and the entire facility should all have backups, geographically diversified if possible.

Data centers with fully redundant, geographically diversified, mirrored systems typically experience one-third fewer outages than data centers that do not have them.
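As a toy illustration of the failover logic that such mirroring enables, here is a minimal Python sketch that checks a primary health endpoint and falls back to a geographically separate mirror. The URLs and the health-check convention are placeholders for illustration, not any particular platform’s API.

```python
import urllib.request

# Placeholder endpoints for a primary site and its geographically
# separate mirror -- stand-ins for whatever your services expose.
ENDPOINTS = [
    "https://us-east.example.internal/health",
    "https://eu-west.example.internal/health",
]

def first_healthy(endpoints: list[str]) -> str:
    """Return the first endpoint that answers its health check."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # unreachable or erroring; try the next mirror
    raise RuntimeError("no healthy endpoint available")

if __name__ == "__main__":
    print("routing traffic to:", first_healthy(ENDPOINTS))
```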

Planning for potential weak spots and failures, and constantly testing for them, is an essential and ongoing process too. Not long ago, GitHub suffered an outage during routine physical maintenance of its systems. Fixing the physical problem took just a few minutes, but getting the data correctly synchronized took 24 hours.

Data centers need to “be like water,” as Bruce Lee famously said, and build enough flexibility into their processes to quickly and easily add capacity or move resources to meet changing needs. This flexibility should not add complexity: look for solutions that can automate changes in response to variances within your data center environment, helping you identify and resolve issues in real time.

Having flexible solutions that can adjust to current conditions within your data center will ultimately help you deliver the ongoing performance and reliability you need.
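What might that automation look like in practice? Here is a minimal sketch, assuming a hypothetical metrics endpoint, a temperature threshold, and a placeholder remediation hook; none of these correspond to a specific vendor’s API. The loop polls readings and reacts when one drifts out of bounds.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and thresholds -- placeholders for whatever your
# monitoring stack actually exposes (Prometheus, SNMP, a vendor API, etc.).
METRICS_URL = "http://monitoring.example.internal/api/rack-temps"
MAX_TEMP_C = 27.0          # ASHRAE-recommended upper bound for inlet air
POLL_INTERVAL_SECONDS = 30

def fetch_rack_temps(url: str) -> dict:
    """Fetch current inlet temperatures, keyed by rack ID."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read())

def remediate(rack_id: str, temp_c: float) -> None:
    """Placeholder remediation: in a real deployment this would call
    your DCIM or building-management API and page the on-call team."""
    print(f"[ALERT] {rack_id} inlet at {temp_c:.1f} C; increasing cooling output")

def watch() -> None:
    while True:
        try:
            for rack_id, temp_c in fetch_rack_temps(METRICS_URL).items():
                if temp_c > MAX_TEMP_C:
                    remediate(rack_id, temp_c)
        except OSError as exc:
            # The monitoring path itself can fail; log it and keep polling.
            print(f"[WARN] metrics fetch failed: {exc}")
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    watch()
```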

The future of data centers is also all about their ability to offer increasing levels of speed to their customers. As Apollo Creed said while training Rocky to fight Clubber Lang, “We gotta get speed. Demon speed. We need greasy, fast speed!”

Virtualized environments make it easy to quickly provision storage resources, push updates, execute code, and run applications and hosts. Virtualization supports a wide range of use cases, such as edge computing, virtual desktop infrastructure, and test environments.

With tools such as Hyper-V or VMware, businesses can equip their data centers to offer the speed that future customers will require.
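Hyper-V and VMware each ship their own management tooling; as a rough sketch of what scripted VM provisioning looks like in general, here is an example using the open-source libvirt Python bindings (a different hypervisor stack than the two named above). The domain XML is deliberately abbreviated, and the VM name and disk path are placeholders.

```python
import libvirt  # pip install libvirt-python; manages KVM/QEMU and others

# Minimal domain definition; the name, sizing, and disk path are
# placeholders for illustration only.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def provision_and_start() -> None:
    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    try:
        domain = conn.defineXML(DOMAIN_XML)  # register the VM definition
        domain.create()                      # boot it
        print(f"{domain.name()} running: {bool(domain.isActive())}")
    finally:
        conn.close()

if __name__ == "__main__":
    provision_and_start()
```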

Data centers should have robust security measures in place to protect the privacy and integrity of all of their communications and data. This requires a comprehensive approach that mitigates risk by implementing controls at every step.
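One concrete control among those many steps is encrypting traffic in transit. The sketch below uses Python’s standard ssl module to open a TLS connection with certificate verification and a modern protocol floor; the hostname is a placeholder.

```python
import socket
import ssl

HOST = "internal-api.example.net"  # placeholder service name
PORT = 443

# create_default_context() enables certificate verification and hostname
# checking, and disables protocol versions known to be insecure.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS

with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated:", tls_sock.version())  # e.g. 'TLSv1.3'
```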

Data centers should also embrace interoperability to keep their options open, so they can incorporate new technology and converged infrastructures as they become available. Focusing on solutions that do not close you off from innovation is critical to keeping your data center environment current. Look for solutions that work and adapt well with others, and steer clear of those that can only deliver an optimized experience if you deploy all of “their” infrastructure.

Data centers should encourage carrier fiber diversity. Bringing in as many fiber service providers as possible gives customers flexibility and greatly improves their redundancy options, helping them maintain high availability across their networks as requirements change.

To recap: infrastructure redundancy, constant testing for potential failures, flexibility, speed, robust security measures, interoperability, and carrier fiber diversity are the keys to future-proofing your data center.

If you apply these principles, you should always be able to take advantage of the latest data center trends, wherever they may lead, and let them guide your success in the future.