Those who make a living forecasting technology adoption largely agree that hybrid cloud environments will dominate enterprise IT over the next five years. Research and Markets estimates the hybrid cloud market will grow from $45 billion to $98 billion by 2023. While AWS and Microsoft Azure battle for control of the public cloud, a hybrid environment, mixing public and private cloud compute and storage with virtualized on-prem workloads, will be the new normal.

Infrastructure-as-a-Service is a key driver: it enables enterprises to move workloads from on-prem to the cloud to better allocate resources during peak demand, and it works in both public and private cloud environments, according to Research and Markets.

This continuing shift to a hybrid environment offers data center owners and operators more flexibility in the storage, processing, and distribution of data, but it also poses challenges in meeting performance SLAs and in monitoring an increasingly disparate, decentralized environment.

The Hybrid World: IT Dream or Nightmare?

Alongside the evolution to a multi-compute environment is the sheer volume of new data and applications now flowing through data centers. The growth of this digital infrastructure has made modern IT enormously complex: many moving parts, any of which could fail at any moment. These disparate systems don’t talk to each other, and they fail frequently. Managing them is harder still because many modern systems include components outside the enterprise’s control; just think of a typical cloud set-up.

That makes effective monitoring vital to spotting the early warning signs of trouble, keeping systems up and running, and keeping customers happy. Enterprise standards today are more stringent, driven in part by rigorous compliance regulations such as GDPR and by the punishing costs of data breaches and downtime-causing disruptions. IT professionals now work in a world with very little margin for error.

Speaking of costs, enterprises raise their risk of a costly outage if they try to make do with legacy IT infrastructure or inadequate monitoring systems and processes. Gartner estimates that IT downtime costs $300,000 per hour. An outage at U.S. airline Southwest, caused by a router failure, led to more than 2,000 cancelled flights and an estimated $54 million to $82 million in lost revenue. Beyond the hit to the balance sheet, enterprises suffer customer attrition and diminished value to stakeholders.

Yesterday’s Monitoring Doesn’t Work Today

Data center management today is a far cry from the early days of in-house operations. Third-party and as-a-service providers have taken some of the ‘hands-on’ aspects of IT out of the hands of enterprise staff. That said, the buck still stops on IT’s desk if the response to a potentially disruptive threat is inadequate. The sheer number of workloads to manage and monitor is also vastly larger than in early data centers.

Today, ensuring workflow productivity and mitigating risk goes beyond what even the most talented IT staff can do. Humans simply cannot sufficiently monitor the modern hybrid environment: there are too many applications, multiple clouds and providers, often containers, and big data workloads. The Ponemon Institute reports that 22% of system failures and business downtime can be traced to human error.

Automation needs to be called in to provide the required level of consistent performance. An automated approach to monitoring keeps infrastructure well maintained, drives down cost, and frees the organization to hire skilled people for strategic areas of the business, including driving customer satisfaction and growth.

Outages can occur suddenly and without warning. When they do, it is vital to detect the failure quickly and know which systems are affected. Once the issue is identified, organizations need processes in place to mitigate it rapidly, reducing downtime and lost revenue. Automating these processes is vital to containing a disruptive event before it becomes a costly, widespread outage.
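To make that concrete, here is a minimal Python sketch of automated detection and mitigation: it polls a set of HTTP health endpoints and triggers a remediation hook after repeated failures. The service names, URLs, thresholds, and the remediate() hook are illustrative assumptions, not a reference to any particular monitoring product.

```python
import time
import urllib.request

# Hypothetical health-check endpoints; replace with your own services.
SERVICES = {
    "web-frontend": "http://10.0.0.10:8080/healthz",
    "orders-api":   "http://10.0.0.11:8080/healthz",
}

CHECK_INTERVAL_SECONDS = 30
FAILURE_THRESHOLD = 3  # consecutive failures before remediation fires

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def remediate(service: str) -> None:
    """Placeholder hook: restart a container, fail over, page on-call, etc."""
    print(f"[ALERT] {service} failed repeatedly; triggering remediation")

def monitor() -> None:
    failures = {name: 0 for name in SERVICES}
    while True:
        for name, url in SERVICES.items():
            if is_healthy(url):
                failures[name] = 0
            else:
                failures[name] += 1
                print(f"[WARN] {name} check failed ({failures[name]} in a row)")
                if failures[name] >= FAILURE_THRESHOLD:
                    remediate(name)
                    failures[name] = 0
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor()
```

Even a loop this simple captures the point: detection, impact identification, and mitigation run continuously, without waiting on a human to notice a dashboard.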

Going from Complexity to Consolidation

Enterprise Management Associates analyst research notes that a vast number of organizations run more than 10 different monitoring tools, and that it can take organizations three to six hours to find the source of an IT performance issue. A key contributor to this lag is ‘tool sprawl,’ which has helped create IT silos in which various teams rely on disparate views of monitoring data and cannot find common ground. Related to this, and perhaps even more damaging, tool sprawl lengthens mean time to repair (MTTR) by creating too many data points. It raises the question: who’s on first? The result is missed SLAs and no enterprise-wide alignment on which alerts and applications should be flagged as ‘mission critical.’

Often the tools in use today were designed to monitor only the static, on-premise infrastructure of the past, not the dynamic, cloud- and virtualization-based digital systems of the present. Most enterprises do not have monitoring tools in place that show the current state of their systems and applications in anything close to real time.

Regardless of where data lives, the solution is to eliminate unmanageable complexity and monitor the entire IT infrastructure from a single pane of glass. Enterprises need to shed their collections of tools and consolidate into one monitoring solution designed to support the modern hybrid environment.

By monitoring all aspects of devices and systems, hardware and software, on-premise and in the cloud, organizations have a full picture of system health at all times. Further, when applications span many systems, hardware platforms, and cloud services, it is easier to isolate the cause of a problem when you can correlate multiple data sources in a single repository. Detecting anomalies in the data also becomes much easier.
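As a sketch of how anomaly detection gets easier once metrics from every source land in one repository, the following Python example flags readings that deviate sharply from a metric’s rolling baseline. The window size, threshold, and sample data are assumptions for illustration.

```python
import statistics
from collections import defaultdict, deque

WINDOW = 60        # recent samples kept per (source, metric) pair
THRESHOLD = 3.0    # flag readings this many std devs from the rolling mean

# Rolling history per (source, metric), as if fed from one repository.
history = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(source: str, metric: str, value: float) -> bool:
    """Record a reading; return True if it looks anomalous vs. its baseline."""
    window = history[(source, metric)]
    anomalous = False
    if len(window) >= 10:  # require a minimal baseline before judging
        mean = statistics.fmean(window)
        stdev = statistics.pstdev(window)
        if stdev > 0 and abs(value - mean) > THRESHOLD * stdev:
            anomalous = True
    window.append(value)
    return anomalous

# Illustrative stream: steady latency from one host, then a spike.
for v in [20, 21, 19, 22, 20, 21, 20, 19, 22, 21, 20, 95]:
    if ingest("web-01", "latency_ms", v):
        print(f"[ANOMALY] web-01 latency_ms={v}")
```

The same correlation logic is what tool sprawl makes impossible: with readings scattered across ten disconnected tools, no single baseline exists to compare against.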

Visibility Adds Business Value

As hybrid cloud adoption continues, investing in a consolidated monitoring platform that supports the hybrid environment makes good business sense. Cloud monitoring in particular gives IT performance data to use in strategic planning for future assets. Cloud costs can spiral upward; more precise, organized monitoring will flag areas where performance is not meeting SLAs, or where an expensive provider is being under-utilized.
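As a back-of-the-envelope illustration of that last point, a consolidated platform could join utilization and billing data to surface under-used, high-cost resources. The resource names, figures, and 40% utilization floor below are assumptions.

```python
# Hypothetical monthly figures per cloud resource: (cost_usd, avg_utilization).
resources = {
    "prod-db-cluster":   (12000, 0.78),
    "analytics-cluster": (18000, 0.22),
    "staging-vms":       (3000, 0.15),
}

UTILIZATION_FLOOR = 0.40  # flag anything averaging below 40% utilization

# Review the most expensive under-utilized resources first.
for name, (cost, util) in sorted(resources.items(), key=lambda kv: -kv[1][0]):
    if util < UTILIZATION_FLOOR:
        print(f"{name}: ${cost:,}/mo at {util:.0%} utilization -> review sizing")
```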

A further advantage is that consolidated monitoring enables IT to compare performance and metrics across on-prem, virtual, and hybrid cloud environments. It is a powerful means of collecting objective data that shows which networking and storage investments are delivering true business value and which are under-performing.