Over the last year or so, there has been a lot of publicity around cloud computing, both public and private. Most of it centers on how businesses need to move away from their physical on-premises systems to a purely cloud-based architecture. That is not always the right answer, however; some functionality runs better in an on-premises environment. The key to a successful architecture is knowing ahead of time where the issues lie and addressing them upfront, before they become a service-impacting problem for you.

For instance, public cloud-based solutions are very flexible and allow administrators to spin applications up and down to match business needs. This helps to right-size costs. On the other hand, because of the architecture, IT personnel who have shifted their networks from a physical on-premises environment to a cloud environment have reported several unexpected performance problems, as well as cost savings that fall below original predictions.

When migrating to the cloud, there are three key elements that must be taken into consideration.

Network Security

Network security is a fundamental issue. Cloud solutions have experienced notable security data breaches. In 2017 alone, 2.6 billion cloud data records were breached. Many of your security options in the cloud are limited because you have limited control of the infrastructure. The cloud vendor owns that infrastructure, and most public cloud vendors do not allow customers access to their networks and system layers, as this can create a security risk to their whole network.


When it comes to on-premises solutions, you have full control of the infrastructure. This means you can deploy any inline security solution you want: an intrusion prevention system (IPS), data loss prevention (DLP), a data decryption appliance, or a web application firewall (WAF). With a cloud solution, by contrast, portions of the infrastructure are typically outside your control.


With regard to performance concerns, research from Dimensional Data in late 2017 showed that half or more of the companies surveyed experienced application performance problems for their cloud solutions. Additionally, 88% of companies surveyed experienced some sort of issue with their cloud environment due to a lack of visibility into their environment.

What does this mean?

What is lack of visibility? During and after a migration to the cloud, you will not have clear visibility into the network layer. You will only be able to get information about the cloud network and some parts of the operating system from cloud-based service providers. They provide summarized metadata on cloud-centric information (network, compute, and storage). This includes high-level cloud data (CPU performance, memory consumption, etc.) and some log data. This is not the depth of information you need to diagnose the root cause of performance and security issues.

Cloud providers and other cloud tools do not provide network packet data. This data is absolutely necessary for security forensics and for troubleshooting using root cause analysis. DLP tools and most application performance management (APM) tools depend upon packet data for problem analysis. Typical cloud tools provide limited data that is often time-delayed, which can dramatically impact tool performance. For instance, tactical data loses 70% of its performance monitoring value after 30 minutes.

In addition, cloud providers do not provide user experience data or the ability to watch conversations. Specifically, this means you cannot accurately gauge customer quality of experience based upon cloud provider delivered data. The flow data provided lets you see who the talkers are but does not contain anything about the details of the conversation.

An easy remedy for the visibility issue is to add cloud-based monitoring data sensors (also called virtual taps) to your cloud network. These sensors replicate copies of the desired data packets and send them to your troubleshooting, security, and performance tools, giving those tools the data they need to perform their functions. One key factor, though, is that the data sensors need to scale automatically: as cloud instances are spun up, the sensors' capacity must scale with them.
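The replicate-and-forward idea behind a virtual tap can be sketched as follows. This is a minimal illustration, not a real tap API: the `Packet` record, the filter, and the tool "sinks" are all hypothetical stand-ins (a production sensor operates on raw network frames and streams copies to external tool endpoints).

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical packet record; a real virtual tap works on raw frames,
# but a small record keeps the sketch self-contained.
@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

class VirtualTap:
    """Replicates packets matching a filter to every registered tool."""

    def __init__(self, match: Callable[[Packet], bool]):
        self.match = match
        self.tools: List[List[Packet]] = []  # stand-ins for tool endpoints

    def register_tool(self) -> List[Packet]:
        # In practice this would be an APM, DLP, or forensics endpoint.
        sink: List[Packet] = []
        self.tools.append(sink)
        return sink

    def observe(self, pkt: Packet) -> None:
        # Forward a *copy* to each tool; the original traffic is untouched.
        if self.match(pkt):
            for sink in self.tools:
                sink.append(Packet(pkt.src, pkt.dst, pkt.payload))

# Mirror only traffic destined for a (hypothetical) application subnet.
tap = VirtualTap(match=lambda p: p.dst.startswith("10.0."))
apm_sink = tap.register_tool()       # e.g. an APM tool
security_sink = tap.register_tool()  # e.g. a DLP/forensics tool

tap.observe(Packet("172.16.0.5", "10.0.1.9", b"GET /"))
tap.observe(Packet("172.16.0.5", "8.8.8.8", b"dns"))  # filtered out

print(len(apm_sink), len(security_sink))  # -> 1 1
```

The point of the sketch is the fan-out: every tool receives its own copy of the matching traffic, so adding a tool never starves another of data. Autoscaling, in this picture, means attaching a tap instance to each new cloud instance as it spins up.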

The Best of Both Worlds

A potential option to mitigate all of these problems is a hybrid architecture that marries the public cloud with on-premises solutions. A hybrid architecture gives you the following five benefits.

  1. You have complete control of the on-premises infrastructure hardware, which allows you to optimize security for highly sensitive applications.
  2. For an existing system, the on-premises network is typically already built and reliable.
  3. Business applications can be moved to a public cloud so that you can spin applications up and down as necessary to realize cost savings.
  4. You can continue to use any existing security and monitoring tools and deploy cloud-specific, packet-level data sensors to maximize your return on investment (ROI) for those tools.
  5. Cloud monitoring and performance data come together with physical on-premises monitoring to create a comprehensive view of your applications and data no matter where they reside.

What you need to thoroughly understand is what you are migrating and why. While this sounds simple, it represents a fundamental stumbling block for IT. Business operation is not just about spinning up apps as fast as possible. The cloud approach you choose to deploy, and how you choose to deploy it, will dictate your data visibility, how you can access data, and your long-term total cost of ownership. Many organizations gloss over these fundamental steps and make assumptions because they are under pressure to decide quickly. Those decisions will make or break the project's success.

In the end, a hybrid IT environment can give you the best of both worlds.