Transitioning portions of IT applications and services into the cloud is an idea taking hold among IT professionals in organizations of all sizes. But it’s a big jump from traditional implementations, where you could walk down the hall and put your hands on every piece of equipment and cable used to provide those applications and services.

Moving to the cloud requires a thorough, planned approach, with far more thought and consideration than was needed in the past, when dropping a server in a rack and plugging in a network cable was all it took.

Hybrid Cloud Scenarios

There are, perhaps, three general scenarios that will involve cloud-based apps and services in conjunction with on-premises apps and services.

The first is an intentionally architected solution that uses components in both on-premises data centers and cloud service centers. A common example is the split between application servers and data servers. Often the data within an application is sensitive in nature and requires special handling, or the dataset is simply too large to migrate into a cloud environment. The application tier, however, needs to be highly responsive to many clients connecting through many different methods and devices, and cloud implementations can provide that kind of flexibility.
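
To make the split concrete, here is a minimal sketch of a cloud-hosted application tier querying an on-premises data tier over a private link. The hostname, database, credentials, and the choice of PostgreSQL with the psycopg2 driver are all illustrative assumptions; any client/server database follows the same pattern.

```python
# Sketch: cloud-hosted application tier reading from an on-premises
# data tier over a private link. Hostname, database, and credentials
# are hypothetical placeholders.
import psycopg2  # assumes a PostgreSQL data server; any RDBMS is analogous

def fetch_customer(customer_id):
    # The connection traverses the dedicated circuit back to the
    # on-premises data center; the sensitive data never leaves it.
    conn = psycopg2.connect(
        host="db.onprem.example.internal",  # private endpoint, not public
        dbname="customers",
        user="app_svc",
        password="...",        # pull from a secrets manager in practice
        connect_timeout=5,     # fail fast if the link is down
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name, tier FROM customer WHERE id = %s",
                        (customer_id,))
            return cur.fetchone()
    finally:
        conn.close()
```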

The second is a parallel implementation, where an application is available both in an on-premises data center and in a cloud environment. The on-premises data center typically serves people in the corporate office over high-speed connections, while the cloud-based implementation serves mobile users. Data replication is a key consideration in this scenario, but replicating data among several distributed databases is well-trodden territory by now.

The third is about scalability. An organization may have an on-premises application that performs just fine 95% of the time but finds itself resource-constrained the other 5% of the time due to peak loads. The challenge for the organization is that the 5% doesn’t justify investing in additional on-premises hardware that will sit idle the other 95% of the time. Cloud is the ideal solution to this problem: it allows a rapid ramp-up of resources sufficient to meet the peak demand, and the organization pays only for what is actually used during that 5%.
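
The economics are easy to sanity-check with back-of-the-envelope arithmetic. The figures in the sketch below are purely hypothetical, not pricing from any particular provider:

```python
# Illustrative cost comparison: owning peak capacity vs. renting it.
# All numbers are hypothetical.
hours_per_year = 8760
peak_fraction = 0.05                # resource-constrained 5% of the time

# Option A: buy extra on-premises hardware that idles 95% of the time.
onprem_capex_per_year = 40_000      # amortized hardware + power + space

# Option B: burst to cloud only during peaks.
cloud_rate_per_hour = 25            # on-demand rate for the extra capacity
cloud_cost = hours_per_year * peak_fraction * cloud_rate_per_hour

print(f"On-premises peak capacity: ${onprem_capex_per_year:,.0f}/year")
print(f"Cloud burst capacity:      ${cloud_cost:,.0f}/year")
# => Cloud burst capacity: $10,950/year; the idle 95% is what you stop paying for.
```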

The key to successfully working through any of these scenarios is understanding the unique requirements of each and planning ahead, both logistically and financially, to deliver on those requirements.

Key Strategies for Implementation

Scenario one is highly dependent on the communication pathways from client to application server, as well as from application server to data server. The first key objective is ensuring that dedicated bandwidth exists between the cloud implementation and the on-premises data center, and that it is provisioned across redundant circuits from different providers. The link between the cloud and the on-premises data center must be designed for zero downtime.

Active monitoring of bandwidth consumption on those circuits should be implemented, with alerting configured to fire when consumption crosses defined thresholds. What those thresholds should be will vary from implementation to implementation; every organization needs to determine its own levels of tolerance.
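
In outline, the alerting logic can be as simple as the sketch below. The telemetry and alerting hooks are hypothetical stubs; in a real deployment you would substitute your monitoring platform's equivalents.

```python
# Sketch: threshold alerting on circuit utilization. The two helper
# functions are hypothetical stubs; substitute your monitoring
# platform's real telemetry and alerting hooks.
import random
import time

WARN_THRESHOLD = 0.70   # every organization sets its own tolerances
CRIT_THRESHOLD = 0.90

CIRCUITS = ["provider-a-primary", "provider-b-secondary"]

def get_circuit_utilization(circuit):
    # Stub: replace with an SNMP query or your carrier's telemetry API.
    return random.random()

def send_alert(circuit, severity, utilization):
    # Stub: replace with a pager, ticket, or chat integration.
    print(f"[{severity}] {circuit} at {utilization:.0%} utilization")

def check_circuits():
    for circuit in CIRCUITS:
        utilization = get_circuit_utilization(circuit)
        if utilization >= CRIT_THRESHOLD:
            send_alert(circuit, "CRITICAL", utilization)
        elif utilization >= WARN_THRESHOLD:
            send_alert(circuit, "WARNING", utilization)

if __name__ == "__main__":
    while True:
        check_circuits()
        time.sleep(60)   # poll once a minute
```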

Scenario two also depends on communication between the cloud and the on-premises data center, but in a less critical way than scenario one. The key question is how much delay in data replication between the two parallel environments can be tolerated. If, for example, the cloud-based environment is primarily read-only, supporting field sales representatives, and the actual data changes are made from the central office, then the tolerance for replication delay may be fairly high. If, on the other hand, transactions originate from both sources, then this is essentially the same problem that has existed with distributed databases historically.

Critical to the success of this implementation is active monitoring of the data replication activity, along with a proven contingency plan for when replication is disrupted beyond acceptable tolerances.
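
A minimal lag check might look like the following sketch, assuming each environment can report the timestamp of the last change it applied; the helper function is a hypothetical stub for whatever your replication tooling actually exposes.

```python
# Sketch: replication-lag check between the on-premises primary and its
# cloud replica. last_applied_change() is a hypothetical stub for
# however your replication tooling reports progress.
from datetime import datetime, timezone

MAX_LAG_SECONDS = 300   # the tolerance scenario two forces you to define

def last_applied_change(environment):
    # Stub: ask the environment for the timestamp of the newest change
    # it has applied (e.g., from a replication-status table).
    return datetime.now(timezone.utc)

def replication_within_tolerance():
    primary_ts = last_applied_change("onprem-primary")
    replica_ts = last_applied_change("cloud-replica")
    lag = (primary_ts - replica_ts).total_seconds()
    if lag > MAX_LAG_SECONDS:
        # Beyond tolerance: invoke the contingency plan, e.g. flag the
        # cloud environment as stale to its read-only users.
        return False
    return True

print(replication_within_tolerance())
```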

One other area for consideration is how to handle users who work both in the central office and in the field. Is it acceptable for those users to work from two different application environments? If so, this will also affect your tolerance for replication delay. It may also be that these users can simply continue to use the cloud-hosted application even when they are physically inside the walls of the office.

Scenario three is the most complicated to implement, and it’s a good idea to engage a qualified service provider with experience implementing on-demand, scale-out environments. The key aspect of this implementation is the transparency with which inbound connections are rolled over, or rerouted, from the primary site in the on-premises data center to the supplemental resources in the cloud. This is the crossroads where organizations evolve toward software-defined data centers (SDDC) and software-defined networking (SDN).
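
Conceptually, the rollover is a capacity check, as the hedged sketch below shows. In practice this decision lives in a load balancer or SDN controller rather than in application code, and every name and number here is illustrative.

```python
# Sketch: spill inbound requests to cloud capacity once the on-premises
# pool is saturated. In practice this logic lives in a load balancer or
# SDN controller; the names and numbers here are illustrative only.

ONPREM_POOL = "onprem-app-pool"
CLOUD_POOL = "cloud-burst-pool"
ONPREM_CAPACITY = 500    # concurrent requests the primary site can handle

def choose_pool(active_onprem_requests):
    """Route to on-premises until it is full, then overflow to cloud."""
    if active_onprem_requests < ONPREM_CAPACITY:
        return ONPREM_POOL
    return CLOUD_POOL    # rerouting is transparent to the client

# Example: at 480 active requests traffic stays on-premises;
# at 520 the next connection is rerouted to the cloud pool.
print(choose_pool(480))  # -> onprem-app-pool
print(choose_pool(520))  # -> cloud-burst-pool
```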

Many organizations will decide that a preferable alternative is to move the entire application to a cloud-based solution, so that scaling can be handled entirely within the cloud service provider’s environment. Some organizations may not have that choice, however, due to requirements on the physical location of data storage or the need to maintain local access to the application, perhaps because of usage volumes.

In Conclusion: Plan, Plan, Plan!

Regardless of which of these scenarios you might be contemplating, or even a scenario not discussed here, it’s absolutely critical to have a plan, and to involve both technology and business professionals in developing it. Identify and analyze all contingencies, and define performance expectations for every aspect of the environment: from end users’ perceptions, device performance, and connectivity, through the public network connections, all the way to the back-end servers and data stores.