Cloud capabilities have evolved considerably since entering the mainstream in the mid-2000s. As we enter the 2020s, recent advancements in cloud technology have ushered in what may be considered the third cloud age. Thanks to standardized containers, cloud-hosted applications are now ready for deployment in both private and public infrastructures straight out of the box, so to speak. But while containerization allows organizations to embrace all the economic and architectural benefits of hybrid cloud integration (while still maintaining control of the process), many companies are struggling to leverage containers because they’re simply taking the wrong approach to distributed integration for hybrid cloud models.

Defining the Cloud Ages

Nearly 15 years after the large-scale introduction of cloud technology, three major eras of cloud computing have emerged:

The First Cloud Age — In the early days of the cloud, enterprises began migrating their apps and services to IaaS providers, but cloud providers had not yet developed value-added features beyond on-demand OS images to establish competitive differentiation and steer customers down a path toward vendor lock-in.

The Second Cloud Age — Customers began converting to true cloud-native platforms built on serverless stacks such as lambdas and functions as a service. This move toward cloud-native architectures created lock-in with specific cloud vendors.

Vendor lock-in could be mitigated when other vendors emulated the APIs of proprietary cloud platforms; the Hitachi Content Platform’s support for the Amazon S3 API is one example (a short code sketch of this idea appears after the third age below). This allowed storage without the need to stand up a file server and gave rise to cloud-based, turn-key services such as database services, storage services, function services, and so on.

The Third/Current Cloud Age — Standardized containers now enable cloud applications “in a box” that are ready for deployment in both private and public infrastructures. This era of cloud computing delivers much more freedom of choice regarding cloud vendors and makes deployment in specific data jurisdictions economically feasible in a way that wasn’t possible before.
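
To make the second-age point about API emulation concrete, here is a minimal sketch using boto3, the de facto Python client for S3. The endpoint URL, bucket, and credentials are hypothetical; the idea is that the same client code can talk to Amazon S3 or to any storage platform that emulates the S3 API.

```python
# A minimal sketch of working against an S3-compatible API. The endpoint,
# bucket, and credentials are hypothetical; swap the endpoint_url and the
# same code targets Amazon S3 or any S3-emulating storage platform.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # non-AWS, S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Object storage without standing up a file server: just put and get objects.
s3.put_object(Bucket="invoices", Key="2020/inv-0001.pdf", Body=b"...")
obj = s3.get_object(Bucket="invoices", Key="2020/inv-0001.pdf")
print(obj["ContentLength"])
```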

Some organizations ultimately want the benefits of the cloud without the risk or cost of being in the cloud, and container standardization will afford them exactly that. Kubernetes allows an entire application, including all its operational dynamics, to be packaged and run as containers, and it operates at a high enough level that cloud providers can offer services with many of the Kubernetes capabilities baked right in. So, these applications can be deployed on any Kubernetes-enabled framework, whether it be Amazon’s or IBM’s, or even (yes) an on-premises framework.
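
As a rough illustration of that portability, the sketch below uses the official Kubernetes Python client to create a Deployment for a containerized service. The image name, labels, and namespace are hypothetical; the same code targets whichever cluster the active kubeconfig context points at, whether that is a managed cloud offering or an on-premises cluster.

```python
# A minimal sketch of Kubernetes portability using the official Python client.
# The image, labels, and namespace are hypothetical; nothing below is specific
# to any one cloud provider.
from kubernetes import client, config

config.load_kube_config()  # uses the active kubeconfig context: managed cloud or on-prem

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="orders",
                        image="registry.example.com/orders:1.4.2",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# The same Deployment object applies unchanged to any Kubernetes-enabled framework.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```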

The Best Approach

Generational frameworks, like the on-premises to first cloud age to second cloud age to third cloud age sequence, can easily be misunderstood as good to better to best, which isn’t always the case. While subsequent cloud ages provide increasing flexibility to cloud users, that flexibility — including avoiding vendor lock-in — is not the only consideration.

Each era of cloud computing comes with its own new technologies and skills that must be mastered, and it leaves behind skills that are no longer needed. For example, the move into the first cloud age made hardware configuration and deployment skills unnecessary; cloud infrastructure management tools and DevOps practices became the priority instead.

Similarly, not all organizations can easily make the jump from the first cloud age to the second, since the shift to cloud-native technologies means refactoring or rebuilding applications in a new way (and often recruiting new personnel, because the software engineering skills needed to build cloud-native applications differ significantly from traditional skills). As a result, not every potential rework project is worth it when it comes to making the jump to cloud native.

Third cloud age containerization and container management technologies can support a blend of traditional workloads and cloud-native workloads. The real workforce skills investments have to do with the container technologies and the “infrastructure as code” paradigm. Ironically, for some simple sample applications, there are more lines of code in the infrastructure configuration than in the application itself. Keep in mind that cloud technologies by and large have a bias toward Intel/Windows/Linux ecosystems. This means that traditional applications based on other platforms, such as IBM i, IBM Z, and HPE NonStop, will most likely need to stay where they are: on-premises and out of the cloud.
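
To give a feel for the “infrastructure as code” point, here is a hedged sketch using Pulumi’s Python SDK. The resource names and address ranges are illustrative, and a real stack would also declare clusters, load balancers, and access roles, which is exactly how the infrastructure definition ends up outgrowing a small application.

```python
# An illustrative "infrastructure as code" sketch with Pulumi's Python SDK.
# Resource names and CIDR blocks are hypothetical; a production stack would add
# clusters, load balancers, IAM roles, and more, quickly exceeding the line
# count of a small application.
import pulumi
import pulumi_aws as aws

vpc = aws.ec2.Vpc("app-vpc", cidr_block="10.0.0.0/16")
subnet = aws.ec2.Subnet("app-subnet", vpc_id=vpc.id, cidr_block="10.0.1.0/24")
artifacts = aws.s3.Bucket("app-artifacts")

pulumi.export("vpc_id", vpc.id)
pulumi.export("artifact_bucket", artifacts.id)
```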

Workloads that aren’t good candidates for containerization or deployment in a cloud infrastructure shouldn’t be ignored, however. Even if they aren’t in the cloud, they can evolve to be managed in a cloud-type fashion, leveraging some of the tools and skills that have evolved during the cloud ages.

The best approach, then, is driven by the specific needs of the business, not some sense of architectural elegance. Almost all enterprises will find that a distributed blend of cloud and non-cloud technologies is the best prescription for their organization. So, in addition to up-skilling the workforce for cloud technologies, enterprises will need to embrace integrating a disparate multi-cloud, multi-generation environment as a necessary skill.

Integrating

As most organizations will find that fully cloud-hosted technologies aren’t the best fit for their needs, they will need to identify a strategy that seamlessly integrates their cloud and on-premises systems and optimizes the combination. In theory, organizational integration strategies can evolve in two ways: inside-out integration and outside-in integration.

Inside-out integration builds around a business’s core applications and their integration needs. This strategy encapsulates all of the applications and services and uses a service bus approach to transfer data and high-level protocol communications between applications. The major challenge with an inside-out integration strategy is that it often assumes the transferred messages are small and that the data stores in question are centralized (as in storage area networks [SANs] and databases [DBs]), quick, and consistent. Unfortunately, that is seldom the case.
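
As a rough sketch of that service bus pattern, the snippet below publishes a small event onto a message broker using the pika RabbitMQ client; the host, queue name, and payload are hypothetical. Note the built-in assumption the paragraph describes: the message stays small, and the authoritative data is presumed to live in a centralized store that every consumer can reach.

```python
# A sketch of service-bus style messaging between internal applications,
# using the pika client for RabbitMQ. Host, queue, and payload are hypothetical.
import json
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="bus.internal.example.com")
)
channel = connection.channel()
channel.queue_declare(queue="orders.events", durable=True)

# The bus carries a small notification; the full order record is assumed to live
# in a centralized database that the consuming application queries directly.
event = {"order_id": "SO-1042", "status": "SHIPPED"}
channel.basic_publish(
    exchange="",
    routing_key="orders.events",
    body=json.dumps(event).encode(),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```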

The outside-in integration strategy — what we at Cleo refer to as “ecosystem” integration — works in from external partners and new applications and drives integration in toward a business’s internal applications. This approach is especially useful for partner integration, where an organization typically doesn’t have unilateral control over external technology decisions. Ecosystem integration helps to solve this by introducing an adaptive layer at the edge of the organization’s framework, which makes it possible to establish new connections quickly and to support data translation/mediation, asynchronous communications, API governance, and policy enforcement between partners. Multi-enterprise and partner integration have always been multi-cloud in a way. So, many of the integration advantages provided by this outside-in approach also apply in a hybrid, multi-cloud environment.
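
The snippet below is one simplified, hypothetical view of such an adaptive edge layer: a partner-facing adapter mediates an external payload into the organization’s internal canonical format and hands it off asynchronously. The field names and document shapes are invented for illustration.

```python
# A simplified sketch of an adaptive layer at the edge of the organization:
# translate a partner's payload into an internal canonical shape, then hand it
# off asynchronously for internal processing. Field names are hypothetical.
import queue
from datetime import datetime, timezone

internal_queue: "queue.Queue[dict]" = queue.Queue()

def adapt_partner_order(partner_payload: dict) -> dict:
    """Mediate a partner-specific document into the internal canonical format."""
    return {
        "order_id": partner_payload["PO_NUMBER"],
        "partner": partner_payload["TRADING_PARTNER_ID"],
        "lines": [
            {"sku": line["ITEM"], "qty": int(line["QTY"])}
            for line in partner_payload["LINES"]
        ],
        "received_at": datetime.now(timezone.utc).isoformat(),
    }

def receive_from_partner(partner_payload: dict) -> None:
    # Accept quickly at the edge, enqueue for asynchronous internal handling,
    # and keep partner-specific quirks out of internal applications.
    internal_queue.put(adapt_partner_order(partner_payload))

receive_from_partner({
    "PO_NUMBER": "PO-7781",
    "TRADING_PARTNER_ID": "ACME",
    "LINES": [{"ITEM": "SKU-12", "QTY": "3"}],
})
print(internal_queue.get())
```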

In reality, inside-out and outside-in approaches tend to work in concert as two cooperating integration competencies emerge.

When it comes to distributed integration, it’s easier to think about each generation or each specific cloud (or on-premises system) as its own integration silo, as problems that can be solved entirely within a single silo are much easier to grasp and handle. When solutions need to span these silos, it is certainly possible to weave multiple integrations together with a piece of the integration puzzle in each zone, but a more successful approach is to view the integration solution holistically. Using an outside-in mindset enables the marshaling of distributed pieces of data to a central integration point, or even moving bits of integration logic closer to the data and reconciling the results. Regardless of how far in the cloud generation evolution an organization gets, almost all will end up with some level of distributed integration requirements.
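
As a loose sketch of that holistic, outside-in mindset, the snippet below marshals order status from two hypothetical silos, an on-premises ERP and a cloud storefront, to one central point and reconciles the results into a single view. The connector functions and field names are invented for illustration.

```python
# A loose sketch of marshaling data from separate integration silos to a
# central point and reconciling it. The two "connectors" below are stand-ins
# for an on-premises ERP and a cloud storefront; field names are hypothetical.

def fetch_erp_orders() -> dict:
    # Stand-in for an on-premises ERP connector.
    return {"SO-1042": "SHIPPED", "SO-1043": "OPEN"}

def fetch_storefront_orders() -> dict:
    # Stand-in for a cloud storefront connector.
    return {"SO-1042": "DELIVERED", "SO-1044": "OPEN"}

def reconcile(*silos: dict) -> dict:
    """Merge per-silo views into one holistic view, flagging disagreements."""
    merged: dict = {}
    for silo in silos:
        for order_id, status in silo.items():
            merged.setdefault(order_id, set()).add(status)
    return {
        order_id: statuses.pop() if len(statuses) == 1 else f"CONFLICT: {sorted(statuses)}"
        for order_id, statuses in merged.items()
    }

print(reconcile(fetch_erp_orders(), fetch_storefront_orders()))
```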

Consider Distributed Integration Holistically

As enterprises develop a strategy for integrations, they will need to build solutions not as a distributed collection of integrations, each in its own cloud or technology stack, but instead as a single distributed integration spanning the technology silos. While containerization accelerates the migration to a hybrid-cloud model to accommodate both external and internal system integrations, many organizations are not holistically considering their distributed integration strategy and are viewing each distributed integration as a separate initiative.

Organizations instead need a strategy that straddles on-premises and cloud-hosted solutions, weaving them together to bind applications to the organization’s ecosystem. One of the headwinds for organizations moving to the cloud is the loss of control; a unified distributed integration strategy mitigates this risk and helps to maintain control.

Enterprises that are planning on using a hybrid or multi-cloud setup should apply cloud-oriented principles to on-premises solutions, even if those solutions will never be hosted in the cloud. Consider weaving on-premises and cloud solutions together into one system/platform so you can bind applications to your ecosystem and not be trapped in a religious battle of cloud versus no cloud. To do this, enterprises need an integration strategy that straddles on-premises, cloud, and multi-cloud.