The technology industry is in such a constant state of evolution that we can never get too comfortable with a new innovation before it becomes obsolete. As a result, IT departments are successful only if they remain agile and constantly keep their organizations abreast of emerging technologies. While it’s unclear what the future will bring, we can be certain that technology’s rapid rate of change will introduce new disruption to enterprise environments. In part one of my prediction series, we’ll look in particular at the changes taking place in data center infrastructure management, with special attention to what they mean for cloud and virtual environments.

VOLUME AND VELOCITY

Traditionally, technology density and performance have continued to increase while their costs have continued to decline, and today’s advances in science and technology suggest the pace of change may actually accelerate beyond what we have experienced to date under Moore’s Law. The technological change driving today’s IT operations is producing a recognized sea change in data center infrastructure management.

This shift is causing an unprecedented increase in both volume and velocity, a change that has largely gone unrecognized because so few organizations measure these factors, and even fewer trend them. Business functionality driven by high levels of end-user access to web-based, stateless applications, supported on a myriad of devices, is generating more transactions at an ever-increasing level of granularity. And with the ability to provision virtual X86 platforms in seconds instead of months, the velocity of virtualized infrastructure deployment and the associated volumes of physical and logical assets will only continue to increase dramatically.

THE CLOUD

There is often a fundamental misunderstanding of the capabilities of the cloud in all its forms. Legacy applications traditionally require colocation of storage and compute environments because of latency-induced fragility. For many organizations, perceived security concerns and compliance issues hamper the migration of applications to a common pool of publicly shared resources. The oft-discussed hybrid cloud, where IT uses the public cloud to handle peak loads at various times of the year, is still more discussed than implemented and is constrained by latency issues that require an application architecture supportive of that deployment. The ability to architect suitable backup and disaster recovery solutions can be, and often is, overlooked.

VIRTUALIZATION AND ORCHESTRATION

Regardless, the cloud is here to stay, and in 2014 what the forward-looking IT organization needs to determine is where cloud functionality will reside (internal, external, or both), what it will utilize (shared or dedicated resources), and what it will support in the way of service offerings (infrastructure, platform, compute, recovery, etc., as a service). The cloud is, in essence, a deployment model that calls for consumer selection of a service, automated provisioning of that service, and subsequent billing for it. This can only be achieved through automation of what was once an intensive, hands-on effort by IT and a lengthy bureaucratic approval process through the IT supply chain.

This cloud capability demands an infrastructure capable of responding in real time to service demand, and thus the provisioning of hardware needs to be virtual, not physical. The X86 platform has been successfully virtualized by two vendors whose orchestration capabilities can manage each other’s environments, and this will no doubt become more common and more tightly integrated. We can already see current virtualization capabilities transitioning into full orchestration capabilities that will support service selection and provisioning of consumer-selected infrastructure. We should expect this orchestration capability to include license management, asset management, and subsequent billing or showback, as well as monitoring to support continuous improvement in service delivery and availability.
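To make this self-service flow concrete, here is a minimal Python sketch, not any vendor’s actual API, of the loop described above: a consumer selects a catalog item, the platform provisions it automatically, and asset and showback records are updated as side effects of the same workflow. The catalog entries, rates, and names such as `Orchestrator` and `accrue_showback` are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

# Hypothetical self-service catalog: offering name -> hourly rate and sizing.
CATALOG = {
    "small-linux": {"rate": 0.125, "vcpu": 1, "ram_gb": 2},
    "medium-linux": {"rate": 0.25, "vcpu": 2, "ram_gb": 8},
}

@dataclass
class Deployment:
    """A provisioned virtual machine plus the records the orchestrator must keep."""
    name: str
    offering: str
    owner: str
    provisioned_at: datetime = field(default_factory=datetime.now)

class Orchestrator:
    def __init__(self) -> None:
        self.cmdb: List[Deployment] = []      # asset register, updated automatically
        self.showback: Dict[str, float] = {}  # owner -> accrued cost

    def provision(self, name: str, offering: str, owner: str) -> Deployment:
        spec = CATALOG[offering]              # consumer selection from the catalog
        print(f"provisioning {name}: {spec['vcpu']} vCPU / {spec['ram_gb']} GB RAM")
        # ...the real call to the hypervisor or cloud API would go here...
        dep = Deployment(name, offering, owner)
        self.cmdb.append(dep)                 # asset management as a side effect
        self.showback.setdefault(owner, 0.0)  # billing record opens at zero
        return dep

    def accrue_showback(self, hours: float) -> None:
        """Charge each owner for the hours their deployments have been running."""
        for dep in self.cmdb:
            self.showback[dep.owner] += CATALOG[dep.offering]["rate"] * hours

# One request flowing from selection through provisioning to showback billing.
orch = Orchestrator()
orch.provision("web-01", "medium-linux", owner="marketing")
orch.accrue_showback(hours=24)
print(orch.showback)  # {'marketing': 6.0}
```

The point of the sketch is the coupling: the same automated path that creates the virtual machine also updates the asset and billing records, rather than leaving them to a separate manual process.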

While many organizations have large virtualized environments for their X86 platforms — some even under reasonable control — the Big Box Unix arena presents its own challenges. These environments tend to support mission-critical business functions that demand intense workloads with high throughput and high performance requirements on high-end databases. It’s not feasible to expect that cross-platform orchestration of these environments will consolidate as the X86 environment did. And while it’s true that some organizations are aggressively moving from Big Box Unix to Linux, and that the X86 platform continues to grow in power, speed, and throughput, it’s unknown whether the X86 platform will grow as fast as the Big Box Unix applications’ demand for volume and velocity in the typical OLTP environment.

In 2014, we can expect the two market leaders, Oracle’s SPARC (formerly Sun) and IBM’s AIX, to continue developing their virtualization capabilities to manage virtual SPARC and AIX engines, with virtual storage provided by enterprise-level SANs. Orchestration on these platforms appears to lag behind the X86 platform by a substantial margin, as they are primarily focused on the Software-as-a-Service (SaaS) model rather than the Platform-as-a-Service (PaaS) model. The need for velocity is substantially less in this environment, as the lead time to roll out the level of application complexity typically found on the Big Box Unix platform is usually longer than that of the web-based front ends developed on the X86 platform.

On the X86 platform, I take the somewhat radical view that Infrastructure-as-a-Service (IaaS) has few potential clients because it’s not a functional platform for business applications, but rather a foundation for their development. PaaS has a larger audience, though it’s typically limited to the developer community, both within and outside the IT organization. The functionality of the PaaS environment on the public cloud will continue to draw that community, in particular, to the pay-as-you-go model.

Today, the cloud is inhabited by the SMB marketplace, the developer (dev/test/QA) environment, early adopters, and highly sophisticated, self-healing application architectures like Netflix’s. In the future, expect new applications to move elegantly from the dev/test/QA stage to production, and we can already see glimmers of this in the “new” Dev/Ops approach, which brings the operations and developer teams together during development rather than at the end. For appropriately architected applications, the virtualized X86 platform supports this capability and facilitates the transition from QA to production within a cloud environment.
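As a rough illustration of that promotion path, the sketch below moves a single immutable image through dev, QA, and production inside the same cloud, gated by automated checks rather than a hand-off at the end. The stage names and the `run_smoke_tests` gate are hypothetical placeholders, assuming whatever test suite and deployment API the organization actually uses.

```python
from typing import Callable

# The same artifact (e.g. a VM or container image ID) is promoted unchanged
# from one environment to the next; only per-stage configuration differs.
STAGES = ["dev", "qa", "prod"]

def run_smoke_tests(image_id: str, stage: str) -> bool:
    """Hypothetical gate: in practice this would invoke the test suite for `stage`."""
    print(f"running smoke tests for {image_id} in {stage}")
    return True

def promote(image_id: str, deploy: Callable[[str, str], None]) -> None:
    """Deploy the image to each stage in order, halting if a gate fails."""
    for stage in STAGES:
        deploy(image_id, stage)
        if not run_smoke_tests(image_id, stage):
            raise RuntimeError(f"promotion halted: {image_id} failed gates in {stage}")
    print(f"{image_id} promoted through {' -> '.join(STAGES)}")

# `deploy` would normally call the cloud provisioning layer; here it just logs.
promote("app-image-42", deploy=lambda img, st: print(f"deploying {img} to {st}"))
```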

As we move forward, the question will not be whether the cloud will be a factor in 2014, but rather how, and what, it will support. Organizations will be tasked not only with understanding what their own and public cloud infrastructures can support, but also with understanding the sheer velocity and volume at which they’ll be required to do so.

BIG DATA

The science of Big Data analytics is rapidly maturing to the point that many organizations have some form of the technology running, either in production or in proof of concept. Big Data requires massive amounts of low-cost storage — often for temporary periods — which may be an ideal application for the public cloud, provided that the processes applied to the data can run in that environment (and that the data itself is not sensitive). As analytics expand to become mainstream, new storage structures may be required to support initial passes, parsing, consolidation, and execution. Will this take the place of conventional, in-house, low-end SATA2 disks? Will it be better served by the public cloud, or perhaps a hybrid arrangement for volumes that may suddenly expand? Such elastic solutions require applications that tolerate data placed both locally on the compute platform and externally in a public cloud environment.
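As a simple illustration of that placement question, the sketch below decides, per data set, whether a temporary analytics working set goes to in-house SATA capacity or to public cloud object storage, based on sensitivity and on whether the expected volume exceeds what is free in-house. The capacity figure, tier names, and `DataSet` fields are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    size_tb: float
    sensitive: bool      # e.g. regulated or customer-identifying data
    retention_days: int  # analytics working sets are often short-lived

IN_HOUSE_FREE_TB = 200.0   # hypothetical spare capacity on the low-cost SATA tier

def place(ds: DataSet) -> str:
    """Pick a storage tier for a temporary analytics working set."""
    if ds.sensitive:
        return "in-house SATA"              # keep regulated data off the public cloud
    if ds.size_tb > IN_HOUSE_FREE_TB:
        return "public cloud object store"  # burst capacity for short-lived volumes
    return "in-house SATA"

for ds in [
    DataSet("clickstream-q1", size_tb=350, sensitive=False, retention_days=30),
    DataSet("claims-history", size_tb=40, sensitive=True, retention_days=30),
]:
    print(ds.name, "->", place(ds))
```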

The nature of analytics may not demand long-term retention of the core data, but it is critical to understand that the sheer volume of this data works against its timely and frequent movement across the network. Although data densities are growing and storage is becoming cheaper per unit, the management issue remains: how to effectively monitor, manage, and develop continuous improvement strategies for the storage environment. The future may well bring significant enhancements in cross-frame virtualization and even cross-storage-platform virtualization. Solid-state disks are coming down in price but are still around $1,000 per TB, while the conventional disk is reaching physics-driven limits on rotation speed; it is probable we won’t see much improvement beyond 15K drive speeds in the near future. All these factors may drive a resurgence in high-end tape drives for mass storage.

DATA PROTECTION

As technology advances, the speed of light remains stubbornly static and immovable. This has huge implications as the velocity of business and consequent increases in data volume drive more movement across the network. As business transactions transition to 24/7/365 operation, protection requirements will call for ever tighter recovery point objectives (RPOs) and recovery time objectives (RTOs). We are approaching a situation where the traditional protection provided by physical replication can no longer support the RPO and RTO requirements of out-of-region, distanced data protection. Protection requirements will inevitably drive tight recovery requirements back into the application. Future-state applications must be failure-aware and self-healing, and must provide continuously available user functionality at dramatically increasing volumes. This implies an application architecture spanning multiple data centers, with stateless front-end capabilities, in which surviving data centers can continuously handle business loads, supported by a back end capable of protecting data across multiple data centers while maintaining logical integrity, with a failure-aware capability for automatic self-healing. The restart overheads of a failed X86 application will become increasingly intolerable, as will the DBA time needed in many current recoveries to restore logical integrity.
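A minimal sketch of what “failure-aware” might mean at the application layer: a stateless front end routes to whichever data center passes its health check, and refuses writes when measured replication lag has drifted past the RPO. The health map, lag figure, and function names are hypothetical stand-ins for real monitoring, not a reference design.

```python
from typing import Callable, List

RPO_SECONDS = 60  # illustrative recovery point objective for this service

def choose_site(sites: List[str], is_healthy: Callable[[str], bool]) -> str:
    """Stateless routing: pick the first data center that passes its health check."""
    for site in sites:
        if is_healthy(site):
            return site
    raise RuntimeError("no surviving data center available")

def accept_write(site: str, replication_lag_s: float) -> bool:
    """Failure-aware gate: reject writes that would violate the RPO if `site` were lost."""
    if replication_lag_s > RPO_SECONDS:
        print(f"refusing write at {site}: lag {replication_lag_s}s exceeds RPO")
        return False
    return True

# Example with stubbed monitoring: the primary is down, the secondary survives.
sites = ["dc-east", "dc-west"]
healthy = {"dc-east": False, "dc-west": True}
target = choose_site(sites, is_healthy=lambda s: healthy[s])
print("routing to", target, "| accept write:", accept_write(target, replication_lag_s=12.0))
```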

SOFTWARE LICENSING

One of the key issues emerging from the increases in volume and velocity is software license management. With the increased velocity available to provision virtual images of the X86 environment, and the increased volumes driven by this functionality, it will be essential to make license management a key component of the provisioning technology and to ensure that the provisioned environment automatically updates the organization’s asset base (CMDB). Today’s volumes make yesterday’s feeble attempts at XLS-based inventory management futile. There is a distinct possibility that increases in the volume and velocity of X86 platform provisioning — with instances both arriving and departing — will result in an unmanageable proliferation and a consequent loss of control over both licensing and assets. An IT organization that cannot tell what is running where, on what, and using which licenses, has an infrastructure in jeopardy of software licensing compliance violations, unforeseen downtime through unmanaged change, and wasted physical and logical resources.
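One practical counter to this is to reconcile what is actually running against the CMDB and the license entitlements on every provisioning cycle, rather than relying on spreadsheets. The sketch below flags untracked instances, stale records, and over-deployment; the data sources (a live hypervisor inventory and a CMDB export) and the example figures are hypothetical.

```python
from typing import Dict, Set

def reconcile(running: Set[str],
              cmdb: Set[str],
              licensed_counts: Dict[str, int],
              deployed_counts: Dict[str, int]) -> None:
    """Compare live inventory with the CMDB and with license entitlements."""
    untracked = running - cmdb   # provisioned but never registered
    orphaned = cmdb - running    # registered but no longer running
    if untracked:
        print("untracked instances (compliance risk):", sorted(untracked))
    if orphaned:
        print("stale CMDB records (wasted licenses?):", sorted(orphaned))
    for product, entitled in licensed_counts.items():
        deployed = deployed_counts.get(product, 0)
        if deployed > entitled:
            print(f"{product}: {deployed} deployed vs {entitled} licensed -- over-deployed")

# Illustrative data: inventory would normally come from the hypervisor API,
# the CMDB export from the asset database, entitlements from license records.
reconcile(
    running={"web-01", "web-02", "db-01"},
    cmdb={"web-01", "db-01", "db-02"},
    licensed_counts={"enterprise-db": 1},
    deployed_counts={"enterprise-db": 2},
)
```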

NETWORK BANDWIDTH

Volume and velocity will also dramatically impact networking, and the bandwidth needed to support data transfers for data protection and recovery is growing apace with the data itself. In many organizations there is concern as data growth outpaces revenue, and IT investment must increase as the need for data to be maintained, moved, and protected grows. Fortunately, the network is probably the most sophisticated component of the IT infrastructure in its native ability to manage, track, and trend the environment. Even so, the lead times required to implement additional bandwidth, and the issue of “last mile” bandwidth, continue to cause problems for organizations that fail to track and trend growth, particularly as both the volume and velocity of infrastructure change increase.
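Because circuit upgrades carry long lead times, the useful discipline here is simple trending: fit a line to monthly peak utilization for a link and estimate how many months remain before an upgrade threshold is crossed, then compare that with the provisioning lead time. A minimal sketch, with made-up utilization figures and an assumed 80% threshold:

```python
from statistics import mean

def months_until_threshold(utilization_pct, threshold_pct=80.0):
    """Fit a least-squares line to monthly peak utilization; return months to threshold."""
    xs = list(range(len(utilization_pct)))
    x_bar, y_bar = mean(xs), mean(utilization_pct)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, utilization_pct))
    den = sum((x - x_bar) ** 2 for x in xs)
    slope = num / den            # percentage points of growth per month
    if slope <= 0:
        return float("inf")      # flat or declining: no upgrade pressure from this trend
    return (threshold_pct - utilization_pct[-1]) / slope

# Twelve months of peak utilization on one WAN link (illustrative numbers only).
samples = [41, 43, 44, 47, 49, 50, 53, 55, 58, 60, 62, 65]
months = months_until_threshold(samples)
print(f"~{months:.1f} months until 80% utilization")  # compare against the circuit lead time
```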

SECURITY

Closely associated with the network bandwidth challenges driven by volume and velocity is the emerging hydra-headed monster of security.

While it is true that the Black Swan effect makes prediction impossible with any certainty, one thing we can be sure of is the continuing security threat to IT infrastructure and its data. Not long ago, key security issues were driven by curious kids with too much time on their hands; now those kids have been replaced by criminal enterprises seeking credit card and personal information, and even nation-states seeking access to data, all of which impact data center operations. With increased data, and the velocity of expansion in both physical and virtual infrastructure, more opportunities exist for an ad hoc or laissez-faire approach to security that can jeopardize an organization’s customers, its shareholders, or even the national economy itself. While the cloud has often been accused of security inadequacies, the reality is that today’s mainstream cloud providers have more security capabilities in place and executing than perhaps 95% of F5000 IT organizations.

DATA CENTER MIGRATION

One thing that never seems to change is the hope that consolidation will serve as a panacea for the absence of basic disciplines in asset management, license management, service provisioning, and change control as the environment rapidly scales both up and out, driven by technology availability. Once the disciplines are built to manage the volume and velocity impacting the X86 server environment, the storage and network environments, and the security arena, there will be a return to devolvement, driven simply by the velocity now available to meet end-user and client needs.

No matter which direction becomes the current trend, there will be an emerging requirement to migrate increasing volumes at increasing velocity between private and public clouds connected by ever-expanding bandwidth. As VMware builds out its public cloud, it is likely to include the ability to move data from internal IT’s private cloud, or from less sophisticated VMware environments, to VMware’s public cloud. This brings another dimension to future migration strategies: the ability to move applications, their interdependencies, and their virtual infrastructure across data centers, and then across clouds, substantially transparently to the user community. The future will also see more attention paid to back-out capabilities, where cloud migration gives the consumer the ability to withdraw from a particular provider should circumstances so dictate.

WORKSPACE AND VDI

The data center environment has progressed through two early stages and is now entering a third. The first stage was characterized by IT dictating to an uneducated user base. This resulted in IT taking the “build it and they will come” approach. While this philosophy still exists, more mature organizations have moved to stage two with IT services now built directly to address business needs defined by a newly literate user community.

In this third phase, already upon us, end users are selecting their own infrastructure and associated software outside of corporate IT, and often without IT’s knowledge. This has significant implications for VDI and workspace management. The arrival of BYOD and the concurrent emergence of Google Docs and Microsoft’s Office 365 have created a sea change in end-user device strategies and could eventually lead to BYOD devices becoming the devices of choice, with compute power provided across the internet as needed. It is possible that in the next three to five years we will see dramatic reductions in laptop/desktop technology, particularly for the mobile workforce, which may lessen the need for workspace and VDI solutions. We are already seeing this seismic shift in the personal space, as consumers seize on smartphones and tablets to the detriment of laptop and desktop purchases.

LOOKING AHEAD

At this time, certain areas will require considerably more care and attention by an IT team intent on riding the wave rather than being consumed by it. Future articles will cover each of these areas in more detail:

• Managing the sheer volume of assets and the velocity at which they are deployed

• Designing appropriate storage infrastructures (both internal and external) for Big Data

• Transitioning from vendor virtualization to cross-platform orchestration

• Virtualizing high performance Big Box Unix applications and their databases

• Dev/Ops methodologies driving new needs in PaaS offerings and service levels

• Looking at new architectures for distanced data protection for tighter RTO/RPO

• Tackling challenges in software licensing on the virtual X86 platform

• Ensuring asset management under high volume and high velocity elastic implementation

• Developing an elastic network bandwidth capability

• Maximizing security of the IT infrastructure and endpoint connectivity

• Migrating applications/infrastructure across data centers and clouds

• Seeing the roll-out of BYOD and web-based office functionality reduce the payback on VDI

By taking the time to carefully understand these challenges, we can equip ourselves to effectively adapt to the sea change, before we get frozen in the past.