Not a week goes by without some news story mentioning an outage at some high-profile cloud-based service. It made me wonder why industry people, and now the major mainstream news media, are so focused on these outages. They seem to get the same level of attention as a utility blackout. There is apparently a widespread expectation that cloud services are held to a higher (loftier) level of availability than a data center securely planted on terra firma. Moreover, general end users are becoming more aware that even some “brand name” applications and services, such as Netflix, are actually hosted by cloud service providers, not their own dedicated data centers.

As each new technology emerges, it follows a development, learning, and maturity curve. While flying has become a safe, commonplace service readily available to all, it took many years and many generations of development and operational experience to reach its present state. Ostensibly it all started in Kitty Hawk, NC, in 1903 with the “first sustained and controlled heavier-than-air powered flight” by the Wright brothers.

However, humans have always wanted to fly, a desire reflected in Greek mythology by the tale of Icarus, who “flew” using wings constructed from feathers and wax. And in case you forgot how that turned out: according to the myth, Icarus was able to fly, but instead of safely testing the capabilities and limitations of his equipment in stages near the ground, he was so enamored with the feeling of flying that he flew higher and higher until the sun melted the wax and he fell from the sky.

Cloud services (in their many nebulous definitions) may be a bit more developed than Icarus’s mythical flight, but they are still relatively early in their development as a universal computing service. There is no doubt that for many organizations they may eventually become the primary, if not only, computing resource. Nonetheless, be aware that completely abandoning all IT hardware and committing an organization entirely to cloud-based resources is still somewhat like following Icarus.

We seem to be in a frenzy to create a utopian computing environment, one where we are no longer burdened by maintaining complex hardware, and where our imagination (or our applications) is not inhibited by the mundane limits of the underlying computing, storage, and networking systems. Of course, in utopia nearly infinite computing would also require very little power.

The ideal virtual data center would have no physical limits or exposures and would offer unconstrained speed, storage, and absolute availability (not just more “9s” ad nauseam, but nothing less than 100.00000% availability, since there is no longer any “acceptable” downtime in the internet age). We now expect the same level of uninterrupted service from cloud providers as from our electric utility grid (which, while generally regarded as reliable overall, at least in the continental U.S., is still not trusted enough for data centers to give up their back-up generators).
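To put those “9s” in perspective, the arithmetic is simple: each additional nine shrinks the permissible downtime by a factor of ten. The short Python sketch below is purely illustrative (it is not tied to any provider’s SLA) and converts an availability percentage into the minutes of downtime it actually permits per year:

```python
# Illustrative only: translate an availability percentage ("nines")
# into the downtime it permits per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # roughly 525,960 minutes

def downtime_minutes(availability_pct: float) -> float:
    """Return the minutes of downtime per year allowed by a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999, 100.0):
    print(f"{pct:>9}% available -> {downtime_minutes(pct):8.2f} minutes/year of downtime")
```

Even “five nines” still concedes roughly five minutes of outage per year, which is why the expectation of 100.00000% availability is, for now, utopian.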

Cloud computing still requires a real data center; however, it need not be designed the same way a “server hugger” data center is built today. The IT equipment may not even fit into the racks we use today, which took many years to become an accepted standard (when rack-mounted servers first came out, not all servers fit interchangeably into every rack, used front-to-rear airflow, or were arranged in hot and cold aisles). Even now, airflow issues persist with some networking equipment.

HYPER-SCALE COMPUTING SERVERS VS. TRADITIONAL SERVERS

It seems that sales of commodity servers are down. According to the International Data Corporation’s Worldwide Quarterly Server Tracker, server unit shipments decreased 1.2% year over year in 2Q13 to 2.0 million units, the third consecutive quarter in which year-over-year server shipments declined. On a year-over-year basis, volume systems experienced a 2.4% revenue decline. At the same time, midrange and high-end systems experienced year-over-year revenue declines of 22.3% and 9.5%, respectively, in 2Q13.

While some of this can be attributed to ongoing economic issues, it seems clear that virtualization and consolidation play a role in the decline. It is also reasonable to assume that the increased interest in, and utilization of, cloud services to either augment or replace traditional hardware-based computing is contributing to the overall decline in traditional hardware sales.

Moreover, some in the industry jeer at those who are following, piloting, and perhaps even using the Open Compute Project’s (OCP) hardware model (perhaps because it started with Facebook). The OCP model looks to bypass the existing OEM equipment vendors, and even the existing rack standards, with its own “open source” hardware designs. And while some may scoff, it may represent the handwriting on the virtual wall. Even the Open Data Center Alliance (whose members are mostly major financial institutions and other large organizations) has forged a cooperative link with OCP. The major server OEMs such as HP and Dell are trying to adapt to this evolving paradigm shift by introducing much denser, low-power servers based on Atom and ARM chips, to avoid losing market share for hyper-scale applications to low-cost Open Compute server boards. As a reminder, ridicule was the common industry reaction when Linux was first introduced as an “open source” software alternative to the UNIX and Windows operating systems, and now the Penguin is in most enterprises.

Cloud service providers are not required to use standard OEM hardware, especially the hyper-scale heavyweights such as Amazon and Microsoft. They are also not obligated to use traditional data center designs, such as those an enterprise organization might expect, nor are they bound by the conventional “rules” related to tier ratings (either Uptime or TIA), since their only obligation is to deliver a virtualized “service,” which can be redundantly and synchronously replicated among two or more physical data centers (assuming the end user is willing to pay for this redundancy).
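Conceptually, that synchronous redundancy simply means a write is not acknowledged until every participating data center has confirmed it. The minimal Python sketch below is purely illustrative (the site names and the replicate() method are hypothetical, not any provider’s API) and ignores real-world concerns such as latency, quorums, and failure handling:

```python
# Illustrative sketch only: a write is acknowledged only after
# both (hypothetical) data centers have durably confirmed it.
class Site:
    def __init__(self, name: str):
        self.name = name
        self.store = {}

    def replicate(self, key: str, value: str) -> bool:
        """Persist the write locally; return True on success."""
        self.store[key] = value
        return True

def synchronous_write(sites: list, key: str, value: str) -> bool:
    """Acknowledge the write only after every site has confirmed it."""
    return all(site.replicate(key, value) for site in sites)

sites = [Site("data-center-east"), Site("data-center-west")]
if synchronous_write(sites, "order-42", "confirmed"):
    print("write acknowledged: durable in both data centers")
```

The trade-off, of course, is that every write waits for the slowest site, which is one reason this level of redundancy comes at a premium.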

That is not to say the traditional, enterprise-standards-based OEM hardware and the physical “tier rated” type of data center will suddenly disappear overnight. However, the younger generation of people and companies, being less hardware inclined and more culturally oriented toward internet- and mobile-based services, will tend to view computing and storage as mundane utility functions best handled by others, and will expect to simply purchase them like electric power. They may not even know or care where the data centers are located or what they look like, any more than they know or care what or where the power plant is (although they may care how green it is, presumably based on the underlying energy source).

Of course, at the moment cloud services are not quite like electric power, since each vendor’s platform is relatively proprietary (each claiming unique advantages over the others), and as such you cannot readily shift from vendor to vendor (imagine developing an electric power industry in which each utility used a different voltage and frequency).

From a high-level point of view, we would all like to have “utility” computing made universally available and purchased as a commodity, just like electricity, fuel, water, etc. We do not really care where or how it is produced (when was the last time you visited your local utility’s power plant to make sure it met your expectations?). Our only real concerns would be reliability and cost.

THE BOTTOM LINE

For those of us in the industry who have always operated in a standardized, hardware-oriented environment and are now considered “server huggers,” our days may be numbered in the long term. But in the interim, there are still many advantages to having total and direct control (presumably) over the hardware and the type of data center it resides in. Full disclosure: I am a server hugger by nature and like to see and control the hardware. However, while I may like to take a tour of a power plant out of intellectual curiosity, it is not something I feel compelled to do in order to feel confident that I will get reliable power. So unless you are a major mega-organization, do not expect to tour the cloud service provider’s data centers (either by helicopter or on foot).

So are you a ground-based server hugger, who likes to own and have direct control over the IT equipment, or do you prefer to soar in the clouds? While the future of computing may well be in the clouds, be aware that there are still some risks. Once cloud technology has more fully matured (and users’ implementations and expectations have become more realistic), it may become universally adopted as a “utility” computing service to be purchased like electricity. Therefore, while in the long term cloud services may become as commonplace and reliable as any other utility, consider the fate of Icarus when he flew too close to the sun.