There’s no question the cloud is growing. Industry analyst firm IDC predicts that over the next four years public IT cloud services will grow by more than 20 percent to become a $30 billion industry. What’s more, a 2010 IBM survey of 2,000 IT professionals found that 91 percent believe cloud computing will be the primary IT delivery model by 2015.
Despite this predicted growth, cloud computing is not new. Even so, there is still no consensus on what cloud computing really means. Perhaps the most accurate way to describe it is to compare it to the Industrial Revolution: cloud computing is not a single technology, deployment model, or product, but rather a mix of disruptive elements including complex socio-economic factors, technologies, attitudes, and behaviors.
As this cloud revolution continues, today’s companies, much like their Industrial Revolution predecessors, need to consider how to evolve their businesses to prepare for and support this shift. For cloud service consumers, determining where to host their cloud environments is one of the most important decisions they must make. Should the cloud be kept private, using a new data center to host the organization’s cloud environment? Or should the company leverage a third-party facility? This enormously important decision affects how cloud end-users connect to cloud application providers and, in turn, how reliable the service will be. Before this decision can be made, however, many factors need to be considered, such as power, cooling, connectivity availability, investment costs, and more.
Companies starting from scratch have several options, beginning with private data centers. Building a new ground-up private enterprise data center facility, no matter the size, takes 18 months on average. This time allows for completing tasks such as assessment and planning, design and engineering, equipment sourcing and scheduling, construction and installation, and commissioning and production turnover. The cost of building such a data center can range from $10 million to $100 million per facility, according to Tier 1 Research. However, the investment per square foot is lower for a large facility than for a small one, because the same processes to get the data center up and running must be put in place regardless of size. Tier 1 estimates that large-scale data centers cost about $1,200 per square foot, while a smaller data center meeting the same needs can easily cost between $1,300 and $2,000 per square foot.
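The square-footage arithmetic above is easy to sketch. In this illustration, only the per-square-foot figures come from the Tier 1 Research estimates cited in the text; the facility sizes are hypothetical round numbers chosen for the example.

```python
def build_cost(square_feet: int, cost_per_sqft: float) -> float:
    """Total construction cost for a data center of the given size."""
    return square_feet * cost_per_sqft

# Large-scale facility: ~$1,200/sq ft (Tier 1 Research estimate).
large = build_cost(50_000, 1_200)

# Small facility meeting the same needs: $1,300-$2,000/sq ft.
small_low = build_cost(5_000, 1_300)
small_high = build_cost(5_000, 2_000)

print(f"Large (50,000 sq ft): ${large:,.0f}")
print(f"Small (5,000 sq ft): ${small_low:,.0f} to ${small_high:,.0f}")
```

Even though the large facility's total bill is far higher, each square foot of capacity costs less, which is the economy of scale the paragraph describes.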
Despite the aforementioned cost differences, the power, cooling, infrastructure, and general total cost of ownership (TCO) to support large private enterprise data centers can be tremendous. In its Q2 2010 financial results, Google disclosed that it made $476 million in capital expenditures, the “majority of which was related to IT infrastructure investments, including data centers, servers, and networking equipment.”
In addition to such capital expenditures, private enterprise data centers are also harder to scale cost-effectively. Bandwidth and processing power need to be instantly available for surges in demand. If over-provisioned, a company is prepared for peaks in demand, but it accepts some under-utilization. If under-provisioned, the company risks lost revenue, lost customers, or both. Considering that scalability, connectivity, and high-density power and cooling are three of the core components of cloud computing, neither a large nor a small private enterprise facility seems an optimal environment for hosting a cloud.
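The provisioning trade-off can be made concrete with a toy cost model. All the numbers here are hypothetical, chosen only to illustrate the dilemma, not drawn from any vendor's pricing.

```python
def provisioning_cost(capacity, demand, unit_cost, lost_revenue_per_unit):
    """Cost of owning `capacity` units against a list of per-period demands.

    Over-provisioned periods pay for idle capacity; under-provisioned
    periods lose revenue on every unit of unmet demand.
    """
    idle_cost = sum(max(capacity - d, 0) for d in demand) * unit_cost
    shortfall = sum(max(d - capacity, 0) for d in demand) * lost_revenue_per_unit
    return idle_cost + shortfall

demand = [40, 55, 90, 120, 60, 45]  # units of load per period (hypothetical)
over = provisioning_cost(120, demand, unit_cost=1.0, lost_revenue_per_unit=5.0)
under = provisioning_cost(60, demand, unit_cost=1.0, lost_revenue_per_unit=5.0)
print(f"provisioned for peak: {over:.0f}, provisioned for average: {under:.0f}")
```

Sized for the peak, the owner pays for idle capacity most of the time; sized for the average, missed demand becomes lost revenue. Elastic, shared infrastructure is attractive precisely because it avoids fixing `capacity` in advance.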
This is where a shared, third-party carrier-neutral data center can really make a difference.
Shared data centers, also known as colocated data centers, provide businesses with the advantages of large facilities, but with a lower TCO, new business opportunities, and a more sustainable footprint. In fact, industry experts expect extreme growth in the colocation market as more and more companies acknowledge the capital efficiencies that colocation offers versus building their own data center. According to Ted Ritter at Nemertes Research Group, “If anything, the recession is driving CIOs to look more closely at colocation.”
Cloud service providers seem to agree as well. “As a cloud provider, our requirements are highly specialized,” said Patrick Baillie, co-founder and CEO of CloudSigma AG, a Swiss-based IaaS platform that offers high availability, flexible cloud servers, and cloud hosting. “In order to run a true ‘high-availability cloud,’ our infrastructure service operations need to be housed in a high-tech data center facility that ensures high-density power supply, reliable uptime, flexible scalability, and connectivity to multiple networks.”
Much of that is attributable to the robust capabilities that carrier-neutral colocated facilities support, all of which are critical for enabling and maintaining a cloud environment. These include:
- Modular build-outs. Shared data centers are designed with modular builds, so the colocation provider retains the flexibility to adjust build-outs as its customers require. This enables businesses to add capacity as needed, which, as discussed previously, often isn’t possible in a corporate data center.
- Scalable systems. Cloud computing demands higher elasticity than previous delivery models. Bandwidth and processing power need to be instantly available for surges in demand, with the added ability to reduce resources (whether server, storage, or network) once peaks in traffic have passed. According to a recent press report, Twitter, for example, has reportedly gone the outsourcing route, moving into a colocated infrastructure to optimize service performance and give the company “extra runway” for improvements. With this move, Twitter hopes to rid itself of the infamous fail whale.
- High-density power. While most corporate servers average 15 percent utilization, virtualized servers can run at 60 to 80 percent. As a result, high-density power has become a near necessity. High-density power availability enables cloud service providers to deploy the latest and most efficient equipment while minimizing the space required and, therefore, the cost to the business. Metadigm, a network and endpoint security specialist, is one company that depends on high-density power and leverages colocated facilities to house its services. Housed in Interxion’s Cloud Hub Community at its City of London data center, Metadigm receives high power densities, outstanding uptime, and a robust infrastructure, all of which are critical for delivering its managed security services.
- Broad connectivity choices. As virtualized infrastructures and the demand for “anywhere, anytime” access continue to increase, connectivity has become as important as processing power for users, meaning a company’s implementation of cloud computing will succeed or fail based on the quality of the end-user connection. Maximum bandwidth and multiple connectivity options will drive adoption of the pay-per-use model, as connectivity is critical to application performance and reliability in the cloud. This is why advanced carrier-neutral colocated data centers have received so much recognition for connectivity choices: they offer customers access to multiple connectivity options, including hundreds of carriers and ISPs, content delivery networks, internet exchanges, value-added networks, and Ethernet exchanges. Interxion, for example, offers connectivity to more than 350 carriers and ISPs, giving community participants the option to select first and second choices that best meet each company’s unique needs.
- Physical and virtual security. The Ethernet-based cloud is not impenetrable or fail-safe, and it is certainly not immune to data loss. Organizations must identify the operational and security risks associated with the cloud, namely data security, integrity, and privacy.
- Established cloud communities. Carrier-neutral colocation provider Interxion has gone as far as to establish cloud communities, or hubs, within its data centers, creating a unique combination of high availability, neutrality, and security. These communities are physical spaces within Interxion’s data centers where participants come together to interconnect with one another, reducing both latency and cost. These hubs are often referred to as the place where networks connect to each other: cloud operators connect to increase their reach, reduce costs, and increase revenue, while service providers connect their cloud services to customers, network operators, and other cloud providers.
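The utilization figures in the high-density power point above imply a simple consolidation calculation. In this sketch, only the 15 percent and 60-80 percent utilization rates come from the text; the workload size and per-server capacity are hypothetical assumptions.

```python
import math

def servers_needed(workload_units: float, capacity_per_server: float,
                   utilization: float) -> int:
    """Servers required when each runs at the given average utilization."""
    return math.ceil(workload_units / (capacity_per_server * utilization))

WORKLOAD = 100.0    # abstract units of compute demand (assumed)
PER_SERVER = 10.0   # units one fully loaded server can supply (assumed)

traditional = servers_needed(WORKLOAD, PER_SERVER, 0.15)  # ~15% utilization
virtualized = servers_needed(WORKLOAD, PER_SERVER, 0.70)  # ~70% utilization

print(f"Traditional: {traditional} servers, virtualized: {virtualized} servers")
```

Consolidating the same workload onto far fewer, harder-working servers concentrates the electrical draw into far fewer racks, which is why virtualization makes high-density power and cooling a near necessity.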
Although the complexity of large enterprise data centers is high, they make a great deal of economic sense in a shared environment, particularly for supporting cloud environments or services. And while the Industrial Revolution gave the world many of the technologies society relies on today to drive the world economy, such as mechanized transportation, mass production techniques, the internal combustion engine, and electricity, the cloud revolution stands to change the way we leverage technology and information. At the heart of it all is the ability for that information to be constantly available and accessible, making data centers more important than ever.