Over the next year we will examine the cloud industry, colocation markets, and new data center developments that impact processing.

When we think of cloud we think of an “unknown place that is undefined to the user, providing applications and storage of data.” Most individual users of the cloud take for granted that the provider is reliable, putting trust in their ability to manage data and provide certain basic applications. The fact is, there is no magical cloud where data is stored; it’s a data center with walls and an infrastructure (that may or may not be reliable).

OUTSOURCING

Outsourcing became popular during the ’90s, especially among financial institutions. Several outsourcing providers received large contracts for the complete operation of information technology (IT) departments, including the infrastructure. As time went on, the financial industry realized that IT processing was a core part of its business and that downtime had significant financial ramifications. Outsourcing started to develop a bad reputation, and many clients went back to managing their own infrastructure.

Today, outsourcing exists but is packaged under different names. Managed service providers offer services structured much like outsourcing, with the exception that they do not take on the client’s internal IT staff. Over the last few years, managed services have been successful among companies with smaller IT requirements. The client rents space (or power) in which to install racks, and the landlord provides certain services such as reboots and installations.

Isn’t the cloud just another form of outsourcing? We trust someone else to provide applications, storage, and infrastructure management.

THE CIRCULAR EFFECT IN TECHNOLOGY OVER THE LAST 20 YEARS

In the early ’90s, data centers were designed to support processing on the mainframe. There were several mainframe platforms, such as the IBM 3090 and ES9000, the Hitachi Skyline series, Cray systems, and a few others, that had similar support requirements. The typical design requirement was 150 W/sq ft, and the uninterruptible power supply (UPS) infrastructure was often N+1. The two UPS designs most commonly used were the paralleled redundant and isolated redundant configurations. Around 1995-96, the 2N design appeared and then gained momentum in the latter ’90s. Another design criterion for the mainframe was a structural load of 150 lb/sq ft, and hot aisle/cold aisle configurations did not yet exist.
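To put those redundancy terms in context, here is a minimal sketch of how UPS module counts differ between N, N+1, and 2N topologies for a given floor load. The 10,000 sq ft hall and 500 kW module size are hypothetical values chosen purely for illustration, not figures from any specific facility.

```python
import math

def ups_modules_required(floor_area_sqft, watts_per_sqft, module_kw, topology):
    """Rough count of UPS modules needed to carry the critical load.

    topology: "N" (no redundancy), "N+1" (one spare module),
              or "2N" (two fully independent, mirrored systems).
    """
    critical_load_kw = floor_area_sqft * watts_per_sqft / 1000.0
    n = math.ceil(critical_load_kw / module_kw)   # modules needed to carry the load
    if topology == "N":
        return n
    if topology == "N+1":
        return n + 1                              # one redundant module
    if topology == "2N":
        return 2 * n                              # two complete systems
    raise ValueError("unknown topology")

# Example: a hypothetical 10,000 sq ft raised floor at 150 W/sq ft with 500 kW modules
for topo in ("N", "N+1", "2N"):
    print(topo, ups_modules_required(10_000, 150, 500, topo))
```

For the assumed 1,500 kW critical load, the count grows from three modules (N) to four (N+1) to six (2N), which is why the move to 2N in the late ’90s drove up infrastructure cost so sharply.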

In 1995, the AS400 processor was introduced to the market. The AS400 had none of the mainframe’s requirements and was designed as a distributed processing configuration, allowing the user to install it in the IDF room. At the time, the industry declared that the mainframe was dead. IBM closed its Raleigh-Durham R&D site, and the data center design industry suffered a drought in activity. I remember getting a fax that showed a homeless guy holding a sign saying, “Will build data centers for food.”

As users began to suffer unplanned outages, the AS400s were brought back into the data center environment and centralized processing resumed.

After the AS400 died out, client/server processing became the mainstream, and companies such as HP and Dell emerged. The design requirements for client/server processing were far lower (around 30 W/sq ft), and the cooling requirements lessened. Additionally, we saw an increase in data center consolidations, as the older data centers built for 150 W/sq ft could easily accommodate client/server requirements.

Come Y2K, the data center had begun to support “pizza box” servers and eventually blade servers. These processors were supported by a more robust design configuration, and the hot aisle/cold aisle layout became the standard (Figure 1).

In today’s cloud environment, the typical design criterion is back to 150 W/sq ft (or higher), and the structural load requirement is again 150 lb/sq ft (in some cases higher). The infrastructure requirements for today’s cloud processing are extremely similar to those of the mainframe in the early ’90s. This is especially true if we need to bring chilled water to the rack.
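As a rough illustration of what bringing chilled water to the rack implies, the sketch below estimates the water flow for a single high-density rack using the common rule of thumb GPM = BTU/hr / (500 × ΔT). The 30 kW rack load and 12°F temperature rise across the coil are assumed values for illustration only.

```python
def chilled_water_gpm(rack_kw, delta_t_f=12.0):
    """Approximate chilled-water flow (GPM) needed to remove rack_kw of heat.

    Rule of thumb: GPM = BTU/hr / (500 * deltaT_F), where the 500 comes from
    8.33 lb/gal * 60 min/hr * 1 Btu/(lb*F) for water.
    """
    btu_per_hr = rack_kw * 3412.0   # convert kW of heat to BTU/hr
    return btu_per_hr / (500.0 * delta_t_f)

# Example: a hypothetical 30 kW rack with a 12 F water temperature rise
print(f"{chilled_water_gpm(30):.1f} GPM")   # roughly 17 GPM per rack
```

Roughly 17 GPM for one rack makes the point: piping, valves, and leak mitigation at the rack level start to look a lot like the mainframe-era water-cooled plant.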

GET TO KNOW THE CLOUD INFRASTRUCTURE PRIOR TO SUBSCRIBING

In a colocation environment (especially retail), the user installs racks and the power is often shared at the power distribution unit (PDU) level. That said, other users in the data center may contribute to outages or consume valuable UPS kW that was previously earmarked in a growth model. Within the cloud model, not only are we sharing infrastructure, but processing and storage as well. When software glitches are added to the equation, we increase the level of risk even beyond that of colocation infrastructure alone. Isn’t this the ultimate form of the outsourcing model we saw earlier?
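As a simple illustration of how shared infrastructure erodes planned headroom, the sketch below tallies tenant loads against a shared PDU’s usable capacity. The 150 kW PDU, the individual tenant loads, and the 80% continuous-load derating are assumptions chosen for illustration, not values from any particular facility.

```python
def remaining_pdu_headroom_kw(pdu_capacity_kw, tenant_loads_kw, derating=0.8):
    """Usable capacity left on a shared PDU after all tenant loads.

    derating caps continuous load at a fraction of nameplate capacity
    (80% is used here as an assumed planning margin).
    """
    usable_kw = pdu_capacity_kw * derating
    return usable_kw - sum(tenant_loads_kw)

# Example: a hypothetical 150 kW PDU shared by three tenants
print(remaining_pdu_headroom_kw(150, [40, 35, 25]))   # 20 kW of headroom remains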

Most of the time, subscribers to the cloud look at the applications needed, speed of the network, online help desk functions, and overall cost. The fact of the matter is that a cloud provider has a data center, and that data center has an infrastructure that may or may not carry operational risks. When subscribing to the cloud, it becomes more and more important to “look behind the curtain and see the Wizard.”