Cloud computing has arrived in 2010, showing us how much better our data-center universe can be. Applications and processing technologies are advancing so quickly that they are difficult to keep up with and even harder to plan for. Only companies with substantial capital and a fast track to market will be able to deploy these new platforms effectively and lead us into the next generation of computing.

Clouds will accelerate the operating and energy efficiencies of the computing industry across the board. For starters, they allow a business to better manage and control its IT demand, resulting in the downsizing and more efficient deployment of its IT systems. According to Jason Stowe of Cycle Computing, the cloud-computing model enables three fundamental forms of efficiency:

  • Managing the Peaks: Organizations no longer have to purchase and power infrastructure to handle their peak needs for an application, only to see it sit idle.
  • Economies-of-Scale Efficiencies: By provisioning large-scale server capacity that is shared by multiple applications, used only when it is needed, and billed only for the resources an application actually consumes, the cloud drives costs down and makes deployment more cost-efficient.
  • Increased Reliability, Lower Costs: Properly handling disaster-recovery scenarios normally requires provisioning multiple data centers and networking contracts, which increases costs and lowers efficiency. Clouds reduce this burden by removing complexity and providing access to multiple compute and storage sources with large economies of scale, yielding increased reliability at lower cost.

These forms of IT efficiency all reduce demand for power and cooling, and in turn allow the downsizing and more efficient deployment of mechanical and electrical support systems as well; the sketch below puts rough numbers on the first two.
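To put the "Managing the Peaks" and economies-of-scale points in concrete terms, the short Python sketch below compares the annual cost of owning enough servers for the peak hour against paying for capacity only when it is used. Every number in it (peak server count, average utilization, cost per server-hour) is an illustrative assumption, not a figure from Cycle Computing or any cloud provider.

    # Back-of-the-envelope comparison: owning enough servers for the peak
    # versus renting capacity by the hour. All figures are illustrative
    # assumptions, not vendor pricing.

    HOURS_PER_YEAR = 8760

    peak_servers = 100            # servers needed to handle the peak hour (assumed)
    avg_utilization = 0.20        # average fraction of peak capacity in use (assumed)
    cost_per_server_hour = 0.50   # blended cost of one server-hour, in dollars (assumed)

    # Owned infrastructure: every server is paid for every hour, busy or idle.
    owned_cost = peak_servers * HOURS_PER_YEAR * cost_per_server_hour

    # Pay-per-use cloud: on average, only the utilized fraction is billed.
    cloud_cost = peak_servers * avg_utilization * HOURS_PER_YEAR * cost_per_server_hour

    print(f"Provisioned for peak: ${owned_cost:,.0f} per year")
    print(f"Pay per use:          ${cloud_cost:,.0f} per year")
    print(f"Idle capacity avoided: {1 - avg_utilization:.0%}")

Under these assumptions, the pay-per-use model carries only the 20 percent of capacity that is actually being used, which is exactly the efficiency the first two bullets describe.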

How exactly will these advances in applications technology change the way we design and build the networks and facilities that will support the cloud? And, what will they do to our now possibly outdated plans to virtualize, consolidate, and improve our operating and energy efficiencies? Do we already need to start planning for the move of our facilities into the next generation of cloud computing?


Different Strokes for Different Folks

The first thing that should become evident over the next year or so is which type of cloud computing can succeed for each business model. Risks associated with the security and service delivery of cloud computing will quickly differentiate the operating environments acceptable for corporate enterprise computing from those for Internet search and networking. The risk tolerance defining the design basis of the networks and facilities for those two environments will differ as much as the cloud environments they support.

Internet search and networking businesses will thrive on flexible and inexpensive architectures like the Amazon Elastic Compute Cloud (EC2). This type of “public” cloud environment provides redundancy across a broad network of data centers and requires lower levels of redundancy and reliability in individual data centers. Because of that inter-regional network redundancy, the risks associated with the operation of individual servers are more tolerable, which allows environmental conditions to approach their extremes and power and cooling redundancy to be minimized. Yahoo’s chillerless designs and Google’s battery-on-board servers both help their data centers achieve PUEs of about 1.1 and are good examples of the efficiencies that can be achieved with “public” cloud architectures.

The enterprise environment, however, will prove to be less tolerant of security and service-delivery risks and will demand that cloud environments be restricted to “internal” clouds comprised of very secure private networks and individual facilities. And, although redundancies and efficiencies will be provided within the cloud, each data center will need to be more fault tolerant, and the reliability of power and cooling will remain a high priority for the operating environment. On the other hand, the cloud’s redundancy may allow these facilities to take advantage of component efficiency improvements, such as the newly developed “by-pass” UPS systems offered by Eaton and the now more commonly accepted data center air economizers, allowing enterprise spaces to approach PUEs similar to those achieved by search-engine data centers.
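For reference, PUE (power usage effectiveness) is The Green Grid’s metric of total facility power divided by the power delivered to the IT equipment, so the figures of about 1.1 cited above imply roughly 10 percent overhead for cooling, power distribution, and lighting. The sketch below simply works through that arithmetic; the kilowatt loads are illustrative assumptions, not measurements from Yahoo or Google.

    # PUE (power usage effectiveness) = total facility power / IT equipment power.
    # A PUE of 1.1 means only about 10% of the facility's power goes to overhead
    # (cooling, UPS and distribution losses, lighting) rather than to the servers.
    # The loads below are illustrative assumptions, not measured data.

    it_load_kw = 1000.0     # power drawn by servers, storage, and network gear
    overhead_kw = 100.0     # cooling, power distribution losses, lighting

    total_facility_kw = it_load_kw + overhead_kw
    pue = total_facility_kw / it_load_kw

    print(f"PUE = {total_facility_kw:.0f} kW / {it_load_kw:.0f} kW = {pue:.2f}")
    # By contrast, a conventional enterprise data center with a PUE near 2.0
    # spends roughly one watt on overhead for every watt of IT load.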

Today

Sun Microsystems manages much of its cloud operations out of the 1,000,000-square-foot SuperNAP data center complex in Las Vegas, where its high-performance servers are housed in high-density computing pods known as T-SCIFs (thermal separate compartments in facility). The racks are packed top to bottom with servers, creating a power load of up to 24 kilowatts per rack, all of it cooled by outside air and sophisticated airflow controls that minimize cooling costs year round (a rough estimate of the airflow such a rack requires appears at the end of the Path to the Cloud discussion below). The facility is destined to support the delivery of “complete, engineered, and integrated systems,” with an emphasis on the new Oracle stack that spans applications, middleware, database, and hardware.

Microsoft’s new $500 million data center in Chicago is one of the largest independent data centers ever built, spanning more than 700,000 square feet. It is also one of the most unusual: a garage-like lower level is optimized for 40-foot shipping containers packed with web servers, while a second story houses traditional raised-floor data center space. Containers packed with servers and, in some cases, the equipment to power and cool them occupy about a dozen parking spaces in the “container canyon.” Each row of containers plugs into a spine providing hookups on the lower level, with power distribution equipment on a mezzanine level, and with double-stacked containers, portable stairs can be wheeled in to provide access for maintenance. The design increases hardware utilization, reduces the use of resources like water and electricity, and reduces waste material, making the facility the next evolutionary step in Microsoft’s commitment to thoughtfully building out its cloud computing capacity and network infrastructure.

IBM recently announced the opening of a new data center designed to support cloud computing and help clients around the world operate smarter businesses, organizations, and cities. The new data center reduces technology infrastructure costs and complexity for clients while improving quality and speeding the deployment of services, using only half the energy required by a similar facility of its size. It will ultimately total 100,000 square feet at IBM’s Research Triangle Park (RTP) campus and is part of a $362 million investment by the corporation in the North Carolina site. IBM, which owns or operates more than 450 data centers worldwide, has engineered the facility to help its clients use new Internet technologies and services to meet the business challenges of an environment marked by an exponential rise in computational power, a proliferation of connected devices, and an imperative to manage energy costs.


Path to the Cloud

Among others, the U.S. government is making similar commitments to cloud computing in a very big way. After receiving a $2.4 billion increase in its 2009 budget to improve IT systems and services, the National Aeronautics and Space Administration (NASA) announced a halt to its New Enterprise Data Center (NEDC) plans, which called for data center consolidation, an upgrade of its wide-area fiber backbone, and the implementation of outsourced data center capabilities that the agency hoped would improve the efficiency and security of its data center services. NASA has decided to rethink its strategy to better reflect the philosophy of its new IT leadership as well as new federal requirements related to cloud computing, green IT, virtualization, and federal data center consolidation guidance.
NASA reexamined the NEDC acquisition strategy and concluded that it did not fully address future NASA enterprise requirements. Based on commercial best practices and lessons learned from other federal data center consolidations, NASA intends to create a consolidation plan that incorporates all data centers, systems, and applications. The new plan will include a data center architecture and a full enterprise assessment, allowing NASA to design an infrastructure strategy that addresses all business requirements while taking advantage of opportunities to reduce energy costs and exploit innovations like cloud computing. It will also allow for optimal data center consolidation and inform decisions about utilizing existing data center facilities, modernizing NASA’s own facilities, or outsourcing.
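For a rough sense of what the 24-kilowatt racks at the SuperNAP described earlier demand of their airflow controls, the sketch below applies the standard sensible-heat relation (heat removed = mass flow x specific heat x temperature rise). The 12 °C air temperature rise across the rack is an assumed design value chosen for illustration, not a published SuperNAP specification.

    # Rough airflow needed to air-cool a 24 kW rack, from the sensible-heat
    # relation Q = m_dot * cp * delta_T. The temperature rise across the rack
    # is an assumed design value, not a SuperNAP figure.

    rack_load_w = 24_000.0   # heat to remove, in watts
    delta_t_c = 12.0         # assumed air temperature rise across the rack, in deg C
    cp_air = 1005.0          # specific heat of air, J/(kg*K)
    rho_air = 1.2            # density of air, kg/m^3 (near sea level, about 20 deg C)

    mass_flow = rack_load_w / (cp_air * delta_t_c)   # kg/s of air
    volume_flow = mass_flow / rho_air                # m^3/s of air
    cfm = volume_flow * 2118.88                      # cubic feet per minute

    print(f"Airflow required: {volume_flow:.2f} m^3/s (about {cfm:,.0f} CFM) per rack")

Under these assumptions each rack needs on the order of 3,500 CFM of air, which is why pod-level containment and airflow control matter as much as the servers themselves at that density.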

Critical Facilities Round Table

In the first week of February 2010, the Critical Facilities Round Table (CFRT) supported the DOE Energy Star program and The Green Grid at conferences in San Jose. Several CFRT members helped found The Green Grid and now sit on its Board of Directors. And, many other CFRT members have contributed operating data to DOE’s server and data center Energy Star development programs.

CFRT is a non-profit organization based in Silicon Valley dedicated to the open sharing of information and solutions among our members, who are critical facilities owners and operators. Please visit our Web site at www.cfroundtable.org or contact us at 415-748-0515 for more information.