The outlook for 2012 looks quite cloudy. Cloud-based services became a more mainstream computing alternative in 2011 and are gathering momentum for both commercial and government organizations. The Open Data Center Alliance (ODCA), a consortium of major global organizations, is focused on standardizing cloud computing.

Nonetheless, we will still be building real data centers on terra firma, since it looks like we still need to put all the IT hardware somewhere. Moreover, earlier in 2011 Facebook broke the mold for everything in data center design with its Open Compute model: non-standard servers, racks, and even the building itself. And much to everyone's surprise, the ODCA formed an unlikely cooperative alliance with the Open Compute Project. And so, without any further musings, here are the official Hot Aisle Insight predictions for 2012.

 

  • The Hot Aisle will be Hotter – possibly 110°F or even higher. More enterprise data centers are finally adopting, and approaching, the 2008 ASHRAE TC 9.9 "recommended" upper temperature limit of 80.6°F for air intake to the IT equipment (i.e., the cold aisle), with bladeservers operating at 30–35°F Delta-Ts. The 2011 ASHRAE Thermal Guidelines and the new A1–A4 equipment classes will push this even further, with "allowable" intake temperatures of up to 113°F in the cold aisle (will that term become an oxymoron?). A quick worked example of the arithmetic follows this list.
  • The "Cloud" in the data center may become real, since some data centers will also allow the humidity to rise as they begin to use more outside air to improve cooling efficiency, or in some cases forgo mechanical cooling entirely, such as Yahoo's "Chicken Coop" design. The Hot Aisle may become more humid (or sometimes less) as the environmental envelope is stretched to the new extremes (8–90% RH), in line with the 2011 ASHRAE 9.9 stated goal of ending or minimizing the need for mechanical cooling (fashion outlook: safari jackets and perhaps matching rain hats will be the new uniform).
  • Colos will continue to absorb small and medium-size enterprise data center operations, as building, operating, and upgrading their own facilities becomes more expensive and less of a strategic advantage.
  • Solid State Storage will make more inroads as prices continue to fall and capacities rise. Seagate has already started shipping its enterprise-class "Pulsar.2" 800GB SSD in the existing hot-pluggable 2.5-in. form factor, drawing an average of only 5 watts, yet able to operate at up to 140°F. However, the spinning disk is not dead yet: capacities continue to increase while prices drop to remain competitive, and spinning disks will still represent a substantial portion of large-scale commodity storage deployments.
  • The EPO (Emergency Power Off) will die very slowly, even though the 2011 NEC code has eliminated it as a requirement. Local building inspectors and fire marshals will continue to require them for many years to come.
  • Data Center Infrastructure Management (DCIM) will be big this year, as momentum builds to monitor and improve energy efficiency. The major players, such as Emerson and Schneider, have expanded their offerings and acquired some of the smaller vendors. More vendors will join in, but interoperability will remain an ongoing integration problem, since building management systems speaking Modbus and IT gear speaking SNMP are still apples and oranges (a sketch of the normalization problem follows this list).
  • Mobile data and the delivery of movies from Netflix and HD video from YouTube, Facebook, and other social networks will ignite the internet-peering bandwidth billing battleground and may force the end of unlimited fixed-price internet bandwidth for the consumer (landline and mobile). It will also drive storage vendors' equipment sales to new heights, and compel network equipment manufacturers to deliver more speeds and feeds.
  • A High Fiber Diet, driven by the need for speed. Upgrading to 40Gb Ethernet (as well as 100Gb) will hasten the move to even more fiber in the data center, despite the efforts of the cabling industry to promote expensive and physically larger copper cabling (Cat 6A and Cat 7, as well as shielded versions). Besides their higher cost, these new copper cables are much larger in diameter than the older Cat 5e; if implemented at scale, they will become a huge physical choke point for airflow, since they require far larger cable trays. Nonetheless, the myth that fiber is more expensive than copper will continue to be perpetuated by the copper cabling manufacturers. As a result, Top-of-Rack or End-of-Row switching with fiber uplinks will become the preferred network architecture. Fiber vendors will need to do a better job of educating IT and data center decision makers on the differences in size and performance between fiber and copper cabling systems. Newer, higher-density fiber connectors, such as the increasingly popular MPO multi-strand modular cable systems, may help address the endless need for higher backbone speeds and accelerate the end of copper in the backbone. Attention IT equipment makers: if you offer lower-cost fiber interfaces in bladeservers, there will be more room (and budget) for computing equipment and less room consumed by costly, bulky copper cabling and patch panels.
  • Densities will continue to rise, and older cooling systems and airflow designs will become even more challenged as they try to keep up. Cooling designs will begin to diverge into two camps: liquid cooling is making inroads to meet ever-higher densities (ASHRAE released its Liquid Cooling Thermal Guidelines in September 2011, and IBM offers liquid cooling as an option on its latest "z" series mainframe), while lower-density build-outs may follow ASHRAE's recommendations for air-side economizers to avoid mechanical cooling. Cold aisle and hot aisle containment and other airflow management strategies will no longer be considered extreme, leading-edge methodologies, and close-coupled cooling and containment will become commonplace in new mid- to high-density designs.
  • The EPA's Energy Star Data Center programs will continue to expand to cover virtually all areas of the data center, as UPSs, network gear, and data storage arrays are added to the growing list of equipment covered by the program. More servers, as well as bladeservers, will be part of version 2 of the program, and data center infrastructure efficiency will come under further scrutiny, and possibly local regulation by building departments, now that the data center is included in ASHRAE 90.1.
  • The computing loads in data centers will become more dynamic instead of nearly flat, due to widespread virtualization and the upgrading of older IT equipment, such as commodity servers, to Energy Star rated products with much lower idle power states and mandatory power management. This will result in more "traveling hot spots" during peaks, or overcooling during lulls in computing load, especially in older data centers that were not designed to handle these changing load conditions.
  • Modular and containerized data centers may finally be considered more frequently, and perhaps deployed in larger numbers, as HP, IBM, and Dell continue to offer more variations. In addition, Eaton, Emerson, and Schneider, as well as many smaller specialty firms, are adding their own solutions for modular computing, as well as power and cooling building blocks.
  • The Green Grid will introduce PUE version 2 and even more metrics, such as the Carbon Usage Effectiveness (CUE) and Water Usage Effectiveness (WUE) metrics, with several more to follow (the definitions are sketched below). This should keep corporate sustainability departments busy, as well as supply the marketing departments' spin doctors with more PR fodder.
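
First, the promised hot aisle arithmetic. This is a minimal back-of-the-envelope sketch; the intake limits and Delta-Ts are the ASHRAE figures cited in the first prediction, and everything else is purely illustrative:

```python
# Back-of-the-envelope hot aisle temperature: exhaust = intake + Delta-T.
RECOMMENDED_INTAKE_F = 80.6    # 2008 ASHRAE TC 9.9 recommended upper limit
ALLOWABLE_A4_INTAKE_F = 113.0  # 2011 ASHRAE class A4 allowable upper limit

for intake in (RECOMMENDED_INTAKE_F, ALLOWABLE_A4_INTAKE_F):
    for delta_t in (30, 35):  # typical bladeserver Delta-T range cited above
        print(f"intake {intake}F + Delta-T {delta_t}F -> hot aisle {intake + delta_t}F")

# intake 80.6F + Delta-T 30F -> hot aisle 110.6F   (the "110°F or even higher")
# intake 113.0F + Delta-T 35F -> hot aisle 148.0F  (pack the safari jacket)
```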
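Second, to illustrate why BMS Modbus and IT SNMP data are "apples and oranges," here is a minimal sketch of the normalization layer a DCIM tool has to provide. All class, field, and OID names here are hypothetical; a real deployment would sit on top of actual Modbus and SNMP client libraries rather than these stubs:

```python
# Hypothetical sketch: normalizing facility (Modbus) and IT (SNMP) telemetry
# into one common reading format -- the integration chore at the heart of DCIM.
from dataclasses import dataclass

@dataclass
class Reading:
    source: str    # device identifier
    metric: str    # normalized metric name, e.g. "power_kw"
    value: float

def from_modbus_register(device: str, raw: int, scale: float) -> Reading:
    # BMS gear exposes anonymous 16-bit registers; what they mean and how to
    # scale them lives in vendor documentation, not in the protocol itself.
    return Reading(device, "power_kw", raw * scale)

def from_snmp_value(device: str, oid: str, watts: int) -> Reading:
    # IT gear exposes named OIDs from a vendor MIB; units still vary per vendor.
    if oid.endswith(".outputWatts"):  # hypothetical vendor OID suffix
        return Reading(device, "power_kw", watts / 1000.0)
    raise ValueError(f"unmapped OID: {oid}")

# Two very different wire formats, one normalized view:
print(from_modbus_register("pdu-3", raw=4213, scale=0.01))                 # 42.13 kW
print(from_snmp_value("ups-1", "1.3.6.1.4.1.99999.1.outputWatts", 41800)) # 41.8 kW
```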
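And finally, for readers keeping score at home, the Green Grid metrics are all simple ratios normalized by IT equipment energy. A minimal sketch of the arithmetic, with sample figures invented purely for illustration:

```python
# The Green Grid's efficiency ratios, all normalized by IT equipment energy.
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh       # dimensionless; 1.0 is the floor

def cue(total_co2_kg: float, it_kwh: float) -> float:
    return total_co2_kg / it_kwh             # kg CO2 per IT kWh

def wue(annual_water_liters: float, it_kwh: float) -> float:
    return annual_water_liters / it_kwh      # liters per IT kWh

# An invented year for a small data center:
it_kwh, facility_kwh = 4_000_000, 6_400_000
print(f"PUE = {pue(facility_kwh, it_kwh):.2f}")            # 1.60
print(f"CUE = {cue(3_200_000, it_kwh):.2f} kg CO2/kWh")    # 0.80
print(f"WUE = {wue(6_000_000, it_kwh):.2f} L/kWh")         # 1.50
```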

 

So stay tuned to see how these predictions pan out, and have a Green, low-carbon Holiday Season and a Happy Sustainable New Year!

Until then, best wishes for the coming year from Julius here at Hot Aisle Insight.