So successful and busy was this week’s Uptime Institute event, The Disrupted Data Center: Cloud, Cost, Capacity and Carbon, that it is only now that I am flying home above the clouds that I have some time to reflect on the event and report back. It is more than fair to say that the 2011 event represented a major step forward from 2010 in terms of organization and coordination, which resulted in greater networking opportunities for registrants, a better exhibit environment for vendors, and more receptive audiences for speakers. Oddly enough, the improved organization tended to make Uptime Institute and 451 analysts and experts more available throughout the conference.
Martin McCarthy, executive chairman of the Uptime Institute, noted that the 2011 symposium was the first organized since the 451 Group purchased Uptime. He credited the improved environment to the greater staff and organizational resources made possible by the scale of the combined, larger firm.
McCarthy expects the 451 Group to continue to grow, both through acquisition, as evidenced by its recent purchases of ChangeWave Research and TheInfoPro, and organically, by expanding existing programs and starting new ones. He points to the Uptime Institute’s expanded overseas efforts as a good example.
Many participants I spoke to thought the conference lacked controversy. Perhaps by the standards of years past it did. Ken Brill did not take to the stage to proclaim an economic meltdown, nor did the event feature a controversial speaker like Amory Lovins, who memorably challenged basic precepts of HVAC design in data centers at a previous event. No one suggested that the U.S. should be more like China, if only for a day.
Instead, the conference hewed tightly to its theme, and speaker after speaker contributed something solid to chew on. Highlights included Schneider’s Neil Rasmussen’s day-one presentation, “Six reasons why modular power and cooling plants will make traditional data center designs obsolete.” During this talk, he argued that standardization is responsible for many of the benefits and that modularity is the key enabler of standardization.
The Uptime Institute itself revealed much-remarked-upon survey data in the opening session. Typical of this approach was a joint presentation by Emerson’s Peter Panfil and James Kennedy of RagingWire, during which they discussed how to build a data center to support cloud operations, using RagingWire as a case in point. Many of the suggestions applied to normal enterprise facilities, but both speakers emphasized the importance of monitoring. Panfil was more than slightly embarrassed at having been introduced as a legend, but it is certainly fair to say that he has been steadily working at making a difference in this industry for a long time.
Panfil also shared the results of an Emerson Data Center User Group (DCUG) survey with me and spoke about some surprising findings in a newly released Ponemon Institute survey. Panfil wants Ponemon to gather more data to learn why facilities report that recovering from a partial outage takes longer than recovering from a total outage, even though total outages are more costly.
AOL’s Mike Manos managed to work the word “donkey” into an analogy about change in the data center. And Christian Belady (Microsoft) and eBay’s Dean Nelson ignited a debate over the distinction between containers and modules, during which Belady called out a panel for what he called “incorrect” use of technical terms. “And you,” he told panel moderator Matt Stansberry, “are the worst. Stop it.” Belady noted that the term container denotes a standard size and form factor, whereas some modular products are already available in different form factors.
The symposium also produced other sharp disagreements, notably in executive session luncheons sponsored by Panduit, Power Analytics, and Jones Lang Lasalle. These limited-attendance luncheons featured moderated, room-wide discussions that came closer to real debates than the 7x24-style pass-the-mic sessions they superficially resembled. One participant suggested that facilities teams should disregard IT input when planning data center capacity, because IT projections are so often wrong as to be useless and are unlikely to improve, given the rate of change in IT.
The very suggestion riled Belady, who turned the challenge back on the speaker, noting that building data centers without a planned purpose obscures the premise that facilities exist to support IT, and that the industry should be working to eliminate data centers. “Why do we need them?” he asked.
Former PG&E executive Mark Bramfitt made a roomful of people uncomfortable on the last day of the conference when he asked why data centers did not participate in utility energy conservation programs, why they chose dirty diesel over cleaner sources for backup, and what they would do when they faced real-time electricity rates of more than $10/kWh. I seem to recall real-time prices in California reaching more than $100 during the state’s energy crisis in 2003, when rolling blackouts were part of the state’s economic landscape. Rasmussen, Uptime’s Ken Brill, Cisco’s Rob Aldrich, Deb Grove, Megawatt Consulting’s K.C. Mares, Deutsche Bank’s Andrew Stokes, and others became embroiled in a debate that moderator Bramfitt likened to a nuclear reactor. Early in the discussion, Stokes said that rising energy costs could cause data centers to flee the state.
Bramfitt managed to turn the conversation away from politics and back to technology with a series of questions that elicited comments about how enterprises might pursue efficiencies and lower costs by using the cloud to shift processing load to lower-cost utilities. It wasn’t long before politics re-entered the conversation, when Mares noted that such low-cost locations might include Beijing, with its abundant coal power and dirty air.
Expect the frank discussions to continue as others relay their experiences at the Symposium. Uptime has already released Matt Stansberry’s video interview with Greenpeace representative Matt Cook. In that interview, Cook explained Greenpeace’s strategy for engaging high-profile enterprises. He said that Greenpeace wanted Facebook and companies like it to pressure utilities in the same way they pressure other vendors to reduce prices. The Greenpeace representatives also appeared on a panel late Thursday, after most of the participants had left the venue.
Uptime’s Pitt Turner estimated total registrations at 1,200, down from the previous year. But he explained that the 2011 numbers included far fewer single-day, expo-only participants and more attendees who stayed multiple days or for the whole event.
Similarly, the number of exhibitors seemed down compared to 2010, and the show floor featured several areas that could have accommodated more booths. But exhibitors seemed pleased with the quantity and quality of the traffic. Positive word of mouth should help Uptime organizers fill additional exhibit space in 2012.
The exhibits included many software offerings designed to help data center managers cope with the increasing complexity of their facilities. Well-known names like Emerson and Power Analytics talked about DCIM offerings, but a number of smaller companies, such as Sentilla, nLyte, Agilent, and iE, also displayed new or updated products.
Schneider also announced that it had jumped into the modular market, showing scale models of its new power and cooling units. Normally an announcement like a new container product from such an influential vendor would appear higher in a story, but not in the wake of this very successful event. More coverage of some of these events and all the products will appear soon, elsewhere on this website.
Photos courtesy Laura Kudritzki Photography