DatacenterDynamics's very successful New York conference on March 3rd included a late-in-the-day panel session, "An Industry Inflection Point, Looking Toward 2010," that proved very interesting and has already been mentioned in several blogs. During the session, Citicorp's Jack Glass explained that PUE is the ratio of two electrical measurements, overall facility power and power delivered to the IT equipment, both given in kWh. The way to improve PUE, he said, is to address and improve the cooling system. In a separate development, IBM announced a significant breakthrough that could lead to the increased use of silicon rather than copper and light rather than electricity, and that would dramatically reduce the cooling challenge facing the industry.
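The calculation Glass described is simple enough to sketch in a few lines. The meter readings below are hypothetical, assuming both measurements cover the same period:

```python
# Minimal sketch of the PUE calculation described above, using
# hypothetical meter readings (both in kWh over the same period).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: a facility drawing 1,800 kWh while its IT load uses 1,000 kWh
print(round(pue(1800.0, 1000.0), 2))  # 1.8
```

A perfect facility would score 1.0 (every kWh goes to IT equipment); the gap above 1.0 is mostly cooling and power distribution overhead, which is why Glass points at the cooling system as the place to improve.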
The very next day at Future Facilities' annual users meeting, Liebert's Fred Stack said the very same thing, and so did Hassan Moezzi, director, Future Facilities. That all three men took the same perspective on PUE suggests that the industry is finally getting a grip on what PUE means in a data center. This was inevitable as PUE became part of the vocabulary.
In truth, the whole industry has become more adept at developing new solutions to familiar problems. Many vendors at the DatacenterDynamics conference displayed products intended to help identify power and cooling problems. nLyte claimed that its software had helped one IT organization determine that one of the main server types in its data center used more energy than others in the same class from other manufacturers. Improved modeling tools are just one result of the rush to extract full utility from all the IT equipment in a data center while staying within the facility's power, cooling, and size constraints. The Future Facilities event included presentations highlighting how organizations such as Dell, Cisco, Alcatel-Lucent, Oracle, Cundall, and Baycare Health Systems made use of 6SigmaDC to achieve cooling goals, including energy savings, avoiding hot spots, and increased utilization.
Moezzi sounded like Lord Kelvin when we talked about the strengths and weaknesses of modeling. First, he has a British accent. More importantly, he's emphatic that the data center industry has been approaching data centers in the wrong way. He says that products like the 6Sigma software give data center owners the ability to see the effect of changes in the data center and proposed solutions to problems. Of course, he's a strong proponent of the Future Facilities product, but Moezzi believes that data centers cannot be successful long term without using tools like CFD, modeling, and inventory control to manage a facility and judge the effect of changes within it. He suggests that this represents a big change in how things are done in many data centers.
During the DatacenterDynamics panel discussion, Goldman Sachs Vice President David Schirmacher pointed out the illogic of building hard, long-term data center infrastructure to house and protect servers and other assets having short useful lives. Schirmacher believes that manufacturers will eventually address this disconnect by building servers with a certain amount of inherent resilience. As an example of how this might be accomplished, he suggested that on-board batteries will become a standard server feature. Google has already implemented this strategy in one of its newer facilities, so it's fair to say that we are at the beginning of this trend.
HP's introduction of a 20-ft modular POD is another development that changes the relationship between facility assets and IT assets. As a container is really little more than a tin can housing servers, these PODs and similar products represent a move away from the permanent and expensive infrastructure we've come to associate with data centers.
Finally, in a little-noticed development, IBM reported that its scientists had taken a significant step toward using light, rather than electrical signals carried over copper wires, as the medium for communication between tiny silicon circuits. “This invention brings the vision of on-chip optical interconnections much closer to reality,” said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. “With optical communications embedded into the processor chips, the prospect of building power-efficient computer systems with performance at the Exaflop level might not be a very distant future.” In addition to reducing the energy required per IT operation, the change to silicon circuits would reduce the use of rare and expensive raw materials in computer circuits. The report of this work, entitled “Reinventing Germanium Avalanche Photodetector for Nanophotonic On-chip Optical Interconnects,” by Solomon Assefa, Fengnian Xia, and Yurii Vlasov of IBM’s T.J. Watson Research Center in Yorktown Heights, N.Y., appears in the March 2010 issue of the scientific journal Nature, and it marks an important advancement in changing the way computer chips talk to each other.