Some of the world's largest data center operators are using their influence to propose radical changes in server design, according to a report from 451 Research.
The Open Compute Project (OCP), a user-led organization whose members include Facebook, Goldman Sachs and NTT Data, wants to see the core components of system design, including processor, motherboard and networking interconnects, "disaggregated" so they can be upgraded independently. The scheme is in marked contrast to the current industry trend of converged systems combining servers, storage and networking into a single system.
Convergence has gained some traction with customers in recent years, due to the relative ease and speed of deployment that pre-integrated systems enable. But there are trade-offs in terms of cost and vendor lock-in. For the largest datacenters, buying systems at a more granular component layer promises more flexibility, higher density and significant cost reductions.
"Current monolithic designs can't easily be customized to fit specific workload requirements or to maximize efficiency," said John Abbott, distinguished analyst at 451 Research. "And customers can't, for instance, take advantage of the latest high performance CPU without having to upgrade surrounding technologies that are still operating well."
Just how much of an opportunity or threat such developments pose to the traditional systems and storage vendors will be discussed in depth during a session at The 451 Group's upcoming Hosting and Cloud Transformation Summit in London on April 10th: Converged IT Infrastructure – Adoption and Impact.
451 Research's report follows the OCP's recent unveiling of two key projects to kick-start disaggregation: low-latency interconnects using silicon photonics for linking components at both the motherboard and the rack layer; and a new common slot architecture that should enable fully vendor-neutral motherboards to remain in use through multiple processor generations. Chip giant Intel has contributed its silicon photonics technology, and the Taiwanese systems maker Quanta has built a prototype to prove out the concept.
"It's a radical step, and a more granular level of standardization than the big system vendors have ever quite managed — or perhaps wanted — to implement on their own," said Abbott. "And it's already opening the door to a new set of system suppliers more accustomed to building systems to order and within a tight budget: the original design manufacturers (ODMs)."
Large datacenters could benefit from deploying their CPUs, I/O, memory and storage in separate racks, enabling upgrades to take place independently, eliminating performance bottlenecks and improving such operational aspects as reliability, utilization, footprint and energy efficiency. And over time, smaller customers could see similar benefits. But there's plenty of work to be done: standards must replace the current open hardware specifications, and these must then be married seamlessly with modular, interoperable and stable open software stacks, tying the disaggregated components back together again through systems management products.