Though the tenor of the discussion was cordial, the undercurrent was somewhat tense as the various panelists tiptoed around the real question of whose server was the most efficient. SeaMicro conceded that its device is really meant to be a web server and can’t handle all of the tasks that HP’s blades or SuperMicro’s servers can. On the other hand, both HP and SuperMicro agreed that their servers are nowhere near as efficient, from a power-utilization perspective, as SeaMicro’s. The question is, “What’s really behind the curtain?” How does one analyze and understand, from an objective perspective, which is the most efficient server?
As I’ve qualified myself in prior blog postings, I am not by any means an engineer, chip designer, or specialist in the technical aspects of servers. In this instance, as in others, I’m relying on research (noted in the following bibliography) to provide insight to those of you who share an interest in this topic, to create a dialogue, and to gather more information.
In addition to collecting a number of articles, I was fortunate enough to speak with Karl Freund, Vice President of Marketing for Calxeda, one of the leading manufacturers of microservers, who provided a treasure trove of information and additional research on the microserver market.
I was amazed to hear from Karl that only about 30% of a server’s entire power budget is consumed by the processor. The rest of the power is drawn by the other chips in the set (I/O, memory, networking) and everything else (e.g., fans), excluding the storage drives. This raises a question, which Calxeda, SeaMicro, NVIDIA, Huawei, and Tilera, among others, have started to answer: how do we redesign the integrated circuit to mitigate this power loss and improve computing efficiency?
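To make that 30% figure concrete, here is a minimal back-of-the-envelope sketch in Python. The total wattage and the split among the non-CPU components are assumptions invented purely for illustration, not measurements from any vendor.

```python
# Illustrative only: a hypothetical server power budget using the ~30%
# CPU share cited above. All other shares and the 400 W total are
# assumed for the example, not real measurements.
total_watts = 400.0

budget = {
    "cpu": 0.30 * total_watts,                 # ~30% per the figure above
    "memory_io_network": 0.45 * total_watts,   # assumed share of other chips
    "fans_and_overhead": 0.25 * total_watts,   # assumed share
}

for component, watts in budget.items():
    print(f"{component}: {watts:.0f} W")

# Even a dramatically better CPU leaves ~70% of the budget untouched --
# which is why SoC vendors target the rest of the chipset.
non_cpu = total_watts - budget["cpu"]
print(f"power outside the CPU: {non_cpu:.0f} W ({non_cpu / total_watts:.0%})")
```

The takeaway is that improving the processor alone attacks less than a third of the problem, which is what motivates the SoC designs discussed below.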
To be sure, there are several divergent schools of thought and action on this topic, and the supporters of each are equally zealous. The goal of reducing power while increasing computing capacity is the same, but the path to get there highlights the struggle between system-on-a-chip (SoC) and very-large-scale integration (VLSI) designs. It would be great to have a simple equation that compares traditional server technology to microserver technology in an apples-to-apples fashion and arrives at a single unit of efficiency, the way we do for data centers (PUE, CUE, WUE). According to Evercore Partners in its May 9, 2012 white paper entitled The Core Guide to Low Power Servers, there is good reason to believe that “CPUs are in their third technology phase with the focus shifted to power consumption and dollar efficient performance – MIPS/$/Watt.” This metric is useful when comparing blade to blade or microserver to microserver, but not when comparing blade to microserver, because these two types of server are designed with different tasks in mind.
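The MIPS/$/Watt idea from the Evercore paper can be sketched as a simple calculation. Both servers below, and all of their numbers, are hypothetical; the point is only how the metric normalizes raw performance by price and power within a single server class.

```python
# A sketch of the MIPS/$/Watt metric. The two blades and their figures
# are hypothetical, chosen only to show how the metric behaves.
def mips_per_dollar_per_watt(mips: float, price_usd: float, watts: float) -> float:
    """Performance normalized by both acquisition cost and power draw."""
    return mips / (price_usd * watts)

# Hypothetical blade A: faster in raw MIPS, but pricier and hungrier.
blade_a = mips_per_dollar_per_watt(mips=50_000, price_usd=4_000, watts=350)
# Hypothetical blade B: slower in raw MIPS, but cheaper and cooler.
blade_b = mips_per_dollar_per_watt(mips=42_000, price_usd=3_000, watts=280)

print(f"blade A: {blade_a:.4f} MIPS/$/W")
print(f"blade B: {blade_b:.4f} MIPS/$/W")
# Blade B wins on this metric despite its lower raw throughput.
```

Note that the comparison is only meaningful blade-to-blade; running the same arithmetic on a microserver node says nothing about whether it can actually shoulder a blade’s workload.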
Until the technology behind microservers changes, the best use of these devices is for scaled-out workloads: web serving, media serving, or even running applications like Hadoop or NoSQL databases like Cassandra. For the most part, this limitation is the result of ARM’s 32-bit architecture, but eventually, as 64-bit architecture is integrated into ARM designs, things may change drastically. Part of the problem is that enabling the ARM architecture to handle the computing power required to run a relational database like IBM’s or Oracle’s would require significant changes that may not be cost-effective, or demanded by the end users who will ultimately create the market. However, as Mr. Freund so eloquently expressed it, there are really two aspects to this equation: enablement and technological adoption.
The interesting thing I’ve learned about Calxeda, and what differentiates its product from the rest, is the concept of a hyper-efficient SoC that reduces power utilization and increases network speed by eliminating the need for additional external networking gear. Calxeda has integrated the network and I/O fabric on the chip, along with the memory (DIMMs) in the nodes. The nodes within the servers sitting in the racks then connect to other racks.
According to Oppenheimer in its recent white paper, Cloudy With a Chance of ARM, adoption of microservers is being affected by a number of key factors including:
1) Changing workload goals based on maximizing how quickly data can be accessed rather than how quickly data can be computed;
2) The changing nature of workloads, which are becoming more dynamic, as exemplified by Web 2.0 companies where high-volume transactions are business drivers. These companies are being forced to design and build internal data centers that can quickly and efficiently scale capacity;
3) Despite the fact that today’s microservers are inherently less powerful than traditional servers, they are also much more efficient, delivering a 60-70% savings in total cost of ownership;
4) Based on the adoption of Cloud, the microserver market will grow from less than 1% of the x86 market today to 21% in 2016.
Clearly, with the advent and growth of the Cloud to support mobile devices, including smartphones and tablets, whose applications and content are stored or distributed through a microserver mesh, the adoption trend will accelerate, especially considering that, per Intel estimates, a new server is required for every 600 new smartphones or every 122 new tablets. Quantifying this market suggests that by 2015 it will range between $2B and $3B. But more exciting than the statistics is the amount of innovation happening in different designs and architectures, which may solve the limitations currently inherent in the componentry of microservers.
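The Intel rule of thumb cited above (one new server per 600 new smartphones, or per 122 new tablets) lends itself to a quick back-of-the-envelope calculation. The shipment volumes in the example are hypothetical round numbers, not forecasts.

```python
# Back-of-the-envelope server demand from the Intel estimates cited
# above: one new server per 600 smartphones, or per 122 tablets.
import math

SMARTPHONES_PER_SERVER = 600
TABLETS_PER_SERVER = 122

def servers_needed(new_smartphones: int, new_tablets: int) -> int:
    """Servers implied by a batch of new device shipments (rounded up)."""
    return (math.ceil(new_smartphones / SMARTPHONES_PER_SERVER)
            + math.ceil(new_tablets / TABLETS_PER_SERVER))

# Hypothetical quarter: 30M new smartphones and 5M new tablets shipped.
print(servers_needed(30_000_000, 5_000_000))  # → 90984
```

Even with invented shipment figures, the scale of the implied server demand makes it easy to see why analysts expect microserver adoption to grow quickly.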