The Right Stuff: Predictions for 2014
So here we are at the end of another year — by the time you read this you may have already completed your 2014 plans and budget requests (as well as your fallback budgets and plans). The drive for greater efficiency (energy, operational, and staffing) is now ingrained into almost every business plan and conversation, so you might as well get used to it.
Before writing this column, I was browsing through Netflix and wound up watching the 1980s movie “The Right Stuff,” described as “The story of the bravery of the original U.S. Mercury 7 astronauts as they faced the lack of real knowledge when men first went into space in 1961-3.” So, 50 years later, what does this have to do with wrapping up 2013 and looking forward?
It struck me that unlike the last 50 years of well-funded space programs, the new economic reality is that even NASA can no longer afford its own spaceships and must outsource its trips to space. In a close parallel, the days of the traditional enterprise corporate data center (which, up until recently, represented the majority of data centers) are numbered. Major corporations and even financial institutions want to concentrate their key resources and capital on their core business. Instead of building and operating their own new facilities, they are migrating toward co-lo and cloud services. Which brings me to my next question: will we finally have a meaningful cloud interoperability standard? Not to mention a better cloud security model.
What is even more interesting is that, in more and more cases, data centers are no longer really “data” centers. Netflix (which has no data centers of its own and runs on Amazon), along with other on-demand streaming and downloadable media services, stores digital bits and bytes (really gigabytes, terabytes, and petabytes on a scale never previously seen), yet that content is not what we normally think of as “data.” However, conversely (perhaps even perversely), the byproducts of our viewing, listening, and especially our social, searching, and shopping patterns have become “big data.” Moreover, the related analytics peering into our collective (and individual) psyche have become key elements in virtually all major businesses’ marketing plans.
So what is the data center and IT roadmap going forward, as we are driven to meet the challenge of satisfying this exponentially growing, self-feeding demand for ever-higher-performance IT hardware and virtually “unlimited” network bandwidth to support content delivery to “all things internet” and mobile-based everything?
It was nearly impossible to open my email this past year without seeing a message announcing a new, bigger “mega-scale” or hyper-scale data center being built in almost every part of the world. And in most cases, either directly or indirectly, each one was touted as “green,” “lowest PUE,” or powered by sustainable energy sources. The race to the perfect PUE of 1.0 has reached the point where some are beginning to post claims with three decimal places (1.0xx) for new builds. So I think the new facilities have gotten the efficiency message (some more than others). The older facilities will continue to operate mostly as they are; some will try to optimize their systems (within the constraints of existing building and cooling systems) via fine tuning, perhaps guided by a DCIM system, or simply by using best (or at least better) practices, such as improving airflow management via the more obvious, simple, and hopefully now well-known methods (blanking plates, sealing unneeded openings, and perhaps even add-on containment kits).
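For readers keeping score at home, PUE is simply total facility energy divided by IT equipment energy, so those three-decimal-place claims are easy to check. A minimal sketch of the arithmetic, using hypothetical load figures of my own choosing:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power,
    rounded to the three decimal places now being claimed."""
    return round(total_facility_kw / it_load_kw, 3)

# Illustrative (hypothetical) numbers, both serving a 1,000 kW IT load:
print(pue(1100.0, 1000.0))  # 1.1   -- a decent legacy facility
print(pue(1035.0, 1000.0))  # 1.035 -- a new free-cooling hyper-scale build
```

The closer the overhead (cooling, power distribution, lighting) gets to zero, the closer the ratio gets to that perfect 1.0.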
The onus is now on IT hardware and software. The hardware manufacturers have gotten the message already and have been addressing it in several ways. The first, and most direct, starts with the CPU. While Moore’s Law continues to prove out with ever-rising performance, it comes at a price: more energy and heat. The CPU manufacturers are well aware of this and have been improving the performance-per-watt ratio for the last several generations of chips. In fact, it is the mobile device market (laptop, tablet, and especially the smartphone) that has done more for chip energy efficiency (to deliver better battery runtime) than the data center market (which, up until recently, did not care that much about energy usage). However, they are near the physical limits of existing technology, unless and until they can make the quantum leap to the next-gen chip material (i.e., moving past silicon-based CPUs). 2014 may also be the year when low-power processors (ARM, Atom, etc.) start making significant inroads in hyper-scale data centers.
As far as software goes, DCIM should help improve energy efficiency for both the physical data center facility and the compute functions, by tracking and helping to correlate and optimize the effective energy usage of IT equipment and applications. Beyond that, virtualization, coupled with SDE, a.k.a. software-defined everything (my own definition), should help deliver more effective hardware utilization.
Will the coming year see more focus on zero-carbon data centers that are able to utilize IT-based waste heat instead of just dumping it into the air or water? Even at a PUE of 1.001, each watt of IT becomes waste heat. Such facilities do exist to one degree or another (where the heat may be used to warm a nearby building), but they are still rare exceptions. In the quest for über-efficient data centers, will there be greater adoption of, and a move toward, the direct current data center (a DC-DC?)?
Or perhaps the use of bathtub-like immersion liquid cooling, as a result of efforts pushed by the HPC and research crowd? Will on-site fuel cells fed by natural gas (or methane or biogas) be the new normal? Or will we go even one step further: just recently, the Microsoft Global Foundation Services team announced that it is exploring putting fuel cells inside the data center whitespace and showed a prototype of a fuel cell-powered server rack (fuel cell and servers in the same rack). I would like to be there as they try to get a local building and fire department’s approval to pipe methane to each rack.
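The waste-heat point is easy to quantify: at a near-perfect PUE, essentially every watt delivered to the IT equipment still ends up as heat that must go somewhere. A quick back-of-the-envelope calculation, using a hypothetical 1 MW IT load of my own choosing:

```python
# At a near-perfect PUE, virtually all IT power becomes waste heat.
# Hypothetical 1 MW IT load running continuously for one year:
it_load_mw = 1.0             # IT load in megawatts (illustrative figure)
hours_per_year = 24 * 365    # 8,760 hours
waste_heat_mwh = it_load_mw * hours_per_year
print(waste_heat_mwh)  # 8760.0 MWh of heat per year, reused or simply rejected
```

That is a district-heating-scale quantity of energy being thrown away at most sites today.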
Even for more conventional facilities that buy local utility power, start thinking sustainable source energy. It may no longer be enough to have a super-low PUE à la Google and Facebook. Now, instead of just looking to build in an area with low-cost power and perhaps a favorable climate to maximize free-cooling opportunities, you will need to know how your power was generated (a.k.a. source energy). If you purchase coal-based power, you may be in the crosshairs of Greenpeace, as Facebook and others found. Will new builds soon be forced to have acres and acres of PV solar fields? While other industries and even ordinary residential consumers of energy may purchase less sustainable power with no anti-social stigma, it seems that data centers will remain the target of Greenpeace and other similar groups who, for some reason, believe that data centers are inherently evil.
On the networking front, copper has reached the end of the line as a primary backbone design. Copper-based backbones will begin to decline rapidly, as improved compute and storage performance further drives the need for increased network bandwidth delivered over longer distances. Network backbones will soon be all fiber, with copper used only for the last few meters to connect servers to top-of-rack and end-of-row switches (which may soon be fiber as well). The significant size difference between fiber and copper is also a factor: cable trays in data centers will have more available capacity with fiber instead of overflowing with far larger and heavier copper cabling.
Look toward a “SiPh” future (Intel’s new name for silicon photonics) for data centers. Photonics will begin making inroads into some leading-edge HPC designs and then may be part of the next mainstream networking standard. Beyond the existing 40 to 100 Gbit fiber networking standards, earlier this year Corning, in conjunction with Intel, announced the MXC connector, which provides up to 64-fiber connectivity to deliver up to 1.6 terabits per second, at lengths up to 300 meters, far beyond anything that copper can hope to deliver. We have moved from individual servers to blade servers and now toward “rackscale” computing, coupled with SDE virtualizing everything in the data center, which will also begin to reshape the physical design of the data center.
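The MXC arithmetic works out neatly. A quick sketch, where the per-fiber lane rate is my own inference from the announced totals (64 fibers and 1.6 Tbit/s aggregate), not a figure from the announcement itself:

```python
# Back-of-the-envelope check of the announced MXC connector figures:
# 64 fibers carrying the quoted 1.6 Tbit/s aggregate.
fibers = 64
gbit_per_fiber = 1600 / fibers          # implied per-fiber rate in Gbit/s
aggregate_tbit = fibers * gbit_per_fiber / 1000
print(gbit_per_fiber, aggregate_tbit)   # 25.0 Gbit/s per fiber, 1.6 Tbit/s total
```

A 25 Gbit/s lane rate per fiber is well beyond what copper can sustain at 300 meters, which is the whole point.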
And while I don’t normally pontificate on end-point devices such as tablets and smartphones, I thought a special mention should go to Apple’s iBeacon technology, which is now embedded in iOS 7. It allows mobile positional tracking accurate to within a few feet, even inside a structure such as a shopping mall. This gives marketing and sales groups far more granular consumer information to add to their big data profiles of consumer behavior and preferences (of course, this just reminds me of when Scott McNealy, then head of Sun Microsystems, said, “Get over it … You have zero privacy anyway,” back in 1999). However, to put it all in context, Google has long been doing this with your online searches and by analyzing the content of your Gmail accounts.
Of course, this is music to the ears of Larry Ellison, CEO of Oracle, who later echoed McNealy’s sentiment by saying, “The privacy you’re concerned about is largely an illusion,” since Oracle makes real-time analytical software to help sort out all the additional consumer metadata that will be coming in from mobile device tracking. This added bit of tracking technology may make the “No Such Agency’s” efforts look mild in comparison, from a privacy advocate’s point of view, but it will certainly fuel the need for more “big data” data centers.
Notably, given more recent events, I feel I would be remiss if I did not give my “Horse Built by Committee Award” for “How not to build a web portal (and back-end database)” to the senior project managers who directed those who designed and built the Healthcare.gov website (note: this is not a political or social health care comment, just a personal observation). Hopefully, by the time this is published, it will all be working properly.
THE BOTTOM LINE
It is getting harder to polish up my old crystal ball, so I guess I will need to try Google Glass next year (and check to see if they have added a “predict future” button, even if it is in beta).
So what is “The Right Stuff” for 2014? Flexibility and economy. Many organizations will need to review their presumed need for direct data center ownership and operation, as the cost of building, operating, and upgrading their own facilities becomes more expensive and less of a strategic advantage. So, like NASA, they will need to outsource to one degree or another, and may be driven to migrate toward co-lo services, cloud services, or a hybrid of both.
For new data centers, think sustainable resource optimization. While we may not see a data center powered by wood pellets, sustainability will have a greater impact on site selection. Decisions will be based on taking advantage of favorable climatic conditions, as well as on fuel types and sustainable power, either generated onsite (e.g., fuel cells) or supplied by local generation (solar, where possible) via power purchase agreements. This will not just be for Apple-, eBay-, and Microsoft-scale sites; even some enterprises and co-location providers (and their customers) will begin to see the social, political, and business value of this. It will be coupled with better overall energy efficiency (not just facility PUE), perhaps aided by DCIM, or simply by better cooperation between IT and facilities, and, if all the planets were to align, perhaps even with senior management.
Climate change and catastrophic weather: the 100-year event is the new normal. Organizations will need to revise their traditional disaster recovery (cold site) strategies and move toward multiple geographically diverse sites with active-active replication, delivering business continuity rather than disaster recovery. This will become the de facto alternative, and it has become more technically and economically feasible due to virtualization, coupled with the lower cost of bandwidth and cloud services.
So as I log off for 2013 (and hopefully will be able to sign on again in 2014), stay tuned to see how these predictions pan out, and have a Low-Carbon, Green Holiday Season and a Happy Sustainable New Year!