Before writing this column, I looked back at an article I wrote five years ago with my predictions for the beginning of the new decade. It focused on some of the new developments related to the physical facility and how it might be impacted by the then recently introduced cloud computing. It looks like that vision of the future is here in full force. The issues and definitions of the “data center,” as well as who wants to own or operate one (especially for the enterprise), have become more complex as the related titles in the C-suite have expanded (CIO, CTO, CSO, etc.), along with CFOs, who tend to see colos and cloud services as “utility computing”: financially strategic operating expenses rather than depreciating fixed assets tied to a data center facility. Needless to say, the concept of an application bound to a dedicated server is now nearly obsolete thanks to virtualization, a term and technology that now encompasses the data center itself, hence the virtual data center, or “VDC.”

Moving even further, we are now entering the age of software defined everything (SDE), which began as software defined networking (SDN) and later grew into the software defined data center (SDDC), wherein IT hardware is no longer purpose built, its role (server, storage, network) is no longer even clearly delineated, and the whole approach is perceived as the new panacea. In some cases this was motivated by large customers wanting lower priced, generic hardware sourced directly from contract manufacturers, bypassing the major OEMs (e.g., “no-frills” hardware à la Open Compute). In other cases, it was developed to be the next generation of universal “bare-metal” hardware, designed to run open source software architected to overcome the limitations and bottlenecks of moving bits between the “dedicated” and siloed categories of devices.

MEMOS TO ALL DEPARTMENTS FOR 2015

Open secrets: It is no secret that the computer hardware sold by major OEMs is essentially all made by Asian contract manufacturers. Until recently, one of the main reasons many enterprise organizations purchased the major branded products was the sales, logistical, and technical support offered by the brand. However, while the Internet giants such as Google and Facebook have long had the scale and purchasing power to have custom low cost hardware built, many smaller (and even some large) organizations did not want to directly source, self-support, and use “generic” hardware. Moving forward, much as Linux developed and became accepted, the open SDE movement has begun to take hold, and generic hardware platforms for server, storage, and networking will become the underpinnings of cloud service providers and even some enterprises.

So it looks like 2015 will be the year that the generic hardware “genie” comes out of the bottle. For example, in September Alibaba went public on the New York Stock Exchange and will use the capital to expand its global market presence and build a direct gateway for low cost Asian hardware manufacturers to reach potential U.S. customers.

Seeing the handwriting on the wall, Juniper announced “vMX,” the industry’s first carrier-grade virtual router, which will ship on a USB stick in 2015. The code is intended for organizations that want to custom build their hardware platform or use open-source “carrier-grade” hardware, but still want to run Juniper software. Juniper is not the first to offer its software without its hardware: Cisco already offers its Cloud Services Router (CSR 1000V), which can run on Amazon’s “Bring Your Own License” (BYOL) service offering (at rates ranging from 7 to 42 cents per hour). Even the stalwart of the industry, IBM, shifted its business model (again) and sold off its low-end and midrange server lines to Lenovo in 2014.

At the micro level: Intel announced that in 2015 it will offer a CPU that can “morph,” or at least be dynamically reconfigured on the fly for various tasks. It is pairing a field-programmable gate array (FPGA) with its high-end Xeon E5 server chip; the two will work in concert and be integrated into a single package that is socket-compatible with standard Xeon chips.
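The appeal is that one socket can be re-targeted per workload. The sketch below is purely illustrative; the function and bitstream names are hypothetical stand-ins, not Intel’s API. It simply shows the dispatch pattern such a hybrid part enables: load a task-specific FPGA image when one exists, otherwise fall back to the general-purpose cores.

```python
# Illustrative only: a toy dispatcher for a hypothetical CPU+FPGA hybrid socket.
# None of these names correspond to a real Intel API.

AVAILABLE_BITSTREAMS = {"compress", "encrypt"}  # FPGA images we pretend are on hand

def run_on_fpga(task, payload):
    # Stand-in for reprogramming the FPGA and offloading the work to it.
    print(f"[FPGA] reconfigured with '{task}' bitstream, processing {len(payload)} bytes")

def run_on_cpu(task, payload):
    # Stand-in for the general-purpose Xeon fallback path.
    print(f"[CPU]  running '{task}' in software, processing {len(payload)} bytes")

def dispatch(task, payload):
    """Send a task to the FPGA if a matching bitstream exists, else to the CPU."""
    if task in AVAILABLE_BITSTREAMS:
        run_on_fpga(task, payload)
    else:
        run_on_cpu(task, payload)

if __name__ == "__main__":
    dispatch("compress", b"x" * 4096)   # offloaded to the FPGA
    dispatch("transcode", b"x" * 4096)  # no bitstream, falls back to the cores
```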

Who buys this stuff: While previously most Intel CPUs were identical and purchased by the major server manufacturers (e.g., HP, IBM, Dell), a few years ago Intel quietly began offering customized chips to hyper-scale data center operators, such as Facebook, Google, and Amazon. These Internet giants represent huge business volumes as direct customers who wanted to customize and optimize every single piece of their infrastructure stacks (including server chips) for their specific needs, because at their scale even minor performance gains represent significant energy savings.

YOU CAN NEVER HAVE ENOUGH DEPARTMENT

Bandwidth: While in the last few years Netflix and YouTube have been the bandwidth hogs, Amazon recently announced that its Prime video offering will be upgraded to 4K Ultra HD content, which may not improve the quality of the programming, but will surely drive up bandwidth demands in the home and at Internet peering points (as well as add fuel to the Net Neutrality debate).
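To put rough numbers on it, the back-of-the-envelope below uses commonly cited streaming bitrates (roughly 5 Mbps for HD and 25 Mbps for 4K Ultra HD; these are generic assumptions, not Amazon’s figures) to show why the peering points will feel the upgrade.

```python
# Back-of-the-envelope: aggregate demand if one million households stream concurrently.
# Bitrates are rough, commonly cited streaming recommendations, not provider-specific figures.
HD_MBPS = 5          # ~1080p HD stream
UHD_MBPS = 25        # ~4K Ultra HD stream
CONCURRENT_STREAMS = 1_000_000

for label, mbps in [("HD", HD_MBPS), ("4K UHD", UHD_MBPS)]:
    total_tbps = mbps * CONCURRENT_STREAMS / 1_000_000  # Mbps -> Tbps
    print(f"{label:7s}: {total_tbps:.0f} Tbps aggregate for 1M concurrent streams")
# 4K roughly quintuples the load: 5 Tbps vs. 25 Tbps in this example.
```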

Really Big Data: Moreover, there is the promise (or threat) posed by the growing flood of connected devices known as the Internet of Things (IoT), such as the Nest Learning Thermostat, whose maker was purchased by Google in 2014. Google was willing to pay 3.2 billion dollars for Nest, perhaps to learn even more about you at home, even when you are not searching for anything. This will further drive Google’s need to collect, connect, analyze, and store Big Data.

Of course, we no longer speak of Big Data in terms of terabytes (which are now for laptops and home NAS devices); we are now dealing at a scale never previously seen and hardly imagined five years ago: petabytes, exabytes, zettabytes, and yottabytes, all to be stored in the lowest cost and presumably most energy efficient manner possible. This ever growing storage (and naming convention) challenge has become the “whateverbyte problem,” a term coined by Oracle big data strategist Paul Sonderegger, who referred to it as a symptom of a larger, even more important business issue.
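For a sense of the jump between rungs on that ladder (using decimal SI prefixes, where each step is a factor of 1,000), the quick calculation below expresses each unit in terms of the 1 TB drive found in a typical laptop.

```python
# Each SI prefix step is a factor of 1,000; figures assume decimal (not binary) units.
UNITS_IN_BYTES = {
    "terabyte (TB)":  10**12,
    "petabyte (PB)":  10**15,
    "exabyte (EB)":   10**18,
    "zettabyte (ZB)": 10**21,
    "yottabyte (YB)": 10**24,
}
LAPTOP_DRIVE = 10**12  # a 1 TB laptop or home NAS drive

for name, size in UNITS_IN_BYTES.items():
    print(f"1 {name:14s} = {size // LAPTOP_DRIVE:>15,} one-terabyte drives")
# A single zettabyte already equals a billion 1 TB drives.
```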

Bigger and greener is still better: Again this year it was nearly impossible to open my email without seeing a message announcing a new bigger “mega-scale” or hyper-scale data center being built in almost every part of the world. And in most cases, either directly or indirectly, each one was touted as green, “lowest PUE,” or powered by sustainable energy sources.

Supersize me: The first building of Facebook’s newest site, located in Altoona, IA, which opened in November, is described as three times the size of the nearby Walmart distribution center (note to Walmart: go see it; you may want to consider offering cloud computing services, since it has turned into a price-driven volume business).

Show me the money: Of course, before committing one billion dollars to build the project in Iowa (instead of Nebraska), Facebook negotiated $18 million in state tax credits, as well as a 20-year property tax abatement from Altoona. Needless to say, Altoona, like other Facebook sites, will have an uber-low PUE based on free cooling, so it was designed to be green to the CFO as well as the CTO.
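As a refresher, PUE is simply total facility energy divided by the energy delivered to the IT equipment, which is why free cooling pays off so directly. The numbers below are hypothetical, merely in the neighborhood of what hyperscale operators publish, not Facebook’s actual figures.

```python
# PUE = total facility energy / IT equipment energy (hypothetical numbers).
def pue(it_kw, cooling_kw, power_dist_kw, lighting_misc_kw):
    total = it_kw + cooling_kw + power_dist_kw + lighting_misc_kw
    return total / it_kw

legacy_site = pue(it_kw=10_000, cooling_kw=6_000, power_dist_kw=1_500, lighting_misc_kw=500)
free_cooled = pue(it_kw=10_000, cooling_kw=400,   power_dist_kw=500,   lighting_misc_kw=100)

print(f"Legacy mechanically cooled site: PUE ~ {legacy_site:.2f}")  # ~1.80
print(f"Free-cooled hyperscale site:     PUE ~ {free_cooled:.2f}")  # ~1.10
```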

Be careful what you say: Just a note to colo marketing departments: in 2015, only those who are willing to pay the Uptime Institute to certify their data center will be able to use the word “Tier” in the same sentence as a data center, since Uptime won its case against the TIA regarding the use of the “T” word. Of course, that raises the question of whether purchasers of cloud computing services will even care about the “T” word as long as they get five nines of “data tone” (a throwback to land-line dial tone, which still exists but is essentially obsolete in an all-digital telecom infrastructure).
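For the record, “five nines” (99.999% availability) is a stingier budget than it sounds; the quick arithmetic below converts availability targets into allowable downtime per year.

```python
# Allowable downtime per year for common availability targets.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.3%}): ~{downtime_min:.1f} minutes of downtime per year")
# Five nines works out to roughly 5.3 minutes per year.
```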

Don’t count me out quite yet: While Moore’s Law continues to prove out with ever rising performance, it comes at a price: more energy and heat. The CPU manufacturers are well aware of this and have been steadily improving the performance-per-watt ratio over the last several generations of chips. However, they are nearing the physical limits of existing technology, unless and until they can make the quantum leap to a “next gen” chip material (i.e., moving past silicon-based CPUs). So silicon will still be here for many more years; however, 2015 may also be the year when the first generation of graphene-based processors moves closer to being available to early adopters.
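The scale of that compounding is easy to underestimate. A quick calculation, assuming the classic doubling of transistor density roughly every two years (a simplification of Moore’s observation, not a measured figure), shows why the per-watt side of the equation matters so much.

```python
# Compounding under a simplified Moore's Law: density doubles roughly every two years.
YEARS = 10
DOUBLING_PERIOD_YEARS = 2

density_gain = 2 ** (YEARS / DOUBLING_PERIOD_YEARS)
print(f"Over {YEARS} years: ~{density_gain:.0f}x the transistors in the same area")
# ~32x more transistors; without matching performance-per-watt gains,
# the extra switching would show up directly as energy and heat.
```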

Keep IT clean: Supercomputers apparently love warm baths, and liquid cooling is gaining ground with the HPC crowd (see http://bit.ly/1mt7ZOQ). Moreover, helium-filled hard drives designed for both increased capacity and energy efficiency made their debut this past year, and since they are sealed, they may also help drive adoption of immersion and other fluid-based cooling in 2015.

Overlooking the obvious: We have long been accustomed to saving files by clicking on an icon that resembles a 3.5 in. floppy disk (which many younger users have never actually seen). I predict that new software will soon replace that obsolete graphic with an image of a cloud, especially since many mobile devices, and even classic Microsoft Office, are now cloud-based in the form of Office 365.

THE BOTTOM LINE

So what is “The Inflection Point” for 2015? In differential calculus, the point on a curve where the curvature changes sign, switching from bending one way to bending the other (picture the middle of an S-curve, a sine wave, or a roller coaster), is called the inflection point. I believe that we are at an inflection point in the curve. To put it in perspective, I was watching an episode of the TV show Mad Men, which is set at a 1960s advertising agency. The partners decide to install a “computer” in a glass house right in the middle of their offices to show their clients that they had the latest tools to optimize their advertising strategy (even though the senior partners had no idea how to use the system).
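For the mathematically inclined, here is a minimal worked example of that definition, using the sine curve mentioned above.

```latex
% Inflection points of f(x) = sin(x): where the second derivative changes sign.
f(x) = \sin x, \qquad f''(x) = -\sin x, \qquad f''(n\pi) = 0 \text{ with a sign change at each } x = n\pi .
% At x = n\pi the curve flips from concave down to concave up (or vice versa),
% even though the slope there is at its steepest rather than zero.
```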

Now, 50 years later, senior management clearly understands the value of collecting and analyzing data, and Google and Facebook, as well as other search and social media giants, are the new Mad Men. However, the trend to own and house the hardware seems to have reached its inflection point. Even some major businesses are abandoning their own data centers in favor of colos and cloud-based services, or at least are no longer rushing to build new facilities of their own to meet demand.

Since the traditional enterprise data center as we know it seems to be giving way to colos and clouds, what will the data center morph into over the next five years as we begin the second half of this decade?

So as we enter the age of industrial computing, I have one word to describe its data centers: “hyper-scale.” The facility itself is also at an inflection point (regardless of who owns and operates it). The old adage that “form follows function” will continue to hold and will drive the design of the facility to meet the new requirements as bare-metal hardware and SDE become the new computing paradigm.

Therefore, in sharp contrast to the brightly lit, heavily cooled “glass house” showcase that was the hallmark of the data centers of five decades ago, the data centers of the future will be dark, vast, warehouse-style buildings with minimal or no mechanical cooling at all (a goal declared by ASHRAE in 2011 as a preamble to the release of the third edition of its Thermal Guidelines and the introduction of the expanded allowable ranges). Or, in some cases, they may have liquid cooled IT hardware, with the waste heat harvested for other purposes.

Moreover, if delivering low-cost utility computing is the goal, TCO is the driving strategy, and bandwidth has now become a low-cost commodity, data centers will be built wherever the climate favors free cooling, power is cheapest, and the tax incentives offer the greatest benefit.
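The siting math is straightforward. The sketch below compares the annual energy bill for the same IT load at two hypothetical sites; the PUE values and power prices are illustrative assumptions, not quotes for any real location.

```python
# Annual energy cost for the same IT load at two hypothetical sites.
# PUE values and $/kWh prices are illustrative assumptions, not real quotes.
HOURS_PER_YEAR = 8_760
IT_LOAD_MW = 10  # constant IT load at both sites

def annual_energy_cost(pue, price_per_kwh):
    facility_kw = IT_LOAD_MW * 1_000 * pue
    return facility_kw * HOURS_PER_YEAR * price_per_kwh

temperate_cheap_power = annual_energy_cost(pue=1.10, price_per_kwh=0.04)
hot_expensive_power   = annual_energy_cost(pue=1.60, price_per_kwh=0.10)

print(f"Free-cooling site, cheap power:   ${temperate_cheap_power:,.0f} per year")
print(f"Mechanical cooling, costly power: ${hot_expensive_power:,.0f} per year")
# Roughly $3.9M vs. $14M per year for the same IT load, before any tax incentives.
```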

And now, as I sign off for 2014 (and hopefully will not have my credit cards hacked), stay tuned to see how these predictions pan out, and have a secure shopping, Ultra HD streaming, Big Data, Green Holiday Season and a Happy Sustainable New Year!