In my last column I discussed the net zero energy data center. To some it may seem like an oxymoron or the punch line to a joke, but to one degree or another it may become a necessity in the not-too-distant future. Meanwhile, energy efficiency, and the energy usage of data centers in general, continues to be an ongoing, high-profile target for environmental and sustainability groups, as well as the government. This can be seen in the ongoing saga of ASHRAE’s pending 90.4 Data Center Energy Efficiency standard, which is now in its fourth and final revision before it is finalized this summer.

By now, we are all fairly used to seeing ultra-low PUEs of 1.1 or less from the various web hyperscalers such as Google and Facebook, which use minimal or no electrical redundancy, custom-built “no frills” servers, and a variety of ingenious “free cooling” designs. They have proved that they can still deliver high overall computing system availability by building redundancy and failover into the IT software architecture, rather than into the power and cooling infrastructure of the physical data center. Of course, for the enterprise, a non-redundant “N” physical infrastructure is a non-starter.
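
As a quick refresher, PUE (power usage effectiveness) is simply the ratio of total facility energy to the energy consumed by the IT equipment itself; the figures below are purely illustrative, not measurements from any particular operator.

\[
\mathrm{PUE} = \frac{E_{\mathrm{total\ facility}}}{E_{\mathrm{IT\ equipment}}},
\qquad \text{e.g.} \quad \frac{1.1\ \mathrm{MW}}{1.0\ \mathrm{MW}} = 1.1
\]

In other words, a facility running at a PUE of 1.1 spends only 10% of its total power on cooling, power conversion, and lighting overhead, whereas a PUE of 1.5 means 50% overhead on top of the IT load.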

While most enterprises would prefer the “belt and suspenders” 2(N+1) everything-fully-redundant data center, very few are willing or able to support the cost to build this category of facility. Moreover, even those organizations with deep pockets, which have the capital to build their own dedicated facility, seem to have decided that they can make better use of that capital in their core business. As a result, while the enterprise data center is not dead, clearly fewer organizations are building their own new facilities.

The winds of change have been blowing on enterprise computing for a while. The shift in computing strategy is driven by the cost effectiveness of outsourcing the data center facility to colocation providers, and even IT hardware ownership is being offset by cloud services. The rate of change, once a breeze, has now turned into a tsunami. Even such conservative stalwarts as the major financial institutions and the federal government have not just accepted it, they have embraced it. This can be seen in the White House announcement this past March that, under the new Data Center Optimization Initiative (DCOI), all federal data centers must reduce their PUE to under 1.5 by September 2018 (unless they are already scheduled to be shut down as part of the Federal Data Center Consolidation Initiative [FDCCI]). It further recommended that federal agencies consider relocating to qualified colocation facilities with a PUE under 1.5, and also contemplate using cloud-based resources.

Besides the capital investment required to build a dedicated enterprise data center, there is also the recurring expense and burden of supporting it with qualified personnel, which further encourages the swing toward outsourcing. This has resulted in a building frenzy of massive colocation and cloud data center clusters, with no immediate end in sight.

What has become clear is that, from a financial perspective, scale matters. It is very difficult for even well-funded enterprise organizations, which in the past may have built only one or two small-to-midsize (5,000 to 20,000 sq ft) data centers every few years, to match the economies of scale of the colocation giants, which over the past 12 months have announced 1- and 2-million-sq-ft campuses involving hundreds of acres and megawatts.

So what about energy efficiency? Is it hype or myth that colo and cloud are more energy efficient than traditional enterprise facilities? The truth is “maybe.” There is no “secret sauce”; they simply utilize solid designs based on now well-known best practices, maximizing the use of free cooling and coupling it with state-of-the-art, energy-efficient cooling equipment and UPS units.

Nonetheless, even a well-designed enterprise facility may not be enough. In many cases it is no longer cost effective for an enterprise to build and operate its own 10,000-sq-ft facility, even with a PUE of less than 1.5, especially when compared to some of the colocation costs, which may also reflect aggressive tax advantages in areas such as “Data Center Alley” in Ashburn, VA. Furthermore, colocation and cloud providers are in a highly competitive business which is built on the financial aspects of the energy and operational efficiency of the physical facility.

Moreover, unlike most colocation facilities, whose enterprise customers still prefer to stay within the ASHRAE recommended environmental envelope, cloud service providers are free to use any type of IT hardware and are not limited to the temperature and humidity ranges an enterprise data center would insist on, as long as they can reliably deliver the cloud service. This gives them energy efficiency and cost advantages similar to those of the hyperscale search and social media giants.

The most recent development is that colocation and cloud providers have begun to form a somewhat symbiotic relationship, despite being inherent competitors. This has manifested itself in the form of the “connected campus,” which offers the enterprise an optimized hybrid of both. The cloud provider leases space within the campus, and a fiber ring provides direct connectivity to the other colocation tenants on the campus. This significantly reduces latency and carrier-related communication costs. It benefits the colocation tenants and the cloud service providers, and of course ultimately the colocation provider, which receives revenue from both. Even Microsoft, which has the resources and plenty of capital, has recently begun acquiring a significant amount of wholesale space from colocation providers in order to keep up with the demand for its cloud services.

Even Intel Corporation, which already supplies the majority of server CPUs, has decided to double down on hyperscale cloud data centers, announcing in April that it would restructure itself “to speed Intel’s transition to a company that powers the cloud and billions of smart, connected computing devices.” Intel’s largest customers used to be the major IT OEMs, which all used the same standard CPUs in their servers.

Over the past several years, however, Intel began to sell more CPUs to internet hyperscalers and, in response to demand, created custom versions of its processors for eBay, Google, Facebook, and others, which use them in their own custom-built, low-cost, bare-bones servers. In addition, more of the enterprise crowd is watching the evolving Open Compute Project (OCP) “standards” and the continuing metamorphosis of software-defined everything more closely, to see if they can save on hardware and software costs.

Moreover, Emerson Electric is in the final stages of divesting itself of the Emerson Network Power division due to declining sales, even as colocation and cloud services continue to expand. The division was originally acquired as Liebert and is still known by that name to most enterprise customers. It is being spun off this September as Vertiv, a new public company. Even the well-known Liebert name was abandoned, perhaps to remove any association with the past, as the conceptual image of the data center evolves, along with who builds, owns, and manages it.

The Bottom Line

This message is not lost on the financial community and the brokers who help fund these investments, which have now crossed into billion-dollar territory. In late May, Digital Realty joined the S&P 500, underscoring the economic importance of the data center industry, which has become a critical element of our digital economy.

For better or worse, the handwriting seems to be on the virtual walls. While some organizations will continue to operate their existing data centers as long as they remain viable, the CEO, and especially the CFO, will take a lot of convincing before approving another dedicated data center when they can get competitive bids for a dedicated hall built to suit by dozens of major colocation providers. The large colocation providers also enjoy lower build-out costs, since they purchase hundreds or thousands of UPS units, generators, switchgear, CRACs, and other major components. In addition, they have refined and standardized their designs and building processes to the point that they can build a site in six to 12 months, instead of the two to three years a typical enterprise data center project takes to complete.

Although the energy efficiency of enterprise data centers has improved and is now a consideration, it is usually not the top priority. In contrast, the colocation and cloud providers are like the airlines, for which fuel consumption per passenger mile is a major component of operating expense, and, like the airlines, they are constantly watching and trying to optimize their efficiency. Collectively, this makes it harder and more expensive for the enterprise data center to compete with the colo-cloud hybrid paradigm, both economically and on energy efficiency.
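
To put that efficiency gap in rough dollar terms, consider a back-of-the-envelope comparison; the 1-MW load, the two PUE figures, and the utility rate are assumptions chosen for the sake of the arithmetic, not data from any of the providers discussed here. For a steady 1-MW IT load running all 8,760 hours of the year:

\[
E_{\mathrm{total}} = \mathrm{PUE} \times E_{\mathrm{IT}} \quad\Rightarrow\quad
1.8 \times 8{,}760\ \mathrm{MWh} = 15{,}768\ \mathrm{MWh/yr}, \qquad
1.3 \times 8{,}760\ \mathrm{MWh} = 11{,}388\ \mathrm{MWh/yr}
\]

At an assumed $0.08 per kWh, the 4,380-MWh gap between a PUE of 1.8 and a PUE of 1.3 works out to roughly $350,000 per year in energy cost alone, which is why the providers watch PUE the way the airlines watch fuel burn.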

The glory days of the enterprise data center are waning, yet its destiny is not yet sealed. Some organizations may always maintain a core facility, but the financial and performance demands of the digital economy will continue to favor the shift toward the colo-cloud hybrid paradigm. Still, the winds of change can be fickle, and perhaps the changing nature of the enterprise organization itself will force the enterprise data center to adapt to changes in hardware, OEM offerings, OCP, or whatever comes next. If it is to survive, however, it will need to change far more quickly than it has over the 50 years since its inception as the “glass house” of the mainframe days.