Divergent Trends In The Data Center Industry
I was staring into the official Hot Aisle Insight crystal ball after asking to see what was ahead for 2016 when an image of baseball legend Yogi Berra gradually appeared. Slightly baffled and hoping for a different result, I thought the voice recognition software needed an upgrade and bravely applied a free Windows 10 upgrade to the mystical orb, hoping for a clearer view of the future. When the globe finally finished rebooting (for the fifth time), it still showed Yogi, this time wearing an Apple Watch. This made me wonder if I had a bad upgrade experience, since the upgrade to Windows 10 was free, or if perhaps it was trying to convey a prediction that Apple was going to acquire Microsoft this year. I pondered these and various other possibilities for a while, when I suddenly remembered one of Berra’s famous quotes, “When you come to a fork in the road, take it.” Well, it seems that the data center industry is doing just that, and on many levels.
IS BIGGER BETTER OR JUST BIGGER?
Again this year it was nearly impossible to open my email without seeing a message announcing a new, bigger “mega-scale” or hyper-scale data center being built in almost every part of the world. On one hand, the colocation giants are betting big, with recent announcements by RagingWire of one million sq ft built across a 42-acre campus in the Dallas-Fort Worth Metroplex area of Texas, while Digital Realty plans to build up to two million sq ft of new space in “Data Center Alley” in northern Virginia. It made me think of paraphrasing a quote often attributed to Senator Everett Dirksen, “A billion here, a billion there, pretty soon you’re talking real money.” The data center analogy being, “A million sq ft here, a million sq ft there, pretty soon you’re talking about a real data center.” On the other end of the spectrum, industry rumors are afloat that players like CenturyLink may be trying to sell their data center assets to recover equity, while still managing and operating the sites. And even more telling: could Verizon consider divesting its data center assets as well?
THE RE-IMAGINED DATA CENTER
On the technology front there is discussion of hyper-scale and even a pilot project for a grid-connected data center. The theory is that the U.S. electric grid is so reliable that there may be no need for a UPS, backup generators, and all the related switchgear, reducing cost and complexity as well as improving energy efficiency. Conversely, Bloom Boxes are blooming in some areas as relatively clean local power generation using natural gas, which Bloom Energy claims is reliable enough to avoid the need for backup generators and possibly even utility power. So when is a data center not a data center, but just a large industrial building?
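As a sanity check on that theory, it helps to translate reliability figures into expected downtime. Here is a minimal back-of-envelope sketch in Python; the availability numbers are illustrative assumptions (a bare utility feed is often ballparked near "three nines," a fully backed-up facility near "five nines"), not measured data:

```python
# Back-of-envelope: expected downtime per year at a given availability.
# The availability figures below are illustrative assumptions only.
HOURS_PER_YEAR = 8766  # average year length in hours, including leap years

def annual_downtime_hours(availability: float) -> float:
    """Expected hours of downtime per year at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

for label, avail in [
    ("Bare utility feed (~3 nines)", 0.999),
    ("Grid + UPS/generators (~5 nines)", 0.99999),
]:
    print(f"{label}: {annual_downtime_hours(avail):.2f} h/yr")
```

The gap between roughly 8.8 hours and 5 minutes of expected annual downtime is exactly what the backup gear buys, which is why the "the grid is good enough" argument only works for workloads that can ride out, or fail over around, those hours.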
In addition, there are other puzzle pieces on the IT side that make this not just foreseeable, but feasible, in the form of local energy storage (LES) in standard IT hardware as backup in lieu of a central data center UPS. Google has long used standard sealed lead-acid gel cells onboard its low-cost, open-chassis custom servers. However, one drawback of lead-acid batteries is that their service life falls sharply at higher temperatures. Lithium-ion (Li-Ion) batteries tolerate a wider temperature range and offer higher energy density, but were expensive; prices have been coming down as they have become a common commodity in portable power tools. Microsoft revealed its design for commodity-style hot-pluggable IT power supplies at the March Open Compute Project (OCP) U.S. Summit. It is similar in size and shape to today’s common swappable server power supplies, but it also contains a small Li-Ion battery. This LES design could take a Burger King “have it your way” approach with branded OEM servers, and it also addresses the ongoing AC vs. DC data center argument, since it can be configured to operate at 208, 240, or 277 volts AC or 380 volts DC.
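To see why a small in-chassis battery can plausibly stand in for a central UPS, a quick ride-through calculation is enough. A minimal sketch, where the pack capacity and server load are hypothetical round numbers of my own choosing, not figures from any vendor's specification:

```python
# Ride-through time for a small in-chassis Li-Ion pack.
# The 20 Wh capacity and 400 W load below are hypothetical examples.
def ride_through_seconds(battery_wh: float, server_load_w: float) -> float:
    """Seconds a pack of battery_wh watt-hours can carry a load of server_load_w watts."""
    return battery_wh / server_load_w * 3600.0

# A 20 Wh pack carrying a 400 W server:
print(f"{ride_through_seconds(20, 400):.0f} s")  # 180 s
```

Even a laptop-sized pack yields minutes of ride-through, which is far longer than a standby generator typically needs to start and stabilize, so the central UPS room (and its conversion losses) becomes optional.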
Climate change and El Niño may also impact the future design of data centers. According to the U.S. National Oceanic and Atmospheric Administration (NOAA) weather center as of mid-November, “There is an approximately 95% chance that El Niño will continue through Northern Hemisphere winter 2015-16, gradually weakening through spring 2016…. Outlooks generally favor below-average temperatures and above-median precipitation across the southern tier of the United States, and above-average temperatures and below-median precipitation over the northern tier of the United States.”
So it seems we may continue to face droughts, floods, blizzards, storms, and anything else that Mother Nature decides to unleash in the foreseeable future. In the new age of climate change, with 100-year events becoming more commonplace, it has become clear that there is virtually no place to put a data center that is totally immune to natural risks (yes, I know that there are some underground data centers, which may minimize some of the risks, but they are still dependent on external communications networks to connect to the rest of the world). So virtualized redundancy coupled with geographic diversity will become the norm.
As I noted in my September column, “Dehydrating the Data Center,” droughts have caused water usage by data centers to be more closely scrutinized, and conventional evaporative water-cooled chillers may be next on the target list, especially in California. Desperate times call for desperate measures, and Nautilus has announced that it will offer a fleet of floating data centers built on repurposed barges, cooled by seawater, in the San Francisco Bay area. Droughts may also spur increased interest in liquid cooling, which ironically has the potential to minimize or eliminate the use of water for evaporative heat rejection while supporting much higher IT power density with lower overall cooling system energy (no barge necessary).
BACK TO THE FUTURE
Hey, I just can’t pass up this opportunity. This year, October 21 marked a celebration of the 1989 movie “Back to the Future Part II,” based on the date shown on the time control panel of the time-traveling car (the now-extinct DeLorean). I thought about the film’s many predictions in its portrayal of daily life in 2015, one of which was flying cars. While there are no flying cars yet, there are lots of drones, and the FAA is still trying to control them and prevent them from endangering aircraft and just about everyone and everything else. And if written FAA rules don’t work, perhaps government-mandated software will, yet one more item to add to the list of IoT devices generating more network traffic and data center processing and storage demands.
Nonetheless, Google’s driverless car seems to be making solid progress. And while Google may have the technology and back-end computing power to potentially control all vehicle traffic, it is not yet in a position to make cars, trucks, and other vehicles. Somewhat conversely, in a November article in Businessweek, GM (still currently known as General Motors, not Google Motors, yet) demonstrated its own “driverless” vehicle. GM’s homegrown control system for its self-driving car is called “Super Cruise,” and the first version is expected to become publicly available in 2017. Although it will not be fully autonomous, GM foresees a time in the not-too-distant future when people will not own cars; they may just summon them via smartphone app (a sort of driverless Uber). The article noted that “if GM stays with its current car selling model, it will go out of business.” In fact, Mark Reuss, GM chief of product development, who was demonstrating, was quoted as saying, “Yep, we’re done.”
Unlike Google’s small, almost jellybean-like vehicle, the Super Cruise system was installed and demonstrated in a classic symbol of Detroit heavy metal: a big Cadillac. The article described this endeavor as GM’s “most ambitious technological foray since the automatic transmission,” which is a somewhat sad indictment of the future of Detroit’s cars, driverless or otherwise.
So what will be the impact on the data center and network traffic in the near future (not to mention real vehicles with people in them)? In retrospect, in my predictions for the impact of IoT on the network last year, I was worried about the effect of millions of Nest thermostats after Google purchased Nest for $3.2 billion. Now I need to look out for super-cruising Cadillacs before I cross the street.
In another back-to-the-future moment, the New York Times, a bastion and icon of print, has taken a quantum leap forward to embrace the future. In early November it distributed approximately a million free Google Cardboard virtual reality (VR) viewers, which were included with the home delivery edition of the Sunday NY Times (digital subscribers received promo codes via email to redeem for a free viewer). The viewer was inscribed with “Experience the future of the news at nytimes.com/VR” and also carried the GE logo. The NY Times has started producing content for its own site, nytimes.com/VR, as well as securing commercial sponsorships from GE and others. The required NYT VR app (Apple and Android) was a free download and set a download record for the NY Times. Ironically, only three years ago the NY Times openly criticized the data center industry as being a wasteful energy hog. In light of its current push to have subscribers “Experience the future of the news at nytimes.com/VR,” it makes me wonder what the annual energy use and PUE of their VR data center is.
Of course, exposing a million or more potential viewers in one weekend to free VR may help drive VR adoption. 3D-TV, which was supposed to be the next big thing a few years ago, seems to have fizzled. Cardboard VR was introduced by Google in June of 2014, but, like 3D-TV content, it is not that widespread. However, low/no-cost cardboard VR using existing smartphones may be the tipping point (being free helps). It will also drive up network traffic. While new hyper-scale data center buildouts are booming, network (wired and wireless) infrastructure has also undergone exponential expansion in capacity and energy use. Needless to say, this network expansion is necessary to meet consumer demand, but so far it seems to be staying under the radar of data center energy watchers.
PUSHED TO THE EDGE
All this will just further drive the need for speed to deliver the massive consumer demand for streaming video at ever higher resolution to both fixed and mobile screens. This means data centers also need to start moving closer to the edge to deliver more 4K Ultra HD content to the new crop of 4K TVs being sold in volume this year now that prices have dropped. This may not improve the quality of the content, but it will surely drive up bandwidth demands.
With the “cable” and mobile service providers all competing to deliver enough IP bandwidth to provide streaming HD video to homes with multiple devices, and users watching multiple screens simultaneously, 25 to 50 Mbps is now the bottom end of the home market offerings. However, marketing departments are telling customers they need 300 to 500 Mbps for a typical family of four (not including a pet, which may soon wear an IoT device) so they will not have to endure the dreaded “slow internet.” Seriously, does the home really need 500 Mbps of bandwidth? What does that mean for data centers? They are beginning to be driven further to the edge (hopefully not over) in order to minimize latency.
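A quick sum of concurrent streams puts those marketing numbers in perspective. A minimal sketch, using commonly cited approximate per-stream rates (roughly 5 Mbps for HD and 25 Mbps for 4K) as assumptions:

```python
# Rough aggregate bandwidth for a streaming household.
# Per-stream rates are approximate, commonly cited figures, not guarantees.
STREAM_MBPS = {"hd": 5, "4k": 25}

def household_demand_mbps(streams):
    """Sum peak demand for concurrent streams, e.g. {"hd": 2, "4k": 2}."""
    return sum(STREAM_MBPS[kind] * count for kind, count in streams.items())

# A family of four all streaming at once, two in HD and two in 4K:
print(household_demand_mbps({"hd": 2, "4k": 2}), "Mbps")  # 60 Mbps
```

Even an aggressive all-4K household of four lands around 100 Mbps, which is well short of a 500 Mbps tier and supports the skepticism above.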
THE BOTTOM LINE
So with all these forks in the road, what’s on the plate for 2016? I believe we have begun to enter the age where cloud computing and the data center are viewed like a sausage factory by the public, and perhaps in some cases even by CIOs. Many people may like sausage, but they really don’t want to know how it’s made.
Doing more with less is the new normal for 2016 and the foreseeable future for enterprise IT departments, even though the economy seems to be improving. This now seems to be embedded in the culture because, over the last few years, everyone was driven to improve not just the PUE of the physical facility, but to get a better bang for the buck (and watt) from IT hardware and software as well. Delivering low-cost utility computing is the goal and TCO is the driving strategy, since bandwidth has now become a low-cost commodity. The CFO and CTO will mandate that new sites be built wherever the climate favors free cooling, power is cheapest, or localized demand exists.
While super-sized data centers are springing up almost everywhere faster than you can say “state tax incentives,” the underlying business model for major vendors of primary infrastructure may not look as bright as it would seem. In April of this past year, IDC predicted a 750% increase by 2019 and stated, “IoT will emerge as the leading driver of new compute/storage deployment at the edge” and “the growing importance of analytics in IoT services will ensure that hyperscale datacenters are a major component of most IoT service offerings by 2019.” Nonetheless, as another example of diverging trends, despite this prediction that massive demand will presumably drive growth of hyper-efficient, super-sized facilities, it does not seem to bode that well for some traditional infrastructure equipment vendors.
Only a few months after IDC’s April prediction of expected growth, industry stalwart Emerson’s Network Power division (aka Liebert) was “repositioned” by Emerson Electric as data centers have begun to evolve. According to a June 30, 2015, WSJ article titled “Emerson Electric to Spin Off Its Network Power Business,” “Sales from Emerson’s network-power unit, which also includes power conversion gear sold to the telecommunications industry, fell 28% from 2011 to 2014.” Additionally, the “... move reflects challenges for suppliers of equipment and services to giant computer server centers powering the Internet.” Moreover, it has also impacted the strategic direction of some top software vendors such as CA, which, despite being listed by IDC in October as one of the major players in the DCIM wars, suddenly announced in November that it was abandoning the development of its DCIM offerings.
So now, if I can finally stop my crystal ball from asking me how I liked my Windows 10 upgrade experience (Yogi was right, “It ain’t over till it’s over.”), I will sign off for 2015. So stay tuned as we see how these predictions pan out. Have a happy, semi-secure shopping experience, be it in-store, online, or even in VR, as well as a Green Holiday Season and a Happy Sustainable New Year!