In my last column, I compared the data center efficiency metric power usage effectiveness (PUE) with the ASHRAE 90.4 standard. Since then, Earth Day, April 22, has come and gone, and discussions and contentions about the sources of climate change, or even whether it exists (not to mention the politics), continue to run rampant, and of course data centers are seen by some as part of the cause. 

All of this gave me pause to look beyond discussing how AI and machine learning can help reduce the PUE of a 100-megawatt hyperscale facility from 1.15 down to 1.11. I felt the need to step back to the 50,000-foot view, perhaps to low-earth orbit, where many satellites roam (along with a lot of old space junk). And apparently it is going to get a lot more crowded up there pretty soon. 
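Before leaving the PUE numbers behind, it is worth a quick back-of-the-envelope look at what that seemingly small improvement represents. The sketch below assumes the 100 megawatts refers to IT load running around the clock; the figures are illustrative, not measurements from any real facility.

```python
# A rough sketch, not a measurement: what a 1.15 -> 1.11 PUE improvement means
# for a hypothetical 100 MW (IT load) hyperscale facility running all year.
# PUE = total facility power / IT power, so total facility power = IT power * PUE.

IT_LOAD_MW = 100.0          # assumed IT load
PUE_BEFORE = 1.15
PUE_AFTER = 1.11
HOURS_PER_YEAR = 8_760

facility_before_mw = IT_LOAD_MW * PUE_BEFORE    # 115 MW total draw
facility_after_mw = IT_LOAD_MW * PUE_AFTER      # 111 MW total draw

saved_mw = facility_before_mw - facility_after_mw
saved_mwh_per_year = saved_mw * HOURS_PER_YEAR

print(f"Continuous power saved: {saved_mw:.1f} MW")
print(f"Energy saved per year:  {saved_mwh_per_year:,.0f} MWh (~{saved_mwh_per_year / 1_000:.0f} GWh)")
```

Roughly 35,000 MWh a year from a 0.04 change in PUE is meaningful, yet it is still small next to the facility's total draw, which is exactly why I felt the need to step back further.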

This past April, Amazon announced Project Kuiper, its plan to launch a constellation of 3,236 satellites in low-earth orbit to provide low-latency, high-speed internet service. This follows on the heels of the FCC's Nov. 15, 2018, announcement that it had approved SpaceX's request to launch more than 7,500 satellites, in addition to the 4,425 satellites the FCC had already approved for SpaceX earlier in 2018. The stated purpose is to provide internet access from anywhere on earth. And in case you may not remember, SpaceX is owned by Elon Musk, who launched a SpaceX Falcon Heavy rocket toward Mars in February 2018, carrying his old Tesla Roadster with a dummy astronaut named Starman.  

So here I am thinking, wow, what is all this internet connectivity going to do to the world as we know it, and was there life on earth before the internet? I thought I might take a virtual look through a cybernetic time warp, going back to where it all started. I considered using Microsoft's HoloLens 2, which, according to Microsoft, “offers the most comfortable and immersive experience available — enhanced by the reliability, security, and scalability of cloud and AI services from Microsoft.” I then asked Cortana to take me “back in time,” only to see video clips of the ’80s movie Back to the Future. Not what I expected, although it could be considered a somewhat mixed reality experience.  

Using more conventional research methods (saying “Hey Google”), I eventually made some progress. While I never studied anthropology, apparently early humans lived in caves for thousands of years, primarily to stay alive. According to an article in Smithsonian Magazine, some caves were occupied for as long as 78,000 years (https://bit.ly/2Gh09Iy). Human progress back then moved slowly, to say the least, perhaps because there was no internet access in the caves, so they could not tweet about big developments like making a fire (was that, perhaps, the first step toward global warming?). This made me consider the past half century of development and evolution in computing and data centers, and the impact of the internet over the past 25 or so years, which in terms of early human development seems to me to be about a millennium or perhaps two.  

We are approaching, or have reached, the level of everyday personal technology first envisioned in Star Trek over 50 years ago, starting with the “Communicator,” which inspired Motorola’s first “StarTAC” flip phone in 1996 (Motorola led the industry at the time). In addition, there were the various versions of the Tricorder, a portable scanning device capable of sensing, computing, and analyzing data to detect environmental conditions, radiation, energy fields, and even “life forms.” Some versions could also diagnose medical conditions within seconds. 

And of course, there was the personal access display device (PADD), a tablet that served as a common portable “information” terminal most crew members used to enter and analyze data. The exceptions were Captain James T. Kirk, who would simply speak out loud, “Computer, find the best deal on a hotel in Alpha Centauri,” and Captain Jean-Luc Picard, who told the food replicator, “Tea, Earl Grey, hot.”  

Now, thanks to Alexa, we can have an Amazon drone deliver a box of teabags in two hours or less. Moreover, there is an Alexa-enabled countertop microwave oven for about the same $59 price as a regular microwave. Its listed features include “quick-cook voice presets,” the promise that “Alexa is always getting smarter and adding new presets,” and, finally, the real reason Amazon offers this Alexa-enabled product: “Automatically reorder popcorn when you run low.” 

And although not quite commonplace in every kitchen, there are a variety of 3-D printers (aka additive manufacturing) that can produce foods on command. Of course, they too will need to have their “recipes” updated via the internet.  

Obviously, all of this ultimately involves data centers, directly or indirectly. More and more of these internet-enabled devices are becoming voice enabled, which requires processing capability, and in most cases they do not process voice recognition locally. The processing, the analysis (with some level of AI), and the responses are handled by back-end systems in data centers — very, very large data centers.  

Moreover, Amazon points out that “Alexa is always getting smarter,” and not only about when to reorder popcorn. Voice recognition technology is decades old; performance has improved over the years, and the price has come down. But it was typically application driven, such as voice dictation software that could run on a desktop PC, or an automated front end for call centers (“Say or touch 2 to be transferred to the next (un)available agent”). 

Alexa was not the first mass-market “voice assistant”; Apple’s Siri was introduced in 2011 on the iPhone 4S. Shortly thereafter, Microsoft’s Cortana first appeared on the short-lived Windows smartphone and later in Windows 10. Google added voice input to its search on Android phones in 2012, a somewhat different approach that eventually led to Google Assistant, introduced in 2016 along with the Google Home “smart speaker.” Most voice input devices are relatively dumb; the “smart speaker” is really a dumb listener, and the “smart” parts are back in the “caves” of hyperscale data centers. 
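In case “dumb listener” sounds abstract, here is a minimal sketch of that pattern. Everything in it is hypothetical, the endpoint URL included; the point is simply that the device ships audio off and the heavy lifting happens on servers far away.

```python
# Minimal, hypothetical sketch of the "dumb listener" pattern described above.
# The smart speaker does little more than capture audio and ship it off; speech
# recognition, natural-language understanding, and the actual "skill" all run
# in the provider's data centers. The endpoint below is a placeholder, not a real API.

import requests

CLOUD_ENDPOINT = "https://voice.example.com/v1/understand"  # hypothetical URL

def handle_utterance(audio_bytes: bytes) -> str:
    """Send captured audio to the cloud and return whatever reply text comes back."""
    response = requests.post(
        CLOUD_ENDPOINT,
        data=audio_bytes,
        headers={"Content-Type": "audio/l16; rate=16000"},
        timeout=5,
    )
    response.raise_for_status()
    # All the "smart" work already happened in the back end by the time this returns.
    return response.json().get("reply", "Sorry, I didn't catch that.")
```

Multiply that one round trip by every wake word spoken in every kitchen, and those hyperscale “caves” start to earn their power bills.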

So where is all this leading, as hyperscale, webscale, exascale (aka “huge”) data centers are being constructed at an incredible rate to meet ever-accelerating demand? All that computing hardware requires power, a lot more power than was ever imagined. They are not being built because their owners and operators want to build them, but because their ultimate customer (the consumer) wants the convenience of their services. A battery-operated smartphone may only take a few hundred milliwatts to trigger a search or order a pizza, or perhaps a watt while uploading hundreds of multi-megabit photos or gigabit videos, which appears relatively efficient at the device level.  

Collectively, however, all of it is transmitted by wireless networks and then processed and stored forever on multiple social media sites in unseen data centers, which require magnitudes more power: not only to complete the immediate task, but to stay at the ready to meet consumers’ demand for convenience and instant satisfaction. These billions of requests mean that a massive but relatively invisible infrastructure, which everyone takes for granted, consumes thousands of gigawatt-hours annually to fulfill their expectations. Nonetheless, those very same consumers say they want these businesses to be sustainable, then criticize how much energy data centers consume without recognizing the central role their own demand plays in driving this ecosystem.  
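To put some deliberately rough numbers on that, here is the kind of arithmetic I have in mind. Both the request count and the per-request back-end energy below are assumptions chosen only to show scale; they are not measurements of any particular provider.

```python
# Deliberately rough aggregation: how small per-request back-end energy adds up.
# Both inputs are assumptions picked to illustrate scale, not measured values,
# and they ignore the always-on "at the ready" overhead, which only adds more.

REQUESTS_PER_DAY = 10e9        # assumed: ~10 billion voice/search/photo requests daily
WH_PER_REQUEST = 0.5           # assumed: back-end watt-hours per request
DAYS_PER_YEAR = 365

annual_wh = REQUESTS_PER_DAY * WH_PER_REQUEST * DAYS_PER_YEAR
annual_gwh = annual_wh / 1e9   # 1 GWh = 1,000,000,000 Wh

print(f"Back-end energy: ~{annual_gwh:,.0f} GWh per year")   # ~1,825 GWh with these inputs
```

Adjust either assumption and the answer moves by a factor of a few, but it stays in thousands-of-gigawatt-hours territory either way, and that is before counting the always-on capacity sitting at the ready.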


The bottom line

I am not suggesting that research and development in science, medicine, and technology in general has not improved the human condition since we first left the caves; clearly it has increased the average lifespan and quality of life of most people. Yet history is littered with examples of “advanced” societies that developed and prospered and then decayed into ruin, such as the Roman Empire. Will we soon become the next generation that falls prey to the success of its own advancement?

Our society has essentially become totally dependent on computer technology. Many people claim they would be nearly helpless without their smartphones. Moreover, virtually all of us would be significantly impacted without electrical power. Even most governments recognize how significant this is to our safety and security. They now realize that cyberattacks could potentially cripple our critical infrastructure: the electrical grid, financial systems, transportation, food delivery and distribution, military defenses, and a litany of other services that would come to a halt if internet communications were disrupted or data records became corrupted.  

Our entire ecosystem is now completely intertwined with technology. Many retail businesses are beginning to refuse cash. And most, if not all, major chain stores cannot let customers buy anything, even with cash, when the registers go down or the store loses communications with its data center. This is a relatively minor inconvenience on a local level; however, we really would not have the ability to sustain ourselves if it occurred on a broad scale. 

While I am not planning to live in a cave anytime soon, I cannot help but wonder if we — people, businesses, and the various levels of government — should re-examine our total focus on and dependency upon technology. We have seen recent examples of this (much to everyone’s surprise) when major airlines have suffered total shutdowns, grounding all flights for hours, days, or even a week due to a computer issue (be it a software glitch, an IT hardware failure, or a data center infrastructure failure).  

There is a yin and yang here: “smart” interconnected technology has many benefits; however, total dependency on it is not without cost. There are many arguments for maintaining our ability to function as individuals, as well as a society. What will you do if you tell your smart speaker to order a driverless Uber and it does not show up within two minutes of your request? Even public transportation in some major cities now requires some form of digital device (e.g., a metro card or smartphone) to gain access to subways and buses.  

Data centers today take great pains to be energy efficient and fault tolerant through facility infrastructure redundancy, geo-diversity, and cloud resources, yet despite all this, we can never ensure that things will not go wrong.  

So while I am optimistic that we will not have a Digital Armageddon and be forced to use data centers as caves for shelter, let’s all recognize that there has been an exponential rise in data center and wireless telecom energy use, and that it will continue to grow as we move to 5G and massive numbers of IoT devices.  

While there are many differing reports, projections, and guesses as to how much power and energy the entire infrastructure of “the internet” takes, I will use a liberal dose of literary license (as well as my trusty slide rule) and arbitrarily state that, collectively, it requires at least 100 megawatts of back-end infrastructure just to ask your smart speaker to tell you what time it is.  

Therefore, until every home, business, and vehicle is powered by a Mr. Fusion, here at “Hot Aisle Insight” we will continue trying to help improve data center energy efficiency. Nevertheless, even as we approach a near-perfect PUE of 1.0x, every megawatt of IT power still turns into a megawatt of heat rejected into the environment, so even though Earth Day 2019 has passed, let’s all try to use those watts wisely.