Edge Data Centers Are On The Rise
The scale and speed of the edge is coming into focus.
Everyone was talking about the shark. It was huge. It was monstrous. It was the biggest they’d ever seen. They said I had to see the movie. They warned me I would never want to go swimming again — never want to go near the beach ever again. Back before the word “hype” was even a word, that shark had a buzz that every Silicon Valley company today would love to have even a fraction of.
Then I finally watched Jaws. I won’t date myself by telling you how long ago this was, but suffice it to say it was not on Netflix or even on a DVD. And I have to admit, my main reaction for more than an hour was, “Where the heck is this shark everyone’s talking about?” All I’d heard about for weeks was the shark, the shark, the shark. But now it was just a bunch of people talking about the shark, looking for the shark, not finding the shark, and then talking more about the shark. Eighty minutes of that went by, then Mister Great White finally appeared, and I realized all that talk wasn’t just talk. It was huge. It was even bigger than all the hype had made it out to be. And all the meticulous preparation the characters had done over those 80 minutes was not going to be enough, leading Roy Scheider to stagger backwards and say those famous words.
Now, edge computing isn’t scary like a 1970s mechanical shark that is fashionably late to its own horror movie. But people have been talking about the edge for what seems like forever, and you wouldn’t be alone if you were starting to look at your watch wondering somewhat dubiously when it is finally going to arrive — just like I was doing after 79 minutes when they dumped more chum off the back of the boat for the umpteenth time. I’m convinced we’re at that same moment now with edge computing, and the incredible size and speed of it is about to make itself clear — stunningly clear.
THE EDGE IS WAY BIGGER THAN WE THOUGHT … STAGGERINGLY BIG
Given how many words have been written about edge computing, it’s hard to think there are any surprises left — hard to think that we could actually be underestimating the sheer scale of edge computing as mass implementations get underway. After all, people in the industry have been talking about thousands of micro data centers being deployed over the next several years, and it doesn’t cause anyone to bat an eyelash. It’s the equivalent of, “Yeah, we get it. It’s a big shark. Let’s get on with it.”
But what if all of the predictions were off? Way off. Not just in the wrong ZIP code, but in the wrong hemisphere. That’s my takeaway from new research presented recently at an important SIGCOMM meeting in Budapest that brought together researchers and industry professionals to discuss the current state of edge computing and where it is heading. SIGCOMM workshops may not get a bright spotlight placed on them, but they are where the brightest minds come together to shape the future of our industry, and this year’s event was a blockbuster. Researchers from the University of Wisconsin-Madison and the University of Oregon presented a paper with the unassuming academic title of “Deployment Characteristics of ‘The Edge’ in Mobile Edge Computing,” and it paints a picture of edge computing at a scale that dwarfs what the industry has been discussing.
Professors Barford, Syamkumar, and Durairajan have been working intensely not only on the technical requirements of effective infrastructure for edge computing, but also on groundbreaking estimates of the scale necessary to serve the projected demand for edge-delivered content and services.
The entire paper they presented is a must-read for everyone in our industry, but I will focus on one aspect of the research for the purposes of this article. One of the key takeaways from their research is that we have all dramatically underestimated the number of edge data centers that will comprise a mass implementation that truly meets the consumer and commercial demand on the horizon. With apologies to the research team for my paraphrasing of their highly detailed work, the scale is in the hundreds of thousands of micro data centers. For major metro areas like New York City and Los Angeles alone, their projections show the eventual need for tens of thousands of micro data centers each. More than 80,000 in NYC. Nearly 80,000 in L.A. Forty thousand in Chicago. And their research also digs into the needs outside of those major metros, which adds hundreds of thousands more.
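To put those figures in perspective, here is a back-of-the-envelope tally using only the approximate metro counts quoted above. These are my rounded placeholders, not the paper’s exact projections:

```python
# Back-of-the-envelope tally of projected micro data centers,
# using the approximate figures quoted above (illustrative only;
# the paper contains the actual projections).
metro_projections = {
    "New York City": 80_000,  # "more than 80,000"
    "Los Angeles": 80_000,    # "nearly 80,000"
    "Chicago": 40_000,        # "forty thousand"
}

major_metro_total = sum(metro_projections.values())
print(f"Three metros alone: {major_metro_total:,} micro data centers")
# Hundreds of thousands more are projected outside the major metros.
```

Three metros alone account for roughly 200,000 units before the rest of the country is even counted.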
I should note that these data centers will come in many shapes and sizes, which will stretch the definition of what a data center is. Some will be traditional data centers. Many will be facilities like the 48 kW units we design. And others will likely be low-power, storage-only caching nodes that hold the most popular online content in a form factor only a few cubic feet in size.
My colleague, Anton Kapela, who is EdgeMicro’s CTO, is diving deeper into the professors’ research, and I will defer more technical commentary to what he will write on this topic. But even to a non-academic like me, these numbers are staggering. Even the gigantic shark from Jaws would do a double-take at the sheer scale of edge computing. So let’s check the clock in our metaphorical movie. This research just put us at 79 minutes and 15 seconds. We’re getting closer.
THE EDGE JUST GOT FASTER … SCARY FAST
It’s the biggest “duh” statement of the year to say that going to the edge is about reducing latency. I dare you to find anything written about edge computing that doesn’t include that word. It’s like saying a giant shark has a lot of teeth and likes to use them. What’s so newsworthy about that? Well, something significant has shifted regarding latency, and it is the difference between edge computing being a big thing and being something that fundamentally changes how computing is done on mobile devices.
My colleague, Anton, recently wrote about this in depth, but I will summarize the salient points here in a CliffsNotes version for those of us who don’t live and breathe nitty-gritty wireless technologies like he does. In the past, edge computing discussions focused on achieving low latency because that was what was achievable in practical terms at scale. The Holy Grail was zero latency, but that was only theoretically possible, and a practical reality only in specific use cases such as the data center setups a handful of financial institutions established for applications like real-time trading. For the rest of us, zero latency was about as realistic as sharks being lifted into the air by a tornado and dropped onto a major metropolitan area.
That was then. Now, it’s not just a practical reality in specific use cases, but at scale. Massive scale. And it dramatically expands the number of use cases for edge computing — dramatically expands the number of applications that would rely on micro data centers — and dramatically magnifies the impact on what consumers and organizations can do with mobile devices. Zero latency changes edge computing from a way to do things faster into something that ushers in a new era of computing as significant as when Netscape Navigator made the web something non-nerds knew about and used.
So how has zero latency gone from something that is largely theoretical to something that you can suddenly have from Paris to Peoria? It’s not one thing. It’s three:
Advancements in LTE airlink speed — the speed at which mobile devices, like phones, communicate with cell antennas in both directions — combined with 5G technology are shrinking that segment of the journey to essentially zero. That latency is going away.
The back-end transfer of data on networks is being updated in ways that eliminate the latency that was built into what Anton calls a “tangled mess of fiber, microwave links, outdated gateways, ratty VPNs, xDSL last-mile ‘hacks,’ and other obsolete technology that slowed things to a crawl.” That latency is going away, too.
Lastly, but most importantly, there is a scalable model for micro data centers at the edge that moves content and computing services as close as possible to end users using a peering/colocation model that all the parties already buy into.
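One way to see why all three pieces matter is to model end-to-end latency as the sum of the segments described above: the airlink, the back-end network, and the distance to where the content and compute actually live. The millisecond figures below are hypothetical placeholders I chose for illustration, not measurements:

```python
# Hypothetical round-trip latency budget (milliseconds) for a mobile
# request. All numbers are placeholder assumptions, not measurements;
# the point is that every segment must shrink for the total to
# approach zero.

def end_to_end_latency_ms(airlink_ms, backhaul_ms, compute_distance_ms):
    """Total latency is the sum of the three segments in the article."""
    return airlink_ms + backhaul_ms + compute_distance_ms

# Legacy path: 4G airlink + tangled backhaul + distant regional data center
legacy = end_to_end_latency_ms(airlink_ms=30, backhaul_ms=25,
                               compute_distance_ms=40)

# Edge path: 5G-era airlink + modernized backhaul + colocated micro data center
edge = end_to_end_latency_ms(airlink_ms=1, backhaul_ms=2,
                             compute_distance_ms=1)

print(f"legacy ~ {legacy} ms, edge ~ {edge} ms")
```

Fixing any one segment alone leaves the total dominated by the others, which is why the three developments arriving together is the real story.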
Zero latency is achievable today, and it’s going to be a catalyst for edge computing that dwarfs what “lower latency” was already driving. Let’s do another time check. When you combine zero latency as a practicality at scale with the professors’ research above, that puts us at 79 minutes and 30 seconds. Can you hear faint murmurs of the Jaws theme music? I sure can.
DEMAND IS BUILDING … AND IT’S GOING TO BE VORACIOUS
I will keep this section short, like that first glimpse of the shark. If you’ve been reading Mission Critical Magazine for the past couple of years, you’ve seen some great deep dives into the kinds of applications that edge computing is making possible, and that, in turn, will drive the growth of edge computing. For just a few examples from people who have their finger on the pulse of what edge computing is making possible, visit “The Optimal Edge” at https://bit.ly/2Jtw1Mu.
We’re talking about applications like driverless cars, ultra-low latency gaming, virtual reality gaming, smart city projects, zero latency wireless industrial applications, real-time business analytics, and a hundred other things that sound like science fiction. At least they were science fiction in the past. Now, it’s scientific reality. Gaming companies are already launching video games that rely on near-zero latency. Silicon Valley is already putting test cars on the road with nobody in the front seat that rely on that level of connectivity. And I haven’t even gotten to the tsunami of pilot projects I’ve heard about from companies that are testing what is possible with edge computing.
If only a small fraction of these pilot projects get out of the testing phase and make it to larger deployments, it will represent demand for edge infrastructure that dwarfs what the industry discussion has been to this point — and that echoes the scale that the professors from Wisconsin and Oregon are predicting in their research.
OK, we’re at 79:45, and Roy Scheider just filled up his bucket with fish guts for the hundredth time in the movie. This is getting exciting.
TESTING IS THE CALM BEFORE ALL HECK BREAKS LOOSE
Just like in the movie, even though it feels like you’re never going to see the shark, it’s closer than you think. Much closer. Eerily close. Perhaps even right under the boat, like in that creepy scene mid-way through.
That’s exactly where we are at with edge computing. All heck is about to break loose, but you can’t see it yet. You can sense it. In Jaws, the famous music helped give you a clue how close the shark was. In our industry, it’s the pilot projects that are the equivalent of the “Dah-dum, dah-dums” of John Williams’ score.
In my role, I get to have a lot of conversations with people who are at the forefront of figuring out how their organizations are going to take advantage of edge computing. And those dialogues make something abundantly clear: there is a boatload of edge pilot projects in the works, and that is the last hurdle to mass deployments on the scale I’ve discussed above. These are smart people who work for big companies, who believe in edge computing as a way to solve thorny operational and customer service issues, and who are ready to take projects from the drawing board to the data center.
The challenge has been how to get those pilot projects into production in a real-world edge data center setting that gives companies the test results they need to green-light full-scale rollouts. In the past, there haven’t been “live” units for this kind of testing, or the cost of accessing that kind of facility was prohibitive for a pilot project. After all, who has the multi-million dollar budget to deploy a micro data center to support just a pilot project?
But this last barrier is coming down, too, just like all the others I’ve discussed above. Providers of micro data center solutions are progressing to the point where they can fulfill orders with production units, and one company is even launching a free testing environment to help companies do proof-of-concept testing as a precursor to large-scale deployments. These pilot projects are about to move ahead at shark-like speed, and things won’t be the same after that.
TICK TOCK, TICK TOCK
I’m looking at the timer, and it says 79 minutes and 55 seconds. All the talk is finally going to lead to something. All the waiting and suspense is finally going to pay off. It’s not a scary shark that will finally surface, though. This is the future of computing. A new era as big as when PCs became a consumer product and ushered in the personal computing age — or when the internet became mainstream and ushered in the age of connectivity — or when suddenly everyone had a smartphone that put the power of computing at their fingertips — from anywhere. This is the next stage in the evolution of technology, and it’s going to be bigger and arrive sooner than most of us ever dreamed.
And while I have your attention: If a mega shark starts attacking you, be sure to shove an oxygen tank in its jaws. That’s a pro tip!