The edge: a term that can conjure images of U2’s guitarist or bring to mind the phrase “a flat Earth.” These days, however, the edge is where much of the work of the Internet of Things (IoT) is heading, and in the coming years the amount of work being done there will boggle the mind. Keeping edge sites up and running will require more power as well as a cool, stable environment, because these data centers will not be conventional facilities; they will sit in a corner, in a closet, or in just about any other environment you could think of. This article is intended to bring you up to speed on operational trends for edge environments and how they apply to the greater data center marketplace.

Over the last few years, and especially the last few months, the term “the edge” has become part of our daily vernacular. But what does it truly mean? How do we differentiate it from conventional data centers? Solving these new challenges is, and will continue to be, a work in progress. It must be, because it has never been done before.

Perhaps starting with some accurate definitions is in order. I say accurate because there are several versions of what the edge actually is, where it resides, and what it looks like. Below are three definitions that should give you clarity and help streamline your thoughts on the topic.

Therefore, in Real-Life Terms, What Is the Edge?

It is the delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost, and reliability of applications and services.1 By shortening the distance between devices and the cloud resources that serve them, and also reducing network hops, edge computing mitigates the latency and bandwidth constraints of today’s internet, ushering in new classes of applications.

In practical terms, this means distributing new resources and software stacks along the path between today’s centralized data centers and the increasingly large number of devices in the field, concentrated, in particular, but not exclusively, in close proximity to the last mile network, on both the infrastructure side and the device side.

As with most things, you can break generalities down further, and the edge is no exception. There are two specific edges to be cognizant of as this conversation continues.

The device edge. The device edge refers to edge computing resources on the downstream or device side of the last mile network. These include conventional devices, such as laptops, tablets, and smartphones, but also objects we don’t normally think of as internet devices, such as connected automobiles, environmental sensors and traffic lights, and congestion control systems (AKA smart highways).

The infrastructure edge. The infrastructure edge refers to IT resources which are positioned on the network-operator or service-provider side of the last mile network, such as at a cable headend or at the base of a cell tower. While “last mile network” is a high-level term which has many nuances, iterations and exceptions when you dig into the details, the infrastructure edge can generally be thought of as large-scale facilities owned and operated by a service provider.

Using these definitions as a basis, what are the power challenges? Powering the edge, in either version, must rely on a simple architecture. These sites, for the most part, will be “lights-out” operations. Many, if not most, will not have emergency power available, meaning no backup or emergency generators to carry the load if and when utility power fails or becomes unstable. This means that many operators, current and future, will run a site strictly on utility power, perhaps with a small battery set within the UPS, and they are comfortable with that decision. How can they be happy with that electrical topology, you may wonder? The answer is rather simple: the load will simply shift to another edge data center within a fairly close geographic area.
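To make that load-shifting idea concrete, here is a minimal sketch in Python of the kind of watchdog logic involved: probe the health of a handful of nearby edge sites and steer work to the first healthy neighbor when the local site goes dark. The site names, addresses, and health endpoints are hypothetical, and a production deployment would more likely lean on DNS, anycast, or an orchestration platform than on a standalone script.

```python
# A minimal sketch, assuming each edge site exposes a simple HTTP health probe.
# Site names, addresses, and endpoints are hypothetical, not from any real deployment.
import time
import urllib.request
from typing import Optional

SITES = {
    "edge-downtown": "http://10.0.1.10/health",    # hypothetical probe URLs
    "edge-northside": "http://10.0.2.10/health",
    "edge-airport": "http://10.0.3.10/health",
}

def site_is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Treat any non-200 response, timeout, or connection error as a failed site."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_active_site(preferred: str) -> Optional[str]:
    """Prefer the local site; otherwise fail over to the first healthy neighbor."""
    if site_is_healthy(SITES[preferred]):
        return preferred
    for name, url in SITES.items():
        if name != preferred and site_is_healthy(url):
            return name
    return None

if __name__ == "__main__":
    while True:
        active = pick_active_site("edge-downtown")
        if active is None:
            print("No healthy edge site reachable")
        else:
            print(f"Routing workload to: {active}")
        time.sleep(30)   # re-evaluate periodically; real systems lean on DNS or anycast
```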

Thinking it through, the number of data centers will come close to doubling to support the edge, driven by the onslaught of the impending, soon-to-be-delivered (quickly, yet in a measured fashion) 5G networks. The power systems that operate these data centers have to be manageable, remote-controllable, and easily optimized, especially through developing software-defined operations.

Understanding what qualifies as a data center is critical as well. Although many people define it differently, one way of understanding the logic of edge computing is to think of it this way, per Wikipedia: a data center (American English) or data centre (British English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. That, my friends, leaves a very wide berth for what qualifies as a data center vs. what we have historically envisioned a data center being in our mind’s eye.

OK, so the thoughts quickly go like this: an enterprise-level, purpose-built facility; a large colocation facility with a portion leased for operations; the two-rack roadside shelter; all the way to the modular building placed at the base of a cell tower. All of these are, or can be, considered data centers and, depending upon the IT architecture, can be viewed as an “edge operation.” Powered differently than the traditional or legacy facility? Perhaps, and most likely. They will need to be remote-controllable, with non-moving parts that increase reliability by at least 10x, and with real-time measurements of voltage, amperage, and phase status, right down to the temperature of the internal components. These controls must allow for start-stop and non-human-assisted resetting to normal operation, among other owner- and/or end-user-defined functions. Companies are continuously devising ways to do just these things. As noted last fall at the AFCOM show on this very topic in Nashville, TN, Charlotte, NC-based Atom Power is on the cutting edge of this vein of technology. Having recently received the seal of approval from Underwriters Laboratories (UL), this non-traditional strain of electrical devices will forever change the game; solid-state-based devices lend themselves very well to many, if not all, of these newly hatched operational requirements.
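To illustrate the kind of telemetry and non-human-assisted reset behavior described above, consider the sketch below. It is not Atom Power’s interface, or any vendor’s; the endpoint, field names, and thresholds are invented for illustration. It simply polls a breaker for voltage, amperage, phase status, and internal temperature, and issues a reset once a tripped circuit’s readings are back inside their normal bands.

```python
# An illustrative sketch only: the endpoint, JSON field names, and thresholds
# below are invented for this example and are not any vendor's actual interface.
import json
import time
import urllib.request

BREAKER_API = "http://192.168.10.50/api/breaker/7"   # hypothetical device address

LIMITS = {
    "voltage": (108.0, 132.0),       # acceptable line-to-neutral volts
    "amperage": (0.0, 80.0),         # continuous amps allowed on this circuit
    "temperature_c": (0.0, 70.0),    # internal component temperature, deg C
}

def read_telemetry() -> dict:
    """Fetch a JSON snapshot: voltage, amperage, phase status, temperature."""
    with urllib.request.urlopen(BREAKER_API + "/telemetry", timeout=2) as resp:
        return json.load(resp)

def within_limits(sample: dict) -> bool:
    """True only if every monitored reading is present and inside its band."""
    return all(
        key in sample and lo <= sample[key] <= hi
        for key, (lo, hi) in LIMITS.items()
    )

def send_command(command: str) -> None:
    """Issue a start, stop, or reset command to the breaker (hypothetical endpoint)."""
    req = urllib.request.Request(BREAKER_API + "/" + command, method="POST")
    urllib.request.urlopen(req, timeout=2)

if __name__ == "__main__":
    while True:
        try:
            sample = read_telemetry()
            # Non-human-assisted return to normal: reset a tripped circuit once
            # phase status and all monitored readings are back inside their limits.
            if sample.get("tripped") and sample.get("phase_ok") and within_limits(sample):
                send_command("reset")
            print(sample)
        except OSError as err:
            print(f"Breaker unreachable: {err}")
        time.sleep(10)
```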

Then let us consider the heat rejection, or cooling/air conditioning, of these same facilities: computer room air conditioner (CRAC) units, computer room air handler (CRAH) units, direct-expansion (DX) refrigerant-based cooling systems in a traditional layout, atmospheric-cooling-only systems, adiabatic-only and hybrid-adiabatic systems, air-cooled, water-cooled, liquid cooling to the device, and more. As many concepts and designs as you can think of to overlay onto the footprint of a data center, someone is piloting, or will be piloting, at this very moment. The heat-rejection system’s design will be dictated on a very site-specific basis. Is the heat load (the compute environment) lurking in the ceiling of an office building, perhaps in the core of a 47-story office tower on the 28th floor? If so, that calls for an entirely different design, and therefore an entirely different systemic view of the cooling system, than the one bolted to the exterior wall of a modular or purpose-built containerized system positioned at the base of a cell tower. The enterprise-class data center design that we all sort of grew up with will certainly live on, as many systems are, and will remain, a “normal” 5-kW-per-rack design. There is nothing unique about them and, therefore, nothing unique about the design. The half-rack data center sitting in the middle of a factory floor, or perhaps the one bolted to a power or telephone utility pole on Main Street in Yourtown, USA, will require a more specific thought process in the design, installation, and implementation of the cooling system to support the specific load and the individual design of the environment.
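For the “normal” 5-kW rack mentioned above, the cooling arithmetic is straightforward. The sketch below is a back-of-the-envelope estimate that assumes a 20°F air temperature rise across the rack and uses the standard sensible-heat approximation for air; real designs will vary with altitude, containment, and supply temperature.

```python
# Back-of-the-envelope sizing for the "normal" 5-kW rack, using the common
# sensible-heat approximation for air: Q (BTU/hr) = 1.085 x CFM x delta-T (F).
# The 20 F supply-to-return rise is an assumed value, not a universal one.

RACK_LOAD_KW = 5.0
WATTS_TO_BTU_HR = 3.412      # 1 W of IT load rejects about 3.412 BTU/hr
DELTA_T_F = 20.0             # assumed air temperature rise across the rack

heat_btu_hr = RACK_LOAD_KW * 1000 * WATTS_TO_BTU_HR
required_cfm = heat_btu_hr / (1.085 * DELTA_T_F)

print(f"Heat load: {heat_btu_hr:,.0f} BTU/hr (~{heat_btu_hr / 12000:.1f} tons of cooling)")
print(f"Airflow needed at a {DELTA_T_F:.0f} F rise: {required_cfm:,.0f} CFM")
```

Run as-is, this works out to roughly 17,000 BTU/hr (about 1.4 tons) and on the order of 800 CFM of airflow for a single 5-kW rack, which is why the “normal” design is unremarkable and the exotic sites are where the real cooling work lies.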

The bottom line is this: we are only now entering the infancy of this cycle of the computing age. Never before have we seen such a diverse array of installations, such task-specific challenges, and such a growing list of expectations for these systems. Think of the challenges: so much equipment and so many unique needs, driven by site selection or site demands, that it is a “trying to put 10 pounds of potatoes in a 5-pound bag” scenario, all the while thinking about reliability and about limited hands-on access to the gear, given the remoteness of some of the site selections.

At the end of the day, as so eloquently stated by Matt Trifiro and Jacob Smith in their report, “The State of the Edge 2018: A Market and Ecosystem Report for Edge Computing,”2 there are four key principles to the edge: 1. The edge is a location, not a thing; 2. There are lots of edges, but the edge we care about today is the edge of the last mile network; 3. This edge has two sides: an infrastructure edge and a device edge; and 4. Compute will exist on both sides, working in coordination with the centralized cloud. If you have not taken time to read the entire white paper, I highly recommend doing so. It is robust and complete, including a vast glossary of terms.

Parting Thoughts

For sure and indeed, we are early in the game. The fun and exciting part is being able to participate and to watch the industry mature around ever-changing compute power, compute densities, and an always-changing appetite for risk and reward among end users. Take time to dig in and participate; it is a progressive and pervasive field. Before we know it, we will be surrounded by the edge, and many of us will smile knowing we saw the birth and maturation of a new market.