According to Forrester, 32% of global telecommunications decision-makers are presently implementing or expanding edge computing, and another 27% plan to implement edge infrastructure within the year. The main drivers pushing companies toward the edge are sensitivity to bandwidth, cost, and latency.

By now it’s well established that edge computing — by localizing data acquisition and control functions, as well as the storage of high-bandwidth content and applications, in close proximity to the end user — circumvents the distance, capacity constraints, multiple network hops, and centralized processing loads inherent in traditional internet architecture. Simply put, there just isn’t enough bandwidth to send data back and forth to public and private clouds, much less to support the billions of connected devices that will accompany smart city developments, which will require ultra-low latency.
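The distance penalty is easy to quantify with a back-of-envelope sketch (the figures below are illustrative assumptions, not drawn from this article): light in optical fiber propagates at roughly 200,000 km/s, so every kilometer of one-way path adds about 5 microseconds in each direction, before any queuing delay or per-hop processing is even counted.

```python
# Illustrative propagation-delay sketch. Assumes signal speed in fiber of
# ~200,000 km/s (about two-thirds the vacuum speed of light); real paths
# add queuing, routing, and per-hop processing on top of this floor.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200 km of fiber traversed per millisecond

def round_trip_ms(path_km: float) -> float:
    """Minimum round-trip propagation delay for a given one-way path length."""
    return 2 * path_km / FIBER_SPEED_KM_PER_MS

# A user in a Tier II city backhauled 1,500 km to a Tier I hub:
print(f"{round_trip_ms(1500):.1f} ms")  # 15.0 ms floor, before any hops

# The same user served by an edge facility 50 km away:
print(f"{round_trip_ms(50):.1f} ms")    # 0.5 ms floor
```

Even this best-case floor, which ignores the indirect routing and multiple hops described above, shows why moving infrastructure tens of kilometers from the user rather than hundreds matters for latency-sensitive workloads.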

Content providers have for many years reaped the benefits of localizing their media close to consumers in order to deliver a superior viewing experience. Cloud services that require tremendous bandwidth, minimal latency, and high power density, as well as cloud on-ramps, continue to leverage edge infrastructure as a means to move closer to their enterprise customers while solving such business-critical challenges as performance, security, and cost. Meanwhile, the innovative next-generation services and ecosystem partners associated with the emergence of the IoT, 5G, and autonomous vehicles will all rely on edge infrastructure both today and tomorrow. But to fully understand the future of edge computing, it helps to understand its past.

Mapping the History and Future of the Edge

In the U.S., the internet and its connections historically grew into a hub-and-spoke model. Network service providers (NSPs) connected and passed traffic between interconnection exchanges in major data center hubs. The problem was that because these hubs were located largely in Tier I markets, data packets often had to follow indirect and even circuitous paths. This often resulted in bottlenecks and network latency, factors that degraded the end user’s experience, especially in Tier II and III markets.

Many established data center providers in the U.S. ignored this situation. As a result, certain Tier II and Tier III markets remained largely overlooked by the bigger wholesale and retail colocation providers, which continued to concentrate on the major hubs in Tier I markets. The more progressive, future-focused providers addressed this connectivity shortfall early on by building data centers in Tier II and III cities that were strategically positioned near network provider aggregation points, extending the internet’s edge as close as possible to enterprises and end users. Since then, many early-stage edge facilities have grown into large regional campuses.

Looking across the pond, the challenges and limitations of a hub-and-spoke internet model also exist in Europe. Despite their collective size, European Tier II and Tier III cities and markets are underserved. Individually, Europe’s Tier II markets generally have smaller populations than its Tier I markets, but many approach them in geographic size, economic activity, and data traffic, and there are many more of them than in the U.S. The “SGPTD Second Tier Cities and Territorial Development in Europe: Performance, Policies and Prospects” report identified 124 Tier II cities in Europe, which together make up almost 80% of the continent’s metropolitan urban population. While Tier II cities are typically remote from their host country’s capital and major business hubs, their commercial and social activity significantly affects the performance of their national economies.

Here again, the solution is proximity-based edge infrastructure strategically located nearest the end user’s point of access, thereby reducing network latency and optimizing performance. In Europe, as in North America, local proximity access also brings the cloud closer to the enterprise, enabling more secure, real-time access to cloud applications and services while reducing backbone transport costs. Moving infrastructure, as well as the connectivity ecosystem, closer to the edge of the network will ultimately enable these markets to operate as the independent, local resources necessary for the efficient delivery of cloud services, over-the-top (OTT) content, gaming, and IoT-enabled devices.

This last edge driver cannot be overstated. As smart city technologies continue to proliferate across the continent — according to a comprehensive report by IESE Cities in Motion Strategies, 28 of the world’s top 50 smart cities are located in Europe, including Amsterdam, Dublin, and Munich — edge infrastructure will only become more essential.

While Europe may not possess the large, centralized data center markets that we see in the U.S., the dynamics of the edge are very much the same. In the U.S., the edge is almost always anchored by the largest broadband network and the content ecosystem. Likewise, on the continent, the first wave of edge customers consisted of content providers that sought proximity to eyeballs to deliver an optimal consumer experience, followed by a second wave that needed to bring the cloud closer to the enterprise.

In South America, the edge is being driven as much by cloud and content providers as by the core broadband networks. What distinguishes an edge facility located in Buenos Aires, Argentina, however, is that it will be brought into service for an entire geographic region. A first-of-its-kind multi-tenant, carrier-neutral, purpose-built network connectivity platform — with extensive fiber, density, and peering options for low-latency content delivery, cloud access, international interconnectivity, and communications services — will facilitate economic growth not only in Buenos Aires but throughout surrounding markets. This will make Argentina a more attractive location for data investment by enterprises across a wide range of industries.

Understanding the Edge in All Its Form Factors

In all of these instances, the main takeaway is that the edge is wherever the customer needs it to be. As the edge continues to evolve, however, it is fast becoming as much about what as where. Let’s remember that regardless of its specific location, scale, or workload, the edge is a means to deliver successful business outcomes. Hence, edge solutions range from greenfield builds of 10 MW to 100-plus-MW facilities designed for hyperscale cloud deployments; to local-reach wholesale data centers of 1 MW to 10 MW that support content and network providers, hybrid IT, and gaming; to hyperlocal micro-edge data centers of 10 kW to 1 MW that will serve autonomous vehicles, the IoT, and smart city applications and systems.

In the latter form factor, the edge allows smart applications and devices to respond to data almost instantaneously, as it’s being created, thus eliminating lag time. Yes, cloud computing provides a strong enabling platform for smart cities because it provides the necessary scale, storage, and processing power to derive insights from this information. However, as smart city technologies place increasingly high demands on centralized cloud data centers, hyperlocal micro-edge data centers will be needed to overcome latency limitations while satisfying the requirement for more local processing.

That said, even the hyperscale campus that supports cloud services, webscale customers, and everything-as-a-service (XaaS) models needs to be built where the customer needs it to be, as opposed to where it’s easiest to build. No matter the scale, it’s no longer a question of siting a data center on a beautiful parcel of land so much as of optimizing your customers’ experience. Here too, you have to bring the data center to them, at the edge, and that’s an entirely new and forward-looking design, build, and delivery paradigm.