The Internet of Things (IoT) is upon us: we have access to more data and are using more connected devices than we ever thought possible. In fact, Gartner Inc. predicts that by 2020 there will be 25 billion connected things: sensors, smart appliances, mobile phones, cars. This growth drives up the criticality of information sharing and has major impacts on business, including an unprecedented need for computing power, connectivity, and quality of service, especially in terms of latency.

In an on-demand world, lost connections and delays are simply unacceptable. As a result, data centers, the backbone of any business’ IT architecture, are undergoing an evolution. To support the needs of today and tomorrow, computing power and storage are being placed at the network edge to reduce data transport time and increase availability. No longer relegated to the far-off outskirts of a city, data centers are being placed closer to the action, that is, at the user or data source, and are part of a larger cloud computing architecture called ‘edge computing’ that challenges the traditional approach of placing facilities in remote and geographically distant locations.

Generally speaking, there are three applications for edge computing:

#1: High-bandwidth content distribution

There are two terms here that we should become familiar with, if we aren’t already:

  • Latency — or the time from the moment a data packet is transmitted to the moment it reaches its destination and a response returns. Most often measured as a ‘round trip,’ typical latency measurements fall below 100 milliseconds (ms), with 25 ms or less generally considered desirable. (A simple way to measure it is sketched after this list.)
  • Bandwidth — or the transmission speed of data on the network. While networking equipment speeds are published by the manufacturer, the actual speed obtained on a given network is almost always lower than the ‘peak’ rating for the device.
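
To make latency concrete, here is a minimal Python sketch that estimates round-trip latency by timing TCP handshakes. The target host and port are placeholders; this only approximates a true ‘ping,’ but it avoids the raw sockets that ICMP requires.

```python
import socket
import time

def measure_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Estimate round-trip latency by timing TCP handshakes to a host."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # the handshake itself is our round trip
        rtts.append((time.perf_counter() - start) * 1000)  # seconds -> ms
    return sum(rtts) / len(rtts)

if __name__ == "__main__":
    print(f"average RTT: {measure_rtt_ms('example.com'):.1f} ms")
```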

Excessive latency creates traffic jams that prevent data from filling the network to capacity. To relieve this congestion and improve streaming of high-bandwidth content, such as high-definition video, service providers are deploying distributed, connected computer systems across the internet so that content is cached closer to the user. This enables content to be delivered rapidly to numerous users by duplicating it across multiple servers and directing each user to a copy based on proximity, as sketched below.
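
To illustrate ‘directing based on proximity,’ here is a minimal Python sketch of how an edge selector might pick the nearest cache for a user. The server names and coordinates are hypothetical.

```python
import math

# Hypothetical edge cache locations as (latitude, longitude).
EDGE_SERVERS = {
    "us-east": (40.7, -74.0),
    "us-west": (37.8, -122.4),
    "eu-central": (50.1, 8.7),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_location):
    """Direct the request to the geographically closest cache."""
    return min(EDGE_SERVERS,
               key=lambda s: haversine_km(user_location, EDGE_SERVERS[s]))

print(nearest_edge((48.9, 2.4)))  # a user near Paris -> eu-central
```

Real content delivery networks weigh more than raw distance (server load, link congestion, DNS topology), but geographic proximity is the core idea.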

#2: Edge computing as an IoT aggregation and control point

The future of ‘smart’ technology requires the wide-sweeping deployment of IoT-enabled sensors, and as the price of sensors decreases, the number of connected devices will only continue to grow. This is no bad thing: the IoT and connected devices can help automate information gathering from physical assets such as machines, equipment, and vehicles, and enable that information to be used for greater visibility into, and control over, process and resource optimization.
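
One common role for an edge aggregation point is to summarize raw sensor readings locally and send only compact aggregates upstream, saving bandwidth. Here is a minimal Python sketch under that assumption; the Reading type and window size are illustrative.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    value: float

class EdgeAggregator:
    """Buffers raw readings and emits one small summary per window."""

    def __init__(self, window_size: int = 100):
        self.window_size = window_size
        self.buffer = []

    def ingest(self, reading: Reading):
        """Add a reading; return a summary once the window fills, else None."""
        self.buffer.append(reading)
        if len(self.buffer) < self.window_size:
            return None
        summary = {
            "count": len(self.buffer),
            "mean": mean(r.value for r in self.buffer),
            "max": max(r.value for r in self.buffer),
        }
        self.buffer.clear()
        return summary  # only this small record needs to travel upstream
```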

#3: On-premise applications

A top concern is the need to maintain, and if possible increase, the availability of IT and the connected network. This is often addressed through cloud computing, which has historically been a centralized architecture. Edge computing, however, makes cloud computing much more distributed. Because of this, any disruption (such as a Distributed Denial of Service attack or a long-lasting power outage) is limited to a single point in the network rather than spreading through the entire network and its applications. Organizations that migrate to off-premise cloud computing can take advantage of edge computing for increased redundancy and availability, while business-critical applications can be duplicated on-site.
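
Duplicating a business-critical application on-site implies some form of failover logic. Here is a minimal Python sketch under that assumption: requests try a local on-premise replica first and fall back to the central cloud endpoint if it is unreachable. Both URLs are placeholders.

```python
import urllib.request
import urllib.error

# Hypothetical endpoints: the on-site replica is preferred, the central
# cloud endpoint is the fallback.
ENDPOINTS = [
    "http://edge.local:8080/api/orders",
    "https://cloud.example.com/api/orders",
]

def fetch_with_failover(timeout: float = 2.0) -> bytes:
    """Return the first successful response, trying replicas in order."""
    last_error = None
    for url in ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err  # this replica is unreachable; try the next
    raise RuntimeError(f"all replicas unreachable: {last_error}")
```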

Similarly, there are three types of edge computing environments:

  • Local devices, or devices sized to accommodate a defined and specific purpose. These systems can be deployed ‘immediately’ and are suitable for home or small office applications. Typical examples of these architectures are those that run the security system for a building or store local video content on a DVR.
  • Localized data centers (1 to 10 racks), which, like local devices, can be fast to deploy in existing environments. They are also available as configure-to-order, pre-engineered systems assembled on site, or as prefabricated micro data centers assembled in a factory and dropped on site. These architectures provide significant processing and storage capability and enable businesses to save on CAPEX by leveraging existing building cooling and power. Localized data centers are also appropriate for rugged environments, as they can be enclosed in rainproof, corrosion-proof, and fireproof materials, or be deployed in a typical office IT enclosure.
  • Regional data centers, which are facilities with more than 10 racks, located closer to the user and data source than cloud data centers. These architectures have more processing and storage capability than localized data centers but, even when prefabricated, often take longer to construct. These facilities also need dedicated power and cooling sources. In these environments, latency depends on the systems’ physical proximity to the user and data, as well as the number of network hops between them (a back-of-the-envelope model is sketched after this list).
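
The proximity-and-hops claim can be made concrete with a back-of-the-envelope model. The constants here are rough assumptions: light in optical fiber covers roughly 200 km per millisecond (about two-thirds the speed of light in a vacuum), and each router hop is assumed to add on the order of 0.5 ms.

```python
FIBER_KM_PER_MS = 200  # assumed one-way propagation speed in fiber
PER_HOP_MS = 0.5       # assumed per-hop processing/queuing delay

def estimated_rtt_ms(distance_km: float, hops: int) -> float:
    """Round-trip estimate: propagation both ways plus per-hop delays."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    return propagation + 2 * hops * PER_HOP_MS

# A regional facility 100 km away over 5 hops vs. a distant cloud
# facility 2,000 km away over 15 hops:
print(f"regional: {estimated_rtt_ms(100, 5):.1f} ms")    # 6.0 ms
print(f"distant:  {estimated_rtt_ms(2000, 15):.1f} ms")  # 35.0 ms
```

Under these assumptions, the closer facility comfortably meets the 25 ms target discussed earlier, while the distant one does not.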

The exponential growth of data, driven by the IoT, is causing major bandwidth concerns for organizations as they struggle to understand where and how best to manage and process their data. This growing quantity of data will need to be processed and analyzed in real time, and edge computing takes data and workload technology to a whole new level. By moving data closer to the end user, edge computing can solve latency challenges, enable companies to take better advantage of a cloud computing architecture, and provide greater availability of and access to data, resulting in a better end-user computing experience and reduced data costs.