Over the past few years, organizations of all sizes have been moving applications to the cloud. For the most part, this move has been both financially and operationally efficient. However, there are rising concerns that not everything can or should move to the cloud or a remote data center. As more devices connect to the internet and processes and factory lines become automated, having data travel back and forth to the cloud or a remote data center for processing will become increasingly inefficient and untenable.
The primary concern is latency. An AI-driven automated process, for example, won’t work effectively if machine responses are delayed by the time required to transmit data to the cloud or a remote data center and receive results back. The second concern is volume: some applications risk producing so much data that transmitting all of it to the cloud becomes costly and inefficient. These concerns have given rise to edge computing.
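To make the latency concern concrete, here is a minimal latency-budget sketch in Python. All of the numbers (the 10 ms actuation deadline, the 40 ms and 1 ms one-way propagation figures) are illustrative assumptions rather than measurements; the point is simply that round-trip distance alone can consume a tight control-loop budget.

```python
# Illustrative latency-budget sketch (all numbers are assumptions, not
# measurements): a control loop that must respond within 10 ms cannot
# tolerate a typical cloud round trip, while an on-premises edge node can.

def round_trip_ms(propagation_ms: float, processing_ms: float) -> float:
    """Total response time: network there-and-back plus compute time."""
    return 2 * propagation_ms + processing_ms

DEADLINE_MS = 10.0  # assumed actuation deadline for the control loop

scenarios = {
    "cloud (remote region)": round_trip_ms(propagation_ms=40.0, processing_ms=5.0),
    "edge (on-premises)": round_trip_ms(propagation_ms=1.0, processing_ms=5.0),
}

for name, latency in scenarios.items():
    verdict = "meets" if latency <= DEADLINE_MS else "misses"
    print(f"{name}: {latency:.1f} ms -> {verdict} the {DEADLINE_MS:.0f} ms deadline")
```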
Edge computing is about moving critical application processing to the edge of the access network, with the edge defined here as falling within the organization’s own management domain. In effect, edge computing becomes a “mini-data center” that has moved in-house, closer to the source of the data.
One of the environments where edge computing most frequently comes up is Industry 4.0. Here, the application is the factory floor, with numerous sensors transmitting real-time data for near-real-time analysis. That data and analysis are necessary to ensure the smooth, constant monitoring and control of industrial processes. With a non-stop flow of data between sensors and compute, the volume transmitted can quickly become significant. This data can also be “noisy,” carrying a great deal of redundant, irrelevant, or erroneous information, so transporting all of it to a remote data center can become cost-prohibitive.
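As a rough illustration of pruning noisy data before it leaves the site, the sketch below applies a simple dead-band filter: a reading is forwarded only when it differs from the last forwarded value by more than a tolerance. The sensor values, the 1.0-degree tolerance, and the function name are all hypothetical; a real deployment would tune the filtering to the process being monitored.

```python
# Minimal sketch of edge-side noise reduction (assumed thresholds and data):
# a dead-band filter forwards a sensor reading only when it differs from the
# last forwarded value by more than a tolerance, discarding redundant samples
# before they ever cross the WAN.

from typing import Iterable, Iterator

def dead_band(readings: Iterable[float], tolerance: float) -> Iterator[float]:
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > tolerance:
            last_sent = value
            yield value  # forward to the analytics engine

# Hypothetical temperature samples from one sensor (degrees C).
samples = [70.0, 70.1, 70.0, 70.2, 73.5, 73.6, 73.5, 69.9]
forwarded = list(dead_band(samples, tolerance=1.0))
print(f"forwarded {len(forwarded)} of {len(samples)} samples: {forwarded}")
```

Even in this toy example, only three of eight samples cross the network; at factory scale, that kind of reduction is the difference between a workable and an unworkable transport bill.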
Before an organization gets too far down the edge compute path, it will need to converge its edge network. Essentially, edge network convergence places the wired, wireless, and IoT networks in a single, integrated management domain.
Edge network convergence simplifies edge computing because the data that edge applications depend on originates from many different sources. Today, there is no guarantee that each source will have a hardwired connection to the network; most IoT sensors connect over a wireless interface such as Wi-Fi, BLE, Zigbee, Z-Wave, or another IoT protocol. The goal of edge network convergence is to provide a consistent, reliable view of every entry point to the network and to ensure that traffic is prioritized, secured, and routed to the appropriate application.
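One way to picture that “consistent view” is a normalization layer that maps payloads arriving over any transport onto a single record format, so that prioritization, security checks, and routing can be applied uniformly. The record fields, the priority policy, and the payload shapes below are illustrative assumptions, not a description of any particular product.

```python
# Sketch of normalizing heterogeneous sensor input (names and fields are
# illustrative): readings arriving over different radios are mapped onto one
# record format so that prioritization, security, and routing can be applied
# uniformly regardless of how the sensor reached the network.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NormalizedReading:
    sensor_id: str
    transport: str      # "wifi", "ble", "zigbee", "z-wave", "wired", ...
    value: float
    unit: str
    received_at: datetime
    priority: int       # assumed convention: higher = shaped ahead of bulk traffic

def normalize(raw: dict, transport: str) -> NormalizedReading:
    """Map a transport-specific payload onto the common record."""
    return NormalizedReading(
        sensor_id=raw["id"],
        transport=transport,
        value=float(raw["reading"]),
        unit=raw.get("unit", "unknown"),
        received_at=datetime.now(timezone.utc),
        priority=2 if transport in ("wifi", "wired") else 1,  # illustrative policy
    )

# Two hypothetical payloads, one per transport:
print(normalize({"id": "temp-01", "reading": "71.3", "unit": "C"}, "zigbee"))
print(normalize({"id": "press-07", "reading": "2.4", "unit": "bar"}, "wifi"))
```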
At this point, one might ask, “Can’t this just be done at the router or switch level?” Although this is a valid question, it is somewhat simplistic. First, there is no guarantee that all sensors will use the same wireless interface, so a common infrastructure is needed to collect that data. Second, IT organizations will want to prioritize this traffic as soon as it enters the network, which means being able to shape traffic starting at the Wi-Fi access point (AP) as well as at the switch. Finally, organizations will want configuration, security, and prioritization to remain consistent through any reconfiguration or change in the local network infrastructure.
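As a hedged sketch of prioritizing traffic at the point of origin, the snippet below marks a telemetry packet’s DSCP field (here EF, Expedited Forwarding) using the standard socket API, so that APs and switches configured to honor DSCP can shape the flow from its first hop. The collector address is hypothetical, and the marking only helps if the converged network is actually configured to act on it.

```python
# Marking sensor traffic at the source so the network can shape it end to end.
# DSCP EF (46) is a real, standardized code point (RFC 3246); enforcement
# still depends on the AP/switch fabric honoring the marking.

import socket

DSCP_EF = 46              # Expedited Forwarding code point
TOS_VALUE = DSCP_EF << 2  # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Hypothetical telemetry collector on the local edge network.
sock.sendto(b'{"sensor": "temp-01", "value": 71.3}', ("192.0.2.10", 5000))
sock.close()
```

This is also why consistency matters: a sensor gateway can mark its traffic identically everywhere, but the behavior the marking produces must be uniform across every AP and switch it traverses.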
The success of Industry 4.0 depends heavily on the rapid collection, analysis, and coordination of numerous processes and devices in near-real time. Edge computing will play a critical role, although moving the analytics engine to the edge is only part of the solution. From our perspective, edge convergence will play a foundational role in the success of an edge computing environment and in the ability to process data quickly and efficiently. Whether organizations are currently considering edge computing network designs or eyeing them for the future, edge convergence can be implemented today, with operational and IT benefits that are evident in both the short and long term.