The second post in this four-part series identifying the main archetypes for edge applications, and the technology required to support them, covers the Human-Latency Sensitive Archetype. The archetypes were defined as a result of an extensive analysis of established and emerging edge use cases considered to have the greatest impact on businesses and end users. The full report on the archetypes is available here.
The Human-Latency Sensitive Archetype covers instances where services are optimized for human consumption. As the name suggests, speed is the defining characteristic of this archetype. Studies have found that increasing latency above 13 milliseconds (ms) has an increasingly negative impact on human performance for a given task, so use cases within this archetype require relatively low latency to keep users engaged.
The challenge of human latency can be seen in the customer-experience optimization use case. In applications such as e-commerce, speed has a direct impact on the user experience; web sites optimized for speed using local infrastructure see a direct increase in page views and sales.
This effect also extends to payment processing. Amazon found that a 100-ms delay in payment processing caused a 1% decrease in retained revenue. Centralized approval via password took, on average, 7 seconds. A move to local processing brought the time down to 600 ms, an improvement of 6,400 ms, with each 100 ms potentially resulting in an extra 1% of retained revenue.
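The back-of-the-envelope arithmetic above can be sketched as follows. This is a minimal illustration using only the figures quoted in the post; the 1%-per-100-ms relationship is a rough rule of thumb drawn from the Amazon finding, not a guaranteed linear law.

```python
# Latency figures quoted in the post (averages).
CENTRALIZED_APPROVAL_MS = 7_000   # centralized approval via password
LOCAL_PROCESSING_MS = 600         # after moving to local processing

# Heuristic from the post: each 100 ms saved ~ 1% extra retained revenue.
# Treat this as a rough rule of thumb, not a precise linear model.
PCT_PER_100_MS = 1.0

improvement_ms = CENTRALIZED_APPROVAL_MS - LOCAL_PROCESSING_MS
implied_uplift_pct = (improvement_ms / 100) * PCT_PER_100_MS

print(improvement_ms)       # 6400 ms saved per transaction
print(implied_uplift_pct)
```

In practice the relationship between latency and revenue flattens out well before gains of this size, which is why the post hedges the figure with "potentially."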
Another emerging example of a human-latency sensitive application is natural language processing. Voice is likely to be the primary form of interaction with everyday IT applications in the future. Natural language processing for Alexa and Siri is currently performed in the cloud. However, as the volume of users, applications and supported languages increases, it will be necessary to migrate these capabilities closer to users.
Other human-latency use cases identified include smart retail, such as the cashier-less Amazon Go stores, and immersive technologies such as augmented reality, where small latency lags can mean the difference between fun and nausea. In each case, delays in delivering data directly impact either a user's technology experience, as with language processing and augmented reality, or a retailer's sales and profitability, as with web site optimization and smart retail. As these use cases grow, so too will the need for local data processing hubs.
Next up: Machine-to-Machine Latency Sensitive