There’s a perfect storm coming our way. Data is being generated at a faster pace than we’ve ever seen before. Nearly 7 billion Google searches take place every day, and WhatsApp users exchange up to 65 billion messages. Everything is collecting data — smart cities, smart cars, and even smart doorbells. All of these devices and sensors require processing and analysis to make their data useful. And when data sits unused, it costs businesses between $9.7 million and $14.2 million annually.

This data explosion continues to provide challenges for the enterprise. Data, artificial intelligence, and machine learning are all rapidly becoming intertwined. AI is fed by data, and making sense of data requires AI. Gartner research predicts 75% of enterprise-generated data will be "created and processed outside a traditional centralized data center or cloud" by 2025.

To cope with that level of change, the industry must create platforms that support low-latency, dense compute capabilities within edge data centers. These platforms need to offer at least the same server resiliency and serviceability as those in larger-scale sites if the expansion at the edge is to be effective.

Until more solutions are developed, the industry runs the risk of “edge washing.” Being edge-ready will need to be as much about sustainability as about operational resilience, and truly sustainable solutions will need to be engineered from the ground up. It won’t suffice to take a solution developed for inside the data center, tweak it, and then place it at the edge. Solutions will come to market to test the parameters, but many will not succeed because they did not use the right type of electronics and chips, or because they skipped something as simple as conformal coating to protect the server boards in otherwise exposed environments.

A clear solution to edge server environment design is chassis-level precision immersion liquid cooling. There are several variant solutions that address these edge conditions, and most offer a sealed chassis, which creates a controlled environment that is impervious to dust, gases, and humidity. These solutions are also able to maintain data center compute density while offering improved energy efficiency. This allows high-speed, higher-processing power servers to be efficiently cooled by liquid. Sealed chassis servers also ensure that external environmental factors do not affect the compute capability of the edge system.

Autonomous vehicles are often cited as examples of high-performance compute at the edge, and for good reason: they constantly generate data for the predictive analytics and search patterns that keep drivers safe on the road. In a split second, that data needs to be filtered, analyzed, and acted on. If you are trying to predict whether a car accident is about to happen, latency becomes a critical issue when moving data back to a centralized data center. The infrastructure needs to support the speeds and feeds of the data being generated; otherwise, you can have a very serious problem on your hands.
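The latency argument can be made concrete with a rough budget comparison. The figures below are illustrative assumptions, not measurements: an assumed on-board processing time, an assumed decision deadline, and assumed one-way network latencies for each placement of the compute.

```python
# Illustrative latency budget for a vehicle decision loop.
# All figures are rough assumptions for comparison, not measurements.

PROCESSING_MS = 10          # assumed inference/processing time
DECISION_BUDGET_MS = 100    # assumed total time allowed for a safety decision

# Assumed one-way network latencies (milliseconds) for each compute placement
paths = {
    "on-vehicle / roadside edge": 2,
    "metro edge data center": 10,
    "centralized cloud region": 50,
}

for name, one_way_ms in paths.items():
    total = PROCESSING_MS + 2 * one_way_ms   # round trip plus compute
    verdict = "within budget" if total <= DECISION_BUDGET_MS else "over budget"
    print(f"{name}: {total} ms ({verdict})")
```

Under these assumptions, only the edge placements leave headroom inside the decision window; the round trip to a distant centralized region alone consumes the entire budget.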

Consider, as well, the retail environment. Real-time data is used to improve the in-store customer experience. However, the equipment and servers needed for that capability have to come in a form factor suited to a retail environment. Floor space is a premium asset, and any computing device that reduces floor displays or stock room footprint is costing the retailer money. Liquid-cooled compute solutions come in form factors identical to air-cooled servers, with the benefit of greatly increased compute density in similar footprints.

As the move to IoT and edge computing continues, colocation is becoming an option for organizations that don’t want to manage hundreds of distribution points. However, it is also likely to be a greater point of disruption from AI. Colocation facilities were designed for legacy, non-compute-intensive applications at 5 kW to 8 kW per rack. If multiple tenants deploy AI and machine learning applications at 30 kW per rack, the power and cooling limits within the data center are quickly reached.
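The density gap is easy to see with back-of-envelope arithmetic. The per-rack figures below come from the article; the ten-rack tenant cage is a hypothetical used only to scale the comparison.

```python
# Back-of-envelope check of AI rack demand against a legacy colocation budget.
# Per-rack figures from the article; the ten-rack cage is hypothetical.

LEGACY_RACK_BUDGET_KW = 8      # upper end of the 5-8 kW legacy design range
AI_RACK_DRAW_KW = 30           # AI/ML rack draw cited in the article

racks = 10                                          # hypothetical tenant cage
facility_budget_kw = racks * LEGACY_RACK_BUDGET_KW  # power the facility planned for
ai_demand_kw = racks * AI_RACK_DRAW_KW              # power the AI tenant asks for

shortfall_kw = ai_demand_kw - facility_budget_kw
print(f"Budget: {facility_budget_kw} kW, demand: {ai_demand_kw} kW, "
      f"shortfall: {shortfall_kw} kW")
```

A single 30 kW rack consumes the power envelope of nearly four legacy racks, which is why power and cooling in a multi-tenant facility max out so quickly.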

The good news is that the industry has developed solutions to address these issues. Over the last couple of decades, many studies have addressed data center energy consumption, and the industry has made massive gains by focusing on best practices for optimizing energy use and on newer technologies that deliver more capability for the same energy. The shift to the edge will, however, disrupt these efforts. The economies of scale for infrastructure and solutions in a centralized data center will not be easily reproduced at the edge, if at all. The question becomes: how can the ruggedized equipment required for the edge maintain data center density and improve energy efficiency?

Edge locations contend with a variety of harsh IT environments: at one extreme, the cold, damp northern climates; at the other, the hot, humid southwestern states. There are also airborne contaminants, particles, and corrosive gases to be aware of, all of which need to be closely monitored to protect servers regardless of their location. ASHRAE outlines key considerations for the reliable operation of servers and equipment in edge locations, from checking IT specifications to understand the impact on equipment warranties, serviceability, and corrosion limits, to the effects of airflow and temperature on equipment. New standards are likely to evolve as more deployments occur in unusual locations — utility towers, light poles, and perhaps even vaults beneath pavements.

Enterprises are at the center of an unprecedented data explosion. Data, AI, and machine learning are becoming ubiquitous across multiple industries all over the world. Now, more than ever, it is time for enterprises to have an edge transition plan in place. With the right preparation, organizations will be able to capitalize on real-time insights to create greater value for their business.