While attending the DCD Enterprise conference in NYC in May, I was amazed at the number of presentations and vendor booths highlighting edge computing, 5G, and digital transformation (although I am still wondering, was it all analog before 2018?). Of course, let me not overlook artificial intelligence, machine learning, DCIM, blockchain, cloud, and hybrid cloud. So in light of the above, I thought I would take a metaphorical peek over the edge.

In the telecommunication world, communications equipment has always been designed and expected to operate over a much wider range of environmental conditions than standard IT equipment intended for a typical data center. From the Arizona deserts to the Alaskan tundra, the “Telecom Shack” is expected to operate reliably in hostile conditions with minimal support.

Telecom-rated equipment typically complies with the NEBS standard, which Telcordia inherited from Bell Labs, the mothership, aka “Ma Bell.” The NEBS environmental operating envelope is much wider than ASHRAE’s “recommended” range of approximately 65° to 80°F (18° to 27°C). However, as telecom has become more data-centric, the NEBS and ASHRAE standards have become more aligned through ASHRAE’s latest expanded “allowable” ranges: A3 (41° to 104°F [5° to 40°C]) and A4 (41° to 113°F [5° to 45°C]). In fact, the basic NEBS ranges are listed and compared in the 4th edition of the ASHRAE Thermal Guidelines. However, as you can see from the example below, NEBS-rated equipment must withstand far more extreme conditions.

For example, here is a portion of the short-term testing requirements of NEBS equipment from GR-63-CORE, as cited in the 4th edition of the ASHRAE Thermal Guidelines:


Dry-bulb temperature

  • Frame level: –5°C (23°F) to 50°C (122°F); 16 hours at –5°C, 16 hours at 50°C
  • Shelf level: –5°C (23°F) to 55°C (131°F); 16 hours at –5°C, 16 hours at 55°C
  • Max rate of change: 96°C/hr (173°F/hr) warming and 30°C/hr (54°F/hr) cooling

Relative humidity (RH)

  • 5% to 90%; 3 hours at <15% RH, 96 hours at 90% RH
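
To make these limits a bit more concrete, here is a minimal sketch (Python, purely illustrative and not taken from any standard or product) that encodes the shelf-level short-term values quoted above and checks a hypothetical sensor reading against them:

```python
# Illustrative only: the limit values below come from the GR-63-CORE excerpt
# above; the function and the sample reading are hypothetical.

SHELF_SHORT_TERM = {
    "temp_c_min": -5.0,        # -5 C (23 F)
    "temp_c_max": 55.0,        # 55 C (131 F)
    "rh_pct_min": 5.0,         # 5% RH
    "rh_pct_max": 90.0,        # 90% RH
    "warming_c_per_hr": 96.0,  # max rate of change while warming
    "cooling_c_per_hr": 30.0,  # max rate of change while cooling
}

def within_short_term_limits(temp_c, rh_pct, ramp_c_per_hr):
    """Return True if a single reading sits inside the short-term envelope."""
    lim = SHELF_SHORT_TERM
    ok_temp = lim["temp_c_min"] <= temp_c <= lim["temp_c_max"]
    ok_rh = lim["rh_pct_min"] <= rh_pct <= lim["rh_pct_max"]
    max_ramp = lim["warming_c_per_hr"] if ramp_c_per_hr >= 0 else lim["cooling_c_per_hr"]
    ok_ramp = abs(ramp_c_per_hr) <= max_ramp
    return ok_temp and ok_rh and ok_ramp

# Example: 48 C, 20% RH, warming at 40 C/hr is still inside the envelope.
print(within_short_term_limits(48.0, 20.0, 40.0))  # True
```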

It is also worth noting that, based on Telcordia research, the major regional service providers have shut down almost all humidification, which is now generally accepted telecom practice.

While the full NEBS GR-63 series of documents is quite extensive and goes far beyond this simplified example, the 173°F/hr maximum warming rate of change (roughly a rise of 3°F per minute) is a good indication of the expectation that NEBS-rated IT equipment must remain functional even if cooling is lost. By comparison, that is nearly five times ASHRAE’s 36°F/hr (20°C/hr) rate of change.
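
For those who like to check the arithmetic, the conversion behind those figures is straightforward (an illustrative calculation only):

```python
# Delta-T conversions behind the rates quoted above (1 C of change = 1.8 F).

nebs_warming_c_per_hr = 96.0                            # GR-63-CORE max warming rate
nebs_warming_f_per_hr = nebs_warming_c_per_hr * 1.8     # ~173 F/hr
nebs_warming_f_per_min = nebs_warming_f_per_hr / 60     # ~2.9 F/min, i.e., roughly 3 F/min

ashrae_max_f_per_hr = 36.0                              # ASHRAE rate of change (20 C/hr)
ratio = nebs_warming_f_per_hr / ashrae_max_f_per_hr     # ~4.8, i.e., nearly five times

print(round(nebs_warming_f_per_hr), round(nebs_warming_f_per_min, 1), round(ratio, 1))
# 173 2.9 4.8
```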

The classic remote telecom shack HVAC system provides far less temperature stabilization than a data center. In all but the most extreme locations, it delivers just enough temperature control to take the edge off highly variable, rapidly fluctuating outdoor temperatures, and virtually no humidity control.

So with the expected explosion of 5G nodes and the related demand for low-latency distributed edge computing, what environmental rating of IT hardware will be used, and what type of environmental control systems will be deployed?

In a previous article, “The 5G Data Center” (https://bit.ly/2zUJTO0), I discussed the fact that 5G coverage ranges will be much shorter because the higher frequencies cannot penetrate buildings and other objects as well as current 4G LTE signals can. More 5G nodes will therefore be required, and each node will need much greater data processing and storage capacity and overall network throughput. While conceptually similar to the small telecom shacks at today’s cell towers, each 5G node will effectively become an edge data center.

In many situations the hardware will need to be densely concentrated into compact, weatherproof, self-contained enclosures the size of a single rack, a half-height rack, or even a suitcase. Many of these will be installed in locations with no easily accessible external source of mechanical cooling, which is a very difficult environment for conventional air-cooled IT equipment.

Initial cost, energy efficiency, and energy costs will also come into play if cooling limitations force the use of more expensive NEBS-type IT equipment, which relies on much greater airflow to operate at higher temperatures. I believe some of these smaller units will begin to be based on internal liquid cooling, allowing them to operate continuously in ambient temperatures of up to 140°F (60°C) without fans or mechanical cooling.

Most of these locations will have utility power, but far fewer will have generator back-up, so the IT equipment will still need to ride through an outage on internal back-up batteries. Those batteries must also withstand the same extreme temperature range, which will likely push designs toward lithium-ion (Li-ion) technology, since it is less affected by these temperatures.
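
As a rough illustration of what riding through an outage implies, here is a hypothetical back-of-the-envelope battery sizing sketch; the load, back-up target, and usable-capacity figures are my own assumptions, not vendor specifications:

```python
# Hypothetical battery sizing for a small edge node. All inputs are assumptions.

it_load_kw = 5.0        # assumed average IT load of a compact edge enclosure
backup_target_hr = 8.0  # assumed ride-through target during a utility outage
usable_fraction = 0.8   # assumed usable depth of discharge for Li-ion

required_usable_kwh = it_load_kw * backup_target_hr              # 40 kWh delivered
required_nameplate_kwh = required_usable_kwh / usable_fraction   # ~50 kWh installed

print(required_usable_kwh, round(required_nameplate_kwh))  # 40.0 50
```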

In addition, some remote sites will use solar power, supported by Li-ion batteries (or whatever new battery chemistry is developed in the next few years) capable of eight to perhaps 24 hours of back-up time for the IT hardware. So even if mechanical cooling were powered by the solar array, cooling would be unavailable overnight, further reinforcing the need for NEBS temperature ranges, especially the tolerance for a high rate of temperature rise, and again favoring liquid cooling. One of the thermal advantages of liquid cooling is temperature stability, since the volumetric thermal mass of a liquid is orders of magnitude greater than that of air. That puts far less mechanical stress on the CPU and other components from rapid, repeated temperature swings, ensuring greater overall reliability, which is a telecom mandate.
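
To put that thermal mass comparison in perspective, here is a quick calculation using standard textbook properties near room temperature (water is used here simply as a stand-in for whatever coolant a given design actually employs):

```python
# How much heat a given volume can absorb per degree of temperature rise.

air_density = 1.2       # kg/m^3
air_cp = 1.005          # kJ/(kg*K)
water_density = 997.0   # kg/m^3
water_cp = 4.18         # kJ/(kg*K)

air_volumetric = air_density * air_cp        # ~1.2 kJ/(m^3*K)
water_volumetric = water_density * water_cp  # ~4,170 kJ/(m^3*K)

print(round(water_volumetric / air_volumetric))  # roughly 3,500x per unit volume
```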

There is no doubt that the established telecom equipment manufacturers and service providers are well aware of these environmental challenges. However, it may take a while for newer equipment providers jumping into the edge data center marketplace to realign their thinking around conditions that differ greatly from those of a traditional data center. This is not to say that air-cooled IT hardware cannot be used in many cases, but I believe that once liquid cooling has proven its case and benefits in these hostile conditions, out of necessity, it will ultimately become more cost effective as volume manufacturing lowers its price.

While high-performance computing (HPC) has been the more common driver of liquid cooling, Google recently announced that it used liquid cooling for its latest artificial intelligence hardware. The newest version of its Tensor Processing Units (TPUs) could not be effectively air cooled despite the massive heat sinks used on previous TPUs, so Google turned to liquid-cooled heat sinks instead. This is not just an AI research project; Google is offering it as an accelerated machine learning cloud-based service, so it will become a large-scale deployment.

As more high-profile cloud service providers begin to deploy liquid-cooled IT hardware, it will help validate its feasibility and reliability and accelerate industry acceptance. As an example, it was hyperscalers such as Google and Facebook that first piloted direct airside free cooling. They then began to build their own massive production data centers designed to bring outside air into the data center, something the traditional data center community saw as heresy at the time. One of their primary motivations was to improve energy efficiency, which lowered their operating costs; it also reduced the initial cost of the facility, since they avoided or reduced the investment in mechanical cooling equipment.

Furthermore, a long-term benefit of liquid cooling is that it improves the recovery and reuse of the waste heat that data centers reject into the environment. Even at a super-low PUE of 1.0, for each megawatt of power going into the facility, a megawatt of heat is added to the environment (even if it is dumped into a lake or river). Instead, that heat could be used to reduce the amount of fossil fuel burned for heating, as well as its cost. For example, Stockholm Data Parks, along with a power grid provider, is expanding efforts to attract data center operations based on a heat recovery marketplace called Open District Heating, which gives credits for waste heat that is used for district heating. I consider this a real example of a “smart city.”
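
As a simple illustration of what is at stake, consider the following back-of-the-envelope calculation; the PUE and the fraction of rejected heat that can practically be captured for reuse are my own assumptions, not figures from Stockholm Data Parks or Open District Heating:

```python
# Essentially all power drawn by a data center ends up as heat, so even a
# perfect PUE of 1.0 still rejects the full IT load as heat. Inputs are assumed.

it_load_mw = 1.0         # assumed IT load
pue = 1.2                # assumed facility PUE
recovery_fraction = 0.7  # assumed share of rejected heat capturable for reuse

facility_power_mw = it_load_mw * pue                      # 1.2 MW drawn from the grid
heat_rejected_mw = facility_power_mw                      # nearly all of it becomes heat
reusable_heat_mw = heat_rejected_mw * recovery_fraction   # ~0.84 MW for district heating

print(facility_power_mw, heat_rejected_mw, round(reusable_heat_mw, 2))  # 1.2 1.2 0.84
```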


THE BOTTOM LINE

So while 5G networks, smart cities, and fully autonomous networked cars controlled by AI (and perhaps even flying taxis) may arrive faster than we expect, somewhere (and everywhere) there will still be a need for IT hardware, which still needs to be powered and to reject heat, somehow. And while the big data centers are not yet ready to join the dinosaurs as distributed and edge processing becomes the next big thing, here at the Hot Aisle we still like to think about keeping it all cool.