The technology world today is consumed with and driven by data. Now more than ever, we find ourselves accumulating, processing, and storing a tremendous volume of data that we’re then tasked with interpreting.

This need to accumulate and process vast amounts of digital information has led to several innovations, from supercomputers and cloud computing to AI and machine learning. This, in turn, accelerates demand for quicker delivery, better efficiency, and faster time to value.

Operating in a global market comes with the expectation that access to these vast amounts of data be instant, and we continue to see new innovations, all of which are tied to a data center in some fashion.

Challenges Facing the Data Center Industry

Data centers are large, high-cost facilities: collectively, they occupy hundreds of thousands of acres and consume more than 205 terawatt-hours of electricity annually. As of 2018, the largest data center in the world takes up a whopping 6.3 million square feet.

Traditional data centers rely on air cooling, which is thermodynamically inefficient at rejecting heat and therefore requires more space to spread out the power and heat generated. In a traditional setup, operators can only blow so much cold air into, or pull so much hot air out of, a given space before the equipment has to be spread out. This leads to larger data centers and poor efficiency in both cooling and space utilization.

More recently, many other new or compounded challenges have surfaced, including:

  • Moore’s Law has stalled —
    • Moore’s Law states that the number of transistors on a computer chip, and with it computing power, doubles every 18 to 24 months, and this held true until recently. Air is no longer an efficient enough heat-rejection vehicle and, in some cases, is simply incapable of cooling modern chipsets.
    • The next generation of chipset designs will take 48-V power natively, drawing more power and generating more heat, which will further strain traditional air cooling.
  • Increase in public demand for more efficient utilization of space and power —
    • We can no longer take up acres upon acres of space for buildings, as we are running out of optimal areas for these large data centers. Additionally, technology devices are getting smaller, so why are data centers not following suit?
    • Power and electricity demands are at an all-time high and will continue to increase. The public is demanding that the data center industry use power more efficiently and pursue more sustainable alternatives.
    • Power usage effectiveness (PUE) measures how much of the power entering a facility actually reaches the computing equipment: it is the ratio of total facility power to IT equipment power, so a perfect score is 1.0. Most data centers have a PUE between 1.2 and 1.75, with the majority falling in the 1.4 to 1.6 range.
    • That range means that for every kilowatt delivered to the IT equipment, another 0.2 to 0.75 kW is consumed by cooling and other overhead. Even at the best end, only about 83% of the facility’s power is doing computing (see the short calculation after this list).
  • Demand for the Edge —
    • The current demand for access to data means that compute capacity needs to sit closer to where information is processed and consumed, also known as the edge. This creates additional demand for smaller data centers.
  • Innovation is not slowing down.
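
To make the PUE arithmetic above concrete, here is a minimal sketch in Python; the 1,000 kW facility feed is an illustrative figure, not one from the article:

    # A minimal sketch of what a given PUE implies about where facility
    # power goes. PUE = total facility power / IT equipment power.
    TOTAL_KW = 1_000.0  # assumed utility feed, for illustration only

    for pue in (1.2, 1.4, 1.6, 1.75):
        it_kw = TOTAL_KW / pue          # power that reaches the servers
        overhead_kw = TOTAL_KW - it_kw  # cooling and other overhead
        print(f"PUE {pue}: {it_kw:.0f} kW to IT, {overhead_kw:.0f} kW "
              f"({100 * overhead_kw / TOTAL_KW:.1f}%) to overhead")

At a PUE of 1.2, roughly 833 kW of a 1,000 kW feed reaches the IT equipment and about 167 kW (17%) is spent on overhead; at 1.75, overhead consumes nearly 43% of the feed.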

Traditional air-cooled data centers will continue to serve many legacy applications. However, better solutions now exist that reduce space, cost, and energy use, operate sustainably, and compute at far higher densities, reinvigorating Moore’s Law and enabling future technology innovations.

Liquid Cooling and Single-Phase Immersion

Using liquid to cool data center technology is not a new concept. In fact, the first innovation in liquid cooling used water to cool the hot side of the server cabinet and bring temperatures down. Because water conducts electricity, it is contained in a basin and circulated through pipes that run alongside the servers behind a barrier; the water cools the equipment without ever making contact.

That first innovation, and the thermodynamics behind it, led to the evolution known as liquid immersion cooling, which comes in two forms: single-phase immersion cooling and two-phase liquid immersion cooling.

In single-phase immersion cooling, servers are immersed in a special cooling fluid that, unlike water, does not readily conduct electricity. This specialized liquid, called a dielectric fluid, is hydrocarbon-based and comparable to mineral oil. Throughout the cooling process, the fluid remains in its liquid phase, with heat transferring from the electronic components into it. These systems are not passive: they require pumps to circulate the hot fluid to a heat exchanger, where it is cooled and returned to the tank.
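
As a rough illustration of why those pumps matter, here is a back-of-the-envelope sizing sketch; the 50 kW tank load, fluid properties, and temperature rise are assumed values typical of hydrocarbon oils, not vendor specifications:

    # Sensible-heat balance for a single-phase immersion loop:
    # Q = m_dot * cp * dT, so m_dot = Q / (cp * dT).
    HEAT_LOAD_W = 50_000    # assumed IT load in one tank (50 kW)
    CP_J_PER_KG_K = 1_800   # assumed specific heat of the oil
    DENSITY_KG_M3 = 850     # assumed fluid density
    DELTA_T_K = 10          # assumed temperature rise across the tank

    mass_flow = HEAT_LOAD_W / (CP_J_PER_KG_K * DELTA_T_K)    # kg/s
    volume_flow_lpm = mass_flow / DENSITY_KG_M3 * 1000 * 60  # litres/minute

    print(f"Pumps must circulate ~{mass_flow:.1f} kg/s (~{volume_flow_lpm:.0f} L/min)")

Under these assumptions, a single 50 kW tank needs roughly 200 litres of fluid moved every minute, which is why single-phase systems cannot be passive.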

A big advantage of single-phase immersion cooling is a significantly reduced PUE; providers claim mechanical-only PUEs well below those of air-cooled facilities. However, there are a couple of drawbacks. The fluid is flammable, messy, and leaves a slippery residue. Additionally, most OEMs will not warranty a server operated in single-phase immersion, and having employees swap servers over a vat of hot fluid is a safety hazard.

Recently, there have been significant strides in the evolution of cooling technology, as dielectric fluid providers and data center manufacturers have worked together on a new approach.

Two-Phase Liquid Immersion Cooling

Two-phase liquid immersion cooling (2PLIC) is the latest evolution of cooling technology. As in single-phase immersion cooling, electronic components are submerged in a dielectric heat-transfer liquid, but the two fluids behave in starkly different ways. In a two-phase system, the high-performance fluid absorbs heat from the components and begins to boil; the resulting vapor rises and condenses on a heat exchanger inside the tank, returning to the bath as liquid. This closed-loop, two-phase cycle keeps the fluid completely contained, preventing any loss of fluid during operation. The simplicity of this passive system requires less mechanical cooling infrastructure, allowing for lower or zero water usage, less square footage, and higher overall efficiency.
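
A simple latent-heat balance shows why this cycle needs no pump; the numbers below are assumptions (a 50 kW tank and a heat of vaporization on the order of engineered fluorocarbon fluids), offered as a sketch rather than a specification:

    # Steady-state latent-heat balance for a 2PLIC tank:
    # all heat leaves the components as vapor, so Q = m_dot * h_fg.
    HEAT_LOAD_W = 50_000     # assumed IT load in one tank (50 kW)
    H_FG_J_PER_KG = 100_000  # assumed latent heat of vaporization

    boil_off_rate = HEAT_LOAD_W / H_FG_J_PER_KG  # kg/s vaporized

    print(f"~{boil_off_rate:.2f} kg/s of fluid boils off, condenses on the "
          "in-tank coil, and drips back; net loss in the sealed tank is zero")

Buoyancy carries the vapor up to the condenser coil and gravity returns the condensate, so the fluid itself does the work that pumps do in a single-phase system.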

The liquid used in 2PLIC also differs from single-phase fluids: it is engineered to boil at a precise temperature, leaves no residue, resists degradation, and needs no pump to move heat. Companies like 3M and Solvay have been engineering these clean, environmentally friendly, and nonflammable fluids for almost 40 years.

Most 2PLIC systems operate at a full-system (not just mechanical) PUE of 1.028, meaning less than 3% of incoming energy goes to cooling infrastructure and more than 97% reaches the computing equipment (1/1.028 ≈ 0.973). Through this technology, companies can be more energy efficient, improve system reliability, operate at higher density, and deploy in more places. Done correctly, 2PLIC will enable companies to be more productive at lower cost.

As information about 2PLIC spreads, companies are beginning to educate themselves on the solution and the impact it can have on their businesses. Today, only a few companies have been able to create and operate a data center platform that takes advantage of this process and the specially engineered fluids that go with it. For example, TMGcore developed a self-contained robotic data center platform called OTTO, which uses 2PLIC technology to increase efficiency and compute density, lower costs, and, thanks to its compact size, put a data center almost anywhere. For reference, the 600 kW OTTO takes up only 160 square feet.
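
That density claim is easy to sanity-check; the 150 W per square foot figure for traditional air-cooled white space below is an assumed industry rule of thumb, not a number from TMGcore:

    # Rough footprint comparison for the quoted OTTO figures.
    OTTO_KW, OTTO_SQFT = 600, 160  # figures quoted above
    AIR_COOLED_W_PER_SQFT = 150    # assumed traditional density

    otto_w_per_sqft = OTTO_KW * 1000 / OTTO_SQFT
    equivalent_sqft = OTTO_KW * 1000 / AIR_COOLED_W_PER_SQFT

    print(f"OTTO: {otto_w_per_sqft:.0f} W/sq ft; the same 600 kW air-cooled "
          f"would need ~{equivalent_sqft:,.0f} sq ft")

Under that assumption, the same 600 kW load in a conventional facility would occupy roughly 4,000 square feet, about 25 times the OTTO footprint.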

So why aren’t we seeing 2PLIC data centers everywhere? The simple answer is that the IT server hardware is not yet being mass-produced. As mentioned above, Moore’s Law had stalled, but tech giants such as Dell and Intel are working to develop hardware specifically for 2PLIC. A couple of server options are available today, and as Moore’s Law is reinvigorated, other OEMs will develop hardware of their own, allowing for the rapid mass adoption of 2PLIC.

The Future Is Here

With 2PLIC removing the heat-rejection bottleneck, Moore’s Law can be reinvigorated, and new technology innovations can be perfected and deployed in the very near future. With current and future data centers needing more compute power deployed at the edge, the applications are endless. A few examples of areas where 2PLIC technology can immediately improve operations are:

  • AI/machine learning
  • Health care — gene and virus mapping
  • Cloud delivery
  • 5G
  • Autonomous driving
  • Smart cities
  • Graphic processing
  • Financial services and trading
  • E-gaming
  • Video streaming
  • Computational fluid dynamics on-site on demand
  • On-rig seismic processing for oil and gas

These applications require higher computing densities and, in particular, compute at the edge. Many of the challenges they currently face can be bypassed with 2PLIC technology, delivering faster time to value for companies globally and enabling future innovations.