Over the last 30 years, the IT industry — data centers in particular — has watched the pendulum swing from centralized to decentralized and back again. Is it any wonder mission critical operations are so driven by the buzzword of the day? In the last decade, as we decentralized into the cloud, we all learned it’s not cost-effective to build data centers for less than 2 MW of power, right? We knew the economies of scale wouldn’t allow us to be competitive. But with the edge needing data and processing close at hand to reduce latency for data-intensive applications, we’re now rethinking how our facilities can best support dense new processing and storage.

So why are edge data center applications breaking the mold? If you ask me, the old progression of business requirements driving IT requirements, which in turn drive data center facility requirements, is once again in vogue. If the business application can perform better by processing some cached data at the edge, then that’s what must be done. In 2018, Gartner predicted that 80% of traditional enterprise data centers as we know them today will be closed by 2025. Some of this transition will be driven directly by edge applications. Global Market Insights forecasts the edge data center market to grow from $4.5 billion in 2018 to $16 billion by 2025.

But hold on — hasn’t data center resiliency also driven many to abandon building and managing their own facilities, since they can’t possibly build to the scale that major colos or hyperscalers can? While it’s true that most enterprise users can’t build to that scale of redundancy and resiliency, we must remember that advances in IT resiliency are what drove the market to the cloud and hyperscalers in the first place. IT innovation and resiliency can overcome shortcomings in facility resiliency. We realized that once we could replicate data across distances of more than 20 miles; suddenly, it made sense to locate our disaster recovery and business continuity sites well beyond that radius.

What’s Our Plan to Support the Edge?

This is the billion-dollar question that enterprise converged-infrastructure architects and engineers are now being bombarded with by C-level management, application owners, and vendors.

Business requirements drive edge application needs first. Video-on-demand media companies and manufacturers of self-driving cars are probably well underway with designing edge applications and architecture. But for many businesses, there may not be an edge play just yet.

Now, let’s say we do have a need for edge infrastructure to support an application. Should we renovate the data center and network room facilities we still have in our portfolio? Should we stand up modular containers or micro data centers? Should we follow the design requirements for building Tier III-plus facilities, since we know edge applications will be critical to our business? The answer may be yes to any or all of these ideas.

Plan for Today and Tomorrow

First, not only must we determine what edge applications need to run today, but we also have to get our arms around how the edge may change in three to five years. No one wants to take on another construction project every couple of years, or a forklift upgrade of expensive power and cooling equipment.

Most likely, your edge application will be smaller and easier to plan for than your old enterprise data center that was built big and later scaled back when virtualization cut the demand for space, power, and cooling infrastructure in half. This is where modular and scalable critical power architecture can be your friend, especially at the scale the edge will typically call for. 

You’ve Got Options

Even though modular and scalable UPS systems may cost more initially, you’ll quickly recover those costs the first time you need to scale capacity up or down. These systems typically offer internal redundancy of rectifier and battery components. 

At a minimum, you’ll also want to look at line-interactive UPS technology for your edge application. You’re investing in the edge to improve performance and resiliency, right? So, it only makes sense to invest in the more reliable UPS architecture.

Online double-conversion UPS technology will provide the highest level of power protection and the longest battery life for a site where you may not have 24/7 hands-on support. Still not sure? Keep in mind the Ponemon Institute reported the average cost of a data center outage in 2016 was nearly $9,000 per minute, or about $740,000 per incident. Adding to that, roughly 25% of data center outages are attributed to UPS and battery failure, and approximately 22% to human error, so you’ll also want to consider communications and remote support options for your backup power. Most major UPS providers offer SNMP and Modbus communications to feed the NMS, DCIM, BMS, and BAS software you’re running today. They also offer global 24/7 remote support, where they can augment your staff, alert you to issues, and proactively dispatch field support.
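To make that concrete, here’s a minimal sketch of what polling a UPS over SNMP can look like, using the standard UPS-MIB (RFC 1628) and the pysnmp library. The hostname and community string are placeholders, and many vendors expose richer data through their own MIBs, so treat this as a starting point rather than a drop-in monitor.

```python
# Minimal sketch: poll a UPS over SNMP using the standard UPS-MIB (RFC 1628).
# Hostname and community string are placeholders -- substitute your own.
# Assumes the pysnmp library's synchronous high-level API (pysnmp 4.x/5.x).
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

UPS_HOST = "ups.example.net"          # placeholder address of the edge-site UPS
OIDS = {
    "battery status":            "1.3.6.1.2.1.33.1.2.1.0",  # upsBatteryStatus
    "est. minutes remaining":    "1.3.6.1.2.1.33.1.2.3.0",  # upsEstimatedMinutesRemaining
    "est. charge remaining (%)": "1.3.6.1.2.1.33.1.2.4.0",  # upsEstimatedChargeRemaining
    "output source":             "1.3.6.1.2.1.33.1.4.1.0",  # upsOutputSource
}

for label, oid in OIDS.items():
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData("public", mpModel=1),   # SNMPv2c; prefer SNMPv3 in production
               UdpTransportTarget((UPS_HOST, 161)),
               ContextData(),
               ObjectType(ObjectIdentity(oid)))
    )
    if error_indication or error_status:
        print(f"{label}: poll failed ({error_indication or error_status})")
    else:
        print(f"{label}: {var_binds[0][1]}")
```

The same values can be forwarded into whatever NMS or DCIM platform you already run; the point is simply that an unstaffed edge site should never leave you guessing about battery state.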

Since your edge application most likely won’t be large, it’s worth considering lithium-ion batteries. While they’ll cost about twice as much up front, lithium-ion batteries weigh about half as much and take up roughly half the space of traditional lead-acid batteries, and they last longer too.
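If you want to sanity-check that trade-off for your own site, the math is simple enough to put in a few lines. The figures below are illustrative assumptions only, not vendor pricing, chosen to mirror the rough twice-the-price, half-the-space, longer-life comparison above.

```python
# Back-of-the-envelope comparison of lead-acid vs. lithium-ion for a small edge UPS.
# All figures are illustrative assumptions, not quotes -- plug in your own numbers.
lead_acid = {"purchase": 10_000, "service_life_yrs": 4,  "footprint_sqft": 12, "weight_lb": 2_000}
li_ion    = {"purchase": 20_000, "service_life_yrs": 10, "footprint_sqft": 6,  "weight_lb": 1_000}

def cost_per_year(batt):
    """Amortize the purchase price over the expected service life."""
    return batt["purchase"] / batt["service_life_yrs"]

for name, batt in (("Lead-acid", lead_acid), ("Lithium-ion", li_ion)):
    print(f"{name}: ${cost_per_year(batt):,.0f}/yr amortized, "
          f"{batt['footprint_sqft']} sq ft, {batt['weight_lb']} lb")
```

With these assumed inputs, the lithium-ion string actually comes out cheaper per year of service, on top of the space and weight savings.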

If your edge application is larger and you’re concerned about the California Fire Code or the IBC adopting NFPA 855, which requires greater clearances around lithium-ion installations and caps capacity at 250 kWh of batteries per fire-barriered room, you should still consider longer-life battery chemistries such as pure lead. An edge application of that size should have reliable, redundant backup generators, ideally with kinetic energy storage capabilities. That option can do away with the high cost, heavy maintenance, and space and weight burden of battery solutions altogether.

By the way, reduced hands-on maintenance and remote access shouldn’t be limited to backup power; you should apply those design criteria to your power distribution as well. Many shy away from the potential security risk of remotely switched outlet power distribution units (PDUs). But being able to remotely recover locked-up IT hardware with a cold reboot may deliver higher availability than waiting out the longer downtime of getting someone to a remote location to do it by hand.
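For what that remote cold reboot might look like in practice, here’s a hedged sketch that sends an SNMP set to a switched PDU using pysnmp. The outlet-control OID and command value are pure placeholders; every PDU vendor defines its own MIB for outlet control, so you’d substitute the values from your vendor’s documentation.

```python
# Sketch: cold-reboot a hung server by cycling its outlet on a switched PDU.
# The OID and command value are placeholders -- consult your PDU vendor's MIB.
# Assumes pysnmp's synchronous high-level API (pysnmp 4.x/5.x).
from pysnmp.hlapi import (
    setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, Integer,
)

PDU_HOST = "pdu.example.net"                       # placeholder PDU address
OUTLET_CONTROL_OID = "1.3.6.1.4.1.99999.1.1.4.3"   # placeholder vendor outlet-control OID
REBOOT_COMMAND = 3                                  # placeholder value for "reboot outlet"

error_indication, error_status, _, _ = next(
    setCmd(SnmpEngine(),
           CommunityData("private", mpModel=1),     # SNMPv2c write community; prefer SNMPv3
           UdpTransportTarget((PDU_HOST, 161)),
           ContextData(),
           ObjectType(ObjectIdentity(OUTLET_CONTROL_OID), Integer(REBOOT_COMMAND)))
)
print("reboot issued" if not (error_indication or error_status) else "reboot failed")
```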

My advice is to work with your network security team; they can reduce the risk by restricting remote access to a single IP address or a small handful of them. And since most edge applications will be smaller, keep in mind all the great options out there for single-IP-address appliances, including switched, metered power outlets for cold reboots of remote hardware, environmental monitoring, and IP security cameras.
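As a simple illustration of that allowlist idea, the sketch below checks a connecting address against a short list of trusted networks using Python’s ipaddress module. In practice you’d enforce this in a firewall or the appliance’s own access control list, and the addresses shown are documentation-range placeholders.

```python
# Sketch of the allowlist idea: accept remote management connections only from a
# short list of trusted sources. Addresses are documentation-range placeholders.
import ipaddress

ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.10/32"),   # jump host at headquarters (placeholder)
    ipaddress.ip_network("198.51.100.0/29"),   # NOC management subnet (placeholder)
]

def is_permitted(source_ip: str) -> bool:
    """Return True only if the connecting address falls inside an allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(is_permitted("203.0.113.10"))   # True  -- trusted jump host
print(is_permitted("192.0.2.55"))     # False -- everything else is refused
```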

Think Beyond the IT Load

Most anyone can easily come up with a solid plan and design for a reliable backup power and conditioning solution for an edge data center’s IT equipment. But one of the most overlooked areas in planning reliable power infrastructure is addressing the non-IT loads. I’m talking about support infrastructure, such as the mechanical load.

Of course, we don’t want to introduce motor loads with high inrush current demands that will cause harmonic distortion on our IT equipment’s UPS. But there is power conditioning equipment out there that can ride through blips, sags, surges, and spikes while regulating utility voltage and delivering continuous power to the critical support infrastructure. Some of this equipment can also correct power factor, putting otherwise wasted apparent power to work. So not only can you increase the resiliency of your support infrastructure, you can also save 10% to 15% on energy, with power conditioning equipment that typically pays for itself in 12 to 15 months.
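Here’s a quick, back-of-the-envelope way to test whether that payback claim holds for your site. Every number below is an assumption for illustration, roughly consistent with the 10% to 15% savings and 12-to-15-month ROI figures above; plug in your own load, utility rate, and equipment quote.

```python
# Illustrative payback math for power conditioning on the non-IT (mechanical) load.
# Every input is an assumption for the example -- substitute your own site data.
mechanical_load_kw = 50          # continuous non-IT support load (placeholder)
energy_rate = 0.12               # $ per kWh (placeholder utility rate)
savings_fraction = 0.12          # within the 10%-15% savings range cited above
equipment_cost = 7_500           # installed cost of the conditioning gear (placeholder)

annual_kwh = mechanical_load_kw * 24 * 365
annual_savings = annual_kwh * energy_rate * savings_fraction
payback_months = equipment_cost / (annual_savings / 12)

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_months:.1f} months")
```

With these assumed inputs, the simple payback lands in the low-teens of months, which is why the economics tend to work even at edge scale.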

Speaking of mechanical infrastructure, edge environments have unique airflow and cooling needs. Although chilled water typically offers the greatest efficiencies, edge loads will usually be much smaller than traditional data centers, so the economies of scale won’t be there for chilled water. Self-contained containers and racks typically use DX air-cooled solutions and are very efficient because they separate the cold supply air feeding the IT equipment from its hot exhaust return air.

Keep these same airflow management principles in mind even if you’re just outfitting a small IT room or closet. Ductless mini splits are often used in small IT rooms because they don’t take up valuable floor space, but they aren’t designed to separate the cold supply air from the hot exhaust return. To resolve this, ducts can be added, and containment solutions are available.
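When sizing cooling for one of those small rooms, a common rule of thumb is that required airflow in CFM is roughly 3.16 times the IT load in watts divided by the supply-to-return temperature rise in degrees Fahrenheit. The sketch below runs that estimate with illustrative numbers; your load and target delta-T will differ.

```python
# Rough airflow sizing for a small IT room, using the common rule of thumb
# CFM ~= 3.16 * watts / delta_T(F). Inputs are illustrative assumptions.
it_load_watts = 5_000       # small edge closet load (placeholder)
delta_t_f = 20              # supply-to-return temperature rise across the IT gear

required_cfm = 3.16 * it_load_watts / delta_t_f
print(f"Approximate cooling airflow needed: {required_cfm:,.0f} CFM")
# ~790 CFM here -- if the mini split or ducting can't move and separate that much
# air, hot exhaust will recirculate back into the IT intake.
```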

At the End of the Day…

Though the edge is new, it shouldn’t be viewed as something completely different from what we need for any of our critical operations. What should be different are the drivers behind the plan. Edge applications should be driven by business needs, and planning and designing reliable, resilient infrastructure should be driven by the need to support the applications that require low latency.