Power. It is the lifeblood of any mission-critical data center facility, but it certainly does not run cheap. There are many ways to configure your data center strategy to ensure your facility delivers power efficiently, so your customers get the best possible value out of their services and remain happy, loyal patrons.

The importance of efficiency cannot be overstated, as new data-reliant technologies (more on that below) continue to pour into the marketplace on a seemingly minute-by-minute basis. Here are just a few of the emerging technologies that will assuredly add to current facility power loads:

  • Internet of Things (IoT): This buzzword has millions of potential applications across both consumer and business segments. Analysts currently estimate that approximately 8.4 billion connected IoT devices are in use worldwide, and projections put that figure at 20.4 billion active IoT “units” by 2020.1 So, in a very short period of time, we’ll see a sizable uptick in IoT devices tapping into data centers, and with it a substantial increase in pressure on facility power systems.

  • 5G: Another highly anticipated emerging technology with serious data center power implications. The 5G market is expected to grow at a compound annual growth rate (CAGR) of 70%, reaching $28 billion in annual spending by 2025.2 Nearly everyone carries a mobile phone, and those devices are already extremely data-intensive. Once 5G becomes mainstream, though, facilities will see a surge in power demand as users expect on-the-go internet experiences at greater capacity and lower latency.

  • Virtual reality (VR): Projections forecast that the VR market (worth $1.37 billion in 2015) will grow to a $33.90 billion industry by 2022, a CAGR of 57.8%.3 As head-mounted displays (HMDs) increase in popularity and accessibility, the volume of data traffic to and from VR devices will steadily increase, taxing data center power systems on top of the demands of every other data-reliant technology.

  • Augmented reality (AR): The Pokémon Go craze during the summer of 2016 gave us a taste of just how taxing AR can be on data center infrastructure. After that success, more instances of the technology are certainly on the way. Predictions suggest the AR market will grow from a $2.39 billion industry in 2016 to a $61.39 billion industry by 2023, a CAGR of 55.71%.4 (A quick sanity check of these growth figures follows this list.)
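
All of these projections rest on the same compound-growth arithmetic. As a sanity check, here is a minimal sketch that recomputes the VR and AR endpoints from the starting values and CAGRs cited above; small gaps from the published figures usually trace back to the reports’ base-year conventions.

    # Sanity-check the compound annual growth rate (CAGR) projections above.
    # future_value = present_value * (1 + cagr) ** years

    markets = [
        # (name, starting value in $B, start year, CAGR, end year)
        ("VR", 1.37, 2015, 0.578, 2022),
        ("AR", 2.39, 2016, 0.5571, 2023),
    ]

    for name, start, y0, cagr, y1 in markets:
        projected = start * (1 + cagr) ** (y1 - y0)
        print(f"{name}: ${projected:.2f}B projected by {y1}")

    # Prints roughly $33.38B for VR (report: $33.90B) and $53.04B for AR
    # (report: $61.39B), likely because that report compounds from a later
    # base year. The orders of magnitude, and the pressure they imply for
    # facility power systems, are clear either way.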

It’s clear that the demand placed on power systems across the data center industry is about to hit an all-time high, so what can you do to be as efficient as possible in your facility?


SITE SELECTION

Looking at things from a macro perspective, the first consideration for any data center is its location. The United States is a vast country, spanning just under 3,000 miles from the East Coast to the West Coast, with many different climates and weather patterns along the way. Because of this, not all regions have access to every variety of renewable energy. Here’s a quick breakdown of some of the most popular renewable energy sources of the moment:

  • Hydroelectric power – A highly reliable energy source, typically fed by large bodies of water. It generates power from the flow of water through hydroelectric turbines, producing no emissions and minimal pollution, which makes it one of the most efficient options available. Because it needs a large body of water to generate power, it is not viable in water-scarce climates.

  • Solar power – This source generates power from the most abundant natural resource we have: the sun. But if the sun is not shining consistently, solar cannot be relied on as a standalone energy source. Solar systems generally have a lifespan of 15 to 30 years, require a large amount of space, and are built using scarce resources like polysilicon.

  • Wind power – Another source that produces no emissions, though its output depends on consistent wind. Its main drawback is the large amount of land needed to house the turbine structures.

  • Geothermal power – This source converts heat energy from the earth into electricity. It is an efficient means of generation with minimal environmental impact, but very few geothermal fields are active enough to provide consistent energy.

As the old adage goes, availability is the most important ability. If you’ve got your heart set on solar power but want to establish a data center in a region with inconsistent sunlight, you may have an issue. All of this is to say: you can only be as green as your data center’s region will allow.

If you’ve homed in on a particular renewable energy source to power your data center, the next step is choosing a location where that source is available. But energy sources aren’t the only determining factor; it’s also crucial to understand the cost of electricity in the desired region. Power prices vary quite a bit across the U.S., with the most populated areas generally paying the highest rates and less populated areas tending toward more affordable ones. The U.S. Chamber of Commerce Global Energy Institute is an excellent resource, with charts and graphs that break down average retail electricity rates state by state.
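
Those rate differences translate directly into operating cost. Here’s a minimal sketch of the comparison, assuming a hypothetical 1 MW average facility load and placeholder $/kWh rates; you would substitute real state-level figures from a source like the Global Energy Institute’s charts.

    # Rough annual utility cost for a facility at different retail rates.
    # Annual energy (kWh) = average load (kW) x 8,760 hours per year.

    AVG_LOAD_KW = 1_000      # hypothetical 1 MW average facility draw
    HOURS_PER_YEAR = 8_760

    # Placeholder retail rates in $/kWh -- replace with real state figures.
    rates = {"State A": 0.06, "State B": 0.10, "State C": 0.17}

    annual_kwh = AVG_LOAD_KW * HOURS_PER_YEAR
    for state, rate in rates.items():
        print(f"{state}: ${annual_kwh * rate:,.0f} per year")

    # At 1 MW, each additional cent per kWh costs about $87,600 per year.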


DISASTER RECOVERY

Disaster recovery ties in closely with site selection and, from a power perspective, has to be considered for both primary and failover facilities. The traditional approach involved a production data center in one location and a separate location where the most critical functions of the production facility were replicated in the event of a disaster. Companies have realized that this isn’t necessarily the best approach. A better one is to run multiple sites that are “hot” and slightly pare back the reliability and redundancy of each, rather than spending gobs of money on the highest levels of reliability and redundancy at an Uptime Institute-certified Tier IV site.

Ideally, two synchronous facilities completing the exact same mission at the exact same time in every respect would be the way to go, but that is not necessarily cost-effective. Going with more modest infrastructure at multiple locations tied together asynchronously (meaning the facilities have separate missions to complete) can save serious money in power and operations costs.

In other words, you can split the compute mission across multiple facilities that are not doing the same thing. This is known as a business continuity (BC) strategy: some eggs in multiple baskets rather than all of them in one. It saves money because you don’t have to run power and cooling infrastructure at maximum output in all locations at all times, and because power and cooling reliability can be somewhat lower than what you would need if multiple facilities were duplicating the exact same compute mission; the likelihood of both facilities having a problem at the same time is relatively low.

Below are the options for a business continuity strategy, ranked from highest to lowest power consumption (a rough cost sketch follows the list):

  • “Hot-Hot” synchronous BC: The most reliable business continuity model, with 30 fiber miles or less between the two facilities. Both facilities are tasked with accomplishing the same compute mission at the exact same time; if one has a major issue, the other can completely take over while the problem is mitigated. This is also the most expensive option in terms of electricity costs, because you have two facilities running at maximum capacity at all times.

  • “Hot-Hot” asynchronous BC (multiple regions): In this model, both facilities are active (hot), but they perform different functions. This approach safeguards operators from having both facilities affected by the same natural disaster. Operators do need to factor in the different electricity costs of the regions in which their facilities are housed. Although this model calls for two active sites, the per-facility electricity requirements aren’t as great as in the “Hot-Hot” synchronous model because the workload is split.

  • “Hot-Hot” asynchronous BC (single region): This model also tasks its facilities with separate functions. Because both facilities are located in the same region, there is a higher likelihood that a natural disaster could affect both. From a power perspective, operators only have to account for electricity costs in a single region (though rates may still differ from state to state). The split-function approach again makes this model less power-hungry than the “Hot-Hot” synchronous model. From a convenience standpoint, it also provides much easier access between your facilities than the multiple-region asynchronous model if your personnel need to mobilize at the failover facility.

  • “Hot-Cold” BC: In this model, each facility has enough horsepower to take over if the other fails, but only one is active at a time. This is, of course, a more cost-effective option in terms of electricity spending. Whatever is spent on compute hardware, this model allows operators to spend less on the redundancy of the power and cooling at each facility.

  • “Hot Colocation” with cloud hybrid support: In this model, colocation (colo) functions can be offset to a cloud environment in the event of a disaster. A key point to keep in mind here is the physical location of the cloud servers backing up the colo services.
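
To put rough numbers behind that ranking, here’s a minimal sketch comparing total power draw under three of the models; the workload, PUE, and per-site overhead figures are illustrative assumptions, not measurements.

    # Illustrative power draw under different BC models (all figures assumed).

    WORKLOAD_KW = 500       # total compute the mission needs (hypothetical)
    PUE = 1.5               # assumed ratio of facility power to IT power
    SITE_FIXED_KW = 100     # assumed per-site baseline (lighting, controls)
    COLD_IDLE_KW = 30       # assumed draw of a mostly powered-down standby

    models = {
        # Synchronous: both sites run the full workload simultaneously.
        "Hot-Hot synchronous": 2 * (WORKLOAD_KW * PUE + SITE_FIXED_KW),
        # Asynchronous: the mission is split, so the compute runs only once,
        # but two sites each pay the fixed baseline.
        "Hot-Hot asynchronous": WORKLOAD_KW * PUE + 2 * SITE_FIXED_KW,
        # Hot-Cold: one active site plus a nearly idle standby.
        "Hot-Cold": WORKLOAD_KW * PUE + SITE_FIXED_KW + COLD_IDLE_KW,
    }

    for name, kw in models.items():
        print(f"{name}: {kw:,.0f} kW total draw")

    # Prints 1,700 kW (synchronous), 950 kW (asynchronous), 880 kW (Hot-Cold)
    # under these assumptions, mirroring the ranking above.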


COOLING PRACTICES

Here’s where we start to get into the micro mindset of data center power consumption. If power is the lifeblood of the mission-critical facility, then cooling is the heartbeat. We all know what happens when equipment overheats … it malfunctions.

Since the days of the first data centers, cooling has largely been handled via airflow technologies and commercial-grade air conditioning systems. The basic idea is to ensure that hot air and cold air move freely down the appropriate aisles without “mixing,” keeping the air around functioning equipment at optimal temperatures. Active plenum returns aid in containment by using chimney cabinets, or other containment methods, to move hot air directly to a plenum without passing across the cold air of the server room. Blanking panels and perforated tiles are also crucial to an efficient airflow containment strategy. But even with everything in place, cooling your data center is a huge cost.
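
The underlying arithmetic shows why containment matters so much. Below is a minimal sketch using the standard sensible-heat rule of thumb (roughly 3.16 CFM of airflow per watt per degree Fahrenheit of temperature rise); the rack load and delta-T are hypothetical.

    # Airflow required to carry away a rack's heat load.
    # Sensible-heat rule of thumb: CFM = 3.16 * watts / delta_T_F

    def required_cfm(watts: float, delta_t_f: float) -> float:
        """Cubic feet per minute needed to remove `watts` of heat with a
        `delta_t_f` (deg F) rise from cold-aisle intake to hot exhaust."""
        return 3.16 * watts / delta_t_f

    # Hypothetical 10 kW rack with a 20 deg F allowable temperature rise:
    print(f"{required_cfm(10_000, 20):,.0f} CFM")   # about 1,580 CFM
    # If mixing cuts the usable rise to 10 deg F, the requirement doubles:
    print(f"{required_cfm(10_000, 10):,.0f} CFM")   # about 3,160 CFM

When hot and cold air mix, the usable delta-T shrinks, and the required airflow (and the fan power behind it) climbs accordingly.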

Airflow-based cooling is not the only way of doing things, though. Data centers that cater to high-performance computing (HPC) systems are likely familiar with some impressive alternative cooling technologies. Some advanced systems use a rear-door heat exchanger that neutralizes heat before it ever leaves the cabinet, transferring it directly to the chilled water return rather than blowing air across the room. This eliminates the need for traditional cooling systems to serve the HPC environment. These systems continuously monitor atmospheric variables and adjust conditions in real time, increasing data center efficiency.


DATA CENTER SOFTWARE

Power costs are very tangible once the electric bill arrives, but there are technologies available that make monitoring power usage in real time a cinch. Data center infrastructure management (DCIM) solutions allow operators to understand what they can’t physically see in their day-to-day facility operations.

Currently available software helps operators understand power utilization on a rack-by-rack basis, monitoring temperature and humidity wirelessly to ensure that overcooling does not occur. It’s critical that operators understand hot spots from one rack to another, so this type of technology really is a necessity. It ensures that you know where your power is being used so you can manage cooling, hot aisles, cold aisles, blanking panels, water cooling, and so on. You need to know where to put your energy, and you can’t do that without the data captured by DCIM technology.
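
As an illustration of what that rack-by-rack view enables, here’s a minimal sketch of hot-spot and overcooling checks; the rack names and readings are hypothetical, and the thresholds follow ASHRAE’s commonly cited recommended inlet range of roughly 18 to 27 degrees C.

    # Flag hot spots and overcooling from per-rack inlet temperatures (deg C).
    # A real DCIM platform would stream these from wireless sensors.

    RECOMMENDED_MIN_C = 18.0   # ASHRAE recommended inlet range, low end
    RECOMMENDED_MAX_C = 27.0   # ASHRAE recommended inlet range, high end

    readings = {   # hypothetical rack -> recent inlet temperature samples
        "rack-A1": [22.5, 23.0, 22.8],
        "rack-A2": [27.9, 28.4, 28.1],   # a developing hot spot
        "rack-B1": [17.2, 17.0, 17.4],   # likely overcooled
    }

    for rack, temps in readings.items():
        avg = sum(temps) / len(temps)
        if avg > RECOMMENDED_MAX_C:
            print(f"{rack}: hot spot at {avg:.1f} C -- check containment")
        elif avg < RECOMMENDED_MIN_C:
            print(f"{rack}: overcooled at {avg:.1f} C -- raise the setpoint?")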

On a slightly smaller level, there are also technologies that allow data centers to “shut off the lights,” powering down lighting or other equipment that isn’t in use at the moment. While this seems like a drop in the bucket of electricity costs, committing to powering down idle equipment makes for major power savings over the long term.
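
A quick back-of-the-envelope calculation shows how those drops add up; the load, hours, and rate below are hypothetical placeholders.

    # Long-term savings from powering down idle loads (figures hypothetical).

    IDLE_LOAD_KW = 5.0         # e.g., unused lighting plus idle spare gear
    HOURS_OFF_PER_DAY = 12     # hours per day the load could be shut off
    RATE_PER_KWH = 0.10        # placeholder electricity rate in $/kWh

    annual_savings = IDLE_LOAD_KW * HOURS_OFF_PER_DAY * 365 * RATE_PER_KWH
    print(f"${annual_savings:,.0f} saved per year")   # $2,190 at these numbers

Scale that across dozens of circuits and years of operation, and the savings become significant.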


FOOTNOTES

  1. Gartner – “Gartner Says 8.4 Billion Connected ‘Things’ Will Be in Use in 2017, Up 31 Percent From 2016.”

  2. MarketWatch – “5G Network Infrastructure Market to Grow at a CAGR of 70%, Accounting for $28 Billion in Annual Spending by 2025.”

  3. MarketsAndMarkets – “Virtual Reality Market worth 33.90 Billion USD by 2022.”

  4. MarketsAndMarkets – “Augmented Reality Market worth 61.39 Billion USD by 2023.”