The green data center market is forecast to grow from $43.24 billion in 2018 to $147.88 billion in 2024. Sustainability has become as central to data centers as, well, data. Uptime may be king, but the energy cost of that uptime is a factor the crown cannot afford to ignore.

If you’re about to future-proof a data center with wind power or build a near-arctic facility for that sweet free cooling, then you’ve got the resources to achieve amazing power usage effectiveness (PUE). But if you’re locked in a staring contest with an unhappy budget projection, a big, green goal can look like a longshot. You need resources now to help you save resources later. A holistic energy-efficiency analysis of your data center can help you green light your efficiency strategy with clear ROI on best practices for improving your PUE.


You can't manage what you can't measure

And if it’s not complete, it’s not accurate. A comprehensive energy-efficiency analysis of your data center obviously measures cooling, IT loads, airflow, and humidity. In inefficient facilities, cooling alone can consume 40% of a data center’s total energy. But lighting, electrical distribution, power quality, and even cleaning procedures also impact energy use and downtime risk. Once you have a clear picture of every factor that feeds your PUE, that data can help you identify opportunities for improvement.
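For reference, PUE is simply total facility energy divided by IT equipment energy, so a quick sanity check is easy to script. A minimal sketch in Python, using hypothetical monthly meter readings:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy.

    1.0 is the theoretical ideal (every watt goes to IT); cooling,
    lighting, and distribution losses push real-world values higher.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings (kWh)
print(pue(1_800_000, 1_000_000))  # 1.8
```

The point of measuring every independent system is that each one either moves the numerator (total facility energy) or doesn't, and you can't improve the ratio without knowing where that numerator comes from.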

Your building management system (BMS) or data center infrastructure manager (DCIM) is a good place to start. You’re looking for data on every independent system, not just your IT, but across the facility infrastructure. An electrical power management system (EPMS) can give you more granular insight into power distribution. If you don’t have this kind of system in place, consider the improved risk mitigation you’ll enjoy with real-time monitoring and accurate data you can put to timely use. 

In addition to insight into your PUE, best practices in measuring temperature and airflow around your racks give you actionable information and additional metrics you can use to zero in on beneficial efficiencies. For instance, return temperature index (RTI) and rack cooling index (RCI) help you benchmark cooling performance and identify opportunities. ANCIS designed RCI to express compliance with ASHRAE recommendations, and both metrics are used in the Department of Energy (DOE) Air Management Tool.

Deploying more sensors than the minimum suggested by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) allows you to compute RCI at a more granular level.
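To illustrate how per-rack sensor readings roll up into a single index, here is a sketch of the high-side RCI (RCI_HI), assuming ASHRAE class A1 limits (recommended maximum 27°C, allowable maximum 32°C); the intake readings are hypothetical:

```python
def rci_hi(intake_temps_c, rec_max=27.0, allow_max=32.0):
    """Rack Cooling Index, high side, as a percentage.

    100% means no rack intake exceeds the recommended maximum;
    lower values indicate over-temperature conditions. Defaults
    assume the ASHRAE class A1 envelope (27 C recommended max,
    32 C allowable max).
    """
    n = len(intake_temps_c)
    over = sum(max(0.0, t - rec_max) for t in intake_temps_c)
    return (1 - over / (n * (allow_max - rec_max))) * 100

# Hypothetical rack-intake readings from a sensor sweep (deg C)
readings = [24.0, 26.5, 27.5, 25.0]
print(round(rci_hi(readings), 1))  # 97.5 -- one intake slightly over
```

With only a handful of sensors per aisle, a single hot intake gets averaged away; more measurement points make the index sensitive to exactly the hot spots you want to find.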


Warm is the new cool

ASHRAE publishes a recommended temperature range for data centers, but operating closer to the maximum recommended intake temperature (80.6°F / 27°C) instead of the minimum makes a world of difference to your cooling system, and therefore to your energy budget.

The question is whether a higher operating temperature is compatible with your uptime protection strategy. Risk-averse operators may feel that running cooler pads the safety margin against equipment failure due to overheating. While a lot depends on the facility, this is another case where more data helps: monitoring equipment in real time can create a more informed, targeted safety net than running the whole facility colder than it needs to be.


Go with the green flow

Air management best practices help you make sure the energy that goes into cooling air isn’t wasted. You want to minimize, and if possible eliminate, two big problems: bypass of cold air and recirculation of hot air. In terms of airflow, that means making sure all the cold air hits its target, so to speak, and that ‘used’ warmer air doesn’t get mixed back into your supply.
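The RTI metric mentioned earlier quantifies exactly this balance: it compares the temperature rise across your air path to the temperature rise across the IT equipment itself. A sketch with hypothetical readings:

```python
def rti(return_temp_c: float, supply_temp_c: float,
        equipment_delta_t_c: float) -> float:
    """Return Temperature Index as a percentage.

    ~100%  balanced airflow
    >100%  hot exhaust air is recirculating into intakes
    <100%  cold supply air is bypassing the equipment
    """
    return (return_temp_c - supply_temp_c) / equipment_delta_t_c * 100

# Hypothetical readings: 10 C rise in the room vs. 12 C rise across the IT gear
print(round(rti(28.0, 18.0, 12.0)))  # 83 -> cold-air bypass
```

An RTI well under 100% tells you cooled air is slipping past the racks unused, which is precisely what containment and sealing aim to fix.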

In the ideal state, the only air moving from the cold side (where the IT equipment intakes are) to the hot side (where the IT equipment exhaust is) would be air actively cooling the equipment. Containment strategies use physical barriers to separate the cold aisles from the hot aisles.

Blanking panels seal off open spaces in racks, while air dams and floor penetration covers plug more holes for you. It’s also important to remove airflow obstructions. Cold air that’s blocked requires more energy to move around.

Moving air when you don’t have to wastes energy, too. Variable-frequency drives (VFDs) help your system respond to current conditions more accurately. The closer your cooling runs to “as needed,” the less energy is wasted.
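The payoff comes from the fan affinity laws: airflow scales roughly linearly with fan speed, but fan power scales with roughly the cube of speed. A back-of-the-envelope calculation (ignoring VFD and motor losses):

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate shaft power as a fraction of full-load power,
    per the fan affinity laws (power scales with speed cubed)."""
    return speed_fraction ** 3

# Slowing fans to 70% of full speed needs only about a third of the power
print(round(fan_power_fraction(0.7), 3))  # 0.343
```

This cubic relationship is why modest speed reductions during off-peak conditions translate into outsized energy savings.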


Don't take it lightly

Lighting is usually the second main efficiency target for an office space after HVAC, and it shouldn’t be an afterthought in data center facilities. A control system with occupancy sensors and timers cuts down on waste, while energy-efficient LEDs ensure that the light you do need uses less energy.


Use your power to save

Data centers usually have a robust electrical distribution system dedicated to protecting uptime. With transformers, PDUs, and UPSs in the path, there are many opportunities for excessive resistance and power quality issues. Clear hot spots need to be addressed or contained, and infrared (IR) testing can help you identify the less obvious ones.

Power quality can also be a factor in equipment performance and life cycle. Everything from servers to your cooling system is getting more sophisticated, and often more sensitive to non-linear loads and harmonics. For many operators, sustainability isn’t just about shaving down utility spend; it’s also about reducing waste in other areas, like equipment turnover and loss.


Clean air is a factor, too

Dust and particles reduce efficiency. When operations talk about green cleaning, they’re usually making sure cleaning products, tools, and processes meet environmental or wellness standards. Those aren’t bad goals for a data center; after all, every employee appreciates a safe, non-toxic, allergen-free workplace — especially the team doing the cleaning.

Data centers should take special care to develop standard operating procedures that control dust and airborne particles. Beyond the deleterious effects on IT equipment, excessive dust and particulate matter can reduce cooling efficiency and accelerate equipment wear.

This leads both to our conclusion and back to our beginning. Efficiency depends on good data about the facility, but data without action accomplishes nothing. Use that data to power proactive and preventive maintenance that keeps systems running at peak efficiency and mitigates the risk of unplanned failure.

