What’s holding you back from having a world-class data center? If your challenges include outdated or inefficient cooling systems, then the answer could be easier and less expensive than you might think.

In more than 80% of the enterprise data centers we visit each year, we see opportunities to cut cooling energy costs by 20% to 50%. And the good news is that the infrastructure changes required to improve efficiency also deliver greater reliability and system uptime. Emerson Network Power recently surveyed IT, facilities, and data center managers in the United States and Canada about plans for upgrades in the coming year. The research revealed that half of all data center managers have performed, or will perform, thermal retrofit upgrades to their systems by the end of 2016.

So why are some data centers still not taking action? Here are the primary reasons we encounter:

  • Risk of downtime. IT managers sometimes don’t see enough upside in infrastructure improvements and fear losing cooling during the transition. A well-thought-out plan, however, will mitigate or eliminate these risks.
  • Cost and budgeting. While getting budget approval was cited in our survey as the most difficult challenge in upgrade projects, the good news is that energy rebates from utilities and local governments are now available in every state, which helps deliver a faster return on investment. Together, rebates and efficiency gains can bring payback to under two years for most thermal system upgrades (a back-of-envelope payback calculation is sketched after this list).
  • Manpower and resources. Most companies lack the in-house staff and resources to plan and stage an infrastructure improvement. However, many consultants and vendor representatives can help.
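
To make the payback math concrete, here is a minimal Python sketch of the simple-payback calculation described above. All of the dollar figures are hypothetical placeholders, not survey data; substitute your own project cost, rebate, and measured savings.

    # Simple payback for a thermal upgrade: net capital cost divided by
    # annual energy savings. All figures below are hypothetical examples.

    def simple_payback_years(project_cost, utility_rebate, annual_savings):
        """Years to recover the net (post-rebate) project cost."""
        return (project_cost - utility_rebate) / annual_savings

    # Hypothetical: a $120,000 containment-and-controls project with a
    # $25,000 utility rebate that trims $60,000/year off the energy bill.
    print(f"{simple_payback_years(120_000, 25_000, 60_000):.1f} years")  # 1.6 years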

Primary drivers behind data center upgrades include a need for higher equipment reliability, greater energy efficiency, and additional capacity. Since cooling accounts for a large part of data center energy usage, it remains a primary focus for companies looking to improve resource conservation. And some of these improvements are surprisingly easy and can be done today with minimal risk. Here are the most common recommendations we make:

  • Improve airflow. If you have raised floors, ensure your floor tiles are properly placed and sized to deliver the right amount of air where it is needed most. Over time, racks get moved without relocating the corresponding floor tiles, so repositioning perforated tiles can greatly improve airflow. The same is true for using blanking panels more effectively as rack equipment changes. An experienced professional or vendor can help in this area if needed.

Hot or cold aisle containment is another technique for improving airflow: by preventing hot and cold air from mixing, it raises the temperature of the return air, and higher return air temperatures allow heat removal units to operate more efficiently. A 10°F increase in return air temperature can result in a 38% increase in unit capacity and a corresponding gain in efficiency.

  • Increase return air or chilled water temperatures. The old standard was 72°F for return air with relative humidity at 50%. Today, you can push return air temperatures as high as 95°F. For every 1°F increase in temperature, you can save as much as 2% of your energy costs. Make the change in small increments to avoid surprises and to confirm that all IT equipment continues to function properly. Enlist your facilities manager or vendor partners to assess the safest way to do this.

Raising chilled water temperatures provides similar benefits. For data centers using chilled water, 45°F was long the standard chiller supply temperature. Today, it is possible to operate chillers at up to 65°F, and even moving to 55°F will reduce energy consumption by about 20%. Each 1°F increase in water temperature reduces chiller energy consumption by about 2%. Work with your facilities manager, however, since raising chilled water setpoints can reduce the cooling capacity of your data center cooling units and generally requires a chilled water loop separate from the building comfort cooling system.
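
As a rough planning aid, the 2%-per-degree rule of thumb translates into a few lines of Python. Treat it as a sketch, not an engineering calculation; actual savings depend on your equipment and climate, so validate any setpoint change with your facilities team.

    # Linear rule of thumb from the text: ~2% energy savings per 1°F
    # setpoint increase (return air or chilled water).

    def setpoint_savings_fraction(old_temp_f, new_temp_f, pct_per_degree=0.02):
        """Estimated fractional energy savings for a setpoint increase."""
        return max(0.0, new_temp_f - old_temp_f) * pct_per_degree

    # Chilled water raised from the old 45°F standard to 55°F:
    print(f"{setpoint_savings_fraction(45, 55):.0%}")  # 20%, matching the text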

  • Match cooling to IT load. If applicable, your thermal management equipment should have variable capacity components (fans and compressors) to adjust cooling capacity up and down with your IT load. Our research found that the most common form of thermal upgrade is adding variable speed fans or variable-frequency drives to cooling units. A 10-hp fan running at 100% speed draws about 8.1 kilowatts (kW). Reducing the speed to 90% cuts that to 5.9 kW, a 27% savings. Even better, at 70% fan speed, power draw drops to 2.8 kW, a 65% reduction. Variable speed technologies are a relatively inexpensive way to achieve a fast payback, usually within months.
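
Those fan numbers follow the fan affinity law, under which fan power scales with the cube of fan speed. A short sketch reproduces them, assuming 8.1 kW is the fan’s measured full-speed draw:

    # Fan affinity law: fan power scales roughly with the cube of speed.
    # The 8.1 kW full-speed draw is taken from the text as an assumption.

    FULL_SPEED_KW = 8.1  # draw of the 10-hp fan at 100% speed

    def fan_power_kw(speed_fraction, full_speed_kw=FULL_SPEED_KW):
        """Affinity law: P = P_full * (N / N_full)**3."""
        return full_speed_kw * speed_fraction ** 3

    for frac in (1.0, 0.9, 0.7):
        p = fan_power_kw(frac)
        print(f"{frac:.0%} speed: {p:.1f} kW ({1 - p / FULL_SPEED_KW:.0%} savings)")
    # 100% speed: 8.1 kW (0% savings)
    # 90% speed: 5.9 kW (27% savings)
    # 70% speed: 2.8 kW (66% savings; the text rounds this to 65%)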
  • Properly utilize thermal controls. Another important change being made to the thermal infrastructure is the addition of intelligent controls, in part because they optimize the performance of variable speed technologies. According to our survey, 41% of data centers are adding or replacing thermal controls.

Today’s intelligent thermal controls operate at both the individual unit and system levels, using advanced machine-to-machine communications, powerful analytics, and self-healing routines to ensure greater protection, efficiency, and insight into thermal conditions and operations. By harmonizing multiple cooling systems to avoid conflicting operations, these controls can improve thermal system energy efficiency by up to 50% over legacy technologies. For example, in an enterprise data center with 500 kW of IT load and energy costs of $0.10/kWh, average thermal system power draw can be lowered from 380 kW to 184 kW, yielding $171,690 in annual savings. That can potentially lower the mechanical power usage effectiveness (PUE) by more than 20%, from 1.76 to 1.37.
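
To show where those figures come from, here is a minimal sketch that reproduces the arithmetic, assuming mechanical PUE is computed as (IT power + thermal power) / IT power and 8,760 operating hours per year:

    # Reproducing the example above: 500 kW of IT load, $0.10/kWh, and
    # thermal system draw falling from 380 kW to 184 kW.

    HOURS_PER_YEAR = 8760

    def annual_savings_usd(old_kw, new_kw, rate_usd_per_kwh):
        """Annual dollar savings from a reduction in continuous power draw."""
        return (old_kw - new_kw) * HOURS_PER_YEAR * rate_usd_per_kwh

    def mechanical_pue(it_kw, thermal_kw):
        """Mechanical-only PUE: (IT power + thermal power) / IT power."""
        return (it_kw + thermal_kw) / it_kw

    print(f"${annual_savings_usd(380, 184, 0.10):,.0f}")  # $171,696 ~ $171,690
    print(f"{mechanical_pue(500, 380):.2f}")              # 1.76
    print(f"{mechanical_pue(500, 184):.2f}")              # 1.37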

Intelligent controls include a number of features that reduce deployment costs and energy costs and also improve thermal protection. At the cooling unit level, integrated controls provide a high level of protection and optimal unit performance. These unit controls:

  • Monitor hundreds of unit and component points to eliminate single points of failure
  • Include self-healing features to avoid surpassing unsafe operating thresholds
  • Simplify operations to save time and reduce human error
  • Utilize multiple, automated unit protection routines, including lead/lag, cascade, rapid restart, refrigerant protection, and valve calibration and auto-tuning

While optimizing individual cooling units for performance and protection is important, organizations are also adopting holistic, multi-unit thermal management strategies that remove heat while delivering capital and operational savings. System-level thermal control does just that, optimizing thermal performance and capacity across the data center, providing quick access to actionable data, and automating system diagnostics and trending. These systems provide:

  • Advanced monitoring and at-a-glance reporting on performance metrics and trends for efficiency, capacity, and adverse events
  • Up to 50% system efficiency gains, with 30% lower deployment costs
  • Teamwork modes that prevent conflict between units and allow them to adapt to changes in facility and IT demands
  • Improved efficiency and availability, plus reduced system wear and tear, saving more than $10,000 per unit per year in energy costs
  • Simplified deployment, with auto-configuration that detects and configures up to 4,800 sensors, eliminating the need for custom integration with building management systems and cutting sensor deployment time in half

Along with thermal controls, installing wireless sensors makes it simpler to extend visibility beyond return air and supply air temperatures to include multiple server inlet temperatures. When combined with other thermal management technologies, sensors can deliver real efficiency gains. For instance, a colocation facility recently implemented wireless sensors, variable speed fans, and intelligent controls to enable closed-loop control of heat removal, reducing PUE from 1.47 to 1.30 and saving $200,000 annually.
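
As a back-of-envelope sanity check, the quoted PUE drop and savings imply an IT load of roughly 1.3 MW, assuming the same $0.10/kWh energy rate as the earlier example (an assumption; the article does not give this facility’s tariff or size):

    # Back-solving the colocation example: annual savings ~= IT load x
    # (PUE_before - PUE_after) x hours x rate. The $0.10/kWh rate is an
    # assumption carried over from the earlier example.

    HOURS_PER_YEAR = 8760
    RATE_USD_PER_KWH = 0.10  # assumed

    def implied_it_load_kw(pue_before, pue_after, annual_savings_usd):
        """IT load consistent with the quoted PUE drop and dollar savings."""
        delta_pue = pue_before - pue_after
        return annual_savings_usd / (delta_pue * HOURS_PER_YEAR * RATE_USD_PER_KWH)

    print(f"{implied_it_load_kw(1.47, 1.30, 200_000):,.0f} kW")  # ~1,343 kW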

  • Replace aging systems. More companies are seeking to minimize or eliminate mechanical cooling as they upgrade for greater efficiency. In fact, 40% of our survey respondents say they are adding economizers to provide “free cooling” when outside temperatures allow. As more companies look to economization for energy savings, it’s become clear that no single economizer technology fits every situation. Each has its own strengths based on location and application, and each has its challenges. For instance, while some companies use direct economization that brings outside air into the data center, others tell us they want indirect economization that uses heat exchangers, pumped refrigerant economizers, or chiller systems to avoid bringing in outside air.

Interestingly, in another recent survey we conducted of mechanical engineers and contractors, 55% said pumped refrigerant economization will be the number one technology replacing chilled water systems over the next five years.

In addition to considering new types of economization, it’s also time to rethink air handlers. A growing number of custom air handlers built on evaporative, chilled water, and DX technologies are designed specifically for data centers. They offer the customization and flexibility these environments require and are typically used in data centers from 5 to 30+ megawatts. The units can include intelligent controls that enable them to work more efficiently and help data center managers achieve annual mechanical PUEs under 1.2.

MAKE 2016 A YEAR FOR IMPROVEMENT

I hope this article has given you new ideas for improving data center and thermal system performance. The best is yet to come for data center infrastructure, and the technologies discussed here are just a few of the many thermal innovations we see coming to data centers in the year ahead. These advances are helping data centers run more reliably and efficiently than was possible just a few years ago. All the best in 2016; there’s no reason to wait to make your data center more reliable and efficient.