The velocity of data growth in combination with today’s cloud-centric environment is driving the development of new data centers seemingly every day. In many cases, data center owners have done all they can to maximize the buildings’ efficiencies; their architectural and engineering teams designed efficient solutions, their contractors built per the plans and specifications using green technology and resources, and the building has been fully commissioned to operate within specifications.

Despite these efforts, at some point during a data center’s life cycle, a gap forms between the theoretical assumptions made during the initial design and construction phase and the reality of the data center’s operational conditions. Facility operators become responsible for closing this gap and effectively adapting building systems in order to meet these new conditions. What can they do to ensure the building operates as efficiently as possible while meeting IT equipment requirements?

Adapting existing data centers for optimal performance begins with understanding the root causes of why a data center may not be operating under initial design conditions. This is important so that any implemented solutions are not simply a Band-Aid, but a long-term fix. Although many factors are at play in these situations, addressing the root causes and the issues that result remains important in order to keep the data center operating as green as possible.



Owners of new data centers often build these structures without knowing exactly what IT equipment will be installed in the white space. This is especially prevalent in the colocation market, since owners may not know which tenants will populate the white space. This factor can affect enterprise data centers as well; IT equipment evolves so rapidly that new generations of equipment become available while a data center is still under construction. This unknown variable can create potential issues once the IT equipment is brought online in the space.

Oftentimes, the installed IT equipment has higher localized power densities than what was originally planned. While the total data center load may meet the intended design, cooling systems designed for an average of 5 kW/rack (for example) may struggle to keep up with a few rows of equipment operating at 10 to 20 kW/rack. In these conditions, operators must find a solution for how to overdrive or supplement their cooling systems in order to maintain design conditions for these larger than planned IT loads.
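To see why a few high-density rows strain a system sized for an average load, it helps to look at the sensible-heat relationship between rack power and airflow. The sketch below is a rough back-of-the-envelope estimate, not a design calculation; the air properties and the 11°C temperature rise across the servers are illustrative assumptions.

```python
# Illustrative airflow estimate, not a design calculation.
# Sensible heat: P = rho * cp * Q * dT  ->  Q = P / (rho * cp * dT)

RHO_AIR = 1.2    # kg/m^3, approximate air density at sea level
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3_per_s(rack_kw: float, delta_t_c: float) -> float:
    """Airflow needed to remove rack_kw of heat at a given air temperature rise."""
    return (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_c)

# A 5 kW rack vs. a 15 kW rack, both at an assumed 11 C rise across the servers:
for kw in (5, 15):
    q = airflow_m3_per_s(kw, 11.0)
    print(f"{kw} kW rack: {q:.2f} m^3/s ({q * 2118.88:.0f} CFM)")
```

Because required airflow scales linearly with rack power, a 15 kW rack needs three times the air of a 5 kW rack at the same temperature rise, which is why localized high-density rows can overwhelm air distribution that is adequate on average.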

A more common issue that arises is that the installed IT equipment has lower overall power densities than originally planned. Although industry trends are moving toward individual IT components — or racks — with larger nameplate loads, the combination of IT equipment usage diversity and IT strategies that employ redundant components results in lower actual IT power usage density. In this case, the facility equipment may continually operate at less than optimal loads and speeds; or equipment may cycle on and off so frequently that it negatively affects the long-term life of the equipment. Finding a way to appropriately load equipment so it consistently operates at optimal efficiency without affecting the operating redundancy/resiliency of the systems becomes the operator’s challenge.

Somewhat less common — but certainly worth noting — is a condition in which IT equipment is installed to allow for a wider range of server intake conditions than originally planned for during design. At this point, operators have many choices among setpoints of various pieces of equipment, which they can adjust to make the data center operate more efficiently.



Whether it is a 10 MW enterprise data center, a 1 MW colocation data center, or a 100 kW IT room, oftentimes the initial IT equipment installations represent a very small portion of the available system capacity and white space area. At the other end of the data center life cycle, an IT refresh, business merger/acquisition, transition to off-site cloud computing, or any number of other scenarios can lead to a drastic reduction in IT equipment installed in the data center.

Whatever the cause of these low-load scenarios, it may be months or even years until IT installations reach their full build-out potential. Until that time, operators must act on the opportunity to adapt large systems to installations that support a fraction of the initial design load condition.



Legacy data center operators have the greatest challenge when it comes to managing efficient data centers. As more business is digitized, CEOs and CFOs are exposed to knowledge and intelligence that leads them to question the efficiency of their data centers, which have been in operation for a decade or more. Meanwhile, CIOs are committing to cloud IT strategies as their companies grow, resulting in the need to implement next-gen IT equipment that has higher load densities and rack footprint requirements than its legacy counterparts. The pressure is on for data center operators to reduce energy usage any way they can, often with minimal funding.

Operators armed with an understanding of why and how a data center is operating differently than initial design assumptions and conditions have many avenues for making changes that impact how green the data center can operate. Specific solutions are unique to each data center and owner, and can depend on many factors including location, system topology, and resource availability. However, consider the following factors when taking into account changes to building operations.



Changing cold-aisle (supply air) temperature is often the first variable operators adjust when seeking data center efficiency gains. Higher supply air temperatures are certainly advantageous when the employed mechanical systems allow for more annual hours of free cooling. However, raising supply air temperatures should not be employed carte blanche, as higher temperatures can create drawbacks. Some mechanical equipment may operate more efficiently at lower temperatures, while higher temperatures may require larger volumes of air to circulate and properly cool the IT equipment, which would necessitate more fan energy. Operator comfort can also become a concern in data centers where staff needs to regularly service IT equipment in the data hall. It is critical to investigate and weigh each of these factors before adjusting supply air temperature.
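The fan-energy drawback is worth quantifying. Under the fan affinity laws, fan power scales roughly with the cube of airflow, so even a modest airflow increase carries a disproportionate energy penalty. The sketch below assumes an ideal fan following the affinity laws; the 20% airflow figure is purely illustrative.

```python
# Fan affinity laws (illustrative): for a given fan and system,
# airflow scales with fan speed and power scales roughly with speed cubed.

def fan_power_ratio(flow_ratio: float) -> float:
    """Relative fan power when airflow changes by flow_ratio (power ~ flow^3)."""
    return flow_ratio ** 3

# If raising supply temperature forces 20% more airflow to hold server
# component temperatures, fan energy rises by roughly 73%:
print(f"+20% airflow -> {fan_power_ratio(1.2):.2f}x fan power")
```

This cubic relationship is why the free-cooling hours gained from a higher setpoint must be weighed against the extra fan energy spent moving additional air.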



In many scenarios, data center operators find themselves with more equipment capacity than needed for installed IT loads. This can be a detriment to data center efficiency because most equipment is not equally efficient at all operating points. From pumps to transformers, different types of equipment have various ideal load conditions for maximum efficiency. Therefore, staging off certain equipment in order to keep the ideal load on the equipment that remains online can yield overall system efficiency gains.

Implementing this in an ideal manner can be a daunting task. It may require working with individual equipment manufacturers to get a full understanding of efficiency profiles; mapping how some staged equipment may affect the way in which related equipment is operating; and adjusting control schemes to take advantage of the efficiency gains. There may also be some discomfort with having equipment turned off, and short cycling may become a concern if not properly implemented. In some instances, however, there is a very clear benefit to turning off select equipment that has historically operated at very small loads.
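The staging decision described above can be framed as a small optimization: given a part-load efficiency profile for a bank of identical units, pick the number of units online that minimizes total input power. The sketch below is a simplified illustration with a hypothetical efficiency curve; real profiles come from manufacturer data, and any real scheme must also preserve the required redundancy (e.g., N+1), which this toy example ignores.

```python
# Hypothetical staging sketch. The part-load efficiency curve and all
# numbers are illustrative, not manufacturer data, and redundancy
# requirements (e.g., keeping an N+1 unit online) are deliberately ignored.

def unit_efficiency(load_fraction: float) -> float:
    """Toy part-load efficiency curve peaking near 75% load."""
    return max(0.05, 0.9 - 1.6 * (load_fraction - 0.75) ** 2)

def best_unit_count(total_load_kw: float, unit_capacity_kw: float, installed: int) -> int:
    """Number of identical units online that minimizes total input power."""
    best_n, best_power = installed, float("inf")
    for n in range(1, installed + 1):
        frac = total_load_kw / (n * unit_capacity_kw)
        if frac > 1.0:  # n units cannot carry the load
            continue
        power_in = total_load_kw / unit_efficiency(frac)
        if power_in < best_power:
            best_n, best_power = n, power_in
    return best_n

# 400 kW of load on six installed 200 kW units: three well-loaded units
# beat six lightly loaded ones under this curve.
print(best_unit_count(400, 200, 6))
```

The same search generalizes to mixed fleets once each unit's efficiency profile is mapped, which is why the manufacturer data and control-scheme work described above matter.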



Although best efforts are made to stretch the limits of existing systems, a major equipment refresh or building expansion can leave no other option but to add capacity. Systems can expand in one of two ways, each with different impacts on how efficiently the data center operates.

Supplementing existing systems is often a good solution for short-term or localized needs. Through the implementation of discrete, incremental, and often localized solutions, the new equipment can be right-sized and targeted at what is driving the system expansion. For example, implement supplemental in-row cooling in a row of high-density racks or use standalone direct expansion (DX) computer room air conditioning (CRAC) units in a small expansion area. Data center operators can also install and bring supplemental equipment online without disrupting existing data center systems.

Augmenting existing systems is often the best course when the data center undergoes large or wholesale changes. The addition of large centralized system components can tie new equipment into existing systems or, when left standalone, can serve a defined portion of the data center. By taking advantage of economies of scale and the ability to shift capacity where needed, these large systems are often more efficient than smaller incremental systems. Increasing the capacity of a chilled water plant through the addition of chiller equipment, or adding large indirect evaporatively cooled air handlers, are two examples of augmentation.

The existing system topology and the long-term plan for data center operation will dictate whether it is appropriate to supplement or augment. Data center operators should intentionally investigate both to determine what will be the greenest solution while meeting the constraints of the expansion.



Data center operators should remember that "being green" is more than energy usage. Fresh water is arguably the world's most precious resource, and the trend towards direct and indirect evaporative cooling for data centers makes them very large water users. In addition to the evaporation component, the process results in large amounts of concentrated wastewater as well. Blowdown rates, cycles of concentration, and proper chemical treatment systems can have an impact on not only water usage and waste discharge flow rates, but also wastewater quality. Monitoring and controlling wastewater content and discharge rates are important to utility service providers, and limiting impact to local water sources and treatment facilities is part of implementing a green solution to data center cooling.
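The leverage that cycles of concentration give over blowdown can be shown with a basic water balance. The sketch below ignores drift losses and uses an illustrative evaporation rate; it is a simplified model, not a treatment-system design.

```python
# Evaporative-cooling water balance sketch (drift ignored; figures illustrative).
# With cycles of concentration C: blowdown = evaporation / (C - 1),
# and makeup = evaporation + blowdown.

def water_balance(evaporation_lpm: float, cycles: float) -> tuple[float, float]:
    """Return (makeup, blowdown) flow for a given evaporation rate and
    cycles of concentration."""
    blowdown = evaporation_lpm / (cycles - 1.0)
    makeup = evaporation_lpm + blowdown
    return makeup, blowdown

# For the same heat rejection (100 L/min evaporated), raising cycles of
# concentration from 3 to 6 cuts blowdown from 50 to 20 L/min:
for c in (3.0, 6.0):
    makeup, blowdown = water_balance(100.0, c)
    print(f"C={c:.0f}: makeup {makeup:.0f} L/min, blowdown {blowdown:.0f} L/min")
```

The tradeoff, as noted above, is that higher cycles concentrate dissolved solids in the remaining water, which raises the demands on chemical treatment and affects wastewater quality.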

Another key consideration with water usage is how operational strategy affects maintenance of wetted items. The use of a reverse osmosis (RO) system and chemical treatment of supplied water may extend the life of evaporative media, but the increased water use and possible environmental impact may outweigh the savings in equipment maintenance.

Data center operators have several opportunities to affect water usage. There may be tradeoffs to the reduction in water use, such as an increase in energy use or an increase in maintenance frequency or costs. Building operators that understand the importance of water use and how it impacts the function of other systems can make informed decisions on how to adjust their water usage to operate their data centers in the greenest manner.



Sometimes the simplest solution to maintaining a green data center is the most obvious. As data centers evolve in their life cycle, attention to the small details that impact efficiency sometimes gets ignored. Readily identifiable and easily fixable issues such as proper installation of rack blanking plates, appropriate location of floor tiles, cleanliness of the underfloor plenum, and regular maintenance of equipment often fall by the wayside. Proper training of data center personnel and implementation of regular audits can assist operators in maintaining maximum efficiency of data centers.

Ultimately, operators are at the forefront of optimizing green data centers. In addition to having the support of good consultants and builders, operators also should have the authority and approval of owners to adjust systems to adapt to ever changing white space conditions. The decisions they make in response to those conditions are what will determine the efficiency and value of the data center over its entire life cycle.