In older data centers, core mechanical and electrical systems such as uninterruptible power supplies (UPSs) and electrical switchgear are often nearing the end of their recommended service life. Most data centers in use today are roughly eight to ten years old, with an average power density of five to seven kilowatts (kW) per rack enclosure. These older data centers are usually ill-equipped to fully handle today’s technology trends, for the following reasons:

  • Aging equipment
    In vintage data centers, regular inspection and maintenance must be kept up to ensure equipment is functioning properly, safely and efficiently. Over time, power system equipment inevitably becomes less reliable, more expensive to maintain and significantly riskier to operate. It’s easy to forget that even components such as capacitors and circuit breakers have a finite service life. Further, as conditions in the data center evolve, additional generators and backup power may become necessary. It’s often better to replace, upgrade or add these components proactively than to respond to a crisis after a preventable failure.
     
  • Low-efficiency power and cooling equipment
    The more work a server performs, the more energy-efficient it is: fully loaded equipment makes the best use of the power it draws. Older mechanical and electrical systems, however, are often not fully loaded, and this hardware also tends to deliver lower energy efficiency than newer products, further increasing operating costs. According to the Natural Resources Defense Council (NRDC), “the average server operates at no more than 12% to 18% of its capacity while still drawing 30% to 60% of maximum power. Even sitting virtually idle, servers draw power 24/7, which adds up to a substantial amount of energy use. To put this in perspective, much of the energy consumed by U.S. data centers is used to power more than 12 million servers that do little or no work most of the time.” (A back-of-the-envelope illustration of this arithmetic appears after this list.) Poor power and cooling efficiency can also make compliance with environmental regulations exceedingly difficult, if not impossible, as data volumes continue to grow.
     
  • Insufficient cooling capacity or ineffective cooling
    Cooling alone accounts for 30% to 40% of the power costs of the entire data center. The cooling systems in most vintage data centers date back to an era of significantly lower power densities, so they often struggle to cope with the intense heat generated by today’s dense, power-hungry IT equipment. Other facilities have sufficient cooling capacity but cannot deliver it where it is needed, or the data center may not be running at the capacity originally anticipated. The typical response has been to overcool the space rather than stabilize cooling at an appropriate, even temperature.
     
  • Crisis response and disaster recovery (DR)
    In many vintage data centers, crisis response plans need to be created or updated and implemented as a high priority, for the primary site as well as every secondary data center site. What happens if something goes wrong in a smaller but mission-critical site? Downtime and lost data are simply not tolerated in today’s culture, making DR a key driver for every data center project.
     
  • Speed to deploy
    Some data centers prohibit live (“hot”) electrical or mechanical work during peak operating periods, yet the same sites often lack a comparable level of discipline or structure when it comes to wiring. This double standard should be avoided; growing volumes of data traffic must be taken into account so that the data center is not put at risk by its piping and cabling.
     
  • Security
    Early consolidation efforts have already resulted in a much heavier emphasis on security, as organizations must ensure the protection of massive amounts of mission-critical or regulated data. In many vintage data centers, the infrastructure was not designed with today’s stringent data security and privacy requirements in mind, and in many cases modernization may be advisable simply to reduce the risk of being hacked. One strategy for hardening a primary facility against intrusion has been to pursue deck-to-deck security at a colocation site.
     
  • Inappropriate sizing
    Organizations now face a delicate balancing act in managing data load levels amid the proliferation of data and rising power densities. Many data centers built in the last decade assumed growth would occur on a massive scale, and they now find themselves inefficiently over-provisioned in some respects: bigger is not always better, and utilization rates in today’s data centers tend to range between 30% and 50% on average. At the same time, with data generation continuing to explode, some facilities may yet face an exponential increase in demand for compute and storage. To meet these demands, power must be abundant, reliable, renewable and energy-efficient. Two critical questions are whether the existing electrical infrastructure can cope with all the data generated today (and in the near future) and whether enough power can be provided to support growing data needs. The ability to flexibly “right-size” the data center is critical to cutting cost and improving efficiency. (A simple right-sizing illustration also follows this list.)
     
  • Lack of flexibility and scalability
    The integration of newer IT technologies requires the Mechanical, Electrical and Plumbing (MEP) infrastructure to adapt to changing loads and future business drivers while minimizing additional capital expenditures. Virtualization and cloud environments can create roaming hot spots in the data center as workloads shift dynamically. The related variations in power demand can be managed safely with planning and technology, as the operators of the electrical grid have shown. According to the NRDC, “The data center industry should follow the lead of the utility industry, which ramps its power plants up and down depending on demand.” Vendors, data centers and utilities will need to work together to find viable solutions for supporting increasingly dense IT environments and fluctuating data processing loads more sustainably.
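
The efficiency penalty described above under “Low-efficiency power and cooling equipment” comes down to simple arithmetic: because a server draws a large share of its maximum power even when nearly idle, the energy consumed per unit of useful work rises sharply at low utilization. The Python sketch below is purely illustrative; the idle and maximum power figures and the linear power model are assumptions chosen to roughly mirror the NRDC ranges quoted above, not measured values.

```python
# Back-of-the-envelope illustration of low-utilization inefficiency.
# All figures are assumed round numbers, not data from the article or NRDC.

IDLE_POWER_W = 150.0   # assumed draw of a nearly idle server
MAX_POWER_W = 350.0    # assumed draw at 100% load


def power_draw_w(utilization: float) -> float:
    """Simple linear model of server power draw vs. utilization (an assumption)."""
    return IDLE_POWER_W + (MAX_POWER_W - IDLE_POWER_W) * utilization


def watts_per_unit_of_work(utilization: float) -> float:
    """Power consumed per 'fully loaded server equivalent' of work delivered."""
    return power_draw_w(utilization) / utilization


for u in (0.15, 0.50, 0.80):
    print(f"{u:.0%} utilization: {power_draw_w(u):.0f} W drawn "
          f"({power_draw_w(u) / MAX_POWER_W:.0%} of max), "
          f"{watts_per_unit_of_work(u):.0f} W per unit of useful work")
```

Under these assumptions, a server at 15% utilization draws roughly half of its maximum power yet consumes about three times as much energy per unit of work as one running at 80% utilization.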
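
Similarly, the over-provisioning noted under “Inappropriate sizing” can be made concrete. The sketch below is hypothetical: the capacity, utilization and cost figures are assumed round numbers used only to show how unused capacity translates into stranded capital, not figures from the article.

```python
# Hypothetical right-sizing illustration; all values are assumptions.

BUILT_CAPACITY_KW = 1000.0    # assumed built-out IT power capacity
UTILIZATION = 0.40            # midpoint of the 30%-50% range cited above
COST_PER_KW_BUILT = 12_000.0  # assumed capital cost per kW of built capacity

used_kw = BUILT_CAPACITY_KW * UTILIZATION
stranded_kw = BUILT_CAPACITY_KW - used_kw
stranded_capital = stranded_kw * COST_PER_KW_BUILT

print(f"Used capacity:     {used_kw:,.0f} kW")
print(f"Stranded capacity: {stranded_kw:,.0f} kW")
print(f"Stranded capital:  ${stranded_capital:,.0f}")
```

At 40% utilization, 600 kW of a 1,000 kW build-out sits idle; at an assumed $12,000 per kW, that is $7.2 million of stranded capital, which is exactly the kind of cost that flexible “right-sizing” aims to avoid.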

Developing upgrades to a vintage data center isn’t simple, but careful planning and skilled execution can dramatically streamline the process and strengthen ROI. Above all, organizations contemplating a retrofit of an older data center’s MEP infrastructure should seek assistance from a skilled vendor with deep and relevant experience.

To learn more about facing these vintage data center issues and more specifically UPS upgrades, register for an exclusive Mission Critical webinar on September 24 hosted by Eaton product manager Ed Spears and Eaton product line manager John Collins. Click here to register.