According to the Natural Resources Defense Council (NRDC), “nearly 3 million U.S. data centers help power our economy, make our lives easier and render our buildings and electricity grid ‘smarter.’” At the same time, data centers are now rapidly driving the construction of new power plants as they become one of the nation’s largest and fastest-growing consumers of electricity.
With trends such as big data and cloud computing actively disrupting the space, rising energy costs and hard economic times mean resource efficiency has never been more important.
Disaster recovery is also increasingly a top priority, and data center requirements vary greatly by industry and business model. As organizations evaluate their optimization and sustainability options, in many cases it is the existing data center facilities that continue to carry the proverbial load.
Implementation of more highly efficient power and cooling designs plays a significant role in saving money and recouping earlier investments in the data center. By modernizing vintage power and cooling equipment and employing emerging technologies — such as economizers — many data center expenses can be more fully contained while increasingly scarce energy resources are conserved.
Emerging trends and issues put vintage data centers under pressure to perform beyond their current means. A variety of beneficial mechanical, electrical and plumbing (MEP) infrastructure upgrades can alleviate concern while adding stability and longevity to existing facilities.
Trends affecting vintage data centers
IT continues to evolve at a rapid rate, introducing changes few data center designers could have anticipated a decade or more ago. The following are among the most important recent technologies and priorities affecting existing facilities:
Mobile computing and big data
There are over 2.5 billion Internet users worldwide, with roughly 250 million in the U.S. alone. As of 2013, it was observed that a full 90% of the world’s data had been produced within the previous two years. As smartphone adoption matures and a new generation of wearable computing devices emerges, big data will continue to explode, with new needs emerging for real-time data access, storage and processing.
As more and more ordinary devices and appliances essential to everyday living are equipped with sensors, processing power and networked capabilities, the Internet of Things (IoT) will introduce an unprecedented proliferation of data. Analysts assert that the processing needs associated with this trend may dictate data center build and location decisions on the basis of achieving and managing optimal processing conditions closer to the end user. Large centralized data center architectures may not support the exponential data needs of future computing; instead, smaller, regional sites are expected to provide more acceptable latency.
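The latency argument for regional sites comes down to simple physics: light in optical fiber travels at roughly two-thirds of its vacuum speed, so round-trip propagation delay grows with distance to the facility. The sketch below illustrates the back-of-envelope math; the 200 km/ms figure and the example distances are approximations, not figures from the article.

```python
# Propagation speed in fiber: roughly 200 km per millisecond (~2/3 c).
# This is an approximation for illustration only.
C_FIBER_KM_PER_MS = 200

def round_trip_ms(distance_km):
    """Best-case round-trip propagation latency to a data center, in ms.

    Ignores routing, queuing and processing delays, which add more.
    """
    return 2 * distance_km / C_FIBER_KM_PER_MS

# A regional site 50 km away vs. a distant centralized site 2,000 km away:
regional = round_trip_ms(50)       # 0.5 ms floor
centralized = round_trip_ms(2000)  # 20 ms floor
```

Real-world latency is higher once routing and processing are included, but the distance-imposed floor is why latency-sensitive workloads favor sites closer to end users.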
Consolidation and virtualization
Seeking to lower capital and operational expenses by consolidating underutilized hardware, businesses today are making widespread use of server virtualization, which enables a single physical server (or host) to support multiple virtual machines, each with its own operating system and applications. Often used in conjunction with virtualization, blade servers are plug-and-play processing units with shared power feeds, power supplies, fans, cabling and storage.
By compressing large amounts of computing capacity into small amounts of space, blade servers can dramatically reduce data center floor space requirements. For many organizations, virtualization is inverting the ratio of physical to virtual servers. Consolidation is a popular strategy for medium-sized data centers that want to identify a path to return on investment with the least risk over time, so they can focus more on delivering core IT services.
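The consolidation payoff described above is easy to estimate. The following sketch uses purely hypothetical numbers (a 12:1 virtual-to-physical ratio and a 200-server legacy fleet are illustrative assumptions, not figures from the article) to show how the host count collapses:

```python
import math

def consolidated_hosts(legacy_servers, vms_per_host):
    """Physical hosts needed after virtualizing one workload per legacy server.

    vms_per_host is the consolidation ratio the hardware can sustain;
    rounding up accounts for the partially filled final host.
    """
    return math.ceil(legacy_servers / vms_per_host)

# Hypothetical example: 200 underutilized legacy servers at a 12:1 ratio.
hosts = consolidated_hosts(200, 12)        # 17 physical hosts
servers_retired = 200 - hosts              # 183 machines removed from the floor
```

Fewer hosts means less floor space, less power and less cooling, which is precisely what makes consolidation attractive for facilities approaching their capacity limits.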
A key question many organizations now face is that of determining where data processing occurs best. Some companies looking to lower overhead and improve efficiency are rapidly adopting cloud computing in public, private or hybrid implementations. Public cloud solutions deliver applications and infrastructure resources via the Internet, whereas private cloud infrastructures employ the same basic technologies but reside behind an individual organization’s firewall. While cloud-based strategies continue to attract interest and hype in the news media, enterprise data centers are not disappearing any time soon, especially where prior investments in them were significant. Organizations also need to understand their consolidation needs well before moving to the cloud as a quick-fix solution. In some cases, the newness of a computing need inadvertently skews data processing location decisions.
Energy efficiency and sustainability
The NRDC finds that: “U.S. data centers are on track to consume roughly 140 billion kilowatt-hours of electricity annually by 2020, equivalent to the output of 50 large power plants (each with 500 megawatts capacity) and emitting nearly 150 million metric tons of carbon pollution.” Within vintage data centers, “up to 30% of servers are ‘comatose’ and no longer needed; other machines are grossly underutilized, and a number of strategic and tactical barriers still remain.” Today’s energy resource assessments further find, “If just half of the savings potential from adopting energy-efficiency best practices were realized, America’s data centers could slash their electricity consumption by as much as 40%. In 2014, this represented a savings of $3.8 billion and 39 billion kilowatt-hours, equivalent to the annual electricity consumption of all the households in the state of Michigan.” (Statistics courtesy of the NRDC.)
For help in measuring a data center’s power efficiency while setting realistic efficiency improvement targets, most data center operators rely on a metric called power usage effectiveness (PUE): the ratio of total facility power to the power delivered to IT equipment, where a value closer to 1.0 indicates less overhead. Many utility companies offer energy-efficiency incentive programs that can help ward off the threat of penalties. Flexible, proportionate, modular and scalable solutions for the data center can qualify for these incentives, help address issues and improve PUE, but making businesses more sustainable also requires looking beyond the present to anticipate future growth and asset utilization.
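To make the PUE metric concrete, here is a minimal sketch of the calculation. The energy figures are hypothetical, chosen only to show how infrastructure upgrades that cut cooling and power-distribution overhead move the ratio toward 1.0:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy over IT energy.

    A PUE of 1.0 would mean every kilowatt-hour reaches IT equipment;
    values around 2.0 are common in older, unoptimized facilities.
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical vintage facility: 5,000,000 kWh/year total, half of it to IT.
before = pue(5_000_000, 2_500_000)   # 2.0

# After modernizing cooling and power gear, overhead shrinks while the
# IT load stays the same.
after = pue(3_750_000, 2_500_000)    # 1.5
annual_kwh_saved = 5_000_000 - 3_750_000
```

Because utility incentive programs often key on demonstrated efficiency gains, tracking PUE before and after an upgrade gives operators a defensible number to bring to those discussions.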
The world of technology has seen plenty of change in recent years. To keep up with it, organizations with data centers that are 10 years of age or older should seriously consider modernizing those facilities. Upgrading a vintage data center’s mechanical and electrical infrastructure can boost reliability, efficiency, flexibility and scalability while reducing operational spending. It can also save companies the considerable expense of building entirely new facilities.
If your data center components are reaching the end of their recommended service lives, consider some low-cost, low-risk and high-reward upgrades like uninterruptible power supplies (UPSs). On September 24, join the discussion on aging UPSs and how modern technology can allow you not only to improve reliability and efficiency but also to generate more revenue. Eaton product manager Ed Spears and Eaton product line manager John Collins will cover key UPS upgrade decision-making criteria to help guide you in improving your data center. Click here to register.