Today’s data centers are no longer just enablers of the business; they often provide corporations with competitive advantages and underpin corporate success. Effective data center infrastructure management strategies can propel the efficiency, utilization, and availability of data center assets and services. Achieving that, however, requires a fundamental rethinking of the relationships between facilities and IT infrastructure components so that they deliver consistent, predictable levels of efficiency, utilization, and availability to their respective organizations.

A new perspective on managing the critical infrastructure gaps is emerging, one that recognizes:

• The importance of real-time data to understand the true capacity of available infrastructure

• The criticality of interdependencies between logical and physical layers

• The need for holistic management capabilities and visibility of IT and facilities infrastructures

• The need for more powerful management tools that offer a rich, visual view of the infrastructure and can guide design and change management


Over the last decade, advances in IT technologies have placed new and additional pressures on traditional methods of facilities planning and management. Demands driven by economic and market conditions, as well as energy-efficient or “green” initiatives, have driven the adoption of building and IT management tools. However, with the advent of technologies such as virtualization and the move to high-density or cloud computing strategies, new challenges emerged, broadening the “information gap” between logical and physical infrastructures and masking critical interdependencies.

Virtualization is one of the most significant drivers of both potential benefits and risks. Advanced, automated provisioning capabilities enabled in virtualized environments, whether they are server, storage, network, or desktop, bring inherent management challenges and impacts to the infrastructure.

Data center managers have been challenged to maintain or increase availability, utilization, and efficiency in the face of rising costs and demands. Despite the large investments in today’s data centers, significant inefficiencies still exist.

The collision of static infrastructure with dynamic IT operations has hindered progress. This has been compounded by the inability to achieve end-to-end visibility of true performance, let alone lend to predictive performance. Accounting for data center costs is often split across multiple organizations, forcing a compartmentalized decision-making process without consideration of upstream or downstream ramifications, increasing risks of data center and business application failures.

These decisions often jeopardize the very goals they strive to support: a failure at the physical infrastructure level that takes down a virtualized server or rack affects application availability at an exponentially greater rate than in traditional designs. As IT adapted to more dynamic operations, new issues emerged in managing the overall data center as the information gap between logical and physical infrastructures masked critical challenges such as:

  • Common performance and utilization data, accessible to skilled IT and facilities practitioners alike, is difficult and expensive to collect
  • Inconsistent data yielding little to no insight has left data center managers without actionable, contextual information to guide optimization decisions
  • Managing complexity and volatility has proved extremely challenging for current staff and processes

As the need to drive further optimization continues, there is an apparent lack of tools and processes to synchronize virtualization automation with the physical infrastructure without significant service engagements. This limits optimization in design and underscores the need for a stronger synergy, and a more dynamic relationship, between the facility and IT infrastructures.


Maintaining that value to the business comes amid rising costs of infrastructure, power, and cooling, and the mantra of doing more with less. In a 2008 report, McKinsey & Company states, “Today’s data centers account for approximately 25 percent of the total corporate IT budget, when you take into account facilities, servers, storage and the labor to manage them.” (McKinsey & Company, 2008) Figure 1 illustrates typical enterprise costs.

Holistic, real-time infrastructure monitoring, measurement, and management allow for the effective utilization and allocation of resources across the data center. For example, a web server with a faceplate power value of 450 watts may draw only 150 watts or less under low utilization. Conversely, there is typically a gap between the stated faceplate value and actual peak draw, inflating the “cushion” over true peak performance by as much as 10 to 20 percent.
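The arithmetic behind this example can be sketched as follows. The 450-watt faceplate and 150-watt measured draw are the figures cited above; the 15 percent derating cushion is an assumed midpoint of the 10 to 20 percent range:

```python
# Illustrative sketch: stranded power capacity when provisioning by
# faceplate rating instead of measured draw. The 15% cushion between
# faceplate and true peak is an assumed midpoint of the cited range.

FACEPLATE_W = 450      # nameplate rating on the server label
MEASURED_W = 150       # actual draw observed under low utilization
PEAK_CUSHION = 0.15    # assumed faceplate-to-true-peak gap (10-20%)

true_peak_w = FACEPLATE_W * (1 - PEAK_CUSHION)
stranded_w = FACEPLATE_W - MEASURED_W

print(f"True peak is roughly {true_peak_w:.0f} W, not {FACEPLATE_W} W")
print(f"Provisioning by faceplate strands {stranded_w} W per server")
print(f"Utilization of provisioned power: {MEASURED_W / FACEPLATE_W:.0%}")
```

Multiplied across hundreds of servers, provisioning by faceplate rather than measured draw can leave a large fraction of the power and cooling budget unusable in practice.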

Conceptually, virtualization has been tied to servers, with the end goal of increasing physical resource utilization. Virtualization tools, however, have focused on managing the virtual layer itself, not the infrastructure needed to support it. Prior to the introduction of server virtualization, average consumption of physical server resources was 15 to 20 percent of the asset’s full capacity. Increasing consumption of the asset from 15 percent to upwards of 85 percent redefined how the data center would be managed. Although the advent of virtualization and high-density computing has extended the horizon of utilization, the lack of insight into actual or absolute peak has hindered maximum utilization of assets, as shown in figure 2.
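The consolidation implied by that utilization jump can be sketched with a small calculation. The 15 percent and 85 percent figures are from the text; the aggregate workload (expressed as 30 fully loaded servers) is an assumed example:

```python
# Illustrative sketch: server consolidation implied by raising average
# utilization from 15% to 85%. The workload figure is an assumption.

WORKLOAD = 30.0        # demand, expressed in fully utilized servers
UTIL_BEFORE = 0.15     # pre-virtualization average utilization
UTIL_AFTER = 0.85      # post-virtualization target utilization

servers_before = WORKLOAD / UTIL_BEFORE   # physical servers needed before
servers_after = WORKLOAD / UTIL_AFTER     # physical servers needed after

print(f"Before: {servers_before:.0f} servers; after: {servers_after:.0f}")
print(f"Consolidation ratio: {servers_before / servers_after:.1f}:1")
```

The same demand that once occupied 200 lightly loaded servers fits on roughly 35, which is precisely why each remaining physical asset, and the facility supporting it, becomes so much more critical.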

These strategies for maximizing the usage of the asset increased power consumption and environmental impact. With the utilization watermark of the asset increasing, existing power and cooling systems had to keep pace. Consolidation of physical resources allows for higher utilization; however, it places greater emphasis on the physical resources because of the importance of keeping them online and highly available. It is common today to find a server hosting upwards of 20 applications, making the consequence of losing a server, or a rack of servers, at this application density exponentially greater than in traditional one-application-to-one-server designs.

The concepts of virtualization and cloud computing become pivot points for the recovery time objective (RTO) and ROI of the business. In addition, with a truly virtualized environment, resource pools can be leveraged, so a dedicated resource is no longer as critical. However, the overall data center and facilities become increasingly important to the successful operation of the business.


Industry estimates put the cost of building a data center (the building shell and raised floor) at $200 to $400 per square foot. By building a data center with 2,500 square feet of raised floor space operating at 20 kilowatt per rack vs. a data center with 10,000 square feet of raised floor space at 5 kilowatt per rack, the capital savings could reach $1 to $3 million. Operational savings are also impressive—about 35 percent of the cost of cooling the data center is eliminated by the high-density cooling infrastructure.
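The capital-savings claim follows directly from the square-footage difference. A quick sketch using the figures above (the per-square-foot estimate implies roughly $1.5 to $3 million, consistent with the cited $1 to $3 million range):

```python
# Illustrative sketch of the capital-cost comparison in the text:
# a 2,500 sq ft / 20 kW-per-rack design vs. a 10,000 sq ft /
# 5 kW-per-rack design delivering the same total power envelope.

COST_PER_SQFT = (200, 400)   # industry estimate, USD per square foot

HIGH_DENSITY_SQFT = 2_500    # raised floor at 20 kW per rack
LOW_DENSITY_SQFT = 10_000    # raised floor at 5 kW per rack

saved_sqft = LOW_DENSITY_SQFT - HIGH_DENSITY_SQFT
savings = [saved_sqft * c for c in COST_PER_SQFT]

print(f"Floor space avoided: {saved_sqft:,} sq ft")
print(f"Capital savings: ${savings[0] / 1e6:.1f}M to ${savings[1] / 1e6:.1f}M")
```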

High-density cooling brings cooling closer to the source of heat through high-efficiency cooling units located near the rack, complementing the base room air conditioning. These systems can reduce cooling power consumption by as much as 32 percent compared to traditional room-only designs. Pumped refrigerant solutions remove heat from the data center more efficiently than air-cooled systems and provide incremental energy savings of 25 to 48 percent.

The well-established practice of hot/cold aisle alignment sets up another movement: containment. Aisle containment prevents the mixing of hot and cold air to improve cooling efficiency and enable higher densities.

High-density power distribution: Power distribution has evolved from single-stage to two-stage designs to enable increased density, reduced cabling, and more effective use of data center space.


In the race to achieve improved energy efficiency—and, ultimately, cut costs—businesses cannot lose sight of the importance of maintaining or improving availability.

Uninterruptible power supply (UPS): Data center managers should consider the power topology and the availability requirements when selecting a UPS. Enterprise data centers should select double-conversion UPS technology for its ability to completely condition power and isolate connected equipment from the power source.

The extra protection that a double-conversion UPS affords does come with a small price in terms of efficiency; however, most organizations believe the small amount of energy lost during the conversion is well worth the added protection this process delivers. In addition, newer UPS systems are now available with energy optimization controls that enable users to activate and deactivate different components of the UPS based on organizational priorities and operating conditions.

Intelligent paralleling improves the efficiency of redundant UPS systems by deactivating UPS modules that are not required to support the load. In N + 1 UPS configurations, the load is typically evenly distributed across all modules. If a failure occurs, or a module is taken off line for service, the load is redistributed across the remaining modules.

This feature is particularly useful for data centers that experience extended periods of low demand, such as a corporate data center that is operating at low capacity on weekends and holidays. In this case, it ensures the UPS systems supporting the load are not operating at loads at which they cannot deliver optimum efficiency.
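As a rough sketch of the arithmetic behind intelligent paralleling (the module count, module rating, and load figure below are assumed example values, not figures from the text):

```python
# Illustrative sketch of intelligent paralleling in an N+1 UPS system.
# Module count, module rating, and the load are assumed examples.

import math

MODULE_KW = 100.0     # rating of each UPS module
MODULES = 4           # N+1 configuration (3 carry the full design load)
LOAD_KW = 120.0       # current IT load (e.g., a weekend trough)

# Conventional paralleling: load spread evenly across every module.
load_all = LOAD_KW / MODULES / MODULE_KW

# Intelligent paralleling: keep only enough modules for the load,
# plus one for redundancy; idle the rest until demand returns.
needed = math.ceil(LOAD_KW / MODULE_KW) + 1
active = min(needed, MODULES)
load_active = LOAD_KW / active / MODULE_KW

print(f"All {MODULES} modules on: {load_all:.0%} load each")
print(f"{active} modules on: {load_active:.0%} load each")
```

Moving each active module from 30 percent to 40 percent load pushes it closer to the part of its efficiency curve where double-conversion UPS systems perform best, while still preserving one redundant module.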

Economization: Economizers, which use outside air to reduce work required by the cooling system, can be an effective approach to lowering energy consumption if they are properly applied.

Service: Professional services ensure that a facility reaches the promise of improved, continuous operational performance and accelerate return on data center investments and assets, while reducing the cost, risk, and complexity of change.


What sets successful IT operations apart from the rest is how they manage assets in an agile environment, bringing control to the chaos. Accessing, controlling, and managing literally hundreds or thousands of heterogeneous devices in the physical infrastructure, spread across multiple locations and geographies, is a daunting task.

Implementing data center best practices and taking a holistic perspective is imperative to an organization’s ability to meet its business goals. A critical component is the availability of data.

Access to the mountains of data is not enough. Once available, a management system based on contextual, actionable data allows the transformation of data into true insight. Actionable insight into how equipment is being utilized, what capacity is available, where it’s residing, how much power it’s using, and the ability to test scenarios prior to change can optimize performance and reduce risk associated with managing high-density infrastructures, no matter how complex the environment. 


Emerson's Consolidation Project

When Emerson decided to build a new data center to support a major consolidation project, the team responsible employed a strategy to design the next-generation data center that took advantage of Emerson’s industry-leading solutions and deep experience in providing equipment, appliances, and management of the data center infrastructure, along with a full range of infrastructure technologies extending from “grid to chip.”

When the new 35,000-square-foot Emerson data center opened in St. Louis in 2009, it clearly accomplished the objective the team had been working toward: create an efficient, highly reliable data center with the flexibility to support the future growth of the business. The facility was awarded LEED Gold certification and today uses 30 percent less energy than a traditionally designed data center, without compromising the availability required to support the global organization.