Every disruptive technology in the data center forces IT teams to rethink related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models. Cloud technologies and services are driving similarly profound changes: data center managers have many more choices for service delivery, and workloads can be shifted more easily among the compute resources distributed across private and public data centers.

Among the benefits of this agility, the prospect of new approaches to lowering data center energy costs has many organizations considering cloud alternatives.

The potential is huge. Power and cooling already account for a significant share of the data center budget, and the collective energy draw of the world’s data centers continues to climb at an alarming rate. Many local utility companies are straining to meet demand, or have capped the power they can provide to any one site. Government regulations are forcing conservation in many regions, with hefty fines and cap-and-trade programs adding to data center management burdens.


It is always worthwhile to pay attention to practices employed within the world’s largest data centers. Google, for example, demonstrates how technology advancements make it possible to separate the data center from the geographic areas it serves. One of the company’s largest data centers of the last decade was constructed in The Dalles, OR.

The area offered Google affordable land for constructing a center the size of two football fields, and a surplus of fiber optic cable in the region was another big draw. The $600 million project also included four-story-high cooling towers for the rows and rows of densely packed racks. But it was inexpensive hydroelectric power that topped the list of features making The Dalles an attractive location for Google.

Other companies, such as Amazon and Facebook, routinely treat affordable energy as a top priority when selecting data center sites. In the U.S., this thinking is bringing big-name companies to Oregon, Arizona, and other areas with innovative, low-cost utility services.


Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions, especially when comparing all of the available private data center locations with public cloud infrastructure alternatives.

That said, many companies do not leverage all of the energy-related data available in modern data centers. Servers, power distribution units, airflow and cooling units, storage devices, switches, and other smart equipment broadcast temperature data and real-time power consumption levels. Holistic energy management solutions can gather and aggregate this data and generate graphical thermal and power maps of the data center. Logged data can be collected to build a historical database for energy and temperature pattern analysis.
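As a rough sketch of the aggregation step described above, the snippet below rolls device-level power and temperature readings up into per-rack totals, the kind of data a thermal or power map would be drawn from. The data structures, field names, and values are illustrative, not taken from any particular energy management product.

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class Reading:
    rack: str            # rack identifier, e.g. "A-03"
    device: str          # server, PDU, or cooling unit name
    watts: float         # instantaneous power draw
    inlet_temp_c: float  # inlet temperature in Celsius

def aggregate_by_rack(readings):
    """Roll device-level telemetry up into per-rack power and temperature."""
    by_rack = defaultdict(list)
    for r in readings:
        by_rack[r.rack].append(r)
    return {
        rack: {
            "total_watts": sum(r.watts for r in rs),
            "avg_inlet_temp_c": round(mean(r.inlet_temp_c for r in rs), 1),
        }
        for rack, rs in by_rack.items()
    }

# Illustrative readings from two racks
readings = [
    Reading("A-03", "srv-01", 420.0, 24.5),
    Reading("A-03", "srv-02", 380.0, 25.1),
    Reading("B-07", "srv-11", 510.0, 27.8),
]
print(aggregate_by_rack(readings))
```

Logging these per-rack summaries over time is what builds the historical database the article mentions for energy and temperature pattern analysis.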

With the ability to automatically collect and leverage power and temperature information, data center managers and facilities teams can arm themselves with real knowledge. They no longer need to rely on far less accurate theoretical models, worst-case manufacturer specifications, or time-consuming manual measurements that are quickly out of date. User-friendly consoles give them a complete picture of the patterns that correlate workloads and activity levels to power consumption and dissipated heat. Specific services and workloads can be profiled, building an extensive knowledge base for energy management.

That means that cloud decisions can take energy costs into account. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.
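The placement decision described above can be sketched as a simple cost comparison across sites. The site names and utility rates below are hypothetical, and a real scheduler would also weigh capacity, latency, and data-residency constraints alongside energy cost.

```python
def cheapest_site(sites, workload_kw, hours):
    """Pick the site with the lowest projected energy cost for a workload.

    sites: mapping of site name -> current utility rate in $/kWh
    workload_kw: estimated power draw of the workload in kilowatts
    hours: expected run time
    """
    costs = {name: rate * workload_kw * hours for name, rate in sites.items()}
    best = min(costs, key=costs.get)
    return best, round(costs[best], 2)

# Illustrative rates only
sites = {"oregon": 0.045, "virginia": 0.085, "frankfurt": 0.19}
print(cheapest_site(sites, workload_kw=12.0, hours=24))
```

The same comparison extends naturally to public cloud options by adding each provider's effective price per kWh-equivalent, net of the change to in-house energy costs.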

From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates across its data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this all of the domestic and offshore regions where infrastructure as a service is booming, and businesses have the opportunity to tap into global compute resources with lower-cost power, including areas where infrastructure providers can pass along cost savings from government subsidies.

It makes sense to turn multiple sites to a cost advantage and extract as much value as possible out of the existing energy and temperature data within each managed data center.


For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60% of their maximum power, identifying them helps adjust server provisioning and workload balancing to drive up utilization.

Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.
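Both checks, flagging idle-but-hungry servers and spotting hot spots, reduce to simple threshold rules over the monitored telemetry. The sketch below illustrates the idea; the field names and thresholds are assumptions for illustration, not values from any specific monitoring product.

```python
def flag_servers(servers, idle_util=0.05, idle_power_ratio=0.5, hot_inlet_c=30.0):
    """Flag idle servers and hot spots from monitored telemetry.

    servers: list of dicts with 'name', 'util' (0-1 CPU utilization),
    'watts', 'max_watts', and 'inlet_temp_c'.
    """
    idle, hot = [], []
    for s in servers:
        # Idle server: almost no work, yet still drawing a large
        # fraction of its maximum power.
        if s["util"] < idle_util and s["watts"] / s["max_watts"] > idle_power_ratio:
            idle.append(s["name"])
        # Hot spot: inlet temperature above the alert threshold.
        if s["inlet_temp_c"] > hot_inlet_c:
            hot.append(s["name"])
    return idle, hot

fleet = [
    {"name": "srv-01", "util": 0.02, "watts": 310, "max_watts": 500, "inlet_temp_c": 24.0},
    {"name": "srv-02", "util": 0.65, "watts": 430, "max_watts": 500, "inlet_temp_c": 31.5},
]
print(flag_servers(fleet))  # srv-01 is idle, srv-02 is a hot spot
```

A dashboard would surface exactly these two lists, making the candidates for consolidation and cooling adjustments visible at a glance.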

Visibility of the thermal patterns can also be put to use in adjusting the ambient temperature of a data center. Every degree that the temperature is raised equates to a significant reduction in cooling costs. Many data centers therefore operate at higher ambient temperatures today, especially since modern equipment providers warrant their hardware for operation at these higher temperatures.
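The arithmetic behind raising the setpoint is straightforward. The sketch below assumes an illustrative savings rate of 4% of the cooling bill per degree Celsius, a commonly cited rule of thumb; the actual figure varies by facility and should be measured, not assumed.

```python
def cooling_savings(annual_cooling_cost, degrees_raised, savings_per_degree=0.04):
    """Estimate annual cooling savings from raising the ambient setpoint.

    savings_per_degree is an assumed illustrative figure (4% per degree C);
    real savings depend on the facility's cooling plant and climate.
    """
    return round(annual_cooling_cost * savings_per_degree * degrees_raised, 2)

# e.g. a $500,000 annual cooling bill with the setpoint raised 3 degrees C
print(cooling_savings(500_000, 3))
```

Even under conservative assumptions, the savings from a few degrees can be substantial, which is why thermal-pattern visibility, confirming that no rack exceeds safe inlet temperatures at the higher setpoint, matters so much.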

Some of the same energy management solutions that boost visibility also provide a range of control features. Thresholds can be set to trigger notifications and corrective actions in the event of power spikes. Whether caused by a local utility company’s aging equipment, a heat wave, or even a lightning strike, a power spike can seriously damage data center equipment. Besides letting data center managers monitor for spikes, energy management solutions help identify the systems at greatest risk in the event of one. Servers operating near their power and temperature limits can be proactively adjusted and configured with built-in protection such as power capping.

Power capping can also provide a foundation for priority-based energy allocations. The capability protects mission-critical services, and can also extend battery life during outages. Based on knowledge extracted from historical power data, capping can be implemented in tandem with dynamic adjustments to server performance. Lowering clock speeds is an effective way to reduce energy consumption, and can yield measurable savings while minimizing or eliminating any discernible degradation of service levels.
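A minimal sketch of the capping-plus-frequency-scaling idea is shown below. It assumes a simplified model in which power scales roughly with clock frequency; real capping mechanisms (for example, via a BMC or processor-level interfaces such as RAPL) are more involved, and every number here is illustrative.

```python
def apply_power_cap(current_watts, cap_watts, current_freq_mhz, min_freq_mhz=1200):
    """One step of a proportional power-capping policy.

    If the server's draw exceeds the cap, lower the clock frequency
    proportionally (simplified power model), but never below the
    floor that protects service levels.
    """
    if current_watts <= cap_watts:
        return current_freq_mhz  # under the cap: no change
    target = current_freq_mhz * (cap_watts / current_watts)
    return max(min_freq_mhz, round(target))

# A server drawing 480 W against a 400 W cap gets throttled toward the cap
print(apply_power_cap(480, 400, 2600))
```

In a priority-based scheme, mission-critical servers would get higher caps (or none), while lower-priority systems absorb the throttling, which is also how battery life can be stretched during an outage.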

Documented use cases for real-time feedback and control features such as thresholds and power capping prove that fine-grained energy management can yield significant cost reductions. Typical savings of 15% to 20% of the utility budget have been measured in numerous data centers that have introduced energy and temperature monitoring and control.


As the next step in the journey that began with virtualization, cloud computing is delivering on its promises: greater data center agility, centralized management that lowers operating expenses, and cost-effective support for fast-changing business needs. The cloud can also enable rapid change control, giving data center managers the ability to fine-tune resource allocations and quickly provision the resources required for a new lab or project.

With an intelligent energy management platform, the cloud also positions data center managers to assign workloads more cost-effectively by leveraging lower utility rates in various locations. With energy prices at historically high levels and no relief in sight, this is a compelling incentive for building out internal clouds or beginning to move some services to public clouds.

Every increase in data center agility, whether from earlier advances such as virtualization or the latest cloud innovations, emphasizes the need to understand and utilize energy profiles within the data center. Ignoring the energy component of the overall cost can hide a significant operating expense from the decision making process.