In late 2011, DatacenterDynamics released its annual survey of energy use in data centers. It made for stark reading.

• Global data center power usage in 2011 was 31 gigawatts (GW), approximately 2% of global power usage and equivalent to the power consumption of several European nations.

• Power requirements for 2012 were projected to grow by 19%.

• Fifty-eight percent of racks consume up to 5 kilowatts (kW), 28% consume 5 to 10 kW, and the rest consume more than 10 kW per rack.

• Forty percent of participants stated that increasing energy costs will have a strong impact on data center operations going forward.


If these numbers come as a shock, they should be weighed against several other factors that will affect the cost of running data centers.

• Environmental or carbon taxes are on the increase and data centers are seen as a prime target by regulators.

• As a result of the Fukushima nuclear disaster, several European countries are planning to reduce or even eliminate nuclear power generation. This will create a shortage in supply and drive up power costs.

• Around 40% of all power used by data centers goes to removing heat and could be considered waste.

• The move to cloud will only shift capital expenditure (CAPEX) out of the budget. Power is an operational expenditure (OPEX) and will be built into the cost of using the cloud, driving OPEX up faster than CAPEX is likely to come down.

DESIGN FOR COOL

Removing heat effectively is all about the design of the cooling systems. There are several parts to an effective cooling system:

• The design of the data center.

• Choosing the right technology.

• Effective use of in-rack equipment to monitor heat, and computational fluid dynamics (CFD) to predict future problems.

DATA CENTER DESIGN

A major part of any efficient design is the data center itself. The challenge is whether to build a new data center, refurbish existing premises, or retrofit cooling solutions. Each can deliver savings, with a new build or refurbishment likely to deliver the greatest. Retrofitting can also deliver significant savings on OPEX, especially if reliability is part of the calculation.

Building a new data center provides an opportunity to adopt the latest industry practices on cooling and take advantage of new approaches. Two of these approaches are free air cooling and splitting the data center into low-, medium-, and high-power rooms.

In 2010, HP opened a free air cooling data center in Wynyard, County Durham, UK. In 2011, HP claimed it had run the chiller units for only 15 days, resulting in an unspecified saving on power.

Refurbishing an existing data center can deliver savings by providing easy access to rerun power and replace all of the existing cooling equipment.

In 2011, IBM undertook more than 200 energy-efficiency upgrades across its global data center estate. The upgrades included blocking cable and rack openings, rebalancing airflow, and shutting down, upgrading, and re-provisioning computer room air conditioning (CRAC) units. The bottom line was a reduction in energy use of more than 33,700 megawatt-hours, which translates into savings of approximately $3.8 million.
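As a rough cross-check of the arithmetic (assuming, purely for illustration, that the $3.8 million is entirely an electricity saving with no demand charges or maintenance costs), the figures imply a blended power price of around $0.11 per kilowatt-hour:

```python
# Back-of-the-envelope check on the IBM figures quoted above. Treating the
# saving as electricity cost only is an assumption made for illustration.
energy_saved_mwh = 33_700            # reported reduction in energy use
savings_usd = 3_800_000              # reported cost saving

implied_price_per_kwh = savings_usd / (energy_saved_mwh * 1_000)
print(f"Implied electricity price: ${implied_price_per_kwh:.3f}/kWh")
# Prints roughly $0.113/kWh, a plausible blended commercial rate.
```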

Retrofitting a data center can be a tricky task. Moving equipment to create hot aisles and deploying containment equipment can have an immediate impact on costs.

Kingston University was experiencing problems in its data center. IT operations manager Bill Lowe admits, “As the university has grown, so too has the amount of equipment housed in its data center. As new racks and cabinets have been added, the amount of heat generated started to cause issues with reliability and we realized that the only way to deal with it was to install an effective cooling system (Figure 1). Using Cannon Technologies Aisle Cocoon solution means that we will make a return on investment in less than a year” (Figure 2).

TECHNOLOGY

Reducing the heat in the data center is not just about adding cooling. It needs to start with the choice of the equipment in the racks, how the environment is managed, and then what cooling is appropriate.

Power supplies inside servers and storage arrays should all be at least 85% efficient at 50% load. This will reduce the heat generated and save on basic power consumption. ASHRAE guidelines suggest 27°C as a sustainable inlet temperature, which is acceptable to vendors without risking warranties. Newer generations of hardware are capable of running at higher inlet temperatures.
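To make the 85% figure concrete, the sketch below compares the waste heat produced by power supplies at 85% and 92% efficiency for a hypothetical 5 kW rack load; the load and the 92% comparison point are assumptions used only for illustration:

```python
# Illustrative only: how power supply efficiency translates into heat that the
# cooling plant must remove. The 5 kW rack load and the 92% comparison point
# are assumed values, not figures from the article.
it_load_kw = 5.0

def waste_heat_kw(load_kw: float, efficiency: float) -> float:
    """Heat dissipated by the power supplies while delivering load_kw of output."""
    return load_kw / efficiency - load_kw

for eff in (0.85, 0.92):
    print(f"{eff:.0%} efficient PSUs: {waste_heat_kw(it_load_kw, eff):.2f} kW of extra heat")
# ~0.88 kW at 85% versus ~0.43 kW at 92%: roughly half the PSU heat to remove.
```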

High-energy workloads such as data analysis or high performance computing (HPC) generate more heat than email servers and file and print operations. Mixed workloads cause heat fluctuations across a rack, so balancing workload types helps maintain a consistent temperature, making it easier to remove excess heat.

Liquid cooling covers any rack that uses water, or a gas in its liquid state, as the coolant. ASHRAE has recently begun to talk openly about the benefits of liquid cooling for racks that generate very high levels of heat. It can be very hard to retrofit to existing environments because of the difficulty of bringing the liquid to the rack.

Hot/cold aisle containment is the traditional way to remove heat from a data center (Figure 3). Missing blanking plates allow hot air to filter back into the cold aisle, reducing efficiency (Figure 4). Poorly fitted doors on racks and the containment zone allow hot and cold air to mix. Forced air brings other challenges: missing and broken tiles allow hot air into the floor, while too much pressure prevents air from rising through the tiles.

Chimney vents can be easily retrofitted, even in small environments. Using fans, the chimney pulls the hot air off the rack and vents it away, reducing the need for additional cooling.

CRAC has been the dominant way of cooling data centers for decades. It can be extremely efficient, but much depends on where the units are located and how airflow is managed within a data center.

One danger of poorly placed CRAC units, as identified by The Green Grid, is that multiple units end up fighting to control humidity when air is returned at different temperatures. The solution is to network the CRAC units and coordinate humidity control across them. Effective placement of CRAC units is a challenge: when placed at right angles to the equipment, their efficiency drops away over time, causing hot spots and driving the need for secondary cooling.
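A minimal sketch of what networked, coordinated humidity control might look like is shown below; it acts on the averaged return-air reading rather than each unit's local sensor. The class, setpoint, deadband, and readings are hypothetical and not based on any specific CRAC product:

```python
# Hypothetical sketch of coordinated humidity control across networked CRAC
# units: act on the averaged return-air reading instead of letting each unit
# humidify or dehumidify against its own local sensor, which is how units end
# up fighting each other when air returns at different temperatures.
from dataclasses import dataclass

@dataclass
class CracReading:
    unit_id: str
    return_temp_c: float
    relative_humidity_pct: float

def humidity_command(readings: list[CracReading], setpoint_pct: float = 45.0,
                     deadband_pct: float = 5.0) -> str:
    """Issue one fleet-wide command based on the averaged humidity reading."""
    avg_rh = sum(r.relative_humidity_pct for r in readings) / len(readings)
    if avg_rh < setpoint_pct - deadband_pct:
        return "humidify"
    if avg_rh > setpoint_pct + deadband_pct:
        return "dehumidify"
    return "hold"

readings = [CracReading("CRAC-1", 29.0, 38.0), CracReading("CRAC-2", 24.0, 52.0)]
print(humidity_command(readings))   # "hold" -- the units no longer fight each other
```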

In high-density data centers, ASHRAE and The Green Grid see within row cooling (WIRC) as imperative for delivering cooling right at the sources of heat. WIRC also allows cooling to be ramped up and down to keep an even temperature across the hardware and balance cooling to workload (Figure 5).

If the problem is not multiple aisles but a single row of racks, open-door containment with WIRC, or alternatively liquid-based cooling, can provide a solution. For blade servers and HPC, consider in-rack cooling. This solution works best where workload optimization tools provide accurate data about increases in power load, so that as the power load rises, the cooling can be increased in step.

New approaches to CRAC extend the life of these systems and improve their efficiency. Dell uses a subfloor pressure sensor to control how much air is delivered by the CRAC units. This is a flexible and highly responsive way to deliver just the right amount of cold air and keep a balanced temperature.

Dell claims that it is also very power efficient. In tests, setting the subfloor pressure to zero effectively eliminated leaks and, while it created a small increase in the power used by the server fans, heavily reduced the power used by the CRAC units. Dell states that this was a 4:1 reduction, although it has not yet delivered figures from the field to prove these savings.
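Dell has not published the control algorithm behind this, so the sketch below is a generic proportional loop, assuming a subfloor pressure sensor and variable-speed CRAC fans, purely to illustrate the idea of delivering air against a pressure setpoint:

```python
# Generic proportional control of CRAC fan speed from a subfloor pressure
# reading. This is not Dell's algorithm; the setpoint, gain, and limits are
# assumptions chosen to illustrate the approach described above.
def next_fan_speed(current_speed_pct: float,
                   measured_pressure_pa: float,
                   setpoint_pa: float = 12.0,
                   gain: float = 2.5) -> float:
    """Nudge fan speed in proportion to the subfloor pressure error."""
    error = setpoint_pa - measured_pressure_pa      # positive means too little pressure
    new_speed = current_speed_pct + gain * error
    return max(20.0, min(100.0, new_speed))         # clamp to a safe operating range

speed = 60.0
for pressure in (9.0, 10.5, 11.8, 12.2):            # successive sensor readings
    speed = next_fan_speed(speed, pressure)
    print(f"pressure {pressure:4.1f} Pa -> fan speed {speed:5.1f}%")
```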

WORKLOAD-DRIVEN HEAT ZONING

Splitting the data center into low-, medium-, and high-power (and heat) zones allows cooling to be targeted effectively at compatible workloads. An example of this is the BladeRoom System, where the data center is partitioned by density and power load.

Effective monitoring of the data center is critical. For many organizations, monitoring duties are split across multiple teams, which makes it hard to identify problems at the source and deal with them early. When it comes to managing heat, early intervention is a major cost saving.

There are three elements here that need to be considered:

• In-rack monitoring

• Workload planning and monitoring

• Predictive technologies

All three of these systems need to be properly integrated to reduce costs from cooling.

In-rack monitoring should be done by sensors at multiple locations in the rack: front, back, and at four different heights. This provides a three-dimensional view of input and output temperatures and quickly identifies whether heat layering or heat banding is occurring.
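As a simple illustration of how such a sensor grid can be used, the sketch below flags heat layering when the top-to-bottom temperature spread on one face of the rack exceeds a threshold; the readings and the 5°C threshold are invented for the example:

```python
# Illustrative check for heat layering using front/back sensors at four heights,
# as described above. The readings and the 5 degree threshold are made up.
rack_temps_c = {
    # (face, height in U): temperature in degrees Celsius
    ("front", 10): 21.0, ("front", 20): 22.0, ("front", 30): 24.5, ("front", 40): 27.5,
    ("back", 10): 31.0,  ("back", 20): 33.0,  ("back", 30): 36.0,  ("back", 40): 41.0,
}

def layering_detected(temps: dict, face: str, threshold_c: float = 5.0) -> bool:
    """Flag layering when the top-to-bottom spread on one face exceeds the threshold."""
    readings = [t for (f, _), t in temps.items() if f == face]
    return max(readings) - min(readings) > threshold_c

print("Front layering:", layering_detected(rack_temps_c, "front"))  # True (6.5 degree spread)
print("Back layering: ", layering_detected(rack_temps_c, "back"))   # True (10 degree spread)
```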

As well as heat sensors inside the rack, it is important to place sensors around the room where the equipment is located. These will show whether hot or cold spots are occurring as a result of air leakage or where the airflow through the room has been compromised. This often happens because of poor discipline, such as boxes left on air tiles or equipment moved without an understanding of the cooling flow inside the data center.

Most data center management suites, such as CannonGuard, provide temperature sensors along with CCTV, door security, fire alarm, and other environmental monitoring.

WORKLOAD PLANNING AND MONITORING

Integrating workload planning and monitoring into the cooling management solutions should be a priority for all data center managers. The increasing use of automation and virtualization means that workloads are often being moved around the data center to maximize the utilization of hardware.

VMware, HP, and Microsoft have all begun to import data from data center infrastructure management (DCIM) tools into their systems management products. Using the DCIM data to drive the automation systems will help balance cooling and workload.
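As a hypothetical example of what driving automation from DCIM data could look like, the sketch below prefers the host whose rack inlet temperature leaves the most headroom against the ASHRAE recommended maximum; the data feed, host names, and readings are invented for illustration:

```python
# Hypothetical: let DCIM inlet-temperature data influence workload placement by
# choosing the host with the most thermal headroom. Host names and readings are
# invented; a real integration would pull these values from the DCIM tool.
dcim_inlet_temp_c = {"host-a": 24.5, "host-b": 21.0, "host-c": 26.8}
ashrae_recommended_max_c = 27.0

def pick_host(candidates: list[str]) -> str:
    """Choose the candidate host with the largest inlet-temperature headroom."""
    return max(candidates, key=lambda h: ashrae_recommended_max_c - dcim_inlet_temp_c[h])

print(pick_host(["host-a", "host-b", "host-c"]))   # host-b, the coolest rack
```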

PREDICTIVE TECHNOLOGIES

CFD and heat maps provide a way of understanding where the heat is in a data center and what will happen when workloads increase and more heat is generated. By mapping the flow of air it is possible to see where cooling could be compromised under given conditions.

Companies such as Digital Realty Trust use CFD not only in the design of a data center, but also as part of their daily management tools. This allows them to see how heat is flowing through the data center and move hardware and workloads if required.

There is much that can be done to reduce the cost of cooling inside the data center. With power costs continuing to climb, those data centers that reduce their power costs and are the most effective at taking heat out will enjoy a competitive advantage.