It is impossible to miss the recent flurry of activity and visibility around the multitude of so-called green initiatives being developed and unveiled. Almost every industry is pushing for the reduction of energy consumption and, with it, the reduction of carbon emissions. Regardless of how anyone feels about global warming, the fact remains that as the consumption of fossil fuels increases, so too does the amount of carbon released into the atmosphere. It is a destructive domino effect. While some regions of the world show growing concern, others remain tight-lipped. And although the United States has generally been the leader in total carbon dioxide emissions from a single country, in 2007 China took that undesirable position.

For some time now, the world has known that air quality and air pollution would be pressing issues for future generations, and that problem grows more urgent every year. However, it may be the insatiable consumption of energy that has the most immediate impact. Like a perfect storm brewing, increasing energy needs are running head-on into the escalating cost of producing electricity.

The information technology (IT) sector, and the modern computer data center in particular, feels the impact of the push for green power. Running everything from web sites to online banking, data centers have come under significant scrutiny recently. Their electricity consumption doubled from 2000 to 2005 and continues to climb. In 2006, data centers consumed 1.5 percent of the electricity used in the U.S., a whopping 61 billion kilowatt-hours. All this power drain caught the eye of the U.S. government, and in December 2006 Congress passed Public Law 109-431, which required the Environmental Protection Agency to investigate alternatives for reducing energy consumption in data centers. Although we don’t expect EPA regulation on servers until late 2008 or early 2009, a smart CIO will no doubt want to be ahead of the regulatory “power curve.”

OEMs provide a number of products based on low-power processors.

Power and cooling are at the forefront of energy consumption. At an average of 10 cents per kilowatt-hour, most facility managers are acutely aware of the rocketing cost of powering their IT data centers. Interestingly, most IT data center managers have no visibility into those costs, a blind spot that stems from the long-standing separation of facility departments and IT organizations. Typically, the biggest concern for the IT manager is getting enough power into the data center for an ever-increasing workload; many IT customers are simply out of capacity and are evaluating new data center construction options. Meanwhile, the facility manager picks up the check for all the electricity without concern or hesitation. To be successful in the future, companies will need to merge those teams as power and cooling issues become more pervasive. By leveraging energy-saving products and solutions, companies should be able to live in their current data centers much longer.
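To put those numbers in perspective, here is a back-of-the-envelope sketch in Python. The 500-kilowatt IT load is purely an assumed figure for illustration; the 10-cents-per-kilowatt-hour rate is the average cited above, and the one watt of cooling per watt of IT power is a rule of thumb discussed later in this article.

# Back-of-the-envelope annual electricity cost for a hypothetical data center.
# The 500 kW IT load is an assumption; the rate is the average cited above,
# and 1 W of cooling per 1 W of IT load is a rule of thumb used later on.

IT_LOAD_KW = 500.0          # assumed average IT equipment draw
COOLING_PER_IT_WATT = 1.0   # ~1 W of cooling for every watt of IT power
RATE_PER_KWH = 0.10         # dollars per kilowatt-hour
HOURS_PER_YEAR = 24 * 365

total_kw = IT_LOAD_KW * (1 + COOLING_PER_IT_WATT)
annual_kwh = total_kw * HOURS_PER_YEAR
annual_cost = annual_kwh * RATE_PER_KWH

print(f"Annual consumption: {annual_kwh:,.0f} kWh")      # 8,760,000 kWh
print(f"Annual electricity cost: ${annual_cost:,.0f}")    # $876,000

Even at this modest assumed scale, the bill approaches a million dollars a year, which is exactly the kind of figure that rarely shows up on an IT manager's budget.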

From the CIO perspective, success comes not just from delivering on required service level agreements, but also from using IT as a competitive weapon in an aggressive global business environment. Therefore, the data center should no longer just support the business; it should help drive the business. Freeing up expense dollars that would normally be consumed by purchasing electricity is one way to do exactly that. Companies need to think of energy consumption in data centers as a variable cost that, with proper management, can send more dollars to the profit line.

Surprisingly, taking a bite out of data center electricity costs is relatively easy. There are myriad potential solutions, and every server manufacturer seems to have its own idea of how to approach the problem. Some easy things can be done without significant cost, but a more comprehensive and holistic strategy will yield the biggest results.

The Whole Solution

First of all, every data center operator needs to work with a server manufacturer or data center consultant that can deliver an integrated, holistic solution, including an up-front assessment of the current situation. The solution must address the entire power and cooling structure of the data center, everything from the smallest microprocessor chip to the largest air-conditioning units (chillers) that cool the data center, and everything in between. While most IT vendors offer energy-efficient servers, very few can actually address the entire problem, from chip to chiller.


Starting with the Chips

Low-power processors are a good start. Small-form-factor (2.5-in.) hard drives are also a must, since they consume half the power of 3.5-in. drives. Low-power memory and efficient power supplies further reduce energy consumption. Energy-efficient servers or blade units using these components can cut power consumption by 15 to 36 percent. Until recently it was impossible to evaluate the performance per watt of servers on a level playing field. That changed in December 2007, when an industry performance body, the Standard Performance Evaluation Corporation (SPEC), issued a new benchmark to measure server energy efficiency, called SPECpower_ssj2008. SPEC is made up of IT industry leaders, and the new benchmark establishes rigid standards to ensure that valid, unbiased comparisons can be made. The HP ProLiant DL160 topped the SPECpower_ssj2008 result list.
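For readers who want to see what a level-playing-field comparison looks like, here is a minimal sketch of the performance-per-watt arithmetic that a benchmark such as SPECpower_ssj2008 formalizes. The server names, operation counts, and wattages below are invented for illustration; they are not published benchmark results.

# Toy performance-per-watt comparison in the spirit of SPECpower_ssj2008,
# which relates server throughput to power draw. All figures are invented
# for illustration and are not published benchmark results.

servers = {
    "conventional 1U server":     {"ops": 150_000, "watts": 320},
    "energy-efficient 1U server": {"ops": 150_000, "watts": 230},
}

for name, spec in servers.items():
    print(f"{name}: {spec['ops'] / spec['watts']:.0f} ops per watt")

baseline, efficient = servers.values()
reduction = 1 - efficient["watts"] / baseline["watts"]
print(f"Power reduction at equal throughput: {reduction:.0%}")   # about 28%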




Green IT means more than saving energy; it also means increased reliability.

Virtualization

Virtualization provides real energy-saving opportunities through consolidation of both servers and storage devices. Some data centers expect to save between 30 and 40 percent of their electricity costs through virtualization. For servers, it works like this: take four servers each operating at 20 percent utilization and consolidate them onto one server operating at 80 percent. Admittedly, the processors in that one server need more energy, but processors account for only about 30 percent of a server’s electricity use, so the jump to 80 percent utilization affects mainly that 30 percent. The virtualized server leverages the same rotating hard drives, motherboard, I/O slots, memory, fans, and power supplies, which account for the remaining 70 percent of the power a server needs. Furthermore, a power supply operating at 80 percent capacity is much more efficient than one operating in the 20 percent range. Storage products offer new ways to reduce the number of hard drives needed through a provisioning process that lets the operator provision a large capacity once but add physical drives only as they are needed. Many power companies will also offer rebates for replacing older servers with newer, more energy-efficient ones.
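The consolidation arithmetic can be made concrete with a simple model, sketched below. Following the breakdown above, it assumes roughly 70 percent of a server’s draw is fixed and 30 percent scales with processor utilization; the 400-watt nameplate figure is an arbitrary assumption.

# Simplified model of the consolidation example above. A server's power is
# split into a fixed share (drives, board, memory, fans, supplies: ~70%) and
# a processor share (~30%) assumed to scale linearly with utilization.
# The 400 W nameplate figure is an arbitrary assumption for illustration.

NAMEPLATE_W = 400
FIXED_SHARE, CPU_SHARE = 0.70, 0.30

def server_power(utilization):
    """Approximate draw of one server at a given CPU utilization (0.0-1.0)."""
    return NAMEPLATE_W * (FIXED_SHARE + CPU_SHARE * utilization)

before = 4 * server_power(0.20)   # four lightly loaded servers
after = 1 * server_power(0.80)    # one consolidated, busier server

print(f"Before consolidation: {before:.0f} W")   # ~1216 W
print(f"After consolidation:  {after:.0f} W")    # ~376 W
print(f"Server-level reduction: {1 - after / before:.0%}")

The server-level reduction in this idealized model is larger than the 30 to 40 percent cited above, because the data-center-wide figure also includes loads that consolidation does not touch.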


Software

Power management software is another powerful tool for reducing power consumption. The processor vendors all offer some form of power throttling that can reduce consumption on processors that are not fully utilized. Most server manufacturers leverage this technology and generally add features that allow IT managers to monitor, measure, control, and reduce the power draw of server or blade equipment. Unfortunately, most data centers do not use even basic power management software tools. HP Power Regulator, for example, comes free with ProLiant servers and can reduce energy consumption by up to 10 percent. And HP Insight Power Manager (IPM) is an integrated power monitoring and management application that provides centralized control of server power consumption and thermal output at the data center level. These tools are the proverbial “low-hanging fruit”: a positive benefit at minimal or no cost.
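As an illustration of the kind of monitoring these tools automate, the sketch below polls a list of servers for their instantaneous power draw. It is not HP Power Regulator or Insight Power Manager; it simply assumes the servers support the standard IPMI DCMI power reading and that the ipmitool utility is installed. The hostnames and credentials are placeholders.

# Generic power-monitoring sketch (not an HP tool). Assumes each server
# supports the standard IPMI DCMI power reading and ipmitool is installed.
import re
import subprocess

HOSTS = ["server01.example.com", "server02.example.com"]   # placeholder hosts

def read_power_watts(host, user="admin", password="secret"):
    """Query a server's instantaneous power draw over IPMI; return watts or None."""
    try:
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
             "dcmi", "power", "reading"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    for host in HOSTS:
        watts = read_power_watts(host)
        print(f"{host}: {watts} W" if watts is not None else f"{host}: no reading")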


Blades

Blades themselves are leading the way in energy savings. Although they pack computing into a denser form factor, the unique features of blades can save significant money. A 2007 Sine Nomine report compared the HP BladeSystem to an equivalent amount of compute power in a rack-mount configuration and found that the BladeSystem consumed a surprising 36 percent less power than the rack-mount servers. Thanks to “active cool” fans that spin faster or slower depending on cooling needs, the blade system delivers just the right amount of cooling to the right location.


Practices that save energy begin at the chip but also include rack- and room-level strategies.

Data Center Infrastructure

In addition to servers and blades, the data center itself and its associated infrastructure need to be closely examined for energy efficiencies. High-airflow rack cabinets, power distribution racks, and three-phase uninterruptible power supplies (UPSs) can reduce the power lost in converting electricity from ac to dc and back to ac, as some online UPSs do. Power distribution rack cabinets reduce the cable clutter under the raised floor, allowing better, more laminar airflow. And taking a single leg of a 480-volt (V), three-phase rack PDU may eliminate the need for a step-down transformer to deliver 277 V to the server. A server vendor can provide good advice, assessment services, and the products necessary to meet energy needs. Every watt of IT power requires about a watt of power to drive the cooling. Therefore, more efficient power distribution has a twofold benefit: reduced power losses in the distribution chain and less power required to cool the heat those losses give off. Many server providers in the industry are narrowly focused on only one aspect of the problem: reducing the power consumption of IT equipment.
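The twofold benefit is easy to see with a small worked example. The loss percentages below are assumptions chosen only to illustrate the effect; the one-to-one ratio of cooling power to IT power follows the rule of thumb above.

# Illustration of the "twofold benefit": a watt not lost in power distribution
# also avoids roughly a watt of cooling load. All percentages are assumptions.

IT_LOAD_KW = 500         # assumed IT equipment draw
LOSS_BEFORE = 0.12       # e.g. double-conversion UPS plus step-down transformer
LOSS_AFTER = 0.06        # e.g. more efficient UPS and 277-V distribution
COOLING_PER_WATT = 1.0   # ~1 W of cooling per watt of heat to remove

def facility_kw(loss_fraction):
    losses = IT_LOAD_KW * loss_fraction
    cooling = (IT_LOAD_KW + losses) * COOLING_PER_WATT
    return IT_LOAD_KW + losses + cooling

before, after = facility_kw(LOSS_BEFORE), facility_kw(LOSS_AFTER)
print(f"Facility load before: {before:.0f} kW, after: {after:.0f} kW")
print(f"Savings: {before - after:.0f} kW ({(before - after) / before:.0%})")

In this example, trimming 30 kW of distribution losses shows up as roughly 60 kW at the meter, because the cooling plant no longer has to remove the heat those losses produced.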


Building cooling systems complete the green IT package.

Big Air Chiller Units

New and revolutionary technology controls the delivery of cold air better than ever, ensuring it is distributed and directed to the areas where it is needed most. One product on the market is HP Dynamic Smart Cooling (DSC), which provisions data center cooling actively and makes constant adjustments. Most customers have data centers that feel like refrigerators, a sure indication that too much cooling is being delivered and energy is being wasted. A side benefit of DSC is that maintenance personnel will no longer need a winter coat to work in the data center. DSC provides cooling where it is really needed and prevents random distribution. The net energy savings can be significant: for a data center that uses liquid cooling in air handling units, the IT manager can save up to 40 percent on the cost of cooling IT equipment.
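To see how a 40 percent reduction in cooling costs translates to the overall bill, a quick sanity check helps. It assumes, per the rule of thumb earlier in this article, that cooling accounts for roughly half of a data center's electricity.

# Quick sense check of what a 40 percent cooling saving means for the total
# bill, assuming cooling draws about as much power as the IT equipment itself.

cooling_share_of_total = 0.5   # assumed share of facility power used for cooling
cooling_reduction = 0.40       # figure cited above for actively managed cooling

overall_reduction = cooling_share_of_total * cooling_reduction
print(f"Overall electricity reduction: {overall_reduction:.0%}")   # about 20%

A saving of that size on cooling alone is worth roughly a fifth of the total electricity bill under this assumption, which is why room-level cooling control belongs in any holistic strategy.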

What kind of savings can these solutions deliver? The answer certainly varies, because every data center is different. However, cutting electricity costs by a third, or possibly more, is very achievable if the right measures are taken.

Admittedly, not every solution will be right for everyone. Smaller customers will gravitate to the solutions that are quick and easy to deploy. Enterprise data centers may want to take advantage of the large savings opportunities that come through Dynamic Smart Cooling.

Whatever the case, partnering with a server manufacturer that can address the entire chip-to-chiller spectrum leads to the best outcome. Focusing on a single element, like an energy-efficient server, will limit the savings.