Thirty years ago, when I first started working in data centers, energy efficiency wasn’t a concern. It wasn’t even considered, beyond making sure systems were sized correctly to meet the load. IT pursued improved performance and facilities pursued increased reliability. But as energy costs rose and loads grew, the spotlight began to swing toward energy consumption and, more importantly, toward the “bottom line,” a.k.a. the electric bill.
Around 15 years ago, I asked an IT equipment manufacturer why each new generation of computers had to bring increases in heat density and overall watts per square foot. I explained how this created difficult challenges for critical facilities operations, such as faster and more extreme thermal transients following a cooling outage. His response was a true eye-opener for me. He said that those who select and purchase computer equipment are not responsible for paying the electric bill, nor are they responsible for cooling the equipment. All they care about is better IT performance. The IT department defined “improved performance” in terms of clock speed, processor speed, throughput, increased memory, and so on. No customer had ever purchased a server because it used less energy than the competition’s, so why invest in making the products more efficient?