Most data center owners and operators do not measure or track the computational output of their data centers. When working with a new data center client, I normally ask a few stock questions to obtain a broad understanding of how the facility consumes energy (What is the square footage? What is the typical IT load? How is the data center cooled?). My next questions are usually aimed at understanding the facility’s process efficiency (How do you measure useful work in your data center? Have you developed a productivity or kilowatt-per-compute (kW/compute) metric?).

Data centers are often described as the factories of the Information Age. Unlike traditional factories, however, it’s relatively rare to measure process efficiency actively in data centers. For example, the owner of a dishware manufacturing plant would likely be able to tell you precisely how much clay, glaze, and other materials are required to produce a single plate — but a data center manager generally would not be able to tell you how much energy is required to process a single health care record or complete a single financial transaction.

There are exceptions to this statement, of course. In the high-performance computing space, many users run LINPACK to benchmark the performance of their systems. Additionally, I have worked with some financial institutions and Fortune 500 companies that have developed custom productivity metrics. Many organizations use power usage effectiveness (PUE), but this metric simply compares energy consumed by computing equipment with energy consumed by overhead equipment, and does not measure output (i.e., computational performance).

Willdan has worked with a number of organizations in New York to help them develop metrics to measure their useful work. We approach productivity metrics in a data center in the same way we approach productivity metrics in a manufacturing plant. We work with the organization’s management and facilities teams to understand their core mission, the output they provide to their customers and the process needed to provide that output.

Most recently, we worked with a cable and high-speed internet provider and found that the ultimate output of the data center to its customers was a combination of video, voice and data. We worked with the provider to determine the throughput of each type of service provided in terabytes (TB). Once that number was identified, we were able to determine a kilowatt-hour per terabyte (kWh/TB) metric and understand how much energy was needed to support each unit of service.
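The kWh/TB calculation described above is straightforward once throughput is tracked per service. A minimal sketch follows; all of the throughput and energy figures are assumed for illustration and are not the provider's actual data.

```python
# Hypothetical monthly throughput per service, in terabytes (illustrative values).
monthly_throughput_tb = {
    "video": 18_500.0,
    "voice": 1_200.0,
    "data": 9_300.0,
}

# Assumed total data center energy consumption for the same month, in kWh.
monthly_energy_kwh = 1_450_000.0

# Combine the services into total throughput, then compute energy per unit of service.
total_tb = sum(monthly_throughput_tb.values())
kwh_per_tb = monthly_energy_kwh / total_tb

print(f"Total throughput: {total_tb:,.0f} TB")
print(f"Energy intensity: {kwh_per_tb:.1f} kWh/TB")
```

Tracked month over month, this single number lets the provider see whether energy use is growing faster or slower than the service it supports.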

Willdan also has worked with a financial institution to track the throughput, or computational load, of new servers per kilowatt of demand. Because newer servers can handle a higher compute load per kW, upgrading allowed the customer to increase its throughput while decreasing its demand by 32 kW. To gain the same computational load under a business-as-usual scenario (i.e., the existing servers taking on the additional load), the customer would have seen a net increase of 26 kW.
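The comparison above reduces to a simple demand delta between the two scenarios. A sketch, using the 32 kW and 26 kW figures from the project and an assumed baseline demand:

```python
# Assumed demand of the existing server fleet, in kW (illustrative).
baseline_demand_kw = 400.0

# Deltas from the project: upgrading cut demand 32 kW; business-as-usual
# (existing servers absorbing the added load) would have added 26 kW.
upgrade_delta_kw = -32.0
business_as_usual_delta_kw = +26.0

upgrade_demand_kw = baseline_demand_kw + upgrade_delta_kw
bau_demand_kw = baseline_demand_kw + business_as_usual_delta_kw

# The avoided demand is the gap between the two outcomes, independent of baseline.
avoided_kw = bau_demand_kw - upgrade_demand_kw
print(f"Demand avoided by upgrading: {avoided_kw:.0f} kW")
```

Note that the 58 kW swing is independent of the assumed baseline; only the two deltas matter.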

Now that both customers understand the process for determining these metrics, along with the inputs behind them, they are able to make more informed, more energy-efficient decisions when purchasing new equipment.