Most IT and facility managers want to fully understand how power is being consumed throughout a data center. Yes, they know how power is distributed to rows and racks, and how much is consumed by the IT equipment and the cooling system (roughly half and half in the typical data center). The facility manager also knows the total energy consumption and peak demand for every billing cycle. But this basic knowledge reveals nothing about how power is actually consumed by the very reason data centers exist: running the applications the organization needs to operate.
Why is it so important to know precisely how much power applications consume? Because power (or, more accurately, running out of it) is the reason most organizations outgrow a data center. That should be reason enough, of course, but it is also worth knowing that power-related problems are now the leading cause of application downtime, and that the operating expenditure to power a server over its useful life now typically exceeds the capital expenditure to purchase it.
Advances under Moore’s Law enable servers to keep pace with growth within the rack space available in most data centers. The resource likely to be exhausted first is instead power, as relentless growth consumes available capacity both directly, from upgrading and adding IT equipment, and indirectly, from expanding the cooling system.
One symptom of a potential, premature outgrowing problem is stranded power. Power gets stranded in almost every data center; it exists whenever the power distributed to a rack substantially exceeds the peak power the rack actually consumes. For a single rack, the mismatch may seem trivial. But multiply a few kilowatts each by the many racks that fill a data center, and the amount of stranded power often approaches 50% of the total capacity available.
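The arithmetic can be sketched in a few lines. All of the per-rack figures below are invented for illustration, not measurements from any real facility:

```python
# Hypothetical example: estimating stranded power across a set of racks.
# "provisioned_kw" is the power distributed (budgeted) to the rack;
# "peak_kw" is the measured peak the rack actually draws.
racks = [
    {"provisioned_kw": 10.0, "peak_kw": 4.5},
    {"provisioned_kw": 10.0, "peak_kw": 5.2},
    {"provisioned_kw": 8.0,  "peak_kw": 3.8},
    {"provisioned_kw": 12.0, "peak_kw": 6.1},
]

provisioned = sum(r["provisioned_kw"] for r in racks)
peak = sum(r["peak_kw"] for r in racks)
stranded = provisioned - peak   # capacity that can never be used elsewhere

print(f"Provisioned: {provisioned:.1f} kW")
print(f"Peak draw:   {peak:.1f} kW")
print(f"Stranded:    {stranded:.1f} kW ({stranded / provisioned:.0%})")
```

With these illustrative numbers, 20.4 kW of a 40 kW budget sits stranded, about 51%, which is how a few kilowatts per rack adds up to roughly half the facility's capacity.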
The usual cause of stranded power is reliance on notoriously conservative nameplate or datasheet ratings, especially for servers. These ratings specify the maximum power the server’s power supply could possibly draw, a level that rarely, if ever, occurs, even under peak workloads.
One option for minimizing stranded power would be to use a “guesstimate” of a little more than half the nameplate rating to “fully configure” each rack. But that would be foolish: while the approach might work initially, it would ultimately (and perhaps soon) fail as the inevitable and relentless increase in application workloads causes just enough additional power consumption to begin tripping circuit breakers.
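A small sketch shows why the guesstimate erodes. The nameplate rating, the current peak draw, and the growth rate below are all assumptions chosen for illustration:

```python
# Hypothetical sketch of why provisioning a rack at a flat fraction of
# nameplate eventually fails as workloads grow. All figures are invented.

NAMEPLATE_KW = 10.0                    # nameplate rating of the rack's gear
PROVISIONED_KW = 0.55 * NAMEPLATE_KW   # "a little more than half" = 5.5 kW
ANNUAL_GROWTH = 0.05                   # assumed yearly growth in peak draw

current_peak_kw = 4.8                  # measured peak today: fits comfortably
year = 0
while current_peak_kw <= PROVISIONED_KW:
    year += 1
    current_peak_kw *= 1 + ANNUAL_GROWTH

print(f"Provisioned at {PROVISIONED_KW:.1f} kW, the rack exceeds "
      f"its budget in year {year}.")
```

Under these assumptions a rack that looks safely provisioned today blows through its budget in year 3; the guesstimate buys time, not headroom.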
A better option is to know precisely the peak power consumed and the “work per Watt” performed (measured in transactions per second per Watt) by the applications running on every server. Fortunately, the new UL 2640 Test Method for Server Performance standard from Underwriters Laboratories provides these and other useful metrics, enabling IT and facility managers to know at last how power is being consumed throughout the data center. And (if you’ll forgive the pun) that knowledge is power!
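The work-per-Watt metric itself is a simple ratio, and it can rank servers differently than raw throughput does. The two servers, throughput figures, and power draws below are invented for illustration, not results from UL 2640 testing:

```python
# Hypothetical comparison in the spirit of a work-per-Watt metric
# (transactions per second per Watt). All numbers are illustrative.
servers = {
    "server_a": {"tps": 12_000, "peak_watts": 400},
    "server_b": {"tps": 9_000,  "peak_watts": 250},
}

for name, s in servers.items():
    tps_per_watt = s["tps"] / s["peak_watts"]
    print(f"{name}: {tps_per_watt:.1f} transactions/sec per Watt")
```

Note that server_b, despite the lower raw throughput, delivers more work per Watt (36 vs. 30), which is exactly the kind of insight peak-power and efficiency metrics surface.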