From an engineering standpoint, data centers exist to deliver IT workloads in a manner that is both capital and energy efficient. Yet the power delivery method remains rigid and is usually misaligned with IT SLAs, which significantly undermines both performance and infrastructure capital efficiency.
For many in the data center sector, one of the most pressing concerns is that much of the world's data center infrastructure operates in a manner that is financially suboptimal and environmentally unsustainable. While capital continues to flow steadily into the sector, some are questioning whether data centers are as capital efficient as they could be. If a data center is only using a fraction of the available power, then the capital tied up in inflexible power infrastructure sits idle. The question is, who is paying for that stranded capacity and unused space?
All data center stakeholders, from general managers at hyperscalers to colocation facility engineers and their end-user customers to enterprise CTOs and CIOs, have an interest in operating sustainable data centers and saving money while making a return on investment.
One answer is to change how data center power is provisioned wherever static power infrastructure is not aligned with IT workloads.
Changing IT Needs Responsive Power
As power usage rises and falls in infrastructure- and platform-as-a-service environments, investors and operators are seeking savings by looking for ways to align their power SLA with the IT SLA. The fundamental problem is that virtually all data center power designs are static. Changing to a dynamic power design is key to eliminating waste and maximizing power utilization. The need for "power as a service" has never been greater, and it can be met while saving CapEx and OpEx.
The Adaptable Redundant Power Journey
Adaptable Redundant Power (ARP), developed at i3 Solutions Group, demonstrates that data center power systems can be both flexible and responsive, driving efficiencies and lowering operating costs.
ARP works by capturing and then using stranded capacity in the data center. By running power modules at higher utilization, it delivers significantly improved returns on invested capital.
As occupancy incrementally increases, an ARP-enabled power design requires fewer power modules, and those modules run at higher utilization. This means that as the data center load increases, the capital cost of deploying new modules can be deferred, because the operator can now access what would otherwise have been stranded capacity in the power system.
ARP does this by provisioning predefined, variable redundancy levels to IT loads, drawing redundant power from islands of capacity that would otherwise remain trapped in the electrical infrastructure. The result is significant capital deferral and substantial capital savings.
In addition, the operating benefits of ARP begin immediately, when data center utilization rates are still low.
For example, in a 10-MW data center with five halls of 2 MW each, an ARP-based design starts to deliver benefits when utilization rates are at their lowest, when only one or two halls are in use and IT workloads are just beginning to ramp up. At this stage, ARP end-state power cost savings as a percentage of total power cost are at their highest. This means operators are not buying unused power or wasting energy. And as data center utilization rates rise, ARP ensures CapEx and OpEx savings throughout the lifecycle of the data center.
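The deferral effect in the five-hall example can be sketched with a simple capacity model. This is an illustrative sketch only: the 2-MW module size and the shared N+1 redundancy rule are assumptions chosen to match the example above, not i3 Solutions Group's published ARP sizing method.

```python
# Illustrative model of deferred module deployment in a 10-MW facility
# built as five 2-MW halls. Assumptions (not from the article): each
# module carries one 2-MW hall; a conventional 2N design duplicates
# every module on day one; the adaptive design deploys one module per
# live hall plus a single shared redundant module.

MODULE_MW = 2.0  # assumed module capacity: one module per hall
HALLS = 5

def static_2n_modules() -> int:
    """Conventional 2N design: all modules installed up front."""
    return 2 * HALLS

def adaptive_modules(halls_live: int) -> int:
    """Simplified adaptive design: N live modules plus one shared spare."""
    return halls_live + 1

for live in range(1, HALLS + 1):
    deployed = adaptive_modules(live)
    deferred = static_2n_modules() - deployed
    utilization = (live * MODULE_MW) / (deployed * MODULE_MW)
    print(f"halls live {live}: modules deployed {deployed} "
          f"(vs {static_2n_modules()} static), deferred {deferred}, "
          f"module utilization {utilization:.0%}")
```

Under these assumptions the gap is widest early on: with one hall live, two modules serve the load instead of ten, and the deferral shrinks as halls fill, mirroring the article's point that the relative benefit is greatest at low occupancy.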
The answer to questions about capital efficiency and environmental sustainability in the data center sector must be seen through the lens of power as a service, and that is what ARP achieves.