To meet the combined pressures of exponential cloud growth and an unprecedented AI demand curve, every global hyperscale data center firm is pressing ahead with new projects, including acquiring suitable colocation space.

The increase in training and inference data for AI and machine learning (ML) requires massive amounts of computational power. IT infrastructure providers are already warning that, once the models are put to use for real-world problem-solving, we will rapidly enter the “exascale” era. AI computational power is measured in exaflops (“flops” stands for floating-point operations per second, and the “exa” prefix denotes 1 quintillion, or 10^18), and data storage is measured in exabytes (1 billion gigabytes).
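To make these prefixes concrete, here is a short sanity check of the unit arithmetic (the 5-exaflop figure is just an illustrative input):

```python
# Scale arithmetic for "exa" units.
EXA = 10**18  # the "exa" prefix: 1 quintillion
GIGA = 10**9  # the "giga" prefix: 1 billion

exaflops = 5  # illustrative system size
ops_per_second = exaflops * EXA

# One exabyte expressed in gigabytes (decimal units)
exabyte_in_gb = EXA // GIGA  # = 1 billion GB

print(f"{exaflops} exaflops = {ops_per_second:.0e} operations per second")
print(f"1 exabyte = {exabyte_in_gb:,} gigabytes")
```

The same decimal prefixes apply to both compute (flops) and storage (bytes), which is why the article can quote each on a single scale.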

Exaflops and exabytes may be unfamiliar terms to data center engineers, but they represent the new computational scale that will require leaps in power provision for servers and cooling. Hyperscalers are already committed to building exaflop supercomputers for AI applications. Meta, owner of Facebook, is building a 5 exaflop supercomputer. Inflection AI is constructing the world's largest AI cluster, consisting of 22,000 NVIDIA H100 Tensor Core GPUs. Google is planning a compute machine that scales to 26,000 GPUs (26 exaflops of AI throughput). AWS is building GPU clusters called UltraClusters to provide an aggregate performance of up to 20 exaflops. And, to accommodate AI integration, Equinix has identified a $22 billion market for data center services that support AI.
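The cluster figures above can be cross-checked with simple arithmetic. A rough sketch, using the numbers quoted for Google's planned machine and deriving the implied per-GPU throughput (the function name is ours, for illustration only):

```python
# Back-of-envelope check: aggregate exaflops vs. GPU count.
def per_gpu_petaflops(total_exaflops: float, gpu_count: int) -> float:
    """Implied per-GPU throughput, in petaflops, for a given cluster."""
    total_petaflops = total_exaflops * 1000  # 1 exaflop = 1,000 petaflops
    return total_petaflops / gpu_count

# 26 exaflops across 26,000 GPUs implies roughly 1 petaflop per GPU,
# which is in the right range for an H100-class accelerator at low precision.
print(per_gpu_petaflops(26, 26_000))
```

Checks like this are useful when comparing vendor announcements, since quoted exaflop totals often assume different numeric precisions.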

In computational power terms, it can be considered an arms race — expect more and bigger announcements.

But, such a race raises pressing questions for operators: Could escalating power demand become a critical obstacle to AI digital infrastructure development? Where will the power for these workloads be sourced?

Who’s got the power?

While remaining discreet about it in public, tech titans, including the aforementioned Meta, Microsoft, Google, and AWS, are exploring grid-independent power generation options as a strategic move to sustain their AI ambitions.

To address this, hyperscalers are investigating smart grid technologies, microgrids, and advanced power management systems to regulate and optimize energy distribution within their facilities.

Stabilizing power fluctuations within data centers is a paramount concern. AI workloads may lead to erratic power usage patterns, demanding robust stabilization mechanisms.

Whatever the power sources, companies will require battery energy storage systems (BESSs) at scale and advanced UPSs — conditioning and stabilization technologies capable of seamlessly transitioning between power sources and ensuring uninterrupted operations.
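The sizing question for such a battery system reduces to a simple ratio of stored energy to facility load. A minimal sketch, with all figures assumed for illustration:

```python
# Hypothetical BESS ride-through sizing: how long a battery bank can carry
# a facility's load while transitioning between power sources.
def ride_through_minutes(battery_mwh: float, load_mw: float) -> float:
    """Minutes of runtime a battery bank provides at a constant load."""
    return battery_mwh / load_mw * 60

# Assumed example: a 50 MWh battery bank backing a 100 MW AI campus
print(ride_through_minutes(50, 100))  # 30.0 minutes
```

In practice the usable window is shorter, since operators reserve depth-of-discharge headroom and must also ride out erratic AI load swings, but the ratio frames why grid-scale storage is needed at all.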

What part will commercial colo and third-party power play?

Securing power also raises questions for colo and cloud providers.

For the long term, can big colo companies find sufficient power to host AI purely by sourcing renewable energy? Can hyperscalers build or buy hundreds of megawatts of green power through power purchase agreements (PPAs) and renewable energy certificates (RECs) to run their platforms?

Doesn’t a hybrid mix of grid access and clean on-site power generation, combined with stabilization and conditioning, seem like a more achievable and feasible solution?

Sustainable AI

Meeting AI demand puts pressure on organizations already striving to honor net-zero pledges.

By investing in renewable energy sources for their data centers, hyperscalers aim to minimize their environmental impacts. Such efforts include Google's participation in the Renewable Energy Buyers Alliance, Microsoft's carbon-negative pledge, and AWS's investments in renewable energy projects. The goal of achieving energy independence will come through on-site power generation and proximity to renewable energy resources.

All are part of a new understanding of power generation.

New power strategies

The future of AI relies heavily on a seamless, efficient, and sustainable power supply to these colossal data centers.

Achieving a degree of grid independence in hyperscale AI data centers involves integrating renewable energy, implementing energy storage solutions, employing microgrids with on-site power and conditioning, and leveraging AI itself to optimize energy consumption and ensure uninterrupted operations.

The shift toward energy independence signifies a transformative phase for hyperscale companies as they reshape the AI infrastructure and data center landscapes.

As tech giants steer toward more sustainable and reliable power solutions, they are met with many challenges. Figuring out how to power data centers to meet the soaring demands of AI marks the beginning of a new era in technological innovation.