The complexity of data center environments was already growing by leaps and bounds before 2020, but when the COVID-19 pandemic hit, that complexity accelerated to a new level.
Operators who thought they had a handle on managing their assets prior to the pandemic were left scrambling to keep up with the accelerating shift toward distributed IT environments, demand for greater capacity to meet new business requirements, and a shrinking pool of qualified staff.
As data center operators seek to navigate the continuing complexity of the landscape in 2022 and beyond, many will do so amid mounting pressures to be more cost-effective, higher performing, and more sustainable. Thankfully, there are tools and approaches available to help untangle this complexity in a way that can lay a foundation for future flexibility and growth.
The increasing complexity of IT, technological advances, and business demands have upended the idea of the traditional, centralized data center environment and even redefined the concept of the data center itself. Because of this, each environment is becoming so distinct that simple, one-size-fits-all solutions are insufficient to meet the growing list of challenges operators face.
One of those challenges is the explosion of compute and storage at the edge. The drive to bring business-critical data closer to end users, while alleviating capacity and bandwidth issues in the core data center, has moved more critical infrastructure into distributed environments, creating an unwieldy architecture that is increasingly difficult to manage. According to Uptime Institute, 58% of companies expect to see a significant increase in edge computing going forward. As more distributed environments come online, it becomes more challenging to monitor and address issues, such as power outages, when they inevitably occur.
In the face of greater demands and a finite amount of data center space, operators should consider whether there’s an opportunity to optimize assets in their core data centers and better manage capacity. The cloud may be attractive to some, but the same Uptime Institute survey found 73% of operators were unwilling to shift critical workloads to the public cloud. Rather than buying capacity in colocation facilities or building new data centers, both of which are costly and result in less control over assets, many wonder how to better use the capacity and assets that currently exist.
Operators weigh these concerns while managing the same significant hiring struggles that have affected many industries during the pandemic. The Uptime Institute survey found roughly half of operators have difficulty finding qualified staff to fill open data center positions, leading to a concerning lack of on-site support or staff to manage critical functions. This can make addressing issues with capacity and workloads even more daunting, as operators are forced to do more with fewer hands.
These challenges alone, and the complexities caused by them, are enough to keep even the most seasoned operator up at night. Thankfully, there are ways to employ both technology and processes to help mitigate these issues while enabling flexibility and the opportunity to optimize for future growth.