Energy efficient, renewable, sustainable, carbon neutral, and of course the catch-all, going green — all make for great sound bites, but how do we really quantify how well we are doing in the age of gigawatt-scale data centers?
If you like simplicity when it comes to considering which data center availability “standard” you want to follow, the 1-2-3-4 rating system may be just the right approach for you.
While attending the DCD Enterprise conference in NYC in May, I was amazed at the number of presentations and vendor booths highlighting edge computing, 5G, and digital transformation.
Happy 2018 — and if you did not hear the terms container, cloud, colocation/MTDC, hybrid, edge, modular, and OCP applied to data centers often enough last year, it is time to update your subscription to Mission Critical Magazine.
As is my usual practice for my year-end column, I asked the official Hot Aisle Insight crystal ball for guidance on the latest developments and trends in data centers.
By virtue of its name, and ever since the days of the mainframe, we have traditionally thought of the “data center” as the central point for data processing and storage, as well as the nexus of the data network.
A long time ago, in the mid-1990s, Ken Brill, Jedi Knight, brilliantly created the concept of a “tier system of availability” based primarily on the redundancy level of a facility’s power and cooling infrastructure, and subsequently founded the Uptime Institute (UI).