The old joke in New York City is that the western region of the U.S. starts on the west bank of the Hudson River.  Before 9/11, most data centers sat within blocks of each other along the 4 Line.

While they started out like lab spaces in the ’70s, data centers evolved.  By the ’90s, most had transformed into enterprise-owned, purpose-built “island fortresses” with all data held inside.  Banking saw the first distributed data centers, built for DR/backup while processing stayed in regional markets, with mixed results for data availability and application latency.  Some remember overnight data compilation and the morning reports that arrived on green-lined tractor-feed paper as a result.  Sarbanes-Oxley forced further physical dispersion.

Data never left that physical or logical ecosystem, and failovers were often manual.  Site selection was closed-loop and facility-based.  It worked well at the time.  Until the mid-2000s, the end user was always somewhere under the roof of the business or tethered to that enterprise’s data.  Whether it was a customer-facing website or a company’s retail branch, you were directly linked to the company’s central nervous system.

From the Physical to the Logical

Things are a lot different today than they were in 2008 — you can now stream “The Mandalorian” just about anywhere you have a Wi-Fi signal.  Data presence is anywhere and everywhere that it’s consumed, generated, or transmitted.  That dynamic drove massive changes in app and network architecture, which, in turn, drove radical changes in facility selection and design.  Facilities simply followed the business.

Many older enterprise facilities don’t fit cleanly into this new model.  They may be in the wrong place, lack sufficient power or space, or be impossible to adapt to newer hardware and off-premises data.  Amazon, Facebook, and Microsoft have all downgraded older data centers to lab space or abandoned them entirely.

Sites now follow the distributed nature of the applications and networks they host and serve.  Older, enterprise-owned data center “fortresses” still exist to support network functions or legacy apps that are tough to migrate.  Physical hubs have been replaced by logical ones.  A facility now operates in dialog with several peer sites, shifting traffic and responding to demand fluidly while maintaining the requisite enterprise-level redundancy.  Cloud operations are not monolithic; they host dozens, even hundreds, of organic applications, further complicating site selection and app latency across regions as large as North America.  The rules have changed, and you likely don’t even own the site anymore.

Site Criteria Evolve

Historical site selection criteria still apply.  They are codified in several places: the Uptime Institute, EIA/TIA, and ANSI/BICSI.  These are all beacons of common sense — build near your long-haul fiber, don’t put the building under the airport’s flight path, place it somewhere strategic to your business or customers, hide in plain sight, and so on.  Those rules worked for asynchronous backups of on-premises data, and they still govern the physical side of site selection.

But site selection has evolved to weigh development expenditure (DevEx), capital expenditure (CapEx), operational expenditure (OpEx), enterprise reliability, and app latency.  The older physical criteria still apply, but several new constituencies now force far more complex deliberations and decisions.  Financial incentives, data privacy, network architecture and latency, power cost, workforce availability, and meteorology have all become important.  Each of these considerations lends itself to a clear cost/benefit analysis.
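By way of illustration, that cost/benefit framing can be reduced to a simple weighted score.  The Python sketch below is hypothetical: the candidate sites, factor scores, and weights are all invented, and a real evaluation would substitute actual DevEx/CapEx/OpEx figures, measured latency, and reliability targets.

```python
# Hypothetical illustration of cost/benefit site scoring.
# Factor scores are normalized 0-1 (higher is better); every
# name and number here is invented for the sketch.

CANDIDATES = {
    "rural_site":  {"dev_ex": 0.8, "cap_ex": 0.7, "op_ex": 0.9, "latency": 0.5,  "reliability": 0.8},
    "metro_site":  {"dev_ex": 0.4, "cap_ex": 0.3, "op_ex": 0.4, "latency": 0.95, "reliability": 0.9},
    "legacy_site": {"dev_ex": 0.3, "cap_ex": 0.2, "op_ex": 0.3, "latency": 0.9,  "reliability": 0.6},
}

# Relative importance of each criterion; a real model would be
# tuned to the specific business and workload.
WEIGHTS = {"dev_ex": 0.15, "cap_ex": 0.20, "op_ex": 0.25,
           "latency": 0.25, "reliability": 0.15}

def site_score(factors):
    """Weighted sum of the factor scores for one candidate site."""
    return sum(WEIGHTS[k] * v for k, v in factors.items())

# Rank candidates, best first.
for name in sorted(CANDIDATES, key=lambda n: site_score(CANDIDATES[n]), reverse=True):
    print(f"{name}: {site_score(CANDIDATES[name]):.2f}")
```

Normalizing every factor to the same 0-1 scale is what lets disparate concerns — tax incentives, power cost, latency — land in one comparable number.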

Often complicating matters is how to deploy capacity — whether via colo, cloud service provider (CSP), or build-to-suit/own.  Billions of dollars of IT and real estate development funds facing exponentially expanding data and client demands can be charitably described as impatient.  That money wants to work hard.  One rule of thumb: if you own it, it takes longer to get the asset into play.

The key difference between site selection criteria of yesterday and today is simple. Older criteria speak only to risk and physical location, with modest sensitivity to development and CapEx costs and little consideration for facility OpEx optimization.  DevEx, CapEx, and OpEx are now dominant site selection forces, along with latency and reliability.

Data and network architecture have a strong bearing on why and where you build.  Cloud-based operations are built and linked regionally.  Some use POP sites to aggregate network traffic, or buffering data centers ahead of a larger site (think of 5G, but at a much bigger scale).  Others allocate a campus or building to storage.  Some build a gateway into a physical region that has yet to be developed or whose development risks are too ungainly to overcome.  All of this serves application availability, latency, and failover.
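The logical-hub behavior can be sketched in the same spirit: steer each request to the lowest-latency healthy region, and let traffic shift automatically when a site goes dark.  The region names and latency figures below are invented, and real traffic steering (DNS, anycast, or load balancers) is considerably more involved; this is only a minimal sketch of the routing logic.

```python
# Minimal sketch of latency-aware routing with automatic failover
# across a set of cooperating regional sites.  All names and
# numbers are hypothetical.

REGION_LATENCY_MS = {   # as measured from a given client population
    "us-east": 18,
    "us-central": 42,
    "us-west": 71,
}

def pick_region(healthy):
    """Return the healthy region with the lowest latency."""
    candidates = [r for r in REGION_LATENCY_MS if r in healthy]
    if not candidates:
        raise RuntimeError("no healthy regions available")
    return min(candidates, key=REGION_LATENCY_MS.get)

# Normal operation: traffic lands on the closest site.
print(pick_region({"us-east", "us-central", "us-west"}))  # -> us-east

# us-east goes dark: traffic shifts with no manual failover.
print(pick_region({"us-central", "us-west"}))             # -> us-central
```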

What you end up with today is a lattice of cooperating sites that’s logically self-healing.  It’s a far cry from the state-of-the-art buildings of 15 years ago.  Buy, build, or lease — the questions are all the same; only the real estate solution varies.