The modern data center business emerged in the ’90s, with a subsequent boom and bust in the early 2000s. Fallow facilities were still being absorbed as late as 2007, with gross inventory growth essentially static. Equinix was a fortunate survivor, raising money via a public offering in 2000 and reaching a valuation of approximately $432 per share in today’s terms on the NASDAQ — yes, Equinix peaked at over twice its current valuation with just $20M or so in revenue. Similarly, Digital Realty went public in 2003, making the company just over 11 years old.
The relatively nascent nature of the industry has produced some shocking investment debacles perpetrated by some of the nation’s most respected brands. Long-term, fixed-asset investing is risky, and in the contemporary data center industry, self-taught IT staff are applying IT budget cycles and investment analysis to 20-year fixed assets of $50 to $100 million or more. This perfect storm has resulted in an industry where over-provisioning by 100% or more is commonplace, with most CFOs disavowing responsibility and “leaving it to the techies.” Well-respected enterprise and Internet firms, such as Intuit, Dell, and Zynga, have grossly over-invested in data centers.
Limited skills with limited oversight are a dangerous combination, but it only gets worse when an environment undergoing rapid technological change is added to the mix. Inexorably, Moore’s Law continues to change the relationship between power and cooling, while virtualization and solid-state drives are improving utilization rates and lowering power consumption as well. Rapidly changing technology makes any long-term data center forecast highly suspect.
Forecasting pain is widespread. Sabey and Benaroya, two privately held Pacific Northwest data center investors, are struggling due to poorly placed bets on data centers. Equinix reluctantly threw in the towel and now outsources numerous builds to Digital Realty Trust. How should decisions be made when planning large-scale, long-term, fixed-asset (e.g., data center) investments that span multiple budgeting cycles in a field undergoing rapid technological change?
The answer is disciplined, multi-year, cross-functional investment planning that produces concrete and measurable ROI. The following are a few metrics to use when considering building or buying a data center:
- A concurrently maintainable data center can be built for between $8 and $10 million per megawatt by major service providers. Our own rapid deployment designs corroborate these figures. The shell and real estate are, roughly, only 10% of these figures.
- The weighted average cost of capital (“WACC”) of popular data center REITs, as reported to Wall Street, is approximately 12% over 10 years for new construction.
- If you cannot better the above figures by a significant margin, then execution, technology, and obsolescence risk should push any rational decision to outsourcing (e.g., shifting risk to third parties).
- Given the smaller scale and infrequency of purchase, it’s highly unlikely that the average enterprise can beat the above CAPEX figures; however, one way to reduce capital cost is via application-layer redundancy, which diminishes the need for facility redundancy and ultimately the capital cost. The phone system can be a good yardstick — if the phone system is reliable enough, then you don’t need a Tier III, concurrently maintainable facility. The same is true if you have two facilities — both can be far less redundant, saving significant CAPEX.
- Electricity rates are a simple source of OPEX savings. Every one-cent reduction in utility rate, as measured in cents per kilowatt-hour, yields approximately $800,000 in cash savings per megawatt of critical load over 10 years.
- Beyond electricity rates, there is little OPEX left to squeeze: large data center providers already run PUEs below 1.3, and at their scale, additional cost savings via efficiency or operations management are unlikely.
- Location doesn’t matter. With 10G layer-two transport from coast to coast running at between $2,000 and $4,000 per month, bandwidth is exceedingly cheap and ubiquitous. Focus on the cost of power and use the network to access IT services. This is not a new concept — in 1991, IT operations for the Japan office of my former Fortune 500 employer were partially managed from Finland. Many firms now routinely manage computer infrastructure distributed across the globe.
- The biggest risk is over-provisioning, not under-provisioning, unit cost, or natural disasters. If a facility is only 50% occupied, your unit CAPEX and OPEX just doubled — which describes the average Equinix customer. At average utilization of 50% and average cabinet rental rates of over $2,000 per month, the true cost to an Equinix customer averages around $4,000 per month per cabinet, or roughly $1,200 per kilowatt of critical load per month.
- Get a good engineer. Given enough money, anyone can build a data center, but good engineering results in a data center that is low cost, well occupied, and scales with demand. Don’t allow planning or engineering skill gaps to force purchases that break the bank.
- A full data center is a good thing — the cloud and short-term colo are for everything else. Make sure the data center can be fully occupied immediately, and treat it like a base-load power plant, which means keeping it at full load 100% of the time. It’s the only way to take advantage of economies of scale with a long-term fixed asset.
- Lastly, view “strategic” pressure skeptically, and let financial models drive decision-making. Some of the GAMFY (Google, Apple, Microsoft, Facebook, Yahoo!) crowd are buying power plants for “strategic” reasons, but it makes no sense when the WACC of these firms far exceeds that of power companies. An astute CFO would quash these plans and outsource instead.
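The electricity-rate savings cited above can be checked with simple arithmetic. A minimal sketch, assuming one megawatt of critical load running continuously at 8,760 hours per year (the article’s ~$800,000 figure implies a slightly lower load factor):

```python
HOURS_PER_YEAR = 8_760  # 365 days x 24 hours

def rate_savings(cents_per_kwh: float = 1.0,
                 critical_load_kw: float = 1_000.0,
                 years: int = 10) -> float:
    """Dollars saved over the period by a lower utility rate,
    assuming the critical load runs continuously at full power."""
    return critical_load_kw * HOURS_PER_YEAR * years * (cents_per_kwh / 100)

# One cent less per kWh, per megawatt, over 10 years:
print(rate_savings())  # → 876000.0, i.e. roughly the ~$800,000 cited
```

At a continuous full megawatt the saving works out to about $876,000 per cent per decade; the rounder $800,000 in the text corresponds to a load factor a bit under 100%.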
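The over-provisioning arithmetic above follows the same pattern: at 50% occupancy, the effective unit cost doubles. A sketch, where the ~3.3 kW per cabinet used to back into the ~$1,200/kW figure is an assumption, not stated in the text:

```python
def effective_monthly_cost(list_price: float, utilization: float) -> float:
    """Effective cost per occupied unit when only a fraction of
    what you pay for is actually used."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return list_price / utilization

cabinet = effective_monthly_cost(2_000, 0.50)  # $2,000/mo cabinet, half full
per_kw = cabinet / 3.3                         # assumed ~3.3 kW per cabinet
print(round(cabinet), round(per_kw))           # → 4000 1212
```

The same function applies to a whole facility: a $10M/MW build at 50% occupancy costs $20M per usable megawatt, which is why the text calls over-provisioning the biggest risk.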
Smart companies have made some very large mistakes…don’t be one of them.