Data center design and colocation RFPs continue to list a DCIM solution as a requirement. It is an easy box to check, or not, as is more often the case. As with Tier certification, ambiguity and misunderstanding of the technology on the part of the RFP author, or the actual end user, has been the Achilles’ heel for DCIM, and the reason it gets value-engineered out of projects.

The maturation and wide adoption of data center infrastructure management (DCIM) will accelerate as the sophisticated data center CIO broadens their knowledge of facilities operations, cozies up to facility management teams with decades of BMS experience (DCIM for them!), and learns how to distill the myriad available data points into something not just useful, but that will actually be used.

In preparation for this post, and to capture a broad market view, major universities, biotech and pharmaceutical firms, major telecommunications and financial companies, and colocation providers were queried on a national level.

DCIM: The Last Five Years

The rise of DCIM is well documented: around 2009, a technology that did not yet truly exist commercially invigorated the market and yielded dozens, and eventually more than 100, companies all claiming to be the best DCIM provider. The pitch was an opportunity to increase MEP system utilization by better understanding the IT load. Systems loaded to a higher percentage of their capacity run more efficiently and reduce OPEX; at least, that was one of the numerous claims. As with all technology advances, there were early adopters who led the market and lived to share their varied success.
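The utilization claim is easy to illustrate with back-of-the-envelope arithmetic. The sketch below is purely hypothetical, not vendor data: it assumes a constant 300 kW IT load, notional UPS efficiency figures (lower at light load), and an assumed utility rate, and compares the annual cost of conversion losses at two utilization points.

    # Back-of-the-envelope: why higher utilization can cut OPEX.
    # All figures below are illustrative assumptions, not vendor data.
    it_load_kw = 300.0        # constant IT load to be supported
    hours_per_year = 8760
    cost_per_kwh = 0.10       # assumed utility rate, USD

    scenarios = {
        # scenario name -> assumed UPS efficiency at that loading
        "lightly loaded (4 modules, ~25% each)": 0.90,
        "consolidated (2 modules, ~50% each)": 0.94,
    }

    for name, efficiency in scenarios.items():
        input_kw = it_load_kw / efficiency   # power drawn to deliver the IT load
        loss_kw = input_kw - it_load_kw      # converted to heat in the UPS
        annual_cost = loss_kw * hours_per_year * cost_per_kwh
        print(f"{name}: {loss_kw:.1f} kW of loss, ${annual_cost:,.0f}/yr")

Even with generous assumptions, the delta is real money, and that is exactly the kind of claim DCIM data can confirm or refute for a specific site.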

Approximately five years ago, a major U.S. health care company became an early DCIM adopter. A lot of time and money was spent deploying and managing their system. The vendor they selected is now out of business, and while the DCIM solution remains functional, the company makes little use of the data.

Worse, in hindsight they would have selected other data sets, and now must choose between living with the investment, going back out to market, or implementing something custom. This particular company chose to go back to market and eat the losses. A similar story was told by a major Boston-area university, but, as a lesson learned, they chose to build their own product with potentially lower-cost in-house staff whose fate only they controlled.

A New England research facility has deployed a number of what they consider best-of-breed software packages to tackle DCIM, but wrote their own code to link them and extract the exact data they wanted to track and evaluate; a hybrid strategy, to be sure. Why? They evaluated the market and could not find a single product that met their myriad needs. A national media outlet similarly combines commercial DCIM with an internally developed system. A sketch of that glue-code pattern follows below.
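What the glue code looks like will vary by site, but the pattern is simple: poll each tool’s interface and join the results on a shared key. A minimal sketch, assuming two hypothetical REST endpoints and field names; nothing here reflects any real product’s API.

    # A minimal sketch of the "glue code" hybrid strategy: poll two separate
    # best-of-breed tools and join their data on a shared rack identifier.
    # Endpoint URLs and field names are hypothetical, for illustration only.
    import requests

    BMS_URL = "https://bms.example.internal/api/rack-power"  # hypothetical
    ASSET_URL = "https://dcim.example.internal/api/assets"   # hypothetical

    power_by_rack = {r["rack_id"]: r["kw"] for r in requests.get(BMS_URL).json()}
    assets = requests.get(ASSET_URL).json()

    # Emit only the exact data the operators decided they would actually use:
    # rack ID, installed device count, and measured power draw.
    for rack in assets:
        rack_id = rack["rack_id"]
        print(rack_id, len(rack["devices"]), power_by_rack.get(rack_id, "no reading"))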

A top-five U.S. financial firm agreed that off-the-shelf products are a difficult fit for unique needs and, despite multiple product evaluations and substantial human resource investment, has not selected a product. An international pharmaceutical company, too, went to market, but ended up buying from a custom vendor to meet their exact requirements.

Finally, despite having a few data center environments and a few thousand server cabinets, a local affiliate of a national health care network said it was too small to implement a DCIM solution.

The first six engagements in the writing of this post yielded exactly one happy customer from the last five years, and that single company had its product built by a custom vendor. The thread tying all these organizations together was that each needed something different and unique, and while so many products existed, the market was not meeting the demand. What would happen if these entities shared a common facility? Does a broad spectrum of services adapt better when multiple end users are served? Said another way, given the varied needs of a multi-tenant environment, how did colocation adopt DCIM?

For colocation, DCIM was a perceived differentiator: if one facility can check a box that another cannot, there is a perceived advantage, especially if the RFP is being drafted by someone who does not understand DCIM. The problem for many colo owners was whether DCIM could be monetized, literally or by way of reduced CAPEX or OPEX. The majority of colo operators do not fill their white space with their own IT, and DCIM offered little to no gain in that arrangement.

In fact, DCIM presented security concerns that colo was not previously dealing with. If the DCIM deployment could be built into the lease, deployed securely, and offer management panels, even view-only ones, could it actually help close more deals? Several major colos have their own home-brewed systems and ensure they remain compliant with evolving data center software. But what about everyone else? In discussions with three national colo providers, two chose not to implement, and one built a home brew. As part of a broader search, several colo providers’ websites claim to offer DCIM, but a quick forensic analysis reveals more of an enhanced BMS strategy.

The DCIM industry was not reacting to the “voice of the customer”!

From a consulting engineering point of view, it can be difficult to acquire information from the IT side of an organization, so the MEP design is often based solely on perceived electrical load and calculated thermal management. Worse, IT may not even be involved in the design process. The engineering community is largely focused on the major mechanical and electrical systems, and is thus more in tune with facility operations and BMS. Asking engineers to weigh in on what is effectively a BMS-for-IT platform that will interface with the technology is a bit of a gray area, and yet it is commonplace. The majority of data center consulting engineers never set foot in the commissioned white space, have no idea what technology may be deployed, and, worse, may not be IT savvy enough to add value. As a matter of fact, many architects are asked this question too, and they are even further divorced from the technology than the engineers. In the end, as with PUE, the communication gap and/or lack of IT involvement in data center design has made it very easy to remove DCIM from RFP requirements.

After five years, those DCIM users who were ahead of the curve complained about:

  • Costly investment, both human and financial

  • Apples-to-oranges pricing models

  • Too much and unusable data

  • Dropped packets and lost data

The silver lining is that these early adopters saw the good and the bad in an industry that started as a few providers, grew exponentially, and, according to Gartner and Forrester, created a billion-dollar market. They may even have figured out which data they could use, benefited from it, and provided feedback to the market to improve the environment en masse. In summary, now we know what to ask for.

DCIM: Today

Consolidation and adoption are the stories of the day: many articles over the past five years list CA Technologies as a major contender, and yet they discontinued their DCIM product last year. Nlyte recently acquired FieldView, Schneider acquired Viridity, and, as in the fiber industry, a few players such as Cormant and Future Facilities license someone else’s software and add an overlay of value. Meanwhile, DCIM is a requirement that is now getting its own RFP.

Starting with the latter, it is obvious that only the end users know not only what they want and what will be used, but also the strengths of their particular staff, their operational procedures and policies, and their growth strategy and master plan. Asking architects and engineers to merge the need for DCIM with other design requirements generally does not work. As an aside, IT should still have a voice in data center design, and their follow-up DCIM RFP and IT strategy will be better as a result.

There are still many DCIM providers out there, but the field and its focus are narrowing. The best- and worst-case scenario is that ultimately there could be a product for everyone, but the potential to create a competitive bid environment may be lost. Custom software shops and internal development will of course remain options, and the companies providing an overlay to another vendor’s product will persist, but they will be tough to level in a bid.

Returning to several of the scenarios presented over the past five years: the major U.S. health care company is out to bid for commercially available DCIM software. The top-five financial firm and one of the colos are closing out evaluations and are likely to commit to an off-the-shelf product. The pharmaceutical company and the university plan to continue their current strategies. And the media outlet is buying more DCIM licenses, but not abandoning its internal product.

The results bode more positive than foreboding for the future of DCIM, and the mostly good news speaks to an industry that continues to evolve.

DCIM: Tomorrow

DCIM will surely continue to evolve, but ways to increase adoption include a universal definition of DCIM, a true specification, vendors providing network security and segmentation, and more equipment manufacturer support for the myriad databases involved in DCIM tools.

DCIM needs a formal definition beyond an acronym open to interpretation, and it should have a standard requirement. Enough with “true DCIM,” “hybrid DCIM,” and all the other descriptors: with a standard requirement in place, a product is simply DCIM, plus whatever else it offers.

Beyond a definition, a standard specification should be established based on the standard requirements, especially if engineers are going to continue to be involved. Treat DCIM like the equipment it supports, and it will be easier to adopt. If manufacturers produce SDKs or other integration tools, it will be easier for interested parties to model existing sites; until then, integrators often fall back on lowest-common-denominator protocols, as sketched below.
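As an illustration of that fallback, here is a minimal SNMP poll of a rack device, assuming the classic pysnmp 4.x high-level API. The target address is a placeholder, and the OID shown is the standard sysDescr; a real integration would substitute the manufacturer’s documented power and thermal OIDs, which is exactly where vendor SDKs and published MIBs would help.

    # A generic SNMP poll: one common integration path where no SDK exists.
    # The target address is a placeholder; "public" is the default read-only
    # community string and should never survive into production.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public"),                  # SNMP v2c community
            UdpTransportTarget(("192.0.2.10", 161)),  # placeholder device address
            ContextData(),
            # Standard sysDescr OID; substitute vendor power/thermal OIDs here.
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
        )
    )

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")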

Thanks to Stuxnet and the Target breach, network security is becoming an even greater concern. With IoT and MEP devices expanding the fabric of your network, segmentation of all systems, including DCIM and BMS, is a requirement, and the associated cost needs to be added to the price tag.
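Segmentation is also testable. Below is a minimal sketch of a verification probe, run from a host on the corporate segment, confirming that hypothetical management-plane addresses are not reachable. It checks TCP services only; UDP protocols such as SNMP or BACnet/IP would need a separate test.

    # Minimal segmentation check: from a corporate-network host, confirm the
    # management plane is NOT reachable. Hosts and ports are hypothetical.
    import socket

    MANAGEMENT_HOSTS = {
        "dcim-server": ("10.20.0.8", 443),   # hypothetical DCIM web UI
        "bms-head-end": ("10.20.0.5", 443),  # hypothetical BMS web front end
        "rack-pdu-001": ("10.20.1.10", 22),  # hypothetical PDU SSH interface
    }

    for name, (host, port) in MANAGEMENT_HOSTS.items():
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"FAIL: {name} ({host}:{port}) reachable from this segment")
        except OSError:
            print(f"ok: {name} ({host}:{port}) blocked or unreachable")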

Conclusion

With approximately five years of DCIM evolution and install base, the most common question continues to be whether DCIM is ready. The evolution is approaching maturity, DCIM is becoming understood, and, most importantly, the right questions are being asked. Now it is only the people and processes that need to change.