Data center infrastructure management (DCIM) has become the big “must have” on the data center project list despite the ongoing weak economic environment, or perhaps because of it. So what exactly is DCIM and why is almost every organization considering putting it on their shopping list?

If you ask different people on the facilities side of the mission-critical environment what “infrastructure” means, you will get some very diverse answers, and a very different set of responses again from the IT side of the house. The same holds true for the current crop of DCIM offerings from the various vendors. Some products are adaptations of traditional building management systems (BMS), while others focus more on IT asset and network management. As to why some organizations have already deployed systems while others have yet to put pilot projects into their budgets, motivations vary as well.

So is DCIM just market-driven hype, or are there tangible benefits to be realized? Considering the costs of some of these products, as well as the cost to implement them, let’s try to examine the business benefits these DCIM products are attempting to deliver and the pain points they are attempting to solve.

I have written about DCIM previously, but I notice that this year there seems to be increased interest from potential end users and a big push by major well-known players in the data center market, as well as by some smaller firms that have not been acquired (yet). In the last several years, the major vendors either developed or redesigned their own platforms, or in some cases simply acquired the leading start-ups and merged them into their offerings. In fact, I recently attended the March DataCenterDynamics NYC conference and noticed a significant rise in the number of vendor presentations and end-user case studies of DCIM products.

The Gartner Group predicted that DCIM will grow from only 1% market penetration in 2010 to 60% by 2014. Of course, what does 60% market penetration really mean? Did a company merely test a 30-day trial download of a demo product for a pilot project, or was it a full-blown deployment that monitored 50,000 servers across four data centers on three continents?

Every industry wants productivity-enhancing tools and reportable metrics, and of course the data center industry is no different. The promise of DCIM and almost every DCIM vendor is the proverbial “single pane of glass,” which should help alleviate the issues and not become the new “pain.” It is supposed to be the ultimate data repository of all the assets and status of the physical facility infrastructure (power and cooling, security, etc.), as well as IT systems (servers, storage, and networking). Nonetheless, is the view through one looking glass into the data center really useful, and should the view it provides be the same for everyone?

The basic data that most facilities-oriented DCIM systems monitor is pretty straightforward: total facility power (now annualized energy) and critical load (usually measured at the output of the UPS). Providing a PUE dashboard was the original early sales driver for DCIM. As the product offerings matured, they became more granular and offered additional analytics, delving into the energy usage and efficiency details of sub-systems such as individual CRACs, CRAHs, chillers, pumps, etc. In addition, some vendors offer monitoring of airflow, temperatures, and related heat loads (via Delta T), and also integrate branch-circuit power monitoring from distribution panels, cross-mapping the readings as projected heat loads for each rack to create pseudo computational fluid dynamics (CFD)-style graphic representations of the heat load map of the data center floor. Going one step further, the latest crop of DCIM offerings purport to deliver actual CFD modeling.
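The PUE figure behind those dashboards is simply total facility power divided by the IT critical load. A minimal sketch follows; the readings are invented, and real DCIM products would pull them from live meters rather than take them as arguments:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by the
    IT critical load (commonly measured at the UPS output)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical readings: 1,500 kW at the utility feed,
# 900 kW of critical load at the UPS output.
print(round(pue(1500.0, 900.0), 2))  # 1.67
```

A PUE of 1.0 would mean every watt entering the facility reaches the IT equipment; the gap above 1.0 is what the sub-system analytics described above try to break down.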

The IT side of the house has also long had asset management and network management systems to give it visibility and control of servers, storage, and network equipment. IT extended that into power monitoring to meet its own requirements at the rack level, using “intelligent” power distribution units (PDUs) in the rack simply to measure current, primarily to avoid overloading a circuit when adding more equipment to the rack. The PDU vendors (as well as their products) became smarter, offering more granular monitoring and management at the rack level, including control of individual outlets.
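The overload check those intelligent PDUs support can be sketched as below. The 80% factor is the common continuous-load derating applied to branch-circuit breakers; the breaker size and current draws are hypothetical:

```python
def remaining_headroom_amps(breaker_amps: float, measured_amps: float,
                            derate: float = 0.80) -> float:
    """Usable capacity left on a branch circuit, applying the common
    80% continuous-load derating to the breaker rating."""
    return breaker_amps * derate - measured_amps

def can_add_device(breaker_amps: float, measured_amps: float,
                   device_amps: float) -> bool:
    """True if the new device fits within the derated circuit capacity."""
    return device_amps <= remaining_headroom_amps(breaker_amps, measured_amps)

# Hypothetical 30 A rack circuit currently drawing 18 A
# (24 A usable after derating, so 6 A of headroom remains):
print(round(remaining_headroom_amps(30, 18), 1))  # 6.0
print(can_add_device(30, 18, 4))                  # True
print(can_add_device(30, 18, 8))                  # False
```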

Moreover, as heat densities rose and cooling became a major problem, PDU vendors also offered plug-in sensors for temperature, humidity, and even airflow and static pressure under the floor: a clear trespass into the facilities domain. In effect, IT could now have better visibility into what facilities was supposed to be responsible for. The latest IT-oriented DCIM systems can poll the IT equipment directly for power draw and intake air temperature, thereby avoiding the cost of the intelligent PDU altogether.

And, of course, each individual vendor’s management software platform tended to be proprietary and to use entirely different communications protocols: traditionally BACnet, Modbus, and LonWorks on the facilities side, while on the IT side Ethernet transport using TCP/IP and SNMP is the unified standard. The cost of trying to extract and interchange information from these systems (both the sensors and the databases) has been a longstanding problem, even when the groups involved are willing to let folks from outside their group touch their systems.
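The interchange problem boils down to mapping readings from those dissimilar protocols into one common record a DCIM repository can store. The sketch below assumes invented field names, a device-specific Modbus scale factor, and a purely illustrative OID; it is meant only to show the shape of the normalization, not any vendor's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    """Protocol-neutral record a DCIM repository might store."""
    source: str       # e.g., "bms/modbus" or "it/snmp"
    asset: str
    metric: str       # e.g., "power_kw", "intake_temp_c"
    value: float
    timestamp: datetime

def from_modbus_register(asset: str, raw: int, scale: float) -> Reading:
    # Modbus holding registers carry unsigned 16-bit integers; many power
    # meters publish kW as raw/scale, where the scale factor is
    # device-specific (10.0 here is an assumption).
    return Reading("bms/modbus", asset, "power_kw", raw / scale,
                   datetime.now(timezone.utc))

def from_snmp_varbind(asset: str, oid: str, value: str) -> Reading:
    # SNMP agents often return numeric values as strings; the
    # OID-to-metric mapping below is hypothetical.
    metric = {"1.3.6.1.4.1.99999.1.1": "intake_temp_c"}.get(oid, "unknown")
    return Reading("it/snmp", asset, metric, float(value),
                   datetime.now(timezone.utc))

r1 = from_modbus_register("chiller-1", 1234, scale=10.0)
r2 = from_snmp_varbind("server-42", "1.3.6.1.4.1.99999.1.1", "24.5")
print(r1.metric, r1.value)   # power_kw 123.4
print(r2.metric, r2.value)   # intake_temp_c 24.5
```

Once both sides land in one schema, the facilities branch-circuit data and the IT intake temperatures described earlier can finally be correlated in a single view.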

When it comes to adding DCIM to existing data centers, installing the required basic energy-monitoring components (CTs and PTs) at the electrical distribution panels of the major critical systems can become an issue, since it may require de-energizing live panels to install them safely. However, if you are building a new data center, pre-installing the energy-monitoring components as part of the base building is a must, even if you have not finalized your DCIM management platform.

If you have spent any time with the current crop of DCIM products on the market and the myriad vendor presentations of their product features, as well as their roadmaps of how the next generation will include whatever feature or function you can think of (a warp-drive efficiency module?), you know what I am talking about. Nonetheless, there is a real need for products that can collect and aggregate information from facilities and IT systems and then display it in a manner that is meaningful and correlates to actionable items for both camps. Done right, any organization shopping for a DCIM system should have sorted out a list of its primary requirements before calling in the legions of sales reps for the dog and pony show. That is not to say that they will not see a useful or “must have” feature or function in each vendor presentation and then consider adding it to the requirements list (perhaps 3-D glasses?).


Deciding to what degree the DCIM system will be funded, and who will be primarily responsible for implementing, operating, and managing it, can be an impediment to success in itself. That is especially true if the old politics of IT vs. facilities come into play during the early requirements-investigation phase, when each group spells out what it expects the system to do and how it may impact their own systems (or perhaps even their jobs).

In the end, DCIM is not just an amalgamation of hardware and software. It is a philosophical commitment to a holistic approach by facilities and IT to work together to improve the overall energy efficiency, operations, and availability of the data center, and then perhaps even singing (or at least humming) Kumbaya.