The four main things to keep in mind about virtualization's effects on data center physical infrastructure (DCPI) are:
- High Density: Virtualization by its very nature increases CPU utilization, which in turn increases per-rack power draw. CPU utilization can go from 10-20% pre-virtualization to over 50% post-virtualization. Server power draw does not scale linearly with utilization, but it will still rise by roughly 20%, depending on the manufacturer. The most significant issue caused by high density is heat removal; there are many cooling methodologies available today to address it, but I won't discuss them here.
- Increasing PUE: When IT load drops with no change in the DCPI, a data center's PUE gets worse even though overall energy usage is decreasing. With virtualization we have significantly increased IT efficiency but decreased physical infrastructure efficiency, because the data center is now oversized and fixed losses in the DCPI play a bigger role. Fixed losses are the power consumed by the power and cooling systems regardless of the IT load; the more power and cooling capacity installed, the greater the fixed losses. Again, there is much that can be done to address this, with some solutions being more practical than others.
- Dynamic Hot Spots: In combination with high density, virtualization can cause IT loads to vary in location and time, essentially creating dynamic hot spots. One of the great benefits of virtualization is the ability to move VMs (virtual machines) as needed. Imagine a rack going from a 3 kW power draw to a 10 kW power draw as VMs are moved onto it. If the data center is not designed for this, downtime can result. One way to address it is to design the power and cooling to handle the maximum feasible per-rack power draw for every rack in the data center, but as noted above, that leads to a poor PUE and excessive cost. What is needed are DCPI systems that respond dynamically and in sync with the IT load, especially the cooling. DCIM software (not to be confused with DCPI) can also play a big role here. Not only can DCIM software monitor and control DCPI based on changing IT loads, it can also interact with IT management systems in an intelligent way. For example, DCIM software can notify a platform such as VMware that certain VMs are being powered by a UPS that is on battery or has a fault of some sort, and it can tell VMware the physical locations to which those VMs can be safely moved. This can all happen in an automated fashion.
- Lower Redundancy Required: While the three effects listed above can be negative if not addressed properly, virtualization also creates DCPI opportunities. In a data center that achieves a high level of IT fault tolerance through virtualization, there may be less need for redundancy in DCPI areas such as power and cooling. The opportunity may be to design a Tier 3 or Tier 3+ data center rather than a Tier 4, which can result in significant savings.
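To make the high-density point concrete, here is a minimal sketch of a linear server power model. The idle and maximum wattages are purely illustrative assumptions, not figures from any particular server, but they show how a large jump in CPU utilization translates into a much smaller (roughly 20%) jump in power draw:

```python
def server_power(util, p_idle=200.0, p_max=300.0):
    """Simple linear power model with hypothetical figures: idle draw
    plus a utilization-proportional component up to nameplate max."""
    return p_idle + (p_max - p_idle) * util

before = server_power(0.15)  # pre-virtualization, ~15% CPU utilization
after = server_power(0.55)   # post-virtualization, ~55% CPU utilization
increase = (after - before) / before
print(f"{before:.0f} W -> {after:.0f} W ({increase:.0%} increase)")
# prints "215 W -> 255 W (19% increase)"
```

Because so much of a server's draw is consumed at idle, even tripling utilization only adds about a fifth to its power, but multiplied across a consolidated rack that is still a significant heat-removal problem.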
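The PUE effect described above follows directly from the definition PUE = total facility power / IT power. This sketch uses made-up loss figures (200 kW of fixed losses, 35% proportional losses) to show how halving the IT load through virtualization lowers total energy use yet worsens PUE:

```python
def pue(it_load_kw, fixed_losses_kw=200.0, proportional_loss=0.35):
    """PUE = total facility power / IT power. Fixed losses (transformers,
    constant-speed fans and pumps) do not shrink with IT load; all
    figures here are illustrative assumptions, not measured data."""
    total_kw = it_load_kw + fixed_losses_kw + proportional_loss * it_load_kw
    return total_kw / it_load_kw

print(pue(1000))  # before virtualization -> 1.55
print(pue(500))   # after consolidation: half the IT load, same DCPI -> 1.75
```

Total facility power falls from 1,550 kW to 875 kW, so energy use genuinely improves, but the fixed 200 kW is now spread over half the IT load, which is exactly why the ratio gets worse.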
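Finally, the dynamic hot spot scenario suggests the kind of guard a DCIM tool could apply before approving a VM migration. This is a hypothetical stand-in, not an actual DCIM or VMware API; the 10 kW rack limit and per-VM draws are assumptions for illustration:

```python
RACK_LIMIT_KW = 10.0  # hypothetical per-rack power/cooling design limit

def safe_to_move(rack_draw_kw, vm_draws_kw):
    """Return True only if moving the listed VMs keeps the target rack
    within its designed power and cooling envelope."""
    return rack_draw_kw + sum(vm_draws_kw) <= RACK_LIMIT_KW

print(safe_to_move(3.0, [0.4] * 10))  # 3 kW + 4 kW = 7 kW  -> True
print(safe_to_move(3.0, [0.4] * 20))  # 3 kW + 8 kW = 11 kW -> False
```

In a real deployment this check would run against live rack-level power and temperature telemetry rather than static numbers, which is precisely the DCIM-to-IT-management integration described above.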
With everyone’s eyes on what virtualization and cloud computing can do for their IT, it’s easy to overlook the effects on DCPI. Overlooking these effects can compromise availability and lead to lost dollars.
Next week I am going to give some suggestions on how these effects can be avoided or at least lessened. If you are interested in learning more about these and other similar issues, take a look at one of Schneider Electric's newest white papers, White Paper 118.