Figure 1. Application growth planning


Originally, sizing data center power was a very simple calculation: watts per square foot. Essentially, the size of the data center dictated its power capabilities. That, however, was a very long time ago, when space was more limited than power. As the power demand of servers increased, electrical power sizing became more complicated. For obvious reasons, many of the newer methods fail to meet the need.

Historically, applications ran on mainframes and were entirely developed by the IT department. Distributed computing drove a steady shift of application support and enhancement functions into business units, closer to the customers and their requirements. This shift led to a major disconnect between the primary consumers of the power infrastructure (i.e., application owners) and those who must supply it (i.e., facilities). In fact, a survey conducted by PBS Research of 200 IT leaders indicated that 40 percent lack portfolio management or other means of aligning IT decisions with business priorities.

Data centers exist to host computing infrastructure, computing infrastructure exists to run business applications, and business applications enable business processes, which run the company. So from that perspective, applications, which enable business functions, drive facilities requirements, particularly power requirements. It stands to reason, then, that applications analysis should be a major component of power planning.

Generally, most companies are sophisticated enough to notify IT when a new application is coming so that IT can procure the hardware and support the deployment, although the notifications are not always timely. It’s still uncommon to see business units supply their upcoming year’s business plan to IT as input into IT’s planning process. Only one out of ten Acumen clients attempted to receive, review, and incorporate business unit plans into its IT plans.

What’s often missing from calculations, and also given short shrift in most business plans, is anticipated business as usual (BAU) growth. Today storage is the number one growth area for most IT shops. Email and business intelligence-based applications are the next two biggest. Even after archiving and de-duplication to reduce data size, growth is inevitable.

The use of virtualization is also growing dramatically. Because virtualization decouples hardware from operating system instances, virtual machines spring up readily with little thought of future business requirements. When a virtual host reaches capacity, new physical servers are purchased, which happens far outside of the business or application planning cycles. Many companies are just now starting to grapple with their virtual server sprawl.

The increased adoption of SaaS and outsourcing is making business units and application owners into more sophisticated customers. The service level agreement (SLA) model commonly used in outsourcing arrangements is creating demand for internal IT organizations to meet external SLA levels at market costs, which is causing all parties to better understand the relationship between criticality, SLAs, and costs. Part of the solution to achieving higher availability, arguably the most common SLA metric, is to allocate more robust hardware, and more of it. Disaster recovery may require duplicate (i.e., replicated) copies of data or additional recovery servers. All of this means greater facilities requirements and certainly more power consumption.
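As a rough illustration of how availability targets translate into concrete commitments, an availability percentage maps directly to an annual downtime budget. The tiers below are illustrative examples, not figures from any particular SLA:

```python
# Sketch: convert an availability SLA target into an allowed downtime budget.
# The availability tiers shown are illustrative, not prescribed values.
HOURS_PER_YEAR = 365 * 24

for availability in (0.99, 0.999, 0.9999):
    allowed_downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability allows about "
          f"{allowed_downtime_hours:.1f} hours of downtime per year")
```

Each additional nine in that budget typically means redundant hardware, and redundant hardware is where the extra facilities and power requirements come from.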

Figure 2. Power forecast

About a year and a half ago, most companies were claiming power capacity limitations that would require new data centers or substantial upgrades to existing facilities. The economic downturn changed all of that but not because demand was substantially reduced. Instead, the downturn caused IT organizations to be far more creative in their solutions. In a Network Instruments survey of 450 CIOs, IT managers, and network engineers worldwide, 75 percent said they would invest in virtualization by year’s end. Forrester’s annual survey of 2,600 technology decision makers in the U.S. and Europe indicated that 44 percent of enterprises had already implemented server virtualization or planned to do so within the following 12 months. IDC predicted that software-as-a-service (SaaS) would grow 40.5 percent in 2009 worldwide. Despite these trends towards SaaS and virtualization, power is still the most costly resource of data center operations and certainly the primary limiting factor in IT growth. Furthermore, the economic downturn means continued constraints on capital. Consequently, appropriately planning and managing power consumption is critical to containing costs and ensuring that the most important applications have the right resources.

In terms of how application growth impacts site selection or other data center factors, certainly the most obvious consideration is user demographics. The location of users, the quantity of users, expectations for performance: all of these things and many more can impact the selection of a data center. In general, data-intensive applications work better when they are closer to the users. It’s all about latency, which is an unavoidable physical characteristic of data transmission over distance, and bandwidth, which can be adjusted but can also be more expensive over longer distances. Applications like electronic design automation (EDA) tools that interact with large design files do not perform well over long distances, even with Citrix, VNC, or other application access aids. Technologies such as WAN accelerators may improve the situation, and while certain operations may be painful, virtually none will cause an application to break.
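As a back-of-the-envelope sketch of the latency point, assuming roughly five microseconds of one-way propagation delay per kilometer of fiber (a common rule of thumb, before any routing or protocol overhead), round-trip time grows quickly with distance:

```python
# Back-of-the-envelope round-trip latency from fiber distance alone.
# Assumes ~5 microseconds per km one way (rule of thumb); real paths add
# routing, queuing, and protocol overhead on top of this lower bound.
US_PER_KM_ONE_WAY = 5.0

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time, in milliseconds, for a given distance."""
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1000.0

for km in (100, 1000, 5000):
    print(f"{km:>5} km -> at least {min_round_trip_ms(km):.0f} ms round trip")
```

A chatty application that makes thousands of round trips against a large design file feels every one of those milliseconds, which is why distance between data-intensive applications and their users matters so much.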

While every IT organization should have a technical process to manage power at a tactical level, it also should have a strategic process to plan how power is allocated with business unit input. As shown in figure 1, this sample process includes business unit involvement both in terms of their projected growth and in terms of their assistance in determining the appropriate course of action should demand exceed capacity. In reality, there are only a few ways to address excess demand: build more capacity (typically a data center power expansion), reduce demand (in this case through the business units adjusting priorities so as not to exceed their previously allocated capacity), or borrow/buy capacity from another business unit (similar to how carbon credits are traded in cap-and-trade markets). This is not a detailed process but a sample of the major touch points and factors that would make power planning and allocation at the business level successful.
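A minimal sketch of that decision point, using hypothetical figures: compare a business unit’s projected demand against its allocated capacity and, when there is a shortfall, surface the same three options the process in figure 1 would put in front of the business units.

```python
# Illustrative sketch of the strategic check described above: compare projected
# demand against the allocated capacity and list the possible responses.
# All figures are hypothetical.
def review_power_plan(allocated_kw: float, projected_kw: float) -> str:
    shortfall_kw = projected_kw - allocated_kw
    if shortfall_kw <= 0:
        return "Projected demand fits within the current allocation."
    return (
        f"Projected demand exceeds the allocation by {shortfall_kw:.0f} kW. Options: "
        "(1) build more capacity, "
        "(2) reduce demand by adjusting application priorities, or "
        "(3) borrow or buy capacity from another business unit."
    )

print(review_power_plan(allocated_kw=500, projected_kw=560))
```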

So what does it mean to “appropriately plan and manage power consumption”? While IT planning that incorporates business planning is a step in the right direction, it falls far short of a lasting solution. Solutions to these fundamental IT dilemmas tend to revolve around transparency. In the case of power management, the best course of action is to take all power capabilities in the data centers and allocate, at the top level, blocks of power to each of the business units (including IT) to manage. It then becomes the business unit’s job to distribute its power resources across its suite of applications however it wishes. Of course, IT must assist by providing data on current use as well as projected use based on each business unit’s plans, but no longer should IT try to juggle excessive demand in its customers’ stead.
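As a minimal sketch of that arrangement (business unit names and figures are hypothetical), each unit receives a top-level block and decides how to spend it across its own applications, with IT supplying the usage numbers back for tracking:

```python
# Sketch of top-level block allocation: each business unit gets a power budget
# and distributes it across its own applications. Names and figures are hypothetical.
power_blocks_kw = {"Finance": 225.0, "Sales": 165.0, "Engineering": 270.0}

# One unit's own distribution of its block, reported back to IT.
finance_apps_kw = {"ERP": 180.0, "Reporting": 25.0}

finance_used_kw = sum(finance_apps_kw.values())
finance_headroom_kw = power_blocks_kw["Finance"] - finance_used_kw
print(f"Finance is using {finance_used_kw:.0f} kW of its "
      f"{power_blocks_kw['Finance']:.0f} kW block "
      f"({finance_headroom_kw:.0f} kW of headroom)")
```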

Virtually every company measures the total energy consumed in the data center, but far fewer can project future consumption or measure consumption by business unit. It’s not impossible. As shown in figure 2, projections of data center energy use just take some detective work to map equipment to applications and then applications to business units. From there, most equipment can provide its own power consumption information. If not, branch circuits can be measured and power consumption per device can be deduced. Even 90 percent accuracy is enough to drive a new approach to power management and change behavior. The most important thing is to share the available data as widely as possible.
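A minimal sketch of that roll-up, with hypothetical device, application, and business unit names: each measured (or deduced) device wattage is mapped to an application, each application to a business unit, and the results are summed per unit.

```python
# Sketch of the detective work described above: map devices to applications,
# applications to business units, then roll measured watts up to each unit.
# All names and wattages are hypothetical.
from collections import defaultdict

device_watts = {"srv-001": 420.0, "srv-002": 380.0, "stor-001": 950.0}  # measured or deduced
device_to_app = {"srv-001": "ERP", "srv-002": "ERP", "stor-001": "Email archive"}
app_to_unit = {"ERP": "Finance", "Email archive": "Shared IT"}

watts_by_unit = defaultdict(float)
for device, watts in device_watts.items():
    watts_by_unit[app_to_unit[device_to_app[device]]] += watts

for unit, watts in sorted(watts_by_unit.items()):
    print(f"{unit}: {watts / 1000:.2f} kW")
```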

To ensure that all facilities are operating at a safe capacity, a company should never allocate more than 75 percent of the total power capacity of any facility. Also, the IT group must have a sufficient allocation to meet general-use needs for applications such as the networking core, centralized backups and storage, email, Active Directory, and others. These resources should be allocated (financially speaking) across all business units according to their head count. Any application associated with a single business unit should be “assigned” to that business unit for purposes of power reservation calculations, including power for growth or disaster recovery.
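A worked example of those rules, using hypothetical figures: usable capacity is capped at 75 percent of the facility total, the shared IT load is spread across business units by head count, and unit-specific application load is charged directly to its owner.

```python
# Worked example of the allocation rules above (all figures hypothetical):
# cap allocation at 75% of facility capacity, spread the shared IT load by
# head count, and assign unit-specific application load to its owner.
FACILITY_CAPACITY_KW = 1000.0
usable_kw = 0.75 * FACILITY_CAPACITY_KW           # never allocate beyond 75 percent

shared_it_kw = 150.0                              # networking core, backups, email, AD, ...
headcount = {"Finance": 300, "Sales": 500, "Engineering": 200}
unit_app_kw = {"Finance": 180.0, "Sales": 90.0, "Engineering": 240.0}

total_headcount = sum(headcount.values())
for unit, heads in headcount.items():
    shared_share_kw = shared_it_kw * heads / total_headcount
    print(f"{unit}: {unit_app_kw[unit] + shared_share_kw:.0f} kW allocated")

committed_kw = shared_it_kw + sum(unit_app_kw.values())
print(f"Committed {committed_kw:.0f} kW of {usable_kw:.0f} kW usable "
      f"({FACILITY_CAPACITY_KW:.0f} kW total capacity)")
```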

Of course, there are some companies not organizationally situated to handle such a structure. If most applications are shared and thus managed more fully by IT, it may not make much sense to attempt an allocation-based resource management structure. In those instances, business units must still involve themselves in the planning cycle to provide basic expectations of resource needs (in terms of applications or functions) and to identify which of the previous year’s activities absorbed the most capacity. In fact, that should happen more than annually, so that the business is not surprised when capacity is fully reserved. This requires IT to have a relationship with the business units and a willingness to meet with them on a regular basis. These meetings invariably lead to other topics, such as SLAs and costs. It’s very difficult to discuss resource constraints, or the cost of resolving such constraints, without also discussing value.

Managing power requires metrics, and metrics require data. Too few data centers produce meaningful data, and here’s why: too few data center managers know what’s running in their facilities. Sure, anyone can do a physical inventory and know what IT infrastructure is present, but mapping that to applications requires diligence, tenacity, and a lot of time. And with all of that, there’s still no guarantee that it will be accurate or that it can or will be maintained. Application-to-infrastructure mapping requires as much process as technology, and both are critical to its success. The place to start in better power management is to inventory the infrastructure and then map it to applications. Simultaneously, business and IT stakeholders must develop a process to maintain this information (the process is commonly known as change management, and all the data would theoretically go into a change management database, or CMDB). Many tools that can help are very cheap or even free; more expensive programs usually automate the collection and maintenance functions. The CMDB and the infrastructure-to-application mapping will most likely be far more difficult to obtain than the power consumption data. UPSs and/or static transfer switches can provide at least branch circuit monitoring. Newer power distribution units can provide more granularity. Server power supplies can also provide consumption data.
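A minimal sketch of what a CMDB record for this purpose might hold (the field names are hypothetical, and a real CMDB product defines its own schema): enough to tie a device to its application, its owning business unit, and the branch circuit or power supply that reports its consumption, so that the mapping can be kept current through change management.

```python
# Minimal sketch of a CMDB-style record tying infrastructure to an application,
# a business unit, and a power measurement source. Field names are hypothetical;
# a real CMDB product would define its own schema.
from dataclasses import dataclass

@dataclass
class ConfigurationItem:
    device_id: str          # from the physical inventory
    application: str        # what the device actually runs
    business_unit: str      # who owns the application
    power_source: str       # branch circuit, PDU outlet, or server power supply
    measured_watts: float   # latest reading from that source

cmdb = [
    ConfigurationItem("srv-001", "ERP", "Finance", "pdu-3/outlet-12", 420.0),
    ConfigurationItem("stor-001", "Email archive", "Shared IT", "branch-circuit-7", 950.0),
]

def record_change(item: ConfigurationItem, new_application: str, new_unit: str) -> None:
    """Change management in miniature: keep the mapping current as workloads move."""
    item.application = new_application
    item.business_unit = new_unit
```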

The actual power management tasks follow once the data collection and the CMDB are in place.

Helping the business understand what power capacity costs, how much is consumed, and who is consuming what is very helpful in making a case the next time a business unit wants to deploy the next major business tool and thereby potentially exceed all available capacity. The context of the discussion is no longer about why IT didn’t know. Instead, the focus shifts to conserving, prioritizing, and/or expanding. When the IT department can have those types of discussions, the art of managing power, and indeed of managing computing resources, has been mastered.