Every single day, it seems, we hear about another great innovation and another step in the right direction for our “data centers of the future.” CyrusOne has standardized on indirect evaporative cooling (IEC) systems to take advantage of free air cooling without the risk of introducing contamination into the IT spaces. Apple and eBay have installed fuel cells, and NetApp, Qualcomm, and many others operate natural gas engines, all to reduce reliance on the grid and to take advantage of our abundant natural gas resources. Microsoft is developing microsites that will operate in remote locations powered by biogas fuels derived from local wastewater treatment plants and solid waste management sites. And many data centers powered by onsite generation as both the primary and secondary source of power require no traditional backup power such as UPS systems, batteries, and diesel generators.

And, at the same time, information technology continues to push the limits at an accelerating rate in order to improve both performance and efficiency (i.e., Moore’s Law continues to drive us to new extremes). Multi-core central and graphics processors and high-performance computers are simultaneously more energy efficient per computation and more energy intensive per rack. New guidance like ASHRAE TC 9.9’s thermal guidelines encourages us to take a closer look at real operating conditions, to remove the safety factors we have historically employed in our data center designs, and to consider water-cooled systems for power densities that air cannot effectively cool. And, as commendable and workable as these improvements really are, they leave us with much less room for error as we face the inevitable failures of power and cooling in our much-needed data centers.

While developing the design concepts for TDC’s self-powered data center facilities in Delaware (“Zinc Whiskers,” Mission Critical, November/December 2012), I really struggled to find a controls system that would reliably integrate a critical power plant with a high-performance data center. Such a facility needs a system proven to be highly responsive, accurate, and absolutely reliable to perform the function of the “brain and nervous system” of an independently powered, high-density data center. Effective energy management and systems availability are also key to the success of the project.

AUTOMATION FOR EFFECTIVE AND RELIABLE PERFORMANCE

Automation is destined to play a significant role in the future of data center energy management and in the efficient operation of high-reliability facilities, as is already the case in power plant and industrial environments. Automation improves energy management and reliability at the same time, allowing systems to fail into their safest condition during emergencies (e.g., dampers open and fans at full speed for maximum airflow) while ensuring instantaneous response and the most efficient operating conditions possible during normal operations.
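To make the principle concrete, here is a minimal sketch of fail-safe control logic, written in Python purely for illustration; a real facility would implement this in PLC ladder logic or IEC 61131-3 structured text, and every name and setpoint below is a hypothetical placeholder, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Outputs:
    damper_open_pct: float   # 0-100, outside-air damper position
    fan_speed_pct: float     # 0-100, supply fan speed

# Safest condition: dampers open and fans on full for maximum airflow.
FAILSAFE = Outputs(damper_open_pct=100.0, fan_speed_pct=100.0)

def control_step(sensor_ok: bool, alarm_active: bool,
                 supply_temp_f: float, setpoint_f: float = 72.0) -> Outputs:
    """One control cycle: optimize during normal operation,
    fail into the safest condition on any fault or alarm."""
    if not sensor_ok or alarm_active:
        return FAILSAFE
    # Normal operation: a simple proportional trim toward setpoint stands
    # in for the "most efficient operating condition."
    error_f = supply_temp_f - setpoint_f
    fan = min(100.0, max(30.0, 50.0 + 10.0 * error_f))
    return Outputs(damper_open_pct=60.0, fan_speed_pct=fan)

# A failed sensor immediately drives the safe state, regardless of readings:
print(control_step(sensor_ok=False, alarm_active=False, supply_temp_f=75.0))
# Outputs(damper_open_pct=100.0, fan_speed_pct=100.0)
```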

Programmable logic controllers (PLCs) and their counterpart human-machine interfaces (HMIs) are often much better suited to the large, critical operations of modern data center environments than are the traditional building management systems (BMS), originally developed for commercial high-rise buildings, that use direct digital control (DDC) technologies.

Traditional DDC systems may be somewhat lower in cost, but they are really best suited for “comfort cooling” in spaces intended for human occupancy, and they typically operate with proprietary protocols and controls structures. PLCs are more robust and offer a higher level of “critical” redundancy. They are well suited for the Tier II and III operations of chilled water plants, high-demand HVAC, and power generation. They are moderate in cost, easy to configure with open protocols, and flexible and highly adaptable to modular, scalable, and interoperable systems.
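One concrete benefit of those open protocols is that PLC data is readable with commodity tools. The sketch below builds a raw Modbus/TCP “read holding registers” request directly from the published protocol specification; the PLC address, unit ID, and register map are all hypothetical, and a production system would use a maintained library (e.g., pymodbus) rather than raw sockets.

```python
import socket
import struct

def read_holding_registers(host, unit, address, count, port=502):
    """Issue a single Modbus/TCP function-3 read and return register values.
    Simplified: assumes the whole response arrives in one recv()."""
    pdu = struct.pack(">BHH", 0x03, address, count)        # function 3 request
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)  # Modbus/TCP header
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(256)
    byte_count = resp[8]                                   # data length in bytes
    return list(struct.unpack(">" + "H" * (byte_count // 2),
                              resp[9:9 + byte_count]))

# Hypothetical example: chilled-water supply temperature, scaled x10,
# held in holding register 100 of a PLC at 10.0.0.50:
# temp_f = read_holding_registers("10.0.0.50", unit=1, address=100, count=1)[0] / 10.0
```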

Distributed control systems (DCS) may be best suited for Tier III+ and IV environments, especially where equipment is expected to respond to intermittent changes in demand and operating conditions. DCS platforms are extremely fault-tolerant and scalable, and they are commonplace in mission-critical applications like nuclear power plants, oil and gas refining, semiconductor fabrication, and federal government SCIF facilities. They are more expensive and require more highly skilled designers and operators.

In order to achieve the kind of automated performance offered by PLC and DCS technologies, I recommend that you call upon a technology-agnostic services provider capable of developing sophisticated controls systems that go well beyond the data center infrastructure management (DCIM) asset and information management systems we are developing today. You might discover, as I have, the kinds of solutions that have come from more HVAC controls-intensive industries, such as large central plants and campus systems that operate multiple plants and facilities.

Troy Miller, vice president of Energy Solutions at Dallas-based Glenmount Global Solutions (GGS), has been doing this for years in power and industrial plants and can effectively deliver automated controls systems with fail-safe performance. According to Miller, GGS has provided electrical and control systems consulting and design services, followed by full turnkey implementation of BMS/BCS HVAC and utilities control and monitoring systems, for some of the largest data centers in the U.S., including the implementation of an industrial DCS providing over 4,000 “hard” I/O points of fully redundant, automated operation for a Tier IV critical facility.

RELIABILITY IS STILL KING

A few years ago, Lawrence Berkeley National Laboratory identified building controls and control systems as the number-one cause of data center HVAC problems, and the number-one potential threat to data center availability, as cited in the National Building Controls Information Program study titled “Building Energy Use and Control Problems: Defining the Connection.” I have personally witnessed countless data center problems, and even critical facilities outages, directly related to poorly conceived and maintained controls systems. Controls programming and inflexible protocols are too often found to be a “single point of failure” in our critical facilities, and even our best commissioning agents sometimes misjudge the precision needed for the programming and long-term operation of the brain and nervous system of our data centers. In this era of change, we cannot afford to have controls systems be the weak link in our critical facilities.

A data center construction project executive recently commented, “I agree wholeheartedly that (controls) is one of the systems that is usually designed, coordinated, and then implemented inadequately. I lose more sleep over controls than anything else.” Clearly, someone needs to take full ownership of an end-to-end solution for our data center controls systems to ensure that operators and builders alike are confident in their operation. I believe that two issues lie at the root of these circumstances.

First, many data center design (MEP/AE) firms have evolved out of the commercial world and are accustomed to designing data centers based upon capacity and reliability requirements and, until recently, to designing for a constant load. Only recently have we realized that our monitoring and controls need to offer the same redundancy and accuracy as our capital equipment. And, as we further develop an appetite for dynamic systems that respond to more sophisticated operating strategies in order to achieve better energy efficiency and lower power usage effectiveness (PUE), we need to take much greater care with these issues.

Second, today’s data center operators are pushing the thermal limits of fragile electronics by raising server supply air temperatures, allowing wider humidity ranges, and delivering more precise volumes of air and water, all in order to save energy. That means the allowable recovery time for critical cooling systems after an equipment failure is reduced to a few seconds before temperatures rise well above the 105°F failure temperature of our servers. The only reasonable method of effectively managing these outages is an automated response that quickly puts the facility into a fail-safe condition. And operators tell me that automated controls are also the most reliable and safest approach for operating their critical facilities during normal modes of operation.
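A rough energy balance shows why the window is so short. The sketch below estimates how quickly room air heats up once cooling is lost; every input is an assumed round number for illustration, not a measurement from any real facility.

```python
# Back-of-the-envelope thermal rise after a total loss of cooling.
IT_LOAD_W = 1_000_000     # assumed: 1 MW of IT load heating the room air
ROOM_VOLUME_M3 = 1_000    # assumed: contained air volume of the IT space
AIR_DENSITY = 1.2         # kg/m^3, air at roughly room conditions
AIR_CP = 1005.0           # J/(kg*K), specific heat of air

air_mass_kg = ROOM_VOLUME_M3 * AIR_DENSITY
rate_k_per_s = IT_LOAD_W / (air_mass_kg * AIR_CP)   # dT/dt = Q / (m * cp)
rate_f_per_s = rate_k_per_s * 9.0 / 5.0

start_f, fail_f = 75.0, 105.0
seconds_to_failure = (fail_f - start_f) / rate_f_per_s

print(f"~{rate_f_per_s:.1f} °F/s rise; ~{seconds_to_failure:.0f} s from "
      f"{start_f} °F to {fail_f} °F")
# With these assumptions: about 1.5 °F per second, or roughly 20 seconds
# of margin -- hence the need for an automated, not manual, response.
```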

All of these circumstances lead me to believe that the simplistic BMS platforms developed to manage electrical and environmental controls for commercial office buildings have a limited future in the critical facilities space. As we continue to push the limits to become more efficient, more productive, and more independent, we will rely on these more robust, accurate, and reliable systems to keep us up and running. We will need someone to take more “ownership” of the correct delivery of our systems, and we will find them in services organizations that have served a broad spectrum of more controls-intensive facilities.

Glenmount Global Solutions (www.glenmountglobal.com) is just such a systems integrator and full-service controls engineering firm. According to Miller, “we consult, develop operating strategies, design, specify, fabricate, implement, test and commission, train operators and maintain our systems to assure that nothing is lost in the process of delivering 100% uptime along with effective energy management.”

GGS seems to be a “go-to” provider of Tier III/IV and other complex data center controls and support systems. For example, they recently implemented, on a turnkey basis, a comprehensive facilities monitoring and control system (FMCS), a 5-MW cogeneration turbine balance of plant (BOP), and an energy monitoring application for the world headquarters campus and new Tier IV corporate data center of a Fortune 50 company in Texas. With its fully redundant BMS, BCS, network, and supervisory control and monitoring applications, it is one of the first LEED® Platinum certified facilities of its kind.

GGS also provides controls systems upgrades and retrofits, including integrated switchgear and power systems solutions that enable the client to avoid extensive downtime and the capital project expenses caused by power systems technology obsolescence. Projects are implemented as “live” upgrades of the power distribution, management, and monitoring SCADA systems while the data center is fully operational.

ENERGY OPTIMIZATION — THE LOWEST POSSIBLE MECHANICAL PUE

It has become evident to me how important it is to bring a qualified controls engineer into the picture early in the design process. Data center designers have historically selected systems and equipment based upon maximum capacity and reliability criteria, resulting in a gross overestimation of the number and size of virtually all system components. The ongoing “modular movement” is effectively the first step in right-sizing and improving project efficiencies.

Now, I think, the next inevitable step in best-practice data center cooling design is to develop a thorough operating strategy first, and only then to design a real-time, dynamically controlled facility that will deliver power and cooling in the most effective and most reliable manner. Next-generation design will include the development of operating strategies that anticipate all possible modes of operation and the selection of equipment with performance curves and efficiencies that best suit those strategies.

The first generation of such a solution for airside cooling, known as Dynamic Smart Cooling, was introduced by HP Labs around 2004. It was heralded as a real-time computational fluid dynamics model far more capable than any of today’s DCIM systems. However, the concept was ahead of its time, and the data center world failed to embrace its value.

Since then, we have learned a lot and now do a respectable job of managing our airside systems with contained aisles, free air cooling, and variable-frequency drive (VFD) controls on CRAC and fan motors. Solutions providers like SynapSense (www.synapsense.com) and Vigilent (www.vigilent.com) successfully provide DCIM and energy-savings controls systems that include wireless monitors and efficient variable-speed fans driven by predetermined setpoints from pressure and temperature sensors. We sometimes add VFDs to our chillers and waterside components as well, to improve the efficiency of our chilled water plants. However, we don’t yet do a good job of balancing the operations of the waterside and the airside of our HVAC systems.
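To illustrate the setpoint-driven approach, here is a minimal sketch of a proportional-integral loop trimming VFD fan speed to hold cold-aisle differential pressure at a setpoint. The gains, setpoint, and limits are invented for illustration; commercial systems like those from SynapSense and Vigilent are far more sophisticated.

```python
SETPOINT_PA = 12.0     # assumed target cold-aisle differential pressure
KP, KI = 2.0, 0.5      # assumed proportional / integral gains
DT_S = 1.0             # control interval, seconds

def make_fan_controller():
    integral = 0.0
    def step(pressure_pa):
        """Return fan speed (%) for one control cycle."""
        nonlocal integral
        error = SETPOINT_PA - pressure_pa                  # low pressure -> speed up
        integral = max(-50.0, min(50.0, integral + error * DT_S))  # anti-windup
        speed = 50.0 + KP * error + KI * integral
        return max(20.0, min(100.0, speed))                # clamp to VFD range
    return step

controller = make_fan_controller()
for p in (10.0, 11.0, 12.5, 12.0):                         # simulated readings
    print(f"pressure {p:4.1f} Pa -> fan {controller(p):5.1f} %")
```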

More holistic solutions have been developed in other industries, and we now have an opportunity to step up to a new generation of energy efficiency in our data center cooling. I expect that they will be integrated into DCIM programs to provide an automated energy management system that is more reliable than the “play it safe” manual controls approach still so prevalent in our IT spaces. With the flexibility of PLC controls, it will be easy to incorporate high-value packages to improve our energy management. My favorite among those is Optimum Energy (OE), a package that optimizes HVAC performance through an innovative system design and a series of optimization algorithms and approaches.

Optimum Energy (www.optimumenergyco.com) utilizes algorithms developed by Ph.D. engineers specializing in central plant operations with chillers, boilers, generators, and the like, and it holds exclusive rights to them. The algorithms consider and combine the performance curves of each piece of equipment in a system. So, the “performance vs. efficiency” equations are defined for the chiller, the primary water pump, the cooling tower fans, and other waterside equipment, along with the same for each CRAC fan, air handler, damper, and similar components. The system communicates with all of the controllers in the data center and allows the equipment to automatically settle into an optimized operating condition. It also determines the most efficient incremental change in equipment operations in response to a change in load or environmental conditions. This is theoretically the most energy-efficient operation possible and will give you the best PUE achievable from an air and water system.
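While OE’s actual models and algorithms are proprietary, the general idea can be sketched: express each component’s power draw as a function of its operating point, then search for the combination that meets the cooling load at minimum total power. In the toy model below, every curve is invented for illustration and stands in for a measured manufacturer’s performance curve.

```python
import itertools

COOLING_LOAD_KW = 800.0   # assumed cooling demand

def chiller_kw(chw_temp_f):          # warmer chilled water -> better chiller COP
    return COOLING_LOAD_KW / (4.5 + 0.08 * (chw_temp_f - 42.0))

def pump_kw(flow_pct):               # pump power ~ cube of flow (affinity laws)
    return 40.0 * (flow_pct / 100.0) ** 3

def crah_fan_kw(chw_temp_f, flow_pct):
    # Warmer water or lower flow forces the airside fans to work harder.
    return 60.0 * (1.0 + 0.04 * (chw_temp_f - 42.0)) * (100.0 / flow_pct) ** 1.5

def total_kw(chw_temp_f, flow_pct):
    return chiller_kw(chw_temp_f) + pump_kw(flow_pct) + crah_fan_kw(chw_temp_f, flow_pct)

# Brute-force search over chilled-water supply temperature and pump flow;
# the real optimization is continuous and runs in real time.
best = min(itertools.product(range(42, 55), range(50, 101, 5)),
           key=lambda point: total_kw(*point))
print(f"optimum: {best[0]} °F CHW supply at {best[1]}% flow, "
      f"{total_kw(*best):.0f} kW total mechanical power")
```

The point of the sketch is the coupling: raising the chilled-water temperature helps the chiller but hurts the fans, and only a model that combines both sets of curves finds the true minimum.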

Glenmount Global and Optimum Energy are already working together to provide the systems and controls expertise required to deliver much-improved total cost of ownership to world-class data centers across the United States. In the next issue of “Zinc Whiskers,” I hope to describe the operations of this holistic solution in detail and to share some of the quantitative results they have achieved.

CRITICAL FACILITIES ROUNDTABLE

CFRT will meet in May 2013 in Silicon Valley to hear presentations by power generation equipment manufacturers, consultants, and operators demonstrating how data centers can be powered by on-site generators, and to consider the merits and challenges of alternative energies for the data center. CFRT is a non-profit organization based in Silicon Valley that is dedicated to the open sharing of information and solutions among our members, who are critical facilities owners and operators. Please visit the website at www.cfroundtable.org or contact us at 415-748-0515 for more information.