Integrating Automated Controls for Data Centers: A BMS/PLC Network Design Approach
Read on to learn how to design a control and monitoring network with a long life.
The controls industry has recognized the increased need for component, equipment, and system interrelatedness, and has moved toward more open communication protocols that ease integration into local and/or distributed facility monitoring and control systems. Moving away from proprietary, vendor-specific communication protocols toward web-based, enterprise-friendly standard protocols such as SOAP, XML, and SNMP allows more flexibility as facilities install intelligent control devices onto existing computerized control systems.
Many of the technological benefits of system integration and interoperability can be realized in critical facilities design. Supporting these protocols begins with developing a clear philosophy and approach to building management system (BMS)/programmable logic controller (PLC) network design that embraces new technology trends as they evolve.
BMS design professionals need to keep abreast of building automation system (BAS) design trends, especially those that include the ability to communicate on a common communication infrastructure, or “structured cable plant.” Early in the design phase of any project, the design team must determine whether independent networks will be specified or whether an engineered structured cable plant will be implemented. Many vendors and owners prefer to have full control over their own network, built to their corporate and operational standards. When this is the case, an engineered structured cable plant can be used to allow several building systems to share one IT infrastructure. Multiple monitoring and control systems that typically serve data centers, such as PLCs, commercial BMS, direct digital panels (DDP), electrical power monitoring systems (EPMS), and fuel oil systems (FOS), can reside on the same network yet operate in a secure, independent manner.
This network design strategy takes advantage of the fact that data communications for many data center systems use the industry-standard Ethernet TCP/IP (IEEE standard 802.3). Designers should consider designing around a copper or fiber-optic 100 Mbps/1 Gbps industrial Ethernet medium, which has many advantages, including increased speed, better interoperability, and easy integration with transmission control protocol/internet protocol (TCP/IP)-based networks. This topology supports critical time-sensitive applications such as electrical event time stamping for the EPMS. The network design should support a minimum “two-tier” configuration, providing independent primary and secondary system data communications.
When considering copper or fiber-optic Ethernet for the primary network (i.e., servers/workstations) during the initial design phases of the BMS, be aware that optical fiber has many advantages over copper. While copper-based communication links are susceptible to electromagnetic (EM) fields and emit EM noise that may interfere with other instrumentation, fiber-optic links are immune to EM fields and do not generate them. This is important because the primary network typically extends throughout the facility. Other advantages of fiber over copper include its low weight, easy field termination and maintenance, and ease of installation due to its short bending radius and better performance over temperature.
Copper Ethernet (minimum Category 6) is best used between local BMS/PLC controllers and BMS network switches, and as the medium to connect equipment such as computer room air handlers (CRAHs) or computer room air conditioners (CRACs) to BMS network switches. Network connections between switches and equipment are typically shorter, reducing exposure to EM noise fields.
Once the BMS network medium(s) are selected, a network topology must be designed to meet the performance and critical operating attributes required by the project. The final BMS network must ultimately be designed to increase overall system reliability, typically through a combination of design methods. The primary method is duplication: duplication of critical hardware components such as servers, network switches, routers, and power supplies at both the network and controller levels, and duplication of the communication network itself, on which all redundant servers, network switches, workstations, and controllers connect and reside.
A redundant Ethernet ring topology (dual ring) meets this criterion. Two independent Ethernet connections from the ring topology are made to each server, network switch, and workstation. Designing the BMS network around a redundant Ethernet ring topology serving critical components increases the reliability of the overall system by providing another means for BMS backup and failsafe operations. The ultimate design goal is to build a fault-tolerant control network.
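A short Python sketch illustrates why a ring topology tolerates any single link failure: with one inter-switch link cut, every switch remains reachable over the surviving arc of the ring. The six-switch size and the helper names are illustrative, not any vendor's API.

```python
from collections import deque

def reachable(adj, start):
    """Breadth-first search: return the set of nodes reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for peer in adj[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen

def ring(n):
    """Adjacency map for n switches cabled in a ring."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

# Six switches in a ring; cut the link between switch 2 and switch 3.
adj = ring(6)
adj[2].discard(3)
adj[3].discard(2)

# Every switch is still reachable over the surviving arc of the ring.
assert reachable(adj, 0) == set(range(6))
```

Cutting any second link, by contrast, would partition the ring, which is why the industrial switches described below must detect a break and re-enable the redundant path within milliseconds.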
Selecting the proper network switch is one of the keys to a successful BMS data center network design. Design engineers should select industrial network switches that can support one or more systems simultaneously without a single point of failure. Select switches capable of being dual powered (A and B). Select industrial-grade switches with features that support a redundant Ethernet ring topology, such as ring coupling, dual homing, IEEE spanning tree and rapid spanning tree protocols (STP, RSTP) over a ring topology, virtual local area networks (VLAN), and quality of service (QoS) based on IEEE Ethernet standards.
Industrial network switches, upon detecting any break in the primary loop, are capable of automatically switching to the redundant link. A redundant Ethernet ring topology using industrial switches can support multiple servers operating in a fully redundant, synchronized manner. One server operates as the primary server; the other as the backup. If a fault occurs in the primary server, the associated industrial network switch will fail over to the backup server, which provides all the functions of the primary server within milliseconds. When the primary server fault has been corrected and the initial primary server restarted, the system should automatically update the primary server and switch back from backup to primary upon completion of the update.
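The fail-over/fail-back sequence above can be sketched as heartbeat supervision. This toy Python model is an assumption for illustration only (the class name, 0.5 s timeout, and resync handling are not any vendor's implementation): the backup takes over when the primary's heartbeat goes stale, and control returns to the primary once it resumes beating.

```python
import time

class ServerPair:
    """Toy model of primary/backup supervision driven by heartbeats.

    TIMEOUT and the fail-back rule are illustrative; real BMS head-end
    software uses vendor-specific synchronization and update logic."""
    TIMEOUT = 0.5  # seconds without a heartbeat before failing over

    def __init__(self):
        self.active = "primary"
        now = time.monotonic()
        self.last_beat = {"primary": now, "backup": now}

    def heartbeat(self, server):
        self.last_beat[server] = time.monotonic()

    def poll(self):
        now = time.monotonic()
        if self.active == "primary" and now - self.last_beat["primary"] > self.TIMEOUT:
            self.active = "backup"    # primary silent: fail over
        elif self.active == "backup" and now - self.last_beat["primary"] <= self.TIMEOUT:
            self.active = "primary"   # primary restored and updated: fail back
        return self.active

pair = ServerPair()
pair.last_beat["primary"] -= 1.0   # simulate a primary fault (stale heartbeat)
assert pair.poll() == "backup"
pair.heartbeat("primary")          # primary restarted and resynchronized
assert pair.poll() == "primary"
```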
Select network switches with VLAN (minimum 802.1Q) and QoS (minimum 802.1p) features that can connect/merge multiple system networks. What does that mean in simple terms? Each system (BMS/PLC, EPMS, or FOS) would be tagged as a “unique” VLAN. Only the members of a VLAN will receive traffic from fellow VLAN members. This means that each system’s controllers will physically connect to the same network but be virtually “separated” or “grouped.”
BMS or PLC = VLAN#1
EPMS = VLAN#2
FOS = VLAN#3
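As a concrete sketch of what VLAN tagging means on the wire, the snippet below builds the 4-byte IEEE 802.1Q tag that a switch inserts into each Ethernet frame: the tag protocol identifier 0x8100, followed by a 3-bit priority code point (PCP, used by 802.1p QoS), a drop-eligible bit, and the 12-bit VLAN ID. The VLAN numbers are the article's example labels, and the `dot1q_tag` helper is hypothetical.

```python
import struct

# VLAN assignments from the example above (labels, not a standard).
VLANS = {"BMS/PLC": 1, "EPMS": 2, "FOS": 3}

def dot1q_tag(vid, pcp=0, dei=0):
    """Return the 4-byte IEEE 802.1Q tag: TPID 0x8100 followed by
    PCP (3 bits), DEI (1 bit), and the 12-bit VLAN ID."""
    assert 0 <= vid < 4096 and 0 <= pcp < 8
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

# FOS frames tagged VLAN 3 at the highest 802.1p priority:
assert dot1q_tag(VLANS["FOS"], pcp=7) == b"\x81\x00\xe0\x03"
```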
Once each system is assigned a VLAN number, QoS priorities should be assigned to each. QoS (minimum 802.1p) ensures that the VLAN labeled with the highest priority receives the highest network communication priority. The QoS protocol will most likely sit idle during normal day-to-day network utilization. However, during emergencies when traffic is very high (e.g., users diagnosing problems, alarms being generated, and systems responding to failures), properly configured QoS is crucial: it will manage network traffic in accordance with the assigned priorities. The following is an example of QoS priority settings for an EPMS, BMS/PLC, and FOS network merge:
QoS Priority Settings:
Highest: FOS (labeled VLAN#3 in the example above)
Second highest: BMS/PLC (labeled VLAN#1 in the example above)
Lowest: EPMS (labeled VLAN#2 in the example above)
In this example, the FOS is given the highest QoS priority because the FOS day tanks communicate with the FOS PLC over the Ethernet network. In the event of a utility power failure, generators start and the FOS becomes the most important system in the facility.
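To illustrate how strict-priority QoS drains traffic during congestion, here is a minimal Python sketch of a priority scheduler. The PCP values mirror the example ordering above but are illustrative assumptions, not a standard mapping, and real switches implement this in hardware per queue.

```python
import heapq

# 802.1p priority code points: higher number = higher priority.
# The mapping mirrors the example ordering above (a site design decision).
PCP = {"FOS": 7, "BMS/PLC": 5, "EPMS": 3}

class StrictPriorityQueue:
    """Drain frames highest-PCP-first; FIFO within a priority class."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, system, frame):
        # Negate PCP so heapq's min-heap pops the highest priority first;
        # the sequence number preserves arrival order within a class.
        heapq.heappush(self._heap, (-PCP[system], self._seq, system, frame))
        self._seq += 1

    def dequeue(self):
        _, _, system, frame = heapq.heappop(self._heap)
        return system, frame

q = StrictPriorityQueue()
q.enqueue("EPMS", "event-log")
q.enqueue("FOS", "day-tank-level")
q.enqueue("BMS/PLC", "alarm")

# Under congestion, the day-tank message goes out first.
assert [q.dequeue()[0] for _ in range(3)] == ["FOS", "BMS/PLC", "EPMS"]
```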
Designing with the proper industrial network switches provides the structure to support redundant controllers and control servers, independent historian servers, and workstations. It is important that network management begin with the initial network setup and that maintenance continue periodically thereafter.
Once the network is set up (dual network, industrial switches, multiple servers, and dual power), it provides a foundation for the active and passive controller techniques typically used in data center control design.
Site network communication coordination is a challenging yet essential task when designing facility monitoring and control systems serving critical facilities. The need for project coordination becomes immediately apparent when optimizing multiple systems to function in a holistic manner. Through coordination efforts, designers should seek to maximize available resources. Along with the traditional coordination of electrical power for control and monitoring equipment, coordination efforts should also:
Make sure that all discussions and decisions are formally documented for all active parties to read and understand.
Determine the driving factors and, through group discussion, develop realistic network goals. For instance, system network design for data centers does not correlate directly to energy reduction; however, energy reduction control applications and energy monitoring, using a dedicated energy historian server and data center infrastructure management (DCIM) software, can be applied to a data center project.
Where an existing IT infrastructure will be used, coordinate all data drops for each BMS/PLC controller. Provide floor plans to the IT department indicating all equipment locations. This ensures that network connectivity is allocated for each device.
Data center projects often require that information from multiple manufacturers be visually displayed in a similar manner. This allows facility personnel to quickly familiarize themselves with different systems, because the information appears in a format they recognize. Always coordinate how different systems will be displayed, and always request graphical user interface (GUI) screen samples in the contract specifications. Ultimately, all controllers, the areas/spaces they serve, and the final network design will be graphically displayed.
The use of programmable logic controllers to monitor and control mission critical equipment has become the de facto standard. However, a new generation of commercial BMS controllers is being manufactured with a more compact, modular approach similar to PLCs. It is becoming possible to configure these newer commercial BMS controllers for applications requiring continuous operation through any single failure event. These controllers can be arranged and programmed to hold the “last commanded state” or to switch to a secondary master controller upon failure of the primary controller. This has become possible because commercial controllers are now “Ethernet-ready” and can attach to industrial network switches. One manufacturer has even added the ability to add a second power supply.
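The hold-last-commanded-state behavior can be sketched in a few lines of Python. This is a toy model, not a controller API: the class name, the percent-open point, and the source labels are illustrative. The key property is that loss of communication changes nothing at the output until a secondary master (or the restored primary) writes again.

```python
class ControllerOutput:
    """Toy model of a controller output point that holds its last
    commanded state when communication to the master is lost."""

    def __init__(self, initial):
        self.value = initial
        self.source = "primary"

    def update(self, value, source="primary"):
        """Apply a fresh command from the active master."""
        self.value = value
        self.source = source

    def comm_lost(self):
        """On loss of communication, make no change: hold last state."""
        return self.value

valve = ControllerOutput(initial=40)   # percent open (illustrative)
valve.update(55)
assert valve.comm_lost() == 55         # holds last commanded state
valve.update(60, source="secondary")   # secondary master takes over
assert (valve.value, valve.source) == (60, "secondary")
```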
Designers should consider a hybrid design utilizing both commercial BMS and industrial PLC-based control equipment. This works well when industrial PLCs are employed solely for areas serving the white space and commercial controllers are used for all back-of-the-house and administrative areas.
Commercial BMS and industrial PLC-based control equipment can be physically separated by industrial-grade managed network switches, operating independently from each other, such that if communication to the back-of-the-house BMS controls is lost, the PLC controllers’ resident logic will continue to control white space equipment (central plant, data hall areas, etc.). All PLC controls serving white space equipment should be powered by two independent electrical power sources. Although separate, the back-of-the-house BMS system and the PLC systems should be interconnected for command and control.
In preparing to design a BMS/PLC network serving data centers, consider the power of selecting equipment that facilitates systems integration. Always think in terms of designing a control and monitoring network with a long life. The network communication infrastructure must be able to take advantage of cutting-edge technology. Since many facilities must remain alert to productivity and operating costs, the final BMS/PLC network design must provide value both now and in the future.