Five Things You Never Want to See an IT Administrator Do

1. Introduce liquids into the data center (in canned or bottled form). Liquids can damage equipment, but cans and bottles can cause problems even without being opened. A six-pack, twelve-pack, or full case of any beverage in the data center usually means someone intends to keep drinks cold in there, and usually the culprit will pull up a raised floor tile to do it. These simple actions are not dangerous in and of themselves; the problem is that someone is not considering the physics of raised-floor air distribution. It is tempting to assume that stashing a twelve-pack (this goes for holiday decorations too) under the tiles won’t cause a cooling problem as long as the CRAC is running and kicking out cold air, but altering the environment under a raised floor alters the airflow, and that can have a significant impact on cooling performance. There are much better (and easier) places to chill employee beverages than under a very expensive and highly engineered raised floor.

2. Attempt to solve problems with a floor tile puller. There is absolutely nothing wrong with an IT person using a floor tile puller; most need routine access to cabling and J-boxes to do their jobs. The issue here is one of education. IT personnel need to understand that the pressurization and airflow of a raised-floor “system” are vital to maintaining the performance envelope of the cooling system. Airflow is unpredictable, and the cause of even a slight change to its path can take weeks to identify and correct. There are also many other sensors under the floor (rope leak detectors, smoke detectors, etc.) that are easy to disturb. Every time a leak detector is inadvertently triggered, a poorly terminated J-box is disturbed, or a data cable is dead-ended, a facilities engineer or contractor may wind up digging around for an hour or two to fix it. The operational pain is one thing, but the unscheduled service call just cost someone real money.

3. Place a fan in the hot aisle. Putting a fan in the data center occasionally solves a local problem, but more often it does nothing but create inefficiencies. Hot aisles are specifically designed to keep hot and cold air from mixing, improving the predictability and efficiency of the cooling system. Cooling the hot aisle actually reduces the capacity and efficiency of the CRAC, because a CRAC removes heat in proportion to how much warmer its return air is than its supply air; mix cooler air into the return and it removes less heat. IT administrators need to look at the physical infrastructure and the network environment as a whole. The reality here is that introducing a fan into the data center disturbs the airflow, which affects the entire system.
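
Why cooling the hot aisle backfires is easy to see with numbers. As a rough illustration only (the airflow and temperature figures below are invented, and the 1.08 factor is the standard sea-level sensible-heat approximation, Q ≈ 1.08 × CFM × ΔT), a few lines of Python show how much CRAC capacity disappears when a fan mixes hot-aisle exhaust back into the room before it returns to the unit:

```python
# A rough, illustrative sketch (not a sizing tool) of why cooling the hot
# aisle hurts. Uses the standard sensible-heat approximation
# Q [BTU/hr] ~= 1.08 * CFM * delta_T [deg F]; all figures are hypothetical.

CRAC_AIRFLOW_CFM = 12_000      # assumed airflow through one CRAC unit
SUPPLY_TEMP_F = 60.0           # assumed supply (cold aisle) temperature

def sensible_capacity_btu_hr(return_temp_f: float) -> float:
    """Heat the CRAC can remove at a given return-air temperature."""
    delta_t = return_temp_f - SUPPLY_TEMP_F
    return 1.08 * CRAC_AIRFLOW_CFM * delta_t

# Hot aisle left alone: the CRAC sees hot return air.
contained = sensible_capacity_btu_hr(return_temp_f=95.0)

# A fan mixes hot-aisle exhaust into the room: the return air is cooler.
mixed = sensible_capacity_btu_hr(return_temp_f=80.0)

print(f"Return at 95 F: {contained:,.0f} BTU/hr of sensible capacity")
print(f"Return at 80 F: {mixed:,.0f} BTU/hr of sensible capacity")
print(f"Capacity lost to mixing: {100 * (1 - mixed / contained):.0f}%")
```

With these assumed numbers, knocking 15 degrees off the return air costs the CRAC roughly 40 percent of its sensible capacity while the servers are producing just as much heat as before.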

4. Plug things into a rack PDU (extra outlets do not mean extra capacity). It’s time to worry when an IT admin asks, “How come there are so many outlets, and what are the funny looking outlets on this one for?” Overloading one phase of a three-phase rack PDU can make an entire server room go very dark and very quiet, very quickly, by tripping the upstream breaker in the room PDU. Human error, especially in smaller organizations, often leads to an unbalanced phase condition that can trip breakers. These human error nightmares are easy to address through education: how many IT administrators know that a C13 connection can be carrying either 120 or 208 volts? More downtime can be attributed to someone opening the wrong breaker or plugging into the wrong outlet than to almost any other human activity in a data center. Operational staff are often better versed in the dangers of arc flash and the risks in the battery room than in the different rack PDUs, outlet types, and phase balance.
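
As a hedged illustration, a few lines of Python are enough to sanity-check where a new device lands before plugging it in. The outlet-to-phase mapping, breaker rating, and load figures below are all hypothetical; real rack PDUs document their own phase assignments.

```python
# Hypothetical pre-check before adding one more server to a three-phase
# rack PDU. Breaker rating, existing loads, and phase assignment are
# illustrative numbers only.

BREAKER_RATING_A = 30          # assumed per-phase branch breaker
CONTINUOUS_DERATE = 0.8        # plan to stay at or below 80% of the rating

# Current already drawn on each phase, the new device's draw, and the
# phase its chosen outlet happens to land on.
phase_load_a = {"L1": 18.5, "L2": 12.0, "L3": 11.0}
new_device_a = 6.0
new_device_phase = "L1"

phase_load_a[new_device_phase] += new_device_a

limit = BREAKER_RATING_A * CONTINUOUS_DERATE
for phase, amps in phase_load_a.items():
    status = "OK" if amps <= limit else "OVER LIMIT, risks a trip"
    print(f"{phase}: {amps:.1f} A of {limit:.1f} A planned -> {status}")

imbalance = max(phase_load_a.values()) - min(phase_load_a.values())
print(f"Phase imbalance: {imbalance:.1f} A (smaller is better)")
```

In this made-up case, moving the new device to an outlet fed from L3 instead of L1 keeps every phase under the planning limit and shrinks the imbalance, which is exactly the kind of choice an IT admin makes without realizing it.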

5. Try to figure it out alone. There are facilities and IT resources for a reason! A rigorous set of procedures, fully documented test cases, and a completely thought-out and engineered project to move a row of equipment can come to a crashing halt if the people who care for the power, cooling, protection, and capacity management of the data center are not involved. The data center is a system, and all of its subsystems must be coordinated. IT administrators, facility engineers, server admins, mechanical systems operators, network technicians, and generator mechanics (the list goes on) need to work together to manage the hardware, software, and support of a data center environment.

Five Things All IT Staff Should Know

1. Data Centers are engineered assets. Changes affect performance.

Q. “We’ve set up a hot aisle/cold aisle arrangement, plugged up holes in the raised floor, and bought three new blade chassis, but only put them in one rack. How could my entire data center be experiencing heat problems?”

A. Easy. The data center was likely designed for a static load, with some overhead for growth. What was that design load? Without knowing this information up front, the probability of cooling issues is very high. Because most existing data centers are built to accommodate a specific load (expressed in watts per square foot or, more commonly today, watts per rack), any change to the load must be carefully examined. In this example, the data center could have exceeded a traditional build metric by a factor of 20. It is likely not that bad everywhere; however, relocating racks, even into an optimized layout such as hot aisle/cold aisle, changes air distribution patterns, especially if the crew left that case of soda under the raised floor.
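
To make the arithmetic concrete, here is a minimal Python sketch that converts a legacy watts-per-square-foot design metric into a per-rack budget and compares it with three fully loaded blade chassis concentrated in one rack. Every figure is invented for illustration; plug in your own facility’s numbers.

```python
# Illustrative arithmetic only; the design metric, floor area per rack,
# and blade chassis draw below are hypothetical examples.

# Legacy design metric converted to a per-rack budget using the floor
# area attributed to each rack (rack footprint plus its share of aisle).
design_w_per_sqft = 40.0
sqft_per_rack = 25.0
design_w_per_rack = design_w_per_sqft * sqft_per_rack     # 1,000 W per rack

# Three fully populated blade chassis landing in a single rack.
chassis_count = 3
watts_per_chassis = 6_000.0
actual_w_in_rack = chassis_count * watts_per_chassis      # 18,000 W

print(f"Design budget for the rack: {design_w_per_rack:,.0f} W")
print(f"Actual load in the rack:    {actual_w_in_rack:,.0f} W")
print(f"Exceeds the design metric by a factor of "
      f"{actual_w_in_rack / design_w_per_rack:.0f}")
```

With these assumed numbers, the single rack overshoots its budget by a factor of 18, the same order of magnitude as the factor of 20 cited above.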

2. Ohm’s Law (E = I × R). A deep understanding of the physics behind basic electricity is not necessary for IT staff, but knowing the interrelationship of voltage, resistance, and current can make even the most vexing power issues easier to understand. While everyone is concerned about heat in the data center because it shortens server lifecycles and can cause outages, many IT administrators do not know why the facilities manager is pointing a thermal scan gun at the open panel board looking for heat. Why look for heat in a panel board, and what does that have to do with Ohm’s Law? Heat (actually a spike or difference in heat among terminations) is generated when a termination is not secure: the loose connection adds resistance, and current flowing through that added resistance is dissipated as heat. What most IT staff do not consider is the knock-on effect: because server power supplies draw roughly constant power, the voltage delivered downstream sags while the current drawn creeps up. If IT notices PDU voltage creeping down and current sneaking up, that is a great opportunity to ask a facilities engineer to scan the connections or to schedule a maintenance window.
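
Here is a minimal worked example of that reasoning in Python. The contact resistances and load figures are invented purely for illustration; the point is the shape of the relationship, not the particular numbers.

```python
# Worked example of E = I * R and P = I^2 * R at a panel termination.
# All resistance and load values are hypothetical.

def drop_and_heat(current_a: float, contact_resistance_ohm: float):
    """Voltage drop (E = I * R) and heat (P = I^2 * R) at one termination."""
    voltage_drop = current_a * contact_resistance_ohm
    heat_watts = current_a ** 2 * contact_resistance_ohm
    return voltage_drop, heat_watts

LOAD_CURRENT_A = 100.0

for label, r_ohm in [("tight lug", 0.0002), ("loose lug", 0.01)]:
    drop, heat = drop_and_heat(LOAD_CURRENT_A, r_ohm)
    print(f"{label}: {drop:.2f} V dropped, {heat:.1f} W of heat at the connection")

# Why "voltage down, current up" go together: server power supplies draw
# roughly constant power, so I = P / V rises as the delivered voltage sags.
LOAD_POWER_W = 20_800.0
for volts in (208.0, 206.0, 204.0):
    print(f"At {volts:.0f} V the same load draws {LOAD_POWER_W / volts:.1f} A")
```

The loose lug in this example dissipates fifty times the heat of the tight one, which is exactly the hot spot the thermal scan is hunting for.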

3. How the power makes it from the pole to the rack. “Three phase, single phase, what is the difference?” If IT understood what happens to the power coming into the building, where it is distributed, and what voltage it is at as it makes its way to the UPS, PDU, and rack, the conversation around growing data center capacity would be easier. Besides what is going into the rack, the extra heat load must be considered; if the existing cooling is inadequate, more power must be allocated for additional CRAC capacity (and then there is chiller load, but that is a different topic). Many people look at the UPS load, compare it with the nameplate ratings of the UPS and the new servers, and make some calculations. What about PDU capacity? Is there enough left to pull another branch circuit to the new zone? Are there enough pole positions left for the type of breaker you need? It is important for data center operators to have at least a basic understanding of electrical distribution in the data center.
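
The sketch below shows the kind of pre-flight check those questions imply, in Python, with every capacity and load figure assumed for illustration. It also notes that the same kilowatts the new gear draws will show up again as cooling load.

```python
# Rough pre-flight check before adding servers. Every figure below is a
# hypothetical example, not a measurement from a real installation.

UPS_CAPACITY_KW = 80.0
UPS_CURRENT_LOAD_KW = 52.0

PDU_BRANCH_CAPACITY_KW = 15.0
PDU_BRANCH_LOAD_KW = 9.5

NEW_SERVERS_KW = 4.2            # nameplate-derived estimate for the new gear

def headroom_kw(capacity_kw: float, load_kw: float, planning_limit: float = 0.8) -> float:
    """Usable headroom while keeping the load at or below the planning limit."""
    return capacity_kw * planning_limit - load_kw

checks = {
    "UPS": headroom_kw(UPS_CAPACITY_KW, UPS_CURRENT_LOAD_KW),
    "PDU branch": headroom_kw(PDU_BRANCH_CAPACITY_KW, PDU_BRANCH_LOAD_KW),
}

for name, room_kw in checks.items():
    verdict = "fits" if room_kw >= NEW_SERVERS_KW else "does NOT fit"
    print(f"{name}: {room_kw:.1f} kW of headroom -> new load {verdict}")

# Every watt the new servers draw leaves the rack as heat the CRACs must
# absorb, so the same figure goes to facilities as added cooling load.
print(f"Added cooling load to plan for: roughly {NEW_SERVERS_KW:.1f} kW")
```

In this made-up case the UPS has room to spare but the PDU branch does not, which is exactly the gap that looking only at UPS nameplate numbers would miss.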

4. Know where your energy goes. Ask your facilities manager. Just as no IT person would start removing networking equipment, patch cords, or servers without understanding the impact on the network, it comes as a great surprise that many IT people still unplug servers or disrupt air distribution systems without understanding what that does to the whole system. The reasons some IT personnel take things into their own hands vary; a lack of resources or a limited relationship with the facilities team are common culprits. Regardless of why communication isn’t happening, most facilities staff would much rather be consulted about the direction a project is taking than react to and repair the problems that arise from it. There are likely strong reasons why supplying a power feed to a new storage array is delayed; there may be limited power or cooling to the building, not just the data center. Opening the lines of communication can bring the project to a solution faster than that speedy new storage array can.

5. You can get killed in here! Low voltage is a funny thing: when you come between it and ground potential, it doesn’t seem so low. In all seriousness, the number of very dangerous 480 Vac and 500 Vdc systems in an average data center and its mechanical rooms must be acknowledged, and IT staff and everyone else must always remain vigilant for the dangers in today’s data center.