Seattle-based TeleCommunication Systems (TCS) saw plans for its new data center run into a wall when the local utility company said the grid didn’t have enough available power to support the expansion. But in this case, it seemed running into a wall was exactly what TCS needed.

Utilizing a “cooling wall” introduced by the design-build firm McKinstry, TCS’ new data center not only overcame the problem of limited power availability, it also became one of the most efficient data centers in the Pacific Northwest. Built around an evaporative-only cooling wall and airflow containment provided by Chatsworth Products (CPI) Passive Cooling® Solutions, the TCS data center expansion went on to earn two ASHRAE awards — one national and one regional — and an average PUE of 1.15.

Challenge

If you call 911 from a cell phone located anywhere in the U.S., the signal is likely to pass through TCS’ data center before it ever reaches emergency personnel. If you get lost, TCS gets you home by managing navigation systems for some of the nation’s largest cell phone service providers. These aren’t responsibilities that can be taken lightly … they are literally a matter of life and death.

“The Seattle switch sends the information to us and then we look you up, see where you are and provide your location,” said Jeri Ginnaty, manager, data facilities group, TCS. “Then it takes the data from your phone and the information we compiled and calculated, and sends it to the public safety answering point. They would put those two together and send the emergency responder to your location. For all of that to happen it takes about 600 milliseconds.”

As cell phone use rises, so does the need for these critical services. TCS has seen that demand grow continually in its legacy data center, and in 2009 the company decided it was time to expand. Keeping the legacy data center — an open-rack design using seismic frames arranged in hot and cold aisles — in operation while adding a second data center in the same building would mean faster service, the ability to serve more customers and a new testing lab for software development. But with the legacy data center and the proposed expansion both relying on CRAC units for cooling, it would also mean the use of more power … a lot more power.

“We found out we had absorbed every spare ounce of energy the building had available,” said Stephen Walgren, data facility engineer, TCS. “For us to grow we would have to go to Seattle City Light and have them put in a new primary feed.”

Limited power availability was only half the problem. A data center expansion using traditional mechanical cooling would only allow about 30% of the building’s power supply to be used for the servers. And with cooling units siphoning off nearly 70% of the building’s energy, that left little room to increase computing capacity or plan for other expansions in the future.

“We were driven by two reasons,” said Ginnaty. “One was the amount of power available to us in the building and the other was the cost of maintaining the data center. To do it the old way, we wouldn’t have had enough power to make it efficient to build. We couldn’t run all the servers. So we looked at less energy to cool this system and now we have over 70% of our power available for the servers.”
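For readers unfamiliar with the metric, the link between these power percentages and the PUE figure cited earlier is simple arithmetic: PUE is total facility power divided by IT equipment power. The short Python sketch below uses illustrative round numbers, not TCS’ actual meter readings, to show how the share of power reaching the servers maps to a PUE value.

    # Illustrative PUE arithmetic -- round numbers, not actual TCS meter data.
    # PUE (Power Usage Effectiveness) = total facility power / IT equipment power.

    def pue(total_kw: float, it_kw: float) -> float:
        """Power Usage Effectiveness: total facility power over IT power."""
        return total_kw / it_kw

    # A facility where only ~30% of incoming power reaches the servers:
    print(round(pue(total_kw=100.0, it_kw=30.0), 2))   # 3.33

    # A facility where ~70% of incoming power reaches the servers:
    print(round(pue(total_kw=100.0, it_kw=70.0), 2))   # 1.43

    # TCS' reported average PUE of 1.15 implies that roughly 1 / 1.15,
    # or about 87%, of facility power goes to the IT load:
    print(round(1 / 1.15, 2))                           # 0.87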

A Different Approach to Cooling

TCS needed a cooling strategy that was ambitious, innovative and, above all, incredibly efficient. McKinstry, which operates across the U.S. but is headquartered in Seattle, was up to that challenge and joined the project in 2010 with a brave new approach. Rather than find a way to accommodate the load that CRACs would place on TCS’ limited power supply, McKinstry opted for a design that not only eliminated CRACs, it eliminated mechanical cooling altogether.

Recognizing that Seattle has a relatively mild climate, Kyle Victor, project manager, McKinstry, and Joe Wolf, operations manager, McKinstry, worked with primary designers Jeff Sloan and Alireza Sadigh to set aside one wall in the data center to become the “cooling wall.” Acting as the connecting point between the data center and a controlled flow of the Pacific Northwest’s signature cool air, this “cooling wall” resulted in a system that uses 100% evaporative airside economization. Optimized by airflow containment within CPI’s F-Series TeraFrame® Cabinet System with Vertical Exhaust Duct and carefully monitored air pressure levels, McKinstry’s design delivered a precise and effective answer to TCS’ efficiency needs.

“CPI’s cabinets are totally passive and that’s very efficient,” said Ginnaty. “All the work is being done by the servers and the room is maintained at the supply temperature we’re feeding the servers.”

Because Seattle’s weather can be unpredictable at times, there were concerns early on that a few warm summer days could heat the data center to dangerous levels and raise humidity. However, when weighed against the servers’ equipment specifications and a proposed temperature setpoint of 71°F to 73°F (21°C to 23°C), Seattle’s temperature fluctuations were deemed manageable and unlikely to disrupt the cooling system.

“We do allow the temperature set point to reset to a max of 80˚F (27˚C),” said Victor. “There’s potential for a 100˚F (38˚C) day in Seattle with high humidity and the temperature in here might get up to 80˚F (27˚C) with some humidity. We discussed early in the project that this would be a possibility. But after looking at equipment specs we see they’re designed with those tolerances, and a lot of equipment is designed to go higher.”
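As a rough illustration of the reset behavior Victor describes, the hypothetical Python sketch below clamps a supply-air setpoint between the normal 71°F to 73°F band and the 80°F maximum as outdoor temperature climbs. The outdoor-temperature breakpoints and the linear reset are assumptions made for the example; they are not McKinstry’s actual control sequence.

    # Hypothetical supply-air setpoint reset, for illustration only.
    # The 71-73F band and 80F ceiling come from the case study; the outdoor
    # temperature breakpoints and linear reset below are assumptions.

    def supply_setpoint_f(outdoor_f: float) -> float:
        """Return a supply-air setpoint that rises with outdoor temperature."""
        normal_max = 73.0    # top of the normal 71-73F band
        reset_max = 80.0     # maximum allowed setpoint
        reset_start = 85.0   # assumed outdoor temp where the reset begins
        reset_end = 100.0    # assumed outdoor temp where the setpoint peaks

        if outdoor_f <= reset_start:
            return normal_max
        if outdoor_f >= reset_end:
            return reset_max
        # Linear interpolation between the two breakpoints.
        fraction = (outdoor_f - reset_start) / (reset_end - reset_start)
        return normal_max + fraction * (reset_max - normal_max)

    print(supply_setpoint_f(70.0))    # 73.0 on a mild Seattle day
    print(supply_setpoint_f(100.0))   # 80.0 on a rare 100F day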

While CPI’s cabinets and vertical exhaust ducts keep hot air isolated from the room by passively directing it into the plenum, the evaporative cooling system’s exhaust fans still require energy to keep the room supplied with cool air. To keep that fan energy to a minimum, the entire system is maintained at a pressure difference of just 1/100 of an inch between the room, the building and the outside. This tightly monitored pressure level is fully automated, allowing each fan to adjust whenever servers in the data center are added or removed.

“You put in a new server that wasn’t in here previously and it draws 100 CFM (cubic feet per minute), the pressure transducers that control the fans can sense that,” explained Victor. “They’ll adjust their speed by exactly that amount, automatically, to compensate for that additional 100 CFM requirement. So you’re never using more fan energy than you need to be using for that exact load.”
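A minimal sketch of this kind of pressure-driven fan control is shown below. It is a generic proportional trim loop written for illustration, assuming the 1/100 of an inch differential-pressure setpoint described above along with an invented gain and starting speed; it is not the actual building-automation logic McKinstry deployed.

    # Generic pressure-based fan speed trim, for illustration only.
    # The room is held at a small differential pressure (the case study cites
    # 1/100 of an inch); when new equipment draws more air, the measured
    # pressure dips and the fan speed is nudged up to compensate.
    # The setpoint comes from the case study; the gain and speeds are assumed.

    SETPOINT_IN = 0.01   # target differential pressure, in inches
    GAIN = 500.0         # assumed % of fan speed per inch of pressure error

    def trim_fan_speed(speed_pct: float, measured_in: float) -> float:
        """Nudge fan speed toward the pressure setpoint (proportional trim)."""
        error = SETPOINT_IN - measured_in   # positive when pressure is low
        return min(100.0, max(0.0, speed_pct + GAIN * error))

    # Example: an added server pulls extra air, the pressure dips to 0.008 in.,
    # and the fans speed up slightly to restore the setpoint.
    speed = 40.0
    speed = trim_fan_speed(speed, measured_in=0.008)
    print(round(speed, 1))   # 41.0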

The primary elements of the cooling system were now set and expected to bring TCS’ energy consumption well below the levels seen with typical mechanical cooling, but one final piece of the system still remained — how to deploy IT equipment without exhaust air having an impact on the room.

Cooling with Cabinets

To meet TCS’ telecommunications needs and expansion plans, the new data center’s core IT infrastructure demanded a solution that included well-engineered equipment cabinets and flexible cabling. Such a precise cooling system also needed cabinets that satisfied equally critical efficiency and airflow containment requirements. Rather than merely occupy the data center’s white space, these cabinets would need to be incorporated into the room’s design as a primary element of both the IT infrastructure and the cooling system.

Wolf had already seen an example of this approach while touring through King County’s data center, also located in Seattle. King County used a slightly different approach to evaporative airside cooling, but the finished data center still experienced dramatic savings by deploying CPI Passive Cooling within rows of F-Series TeraFrame Cabinets with Vertical Exhaust Duct. This system worked by passively removing hot exhaust air out of the cabinets and into the overhead plenum, leaving it completely contained and separate from the room’s cool supply air.

“That was one of the first places I saw the (ducted) cabinets,” said Wolf. “I talk with Casey Scott, Northwest Regional sales manager, CPI, pretty frequently and he’s always keeping me up to speed on what’s new and what’s innovative.”

Known throughout the industry as the pioneer that first introduced passive cooling into data center environments, CPI and its ducted cabinets were the obvious choice for King County’s data center — they could remove the heat, isolate it from supply air and help the airside system maintain efficient pressure levels. Further optimized with a comprehensive sealing strategy that closes empty rack-mount unit (RMU) space, cable openings and the bottoms of cabinets, this cabinet-level approach to airflow containment was also what the TCS/McKinstry design needed.

“In order to control these fans, which are based on very fine pressure differentials, these cabinets had to be totally sealed,” said Victor. “We wanted to have everything sealed except the opening at the server that’s pulling the air through.”

When deployed as a total thermal solution, CPI Passive Cooling has the proven ability to eliminate hot spots, allow higher set points on cooling equipment and reduce a data center’s total energy costs by up to 40%. For King County the use of CPI Passive Cooling had already resulted in an average PUE of 1.5 — but for TCS the efficiency potential was even greater.

Connections and Collisions

A solid plan to save energy was in place but one of the data center’s most critical elements still needed to be deployed — a reliable cabling infrastructure that optimized connectivity, airflow, and convenience. Specifically, TCS needed a cabinet and cable management solution that easily accommodated cable changes, and a design that deployed across the data center without conflicting with factors like seismic protection, fire suppression and airflow.

“We had huge collision checks,” said Wolf. “It’s tight, from sprinkler heads to smoke detectors to lights to up above the ceiling to the conduits … everything is really crammed in there.”

The final design included a slab floor and cabinets on ISO-Bases, which would require an overhead cabling approach flexible enough to move at least eight inches in either direction during a seismic event. This also tied into how the vertical exhaust ducts would meet the plenum. Rather than penetrate the ceiling tiles, the ducts extended only to the tile surface, allowing them to slide across the ceiling if the cabinets shifted during a seismic event. CPI helped accommodate this design by manufacturing customized top panels and rear doors with grommets that allowed cabling to pass through while airflow remained contained.

“The cabinet needs to be able to move and not yank on anything,” said Wolf. “We didn’t want to run these cords through the center and into the front because then you’re creating a pinch point and something is going to give. So the grommets were custom — basically the tops and backs were custom.”

Extra length would need to be added to many of the cables to accommodate potential shifts during seismic events. However, too much extra cabling in a cabinet could obstruct airflow. CPI solved that problem with its F-Series TeraFrame cabinets, which were 51U in height and deep enough to address cabling concerns by providing ample room between the frame and equipment rails.

“Day to day our cabinets are fairly full of servers,” said Ginnaty. “But on the backside of the cabinet, because of cable management, it’s large enough so you can easily manage all the cables. You’re not blocking any of the airflow. In a smaller cabinet all that would be jam packed across the back and you would have a lot of airflow issues.”

Extra cabling would also be an issue at the data center’s intermediate distribution frames (IDF), which were deployed across the data center on glacier white CPI standard (two-post) racks equipped with Evolution Vertical cable managers, also in glacier white.

“We have lots of space to work with at the IDF and if the cable is a little long there’s a great place to hide it in the IDF,” said Walgren. “It’s been good for us and the cabinets work so well. After touring a number of data centers we were convinced these were the cabinets we wanted. The difference is huge and I can’t say enough good things about that M6 rail set — it’s meaty enough.”

Added Incentive

Seattle City Light had already established that TCS would not be able to expand its data center with a traditional cooling approach. However, Seattle City Light was willing to balance that limitation with an incentive that gave TCS the opportunity to present two separate approaches and compare energy usage for each one. If TCS went with the more efficient approach, the utility company would reward that effort with a cash rebate.

With the new data center now operational for nearly two years, TCS has been carefully documenting its energy usage and is inching closer and closer to reaping those rewards.

“It comes down to measurement and verifying how you’re operating,” said Victor. “What we’re proving is that we’re operating more efficiently than the code-required baseline efficiency, and by how much. We had to build a speculative case that said, ‘Here’s how many kilowatt hours per year the mechanical system would utilize if we just installed a baseline code-compliant mechanical system. And here’s how many kilowatt hours a year we use with this more efficient system.’ Then there’s a delta — we’re using far fewer kilowatt hours a year — so they’re incentivizing those saved kilowatt hours. I think it’s 23 cents per kilowatt hour saved.”
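Taken at face value, the rebate math Victor outlines is straightforward: saved kilowatt hours multiplied by the incentive rate. The sketch below plugs in the estimated annual savings cited later in this case study and the 23-cent rate he mentions; the resulting dollar figure is derived here purely for illustration and is not a number reported by TCS, McKinstry or Seattle City Light.

    # Illustrative rebate arithmetic using figures quoted in this case study.
    # The dollar result is computed here for illustration only; it is not a
    # figure reported by TCS, McKinstry or Seattle City Light.

    saved_kwh_per_year = 513_590   # estimated annual savings cited in the conclusion
    rebate_per_kwh = 0.23          # incentive rate mentioned by Victor

    rebate = saved_kwh_per_year * rebate_per_kwh
    print(f"${rebate:,.2f}")       # roughly $118,000 at those figures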

CPI Passive Cooling is further reducing that load by allowing the server fans to push heat through the vertical exhaust duct and into the plenum without the use of additional energy.

“It runs very efficiently,” said Ginnaty. “The only power being used for air movement is the server fans. And in conjunction with the cabinets isolating and directing the airflow, they are doing all the work. That’s where the biggest savings is. Right now we’re about 70% full and I think it’s about 30 kW for the IT equipment — not including DC, it’s probably 50 kW with that.”

White Cabinets … A Brighter Future

Choosing a cabinet’s color may be a matter of aesthetics for some data centers, but for others that decision revolves around practicality. This is especially true for data centers joining the gradual shift from traditional black cabinets to the bright, clean look of a white cabinet. Because the surface reflects light instead of absorbing it, data center technicians find it easier to see inside the cabinet during equipment changes and maintenance.

“You do anything you can to create light,” said Walgren. “You put dark cabinets in there and the room is going to get darker … especially with having the ISO-bases. The cabinets in the other data center, if you drop a screw in there it’s gone forever. We’ve had screwdrivers that disappeared in there.”

Already leaning toward a full deployment of white cabinets, racks and cable management, the team found its final nudge in the form of yet another reduction in power usage.

“We were able to reduce lighting by 30% by going to white cabinets,” said Ginnaty. “We were going with white because it looks really nice and the other reason is working inside the cabinet. But the room doesn’t require the same amount of light.”

For a design team that worked persistently to uncover every possible avenue toward savings, the choice to use white cabinets helped bring the design one step closer to meeting the efficiency goals set by the local utility.

“There was actually a rebate component to having the white cabinets because it reduced the total lumens required to light the space,” said Victor.

Conclusion

If there is one common thread this data center has seen, it is a persistent commitment to efficiency. Spanning from the early planning phases through today, nearly two years after coming online, TCS, McKinstry and CPI have proven that an efficient data center design can overcome power limitations and drastically reduce energy consumption — in this case by an estimated 513,590 kWh per year. ASHRAE recognized those efforts with a first-place regional technology award in 2011 and second-place honors on the national level in 2013 for Category III – Industrial Facilities or Processes – Existing.

“We’ve seen drastic savings over our other data center,” said Ginnaty. “Right now you can see we’re pushing 72° into the room and it’s 84° inside the back of this cabinet. But you can see none of this warm air is coming into the room. It’s all in the cabinet.”

Standing near the “cooling wall,” Victor adds, “The air velocity is much higher here, but the temperature here is the same as the temperature down there on the far end and you have all these servers in between.”

TCS’ issue of limited power capacity is long gone; the facility now has a total power capacity of 400 kW. And with this efficient design using only about 250 kW of that load, TCS’ data center expansion now faces a very different limitation … floor space.

“We’re looking at bringing in at least 10 more cabinets, possibly 20, in the first quarter,” said Ginnaty. “And our next step would be to fill up this area, which would give us another 40 or 50 cabinets.”