Owners are increasingly comfortable housing several different operations in a single facility. This has created the need for multiple levels of redundancy and different types of cooling systems operating side by side in the same space. The trend is now strong enough that we may soon characterize this as the era of the flexible data center.
Today’s information technology and computing environments fall into four functional categories:
• Enterprise data centers
• Internet data centers
• High-performance/supercomputing facilities
• R&D electronic labs
Each of these facility types houses different kinds of server, storage, and network technologies that operate in different ways. Each type of facility therefore requires unique infrastructure and support systems to achieve its functional objectives, resulting in very distinct design requirements. The major differences between these facility types are seen in the levels of reliability (backup electrical power) and the types of cooling systems (air and liquid) that are designed into their support infrastructure.
Data centers originated in commercial environments such as banking centers and technology back offices. Historically, they have been designed to contain enterprise and volume servers that operate with intermittent processor loads and often experience extreme variations in utilization on a daily basis. This variability is a key reason legacy data centers were so energy inefficient: cooling systems operating at fixed speeds overcooled the space when loads were low. The problem has since been attacked from two directions: 1) installing variable-speed equipment in the HVAC plant and 2) employing server virtualization and cloud architectures to smooth out IT load fluctuations. The newest data centers now operate with more constant, efficient loads.
For these reasons, data centers usually operate at low compute and power densities, ranging from 50 to 250 watts per square foot, or 2 to 10 kilowatts (kW) per rack. The electrical and cooling distribution systems are usually made up of commercial-grade equipment such as 120- and 240-volt electrical systems and computer room air conditioners.
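The two density figures above are linked by the floor area allotted to each rack. As a quick consistency check (the implied per-rack footprint is a derived planning figure, not one stated in the article):

```python
# Convert between kW-per-rack and watts-per-square-foot density figures.
# The result is the floor area each rack must occupy, including its
# share of aisle and service space, for the two figures to agree.

def implied_area_per_rack_ft2(kw_per_rack: float, watts_per_ft2: float) -> float:
    """Implied floor area (ft^2) per rack for the given densities."""
    return kw_per_rack * 1000 / watts_per_ft2

# Low end: 2 kW per rack at 50 W/ft^2
print(implied_area_per_rack_ft2(2, 50))    # 40.0 ft^2 per rack
# High end: 10 kW per rack at 250 W/ft^2
print(implied_area_per_rack_ft2(10, 250))  # 40.0 ft^2 per rack
```

Both ends of the stated range imply roughly the same 40 square feet of gross floor area per rack, which is why the two figures can be quoted interchangeably.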
Enterprise data centers are designed to provide extremely high levels of reliability to ensure the integrity and availability of the vital business information on their servers. Reliabilities, measured in terms of nines (e.g., 99.999 percent), are achieved by deploying “concurrently maintainable” backup power systems with UPS and diesel generators, as well as other fail-safe features. Internet data centers often operate with lower levels of reliability because their network architectures allow them to maintain their data processing by failing over to other data center locations.
On the other hand, supercomputing facilities, also known as high-performance computing centers, originated in somewhat more industrial environments where government and other R&D organizations required massive amounts of computing power to study complex problems. These facilities now house many clusters of high-speed processors and data storage systems connected together by low latency networks and are designed to operate at full throttle everywhere in the space to maximize their computing performance and efficiencies.
HPC centers achieve very high compute densities and efficiencies and minimize their power and operating costs because of the way they are configured. However, their compact nature also results in extremely high power densities requiring industrial-grade power and cooling systems. Higher-voltage 480-volt electrical distribution systems are common in these environments, as are liquid-based cooling systems with high-efficiency refrigerants and water. Power densities of up to 40 kW per rack (1,000 watts per square foot) are achieved with water-cooled air circulation systems.
Recently developed conductive-convective heat removal systems can cool computers with power densities of up to 100 kW per rack. A supercomputing installation using these racks will be installed in a research computing facility in Silicon Valley later this year. A combination of air-cooled and "warm" water-cooling systems often proves to be the most economical solution for removing the heat from supercomputers, at least when evaluated on a total-cost-of-ownership basis.
Another difference in HPC facilities is their backup electrical power systems. Over the years, most government and research organizations have developed policies to invest their money in high-performance computers instead of expensive UPS and generator equipment. It is now common to provide just enough backup battery power to bring the storage devices down safely and preserve data that has already been processed, while the servers are simply allowed to lose power and remain offline until systems are restarted and ready to resume their computations.
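The sizing logic behind this minimal-UPS policy can be sketched with a back-of-the-envelope calculation. The load and shutdown-time figures below are hypothetical illustrations, not values from the article:

```python
# Rough sizing of "ride-through" battery capacity for the policy
# described above: enough UPS energy to carry only the storage load
# through an orderly shutdown, while the compute servers are dropped.
# All input figures here are hypothetical examples.

def ups_energy_kwh(storage_load_kw: float, shutdown_minutes: float,
                   margin: float = 1.25) -> float:
    """Usable battery energy (kWh) needed to ride through a safe
    shutdown, with a margin for battery aging and inverter losses."""
    return storage_load_kw * (shutdown_minutes / 60) * margin

# e.g. 200 kW of storage gear needing 10 minutes to flush and unmount
print(ups_energy_kwh(200, 10))  # roughly 42 kWh of usable battery
```

Compared with backing the entire multi-megawatt compute load on UPS and generators, protecting only the storage tier reduces the battery plant by orders of magnitude, which is the economic rationale for the policy.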
Finally, HPC facilities require unique planning for changes in future technology because supercomputers are improving so quickly in energy efficiency and performance. The newest supercomputers contain multicore central processing units (CPUs) surrounded by arrays of graphics processing units (GPUs) that deliver as much as ten times the computing efficiency of their predecessors, and they are dramatically changing the power densities and heat loads to be removed.
The next generation of supercomputers is expected to contain three-dimensional lattices where processors and electronics will be cooled by liquids flowing through micro-channels in the processor lattice. Operating temperatures will increase dramatically and produce a high-quality source of hot water exhaust capable of heating floor slabs, pre-heating boiler water, and serving other applications. All of these possibilities can be planned for in your design to provide a sustainable environment ready to accommodate these changes for years to come.