Figure 1. Students and lab personnel employ increasingly powerful computers these days, either stationary machines provided by the school or laptops they bring in. All data need to go to the data center so students and personnel can access this information outside of class or the next time they are in class.

The need for high-performance computing is forcing today’s academic laboratories into the 21st century. Once the home of notepads, pencils, calculators, and ruled paper, these laboratories can no longer function without a data center to support today’s high-powered computing needs. At the same time, limited academic budgets cannot tolerate an enterprise-type solution to every need. How are today’s successful academic laboratories meeting their ever-expanding computing needs with limited budgets? Balance, along with a focus on flexibility to accommodate future growth, is the key to success.

Balancing data center availability against the budget and needs of the facility is an art form, not a science. As availability needs increase, the size, complexity, and cost of the data center electrical/mechanical/fire protection infrastructure also increase dramatically. The Uptime Institute provides some often-quoted budget prices for data center infrastructure, using its Tier level definitions and based on redundant UPS capacity: Tier 1 at $11,500/kilowatt (kW), Tier 2 at $12,500/kW, Tier 3 at $25,000/kW, and Tier 4 at $28,000/kW. Taking an enterprise-type approach where everything must be Tier 4 will not fit academic budgets. So, how does academia balance its needs and budget?
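
To put those budget figures in perspective, the back-of-the-envelope sketch below multiplies each $/kW figure by an assumed critical IT load of 150 kW. The load is purely hypothetical; the $/kW numbers are the Uptime Institute figures quoted above.

```python
# Rough infrastructure budget estimate using the often-quoted Uptime Institute
# $/kW figures cited above. The 150 kW critical IT load is a hypothetical example.

TIER_COST_PER_KW = {  # dollars per kW of redundant UPS capacity
    "Tier 1": 11_500,
    "Tier 2": 12_500,
    "Tier 3": 25_000,
    "Tier 4": 28_000,
}

it_load_kw = 150  # assumed critical IT load, for illustration only

for tier, cost_per_kw in TIER_COST_PER_KW.items():
    print(f"{tier}: ${cost_per_kw * it_load_kw:,} for {it_load_kw} kW")
```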

The balancing act requires understanding the various applications needed in today’s academic laboratories. These divide into two groups: higher availability and lower availability. Assigning the applications to the proper group requires input from the academic institution’s IT staff. Often, IT will list modeling, graphics, financials, and similar applications that require high data security or whose data would be costly to recreate after a computer outage as higher availability applications. The higher availability applications are limited in number and are most often supported by a Tier 3 infrastructure, and sometimes by a Tier 4 infrastructure. The lower availability applications are non-critical office functions, such as email and blogs, or batch-type calculations that are automatically backed up on a regular schedule. High-performance computing applications are typically considered lower availability applications. These lower availability applications are most often supported by a Tier 1 or Tier 2 infrastructure, but sometimes are placed on utility power without UPS or generator support.
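
As a purely illustrative sketch of that grouping exercise (the real assignments must come from the institution’s IT staff), the snippet below maps the example applications from this discussion to availability groups and plausible tier levels. Every assignment shown is an assumption, not a recommendation.

```python
# Illustrative only: actual assignments must come from the institution's IT staff.
# Applications follow the examples in the text; the tier mappings are assumptions.

HIGHER_AVAILABILITY = {          # high data security or costly-to-recreate data
    "modeling": "Tier 3",
    "graphics": "Tier 3",
    "financials": "Tier 4",
}

LOWER_AVAILABILITY = {           # automatically backed up or non-critical
    "email": "Tier 1",
    "blogs": "Tier 1",
    "batch calculations": "Tier 2",
    "high-performance computing": "utility power",  # sometimes no UPS or generator
}

def assigned_infrastructure(application: str) -> str:
    """Return the infrastructure level an application has been assigned to."""
    return (HIGHER_AVAILABILITY.get(application)
            or LOWER_AVAILABILITY.get(application, "unassigned"))

print(assigned_infrastructure("financials"))                  # Tier 4
print(assigned_infrastructure("high-performance computing"))  # utility power
```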

Complicating the balancing act is the trend for students and staff to employ ever more powerful computers, whether desktops or laptops. This increased computing power, plus the added applications, is forcing the applications out of a server closet in an unused corner of the laboratory and into a real data center. At the same time, users are clamoring for 24 hours/day, 7 days/week, 365 days/year (24/7/365) access to the applications, and educational institutions are moving away from the traditional agrarian schedule of 8 a.m. until 5 p.m., Monday through Friday, nine months/year to make more efficient use of the relatively expensive laboratory and data center facilities.

In addition, the popular use of virtual-classroom and distance-learning programs encourages full-time use of facilities. The drive toward 24/7/365 access pushes applications toward higher and more expensive tier levels, further complicating the balancing act.

Syska Hennessy sees a number of institutions installing, or planning to install, supercomputers to handle massive computing needs, including high-speed modeling and graphics for such applications as weather forecasting. Like high-performance computing, these applications back themselves up on a regular schedule and are typically lower availability since, if they fail, the only loss is the calculations performed since the last backup.

Figure 2. The need for high-performance computing has brought today’s academic labs into the 21st century. These labs can no longer function without a data center to support their high-powered computing needs.

Given the pressures and the balancing act required, the designer should focus on providing:

  • Flexible design to accommodate hardware changes over facility life
     
  • Scalable, modular design that can be implemented in stages as needs develop and change
     
  • Design that minimizes the total cost of ownership
     
  • Design that meets sustainability goals
     
  • Design that is integrated into the overall laboratory

An inflexible design is a classic trap into which many designers fall, with their clients paying the price. The first misstep is to focus the design on the initial hardware fit-out, giving no consideration to the generations of future hardware to follow over the 15- to 20-year data center lifespan. Designers should make provisions for future changes and not put handcuffs on the design. For example, a second input circuit breaker included for a power distribution unit (PDU) supplying the computer equipment provides the means to move that PDU to a new upstream electrical system in the future without a computer outage. This will permit the now-obsolete original UPS system to be replaced in the future without losing the data center function.

Scalable, modular designs are all the rage these days, under the name plug and play. The concept is to grow the expensive data center capacity as it is needed, when it is needed, without spending capital dollars prematurely.

It sounds easier than it is. One caveat that many want to ignore is that the more scalable and modular the design, the more it will cost when completely built out. Another caveat is that the designer should master plan the ultimate design and include in the initial construction those items for future phases (e.g., under-slab conduits) that would be disruptive or risky to add later.
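
The first caveat can be illustrated with rough, entirely hypothetical numbers: a phased build-out typically carries a premium at full capacity, but defers most of the capital until it is actually needed.

```python
# Entirely hypothetical figures illustrating the scalability caveat: building out
# in phases costs more in total, but defers most of the capital until needed.

single_phase_cost = 6_000_000                    # assumed cost to build full capacity on day one
modular_phase_costs = [2_000_000, 1_800_000,     # assumed cost of each 25%-capacity module
                       1_800_000, 1_600_000]

modular_total = sum(modular_phase_costs)            # 7,200,000 fully built out
premium = modular_total - single_phase_cost         # 1,200,000 premium for modularity
deferred = modular_total - modular_phase_costs[0]   # 5,200,000 not spent on day one

print(f"Modular total at full build-out: ${modular_total:,}")
print(f"Premium over a single-phase build: ${premium:,}")
print(f"Capital deferred past the first phase: ${deferred:,}")
```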

Most owners today focus on total cost of ownership (TCO), within the limitations of their initial construction and design budgets. Academic institutions are no different, since initial one-time capital funds are often more readily available than yearly operating funds. Planning for minimal TCO involves detailed consideration of the initial capital expense (CAPEX) and operating expense (OPEX) of alternative systems to determine which best fits the institution’s particular situation. This places additional burdens on the designer and calls for an accomplished cost estimator.
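
A minimal TCO comparison might look like the sketch below. The CAPEX and OPEX figures and the 15-year horizon are assumptions for illustration; a real study would rely on detailed estimates and, typically, discounted cash flows.

```python
# Minimal, undiscounted TCO comparison. All figures and the 15-year horizon are
# assumptions for illustration; a real study would use detailed cost estimates.

def total_cost_of_ownership(capex: float, annual_opex: float, years: int = 15) -> float:
    """Initial capital expense plus operating expense over the planning horizon."""
    return capex + annual_opex * years

system_a = total_cost_of_ownership(capex=3_500_000, annual_opex=400_000)  # cheaper to build
system_b = total_cost_of_ownership(capex=4_200_000, annual_opex=320_000)  # cheaper to run

print(f"System A TCO over 15 years: ${system_a:,.0f}")  # $9,500,000
print(f"System B TCO over 15 years: ${system_b:,.0f}")  # $9,000,000
```

Under these assumed numbers, the system that is cheaper to build turns out to be the more expensive one to own, which is exactly the kind of result the analysis is meant to surface.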

Academic institutions seem to be in the vanguard of the sustainability movement, and most institutions have specific sustainability goals. Suitable goals should not be set in a vacuum but should be considered in conjunction with the TCO analyses. For example, the type of data center cooling that produces the lowest PUE (power usage effectiveness, the ratio of total facility energy to IT equipment energy) may have a higher TCO than another system with a slightly higher PUE. It is ill-advised to focus a data center design on achieving the lowest possible PUE unless the CAPEX budget can be increased.
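
The trade-off can be sketched with assumed numbers: the IT load, utility rate, cooling CAPEX figures, and PUE values below are hypothetical, chosen only to show how the lower-PUE option can still carry the higher lifetime cost.

```python
# PUE = total facility energy / IT equipment energy. Every number below is an
# assumption, chosen only to show how a lower PUE can still mean a higher TCO.

HOURS_PER_YEAR = 8760
it_load_kw = 150             # assumed IT load
rate_per_kwh = 0.10          # assumed utility rate, $/kWh
years = 15                   # assumed planning horizon

def lifetime_cost(pue: float, cooling_capex: float) -> float:
    """Cooling/electrical CAPEX plus total facility energy cost over the horizon."""
    annual_energy_cost = it_load_kw * pue * HOURS_PER_YEAR * rate_per_kwh
    return cooling_capex + annual_energy_cost * years

low_pue = lifetime_cost(pue=1.3, cooling_capex=2_500_000)     # more elaborate cooling plant
higher_pue = lifetime_cost(pue=1.5, cooling_capex=1_400_000)  # simpler, cheaper plant

print(f"PUE 1.3 system: ${low_pue:,.0f}")     # $5,062,300
print(f"PUE 1.5 system: ${higher_pue:,.0f}")  # $4,356,500
```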

An old saying in the engineering profession is that, oftentimes, if a design looks right, it is right. To look right, the data center must integrate well into the entire facility. It shouldn’t be put in the basement, below the kitchen and between the elevators and the boiler room. It should be accessible from the laboratory so that changes in electrical, mechanical, and IT infrastructure are easily accomplished. It should be accessible for maintenance and repair. And it should be thoughtfully planned through the eyes of the designer who comes along in 15 to 20 years and must upgrade the infrastructure without a computer outage and within a limited budget.

In summary, we first identified the pressures forcing academic institutions to provide 21st century data centers in their laboratories. We then discussed the balancing act that must be performed to determine which level of electrical/mechanical/fire protection infrastructure is provided for the computer applications. We finally discussed the attributes that data center designers should bring to their designs to facilitate the balancing act over the usable life of the data center. We hope that this article will provide some guidance to those designing today’s 21st century data centers for academic laboratories.