Integrating Data Centers Into Academic Laboratories
The need for high-performance computing is forcing today’s academic laboratories into the 21st century. Once the home of notepads, pencils, calculators, and ruled paper, these laboratories can no longer function without a data center to support today’s high-powered computing needs. At the same time, today’s limited academic budgets cannot tolerate an enterprise-type solution to every need. How are today’s successful academic laboratories meeting their ever-expanding computing needs on limited academic budgets? Balance, and a focus on flexibility to accommodate future growth, are the keys to success.
Balancing data center availability against the budget and needs of the facility is an art form, not a science. As availability needs increase, the size, complexity, and cost of the data center electrical/mechanical/fire protection infrastructure also increase dramatically. The Uptime Institute provides some often-quoted budget prices for data center infrastructure using its Tier level definitions and based on redundant UPS capacity: Tier 1, $11,500/kilowatt (kW); Tier 2, $12,500/kW; Tier 3, $25,000/kW; and Tier 4, $28,000/kW. Taking an enterprise-type approach where everything must be Tier 4 will not fit academic budgets. So, how does academia balance its needs and budget?
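To make the cost spread concrete, the quoted per-kilowatt figures can be turned into rough whole-facility budgets. The sketch below uses the Uptime Institute figures cited above; the 500 kW load is a hypothetical example, not a figure from this article.

```python
# Budget figures quoted above (USD per kW of redundant UPS capacity).
TIER_COST_PER_KW = {
    "Tier 1": 11_500,
    "Tier 2": 12_500,
    "Tier 3": 25_000,
    "Tier 4": 28_000,
}

def infrastructure_budget(load_kw: float) -> dict:
    """Rough infrastructure budget at each tier for a given IT load."""
    return {tier: rate * load_kw for tier, rate in TIER_COST_PER_KW.items()}

if __name__ == "__main__":
    # Hypothetical 500 kW academic data center.
    for tier, cost in infrastructure_budget(500).items():
        print(f"{tier}: ${cost:,.0f}")
```

For a 500 kW facility this works out to roughly $5.75 million at Tier 1 versus $14 million at Tier 4, which illustrates why an all-Tier-4 approach rarely fits an academic budget.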
The balancing act requires understanding the various applications needed in today’s academic laboratories. These divide into two groups: higher availability and lower availability. Assigning the applications to the proper group requires input from the academic institution’s IT staff. Often, IT will list modeling, graphics, financials, and similar applications that require high data security, or that would be costly to recreate after a computer outage, as higher availability applications. The higher availability applications are limited in number and are most often supported by a Tier 3 infrastructure, and sometimes by a Tier 4 infrastructure. The lower availability applications are non-critical office functions such as email and blogs, or batch-type calculations that are automatically backed up on a regular schedule. High-performance computing applications are typically considered lower availability. These lower availability applications are most often supported by Tier 1 or Tier 2 infrastructure, but are sometimes placed on utility power without UPS or generator support.
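The triage described above can be sketched as a simple rule of thumb. This is a hypothetical illustration of the decision logic, not a formal standard; the function name and inputs are assumptions, and in practice the assignment is made in consultation with the institution’s IT staff.

```python
def availability_group(high_data_security: bool,
                       costly_to_recreate: bool,
                       auto_backed_up: bool) -> str:
    """Hypothetical rule of thumb for assigning an application to an
    availability group, following the criteria described in the text."""
    # High-security data, or data that is expensive to recreate after an
    # outage, pushes an application into the higher-availability group.
    if high_data_security or costly_to_recreate:
        return "higher availability (Tier 3, sometimes Tier 4)"
    # Regularly backed-up batch and HPC workloads tolerate outages well.
    if auto_backed_up:
        return "lower availability (Tier 1/2, or utility power)"
    # Anything ambiguous goes back to IT for review.
    return "review with IT staff"

# Example: a batch HPC workload with scheduled backups.
print(availability_group(False, False, True))
```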
Complicating the balancing act is the trend for students and staff to employ ever more powerful computers, whether desktops or laptops. This increased computing power, plus the added applications, is forcing the applications out of a server closet in an unused corner of the laboratory and into a real data center. At the same time, users are clamoring for 24 hours/day, 7 days/week, 365 days/year (24/7/365) access to the applications, and educational institutions are moving away from the traditional agrarian schedule of 8 a.m. until 5 p.m., Monday through Friday, nine months/year to make more efficient use of the relatively expensive laboratory and data center facilities.
In addition, the popular use of virtual-classroom and distance-learning programs encourages the full-time use of facilities. The drive toward 24/7/365 access pushes applications toward higher and more expensive tier levels, further complicating the balancing act.
Syska Hennessy sees a number of institutions installing, or planning to install, supercomputers to handle massive computing needs, including modeling, high-speed computation, and graphics for such applications as weather forecasting. Like high-performance computing, these applications typically back themselves up on a regular schedule and are typically lower availability since, if they fail, the only loss is the calculations performed since the last backup.
Given the pressures and the balancing act required, the designer should focus on providing: