Recently, I had the pleasure of working on an HPC project with a colleague of mine, Mike Thomas. While I’m comfortable in the colocation and enterprise data center markets, I was impressed by Mike’s technical knowledge of the high-performance computing sector. I’ve asked Mike to contribute to today’s column; here’s what he wrote.


Autumn is my favorite season not only because of caramel apples, the brilliant colors, and the crisp light but also because November brings SC — The International Conference for High Performance Computing, Networking, Storage and Analysis. Annually, I look forward to being with 10,000+ super-smart, super-resourceful research scientists and IT folks, catching up on innovations that more often than not will trickle down into mainstream enterprise adoption, and marveling at the progression from terascale to petascale to exascale computing.

Supercomputer performance is measured in floating-point operations per second (FLOPS), rather than in millions of instructions per second (MIPS). As of June 2016, the world’s fastest supercomputer was the Sunway TaihuLight in mainland China, with a Linpack benchmark of 93 PFLOPS, putting it at the head of the TOP500 supercomputer rankings. U.S.-built computers held 10 of the top 20 positions. Petascale supercomputers can process one quadrillion (10^15) FLOPS. Given the current speed of progress, industry experts estimate that supercomputers will reach exascale, or 1 EFLOPS (10^18 FLOPS, i.e., 1,000 PFLOPS or one quintillion FLOPS), by 2018.
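
If it helps to see those scale jumps as plain arithmetic, here is a minimal Python sketch; the 93 PFLOPS figure is the TaihuLight Linpack result cited above, and everything else is just powers of ten:

```python
# FLOPS scale prefixes (powers of ten)
PFLOPS = 10**15   # petascale: one quadrillion FLOPS
EFLOPS = 10**18   # exascale:  one quintillion FLOPS (1,000 PFLOPS)
ZFLOPS = 10**21   # zettascale

# Sunway TaihuLight's Linpack result from the June 2016 TOP500 list
taihulight = 93 * PFLOPS

print(f"TaihuLight as a fraction of exascale: {taihulight / EFLOPS:.3f}")   # 0.093
print(f"TaihuLight-class systems per EFLOPS:  {EFLOPS / taihulight:.1f}")   # ~10.8
print(f"TaihuLight-class systems per ZFLOPS:  {ZFLOPS / taihulight:,.0f}")  # ~10,753
```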

To put this into context, Sandia National Laboratories theorizes that a zettaFLOPS (10^21 FLOPS) computer would be required for accurate weather modeling over a two-week period. (No meteorologist jokes, please.) Such systems might be built around 2030.

HPC usage continues to rise and is no longer the domain of special research centers like NCSA, TACC, and national laboratories. The mainstreaming of HPC places demands on the environment needed to house it — power, cooling, structure. Supercomputers are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, climate research, oil and gas exploration, cryptanalysis, molecular modeling, and physical simulations. The ability to speed up data analytics bolsters many companies’ results.

The annual SC conference typically begins the weekend after we in the U.S. set our clocks back from Daylight Saving Time. SC often coincides with Intel’s and other hardware vendors’ new product launches and forecasts. For nearly a decade, Intel followed a tick-tock innovation cycle — “tick” being a shrinking of process technology and “tock” being a new microarchitecture. The 24-month cadence adhered to Moore’s Law, which anticipated that transistor density would double roughly every two years. Earlier this year, Intel said it is retiring the tick-tock development model and replacing it with a 30-month process-architecture-optimization model.
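
To make the cadence change concrete, here is a small sketch of what stretching the cycle does to transistor density over a decade, assuming one density doubling per full cycle (an idealized reading of Moore’s Law on my part, not Intel’s published roadmap figures):

```python
# Idealized sketch: one transistor-density doubling per full development cycle.
def density_multiplier(years: float, cycle_months: float) -> float:
    cycles = (years * 12) / cycle_months
    return 2 ** cycles

for cycle_months in (24, 30):   # tick-tock vs. process-architecture-optimization
    growth = density_multiplier(10, cycle_months)
    print(f"{cycle_months}-month cycle: ~{growth:.0f}x density over 10 years")
```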

High-performance computers generally have an expected life cycle of about three years, and in data centers housing one or more HPC deployments, a different type of tick-tock scenario plays out: namely, the challenge of developing a TCO analysis and facilities roadmap to power, cool, and support overlapping yet disparate architectures.

Often without warning or proper planning, HPC loads are added to data centers that cannot accommodate them. Power draw will keep climbing as transistor counts and clock speeds rise. No data center solution is one size fits all, especially when it comes to HPC deployments.


Thank you, Mike.

In the last project I did with Mike, the client required a TCO analysis comparing five different build options. The entire 40 MW load was distributed over a mere 22,000 sq ft. That’s a whopping 1,800 W/sq ft. Another factor that astonished me was the amount of water it took to cool that load: we factored an estimated 250,000,000 gallons of water per year into the TCO. Greenpeace would not be happy.
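
For readers who like to check the arithmetic, here is a quick sanity check of those figures; the inputs are the numbers quoted above, while the per-day and per-MW breakdowns are my own normalizations:

```python
# Figures quoted above for the 40 MW project
load_w = 40_000_000              # total load, watts
area_sqft = 22_000               # white space, square feet
water_gal_per_year = 250_000_000

print(f"Power density: {load_w / area_sqft:,.0f} W/sq ft")        # ~1,818 W/sq ft

gal_per_day = water_gal_per_year / 365
gal_per_mw_year = water_gal_per_year / (load_w / 1_000_000)
print(f"Water: {gal_per_day:,.0f} gal/day")                        # ~684,932 gal/day
print(f"       {gal_per_mw_year:,.0f} gal per MW per year")        # 6,250,000
```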

Recently, Mike Thomas presented “Data Center Design and Planning for HPC Folks” with two fellow panelists at SC16 in Salt Lake City on Nov. 14. Mike has participated in the annual conference since 2006 and has worked with 30+ HPC clients on data center infrastructure deployment initiatives.

Once again, thank you, Mike, and I look forward to several more HPC projects in the future.