For many decades, when people talked about computer performance, it was taken for granted that they meant speed. In the high-octane world of today’s tennis-court-sized supercomputers, speed is measured in FLOPS, or floating-point operations per second. By that measure, the world’s highest-performing computer right now is Japan’s Fugaku, operating at 415 petaFLOPS.
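To make the unit concrete: a FLOPS figure is simply floating-point operations performed divided by elapsed seconds. The sketch below, a hypothetical `estimate_flops` helper written for illustration, times a simple multiply-add loop in plain Python. Interpreted Python is many orders of magnitude slower than the LINPACK-style benchmarks used to rank real machines, so this only illustrates how the unit is computed, not genuine hardware performance.

```python
import time

def estimate_flops(n_ops: int = 1_000_000) -> float:
    """Rough single-core FLOPS estimate from a multiply-add loop.

    Illustrative only: supercomputer rankings use tuned benchmarks
    (e.g. LINPACK), not interpreted loops like this one.
    """
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n_ops):
        acc = acc * x + 1.0  # one multiply + one add = 2 floating-point ops
    elapsed = time.perf_counter() - start
    return (2 * n_ops) / elapsed  # operations per second

flops = estimate_flops()
print(f"~{flops / 1e9:.3f} gigaFLOPS on this core")
# Fugaku's 415 petaFLOPS is 4.15e17 operations per second --
# hundreds of millions of times beyond a single interpreted loop.
```

The gap between this toy number and 415 petaFLOPS is a useful reminder of just how extreme the top end of the speed race has become.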
But there are problems with using speed as the only metric for comparing supercomputers. A FLOPS-based arms race has produced a generation of machines that burn through colossal amounts of electricity and generate so much heat that elaborate cooling systems must run constantly to keep them from melting down. Over-reliance on speed in benchmarking also downplays other vital qualities, such as reliability, availability, and usability. And then there’s the economics. Making speed the primary measure of success has driven the total cost of ownership of supercomputers to unprecedented heights while, at the same time, increasing their environmental impact.