The first module of the exascale supercomputer JUPITER, named JEDI, has been ranked first on the Green500 list of the world’s most energy-efficient supercomputers, as announced today by Forschungszentrum Jülich and the EuroHPC Joint Undertaking, together with the ParTec–Eviden supercomputer consortium, at the International Supercomputing Conference (ISC) in Hamburg. The JUPITER Exascale Development Instrument, or JEDI, was installed in April by the German–French consortium and uses the same hardware as the JUPITER booster module, which is currently being built at Forschungszentrum Jülich.
The rapid pace of digitalisation and the increasing use of artificial intelligence require ever more computing power and, in turn, energy. Data centres already account for 4 % of German electricity consumption, and this share is rising. Efficient computing has therefore become an increasingly important issue in recent years, and research as well as measures to increase energy efficiency have been on the rise.
The JUPITER supercomputer, procured by the European supercomputing initiative EuroHPC Joint Undertaking, is a true pioneer in this field. Its first module, the JUPITER Exascale Development Instrument (JEDI), installed in April, achieves 72 billion floating-point operations per second per watt; the previous leader managed around 65 billion.
The decisive factors behind the module’s outstanding efficiency are its use of graphics processing units (GPUs) and the ability to optimize scientific applications for calculation on GPUs. Today, virtually all leading systems in the Green500 ranking rely heavily on GPUs, which are designed to perform calculations with much greater energy efficiency than conventional central processing units (CPUs).
The JEDI development system is one of the first systems in the world to use the latest generation of accelerators from NVIDIA: the NVIDIA GH200 Grace Hopper Superchip, which combines the NVIDIA Hopper GPU and the NVIDIA Grace CPU on a single module. The system is based on Eviden’s latest BullSequana XH3000 architecture and uses its highly efficient hot-water Direct Liquid Cooling, which requires significantly less energy than conventional air cooling and allows the heat generated to be reused downstream.
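For readers who program such systems, the following minimal CUDA sketch (an illustrative example, not part of the announcement) shows the kind of programming model a combined CPU–GPU module supports: a single buffer is allocated once, initialised on the CPU and processed on the GPU. cudaMallocManaged is standard CUDA unified memory; on a GH200 superchip, the coherent NVLink-C2C link between the Grace CPU and the Hopper GPU is what makes this style of sharing efficient.

    // Illustrative sketch only: standard CUDA unified memory, as it might be
    // used on a combined CPU-GPU module such as the GH200 Grace Hopper Superchip.
    #include <cstdio>
    #include <cuda_runtime.h>

    // GPU kernel: scale every element of the shared buffer in place.
    __global__ void scale(double *x, double a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1 << 20;
        double *x = nullptr;
        cudaMallocManaged(&x, n * sizeof(double));   // one allocation, visible to CPU and GPU

        for (int i = 0; i < n; ++i) x[i] = 1.0;      // CPU initialises the data

        scale<<<(n + 255) / 256, 256>>>(x, 2.0, n);  // GPU does the computation
        cudaDeviceSynchronize();

        printf("x[0] = %f\n", x[0]);                 // CPU reads the result directly
        cudaFree(x);
        return 0;
    }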
The JUPITER precursor JEDI already uses the same hardware as the subsequent JUPITER booster module. As part of the JUPITER Research and Early Access Program (JUREAP), scientists can access the hardware at an early stage of development in order to optimize their codes, supported by experts from the Jülich Supercomputing Centre.
JUPITER exascale supercomputer
JUPITER is set to be the first supercomputer in Europe to surpass the threshold of one exaflop, which corresponds to one quintillion (“1” followed by 18 zeros) floating-point operations per second. The final system will be installed in stages in the second half of this year and will initially be made available to scientific users as part of the early access program before it goes into general user operation at the beginning of 2025.
JUPITER’s enormous computing power will help to push the boundaries of scientific simulations and to train large AI models. The modular exascale system uses the dynamic Modular System Architecture (dMSA) developed by ParTec and the Jülich Supercomputing Centre. The JUPITER booster module, which is currently being installed, will have around 125 BullSequana XH3000 racks and around 24,000 NVIDIA GH200 Superchips, interconnected by NVIDIA Quantum-2 InfiniBand networking. For 8-bit calculations, which are common in the training of AI models, the computing power is set to increase to well over 70 exaflops. As of today, this would make JUPITER the world’s fastest computer for AI.
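To put the 8-bit figure into perspective, a rough back-of-the-envelope estimate (assuming on the order of 3–4 PFLOP/s of FP8 throughput per GH200 superchip, an assumption based on published Hopper specifications rather than on the announcement itself) gives

\[
24{,}000 \times 3\,\text{PFLOP/s} \approx 72\,\text{EFLOP/s},
\qquad
24{,}000 \times 4\,\text{PFLOP/s} \approx 96\,\text{EFLOP/s},
\]

which is consistent with the stated figure of well over 70 exaflops.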
According to estimates, JUPITER’s energy requirements will average around 11 megawatts. Further measures will help to use this energy even more sustainably: the modular data centre in which JUPITER will be housed is designed to extract the heat generated during cooling and then use it to heat buildings on the Forschungszentrum Jülich campus.
All hardware and software components of JUPITER will be installed and managed by the unique JUPITER Management Stack, a combination of ParaStation Modulo (ParTec), SMC xScale (Atos/Eviden), and software components from the Jülich Supercomputing Centre (JSC).
JUPITER development system: JEDI
The JUPITER development system JEDI is much smaller than the final exascale computer. It consists of a single rack from the latest BullSequana XH3000 series, which currently contains 24 individual computers, known as compute nodes. These are connected to each other via four NVIDIA Quantum-2 InfiniBand switches and will be joined by 24 additional compute nodes over the course of May.
During measurements for the Green500 ranking of the most energy-efficient supercomputers, the JEDI system achieved a computing power of 4.5 quadrillion floating-point operations per second, or 4.5 petaflops, with an average power consumption of 66 kilowatts. During optimized operation, the power consumption was reduced to 52 kilowatts.
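The Green500 metric behind these figures is simply sustained compute divided by average power during the benchmark run. As a rough cross-check using the numbers above (not the official measurement methodology):

\[
\frac{4.5 \times 10^{15}\,\text{FLOP/s}}{66\,\text{kW}} \approx 68\ \text{GFLOP/s per watt},
\qquad
\frac{4.5 \times 10^{15}\,\text{FLOP/s}}{52\,\text{kW}} \approx 87\ \text{GFLOP/s per watt},
\]

which brackets the ranked value of around 72 billion floating-point operations per second per watt; the exact figure depends on the power-measurement window defined by the Green500 rules.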
"By using the dynamic Modular System Architecture (dMSA), the central European technology to build modular supercomputer and quantum computer, as well as ParaStation Modulo, the MSA-enabling software suite both developed by ParTec, JUPITER will achieve an outstanding level of computing power while decreasing the energy consumption. On top it will be by far the fastest AI-super computer at FP 16” says Bernhard Frohwitter, CEO of ParTec AG.