GCS offers the most powerful supercomputing infrastructure in Europe, serving a broad range of scientific and industrial research activities across many disciplines.
Each GCS centre hosts a flagship supercomputing system along with a host of smaller systems suited for data analytics, machine learning, and other artificial intelligence workflows. All three institutions rank among the most powerful computing centres in the world, and with its combined performance GCS provides the largest supercomputing infrastructure in all of Europe. The system architectures implemented at the three GCS centres are complementary in order to accommodate the broadest possible range of scientific disciplines.
The GCS flagship systems are co-financed by the German Federal Ministry of Education and Research and the governments of the states hosting our centres: Baden-Württemberg (HLRS), Bavaria (LRZ), and North Rhine-Westphalia (JSC).
Hunter is the next-generation supercomputer at the High-Performance Computing Center Stuttgart (HLRS). The system has been in full operation since February 2025.
Hunter is based on the HPE Cray EX4000 supercomputer, which is designed to deliver exascale performance for large-scale workloads across modeling, simulation, artificial intelligence, and high-performance data analytics. Hunter moves away from HLRS's past emphasis on CPUs to make greater use of more energy-efficient GPUs. It is built around the AMD Instinct™ MI300A accelerated processing unit (APU), which combines CPU and GPU cores together with high-bandwidth memory in a single package, enabling fast data transfer, strong HPC performance, straightforward programmability, and high energy efficiency. At peak performance, this reduces the energy required to operate Hunter by approximately 80% compared to its predecessor, Hawk. Hunter is conceived as a transitional system that will enable HLRS users to prepare for the centre's upcoming exascale system, Herder, currently scheduled to arrive in 2027.
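To illustrate the programmability point, the following is a minimal sketch, assuming a HIP toolchain (hipcc), of the single-address-space style that an APU like the MI300A enables: host code and a GPU kernel work on the same allocation, with no explicit staging copies. It is an illustrative example, not HLRS's documented workflow.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Simple SAXPY kernel: y = a*x + y
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // One allocation visible to both CPU and GPU. On an APU such as
    // the MI300A, CPU and GPU share the same high-bandwidth memory,
    // so no hipMemcpy staging between host and device is needed.
    hipMallocManaged(reinterpret_cast<void**>(&x), n * sizeof(float));
    hipMallocManaged(reinterpret_cast<void**>(&y), n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch the kernel directly on the shared allocation.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    hipDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    hipFree(x);
    hipFree(y);
    return 0;
}
```

The same source also compiles for discrete GPUs; on an APU the copy-free style simply comes without the usual PCIe transfer cost.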
After the Cluster Module of the Jülich Supercomputing Centre's (JSC's) HPC system JUWELS (Jülich Wizard for European Leadership Science) went into operation in July 2018, the Booster Module was installed in summer 2020 and has complemented the Cluster Module since November 2020. JUWELS consists of 2,511 nodes in the Cluster Module and 936 nodes in the Booster Module. Cluster nodes are equipped with dual-socket Intel Skylake Platinum 8168 CPUs and InfiniBand EDR interfaces; in addition, 56 dual-socket Intel Xeon Gold 6148 nodes each carry four NVIDIA Volta V100 GPUs. Each Booster node is equipped with two AMD EPYC Rome 7402 CPUs, 512 GB of DDR memory, four NVIDIA Ampere A100 GPUs, and four HDR 200 Gb/s InfiniBand links. JUWELS combines the fat-tree Cluster topology with the Dragonfly+ Booster network in a single high-speed fabric, allowing concurrent use of nodes from both modules, as the sketch below illustrates. The Cluster contributes 12 petaflops to JUWELS's total peak performance of 85 petaflops, while the Booster accounts for the majority share with 73 petaflops.
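As a rough sketch of what "concurrent use of both modules" can mean for an application, the MPI program below splits the ranks of one job into a CPU (Cluster) group and a GPU (Booster) group. It is an illustrative pattern, not JSC's prescribed workflow, and it uses GPU visibility as a simplified proxy for module membership (imperfect here, since a few Cluster nodes also carry GPUs).

```cpp
#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Classify this rank by whether its node exposes GPUs; in a job
    // spanning both modules, ranks on Booster nodes see GPUs.
    int ndev = 0;
    if (cudaGetDeviceCount(&ndev) != cudaSuccess) ndev = 0;
    int color = (ndev > 0) ? 1 : 0;

    // Sub-communicator per module group.
    MPI_Comm module_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &module_comm);

    int module_rank, module_size;
    MPI_Comm_rank(module_comm, &module_rank);
    MPI_Comm_size(module_comm, &module_size);
    printf("world rank %d is rank %d of %d in the %s group\n",
           world_rank, module_rank, module_size,
           color ? "Booster (GPU)" : "Cluster (CPU)");

    // CPU-bound solver phases would run on the Cluster group while
    // GPU kernels run on the Booster group, exchanging data across
    // the shared InfiniBand fabric via MPI between the groups.
    MPI_Comm_free(&module_comm);
    MPI_Finalize();
    return 0;
}
```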
JUPITER, the “Joint Undertaking Pioneer for Innovative and Transformative Exascale Research”, will be the first exascale supercomputer in Europe. The system is provided by a ParTec-Eviden supercomputer consortium and was procured by the EuroHPC Joint Undertaking in cooperation with the Jülich Supercomputing Centre (JSC). It will be installed in 2024 at the Forschungszentrum Jülich campus in Germany. JUPITER is being financed by the German Federal Ministry of Education and Research, the state of North Rhine-Westphalia, and the EuroHPC Joint Undertaking.
In September 2018, the Leibniz Supercomputing Centre's (LRZ's) latest addition to its series of SuperMUC supercomputers was officially introduced: SuperMUC-NG (“next generation”). With its peak performance of 26.7 petaflops, an almost fourfold increase in the computing power previously available at LRZ, SuperMUC-NG was at its introduction the fastest supercomputer in Germany. It features an Intel-Lenovo OceanCat platform equipped with 6,336 compute nodes (more than 300,000 compute cores) with Intel Skylake processors and Omni-Path interconnects, 700 terabytes of main memory, and 70 petabytes of disk storage.
The current leadership-class system at LRZ, SuperMUC-NG, is deployed in two phases. While Phase 1 has already supported scientists for several years, Phase 2 is approaching acceptance and the start of production runs. Phase 1 is a predominantly homogeneous system based on Intel CPUs for classical simulation and modelling, whereas Phase 2 has been developed to accelerate computations by integrating AI methods, which are in increasingly high demand among researchers, into established HPC workflows. The two systems achieve peak performances of just under 27 and 28 quadrillion floating-point operations per second (26.87 and 27.96 petaflops), enabling calculations, simulations, and visualizations for cutting-edge research.
At the beginning of 2024, LRZ began pilot operations on SuperMUC-NG Phase 2. Participation is limited to selected projects that help us identify flaws and fix problems. During this period we expect benchmarking and tuning activities, including reservations of resources and frequent reboots of login and compute nodes. Once the system has been fully accepted, it will be available to all users in a full production environment.