GCS Begins Next-Generation Architecture Transition, Approves More Than 1 Billion Computing Core Hours for Large-Scale Simulation Projects
Press Release 01/2018

The 17 ambitious research teams who received computing hours represent a wide range of scientific disciplines, including astrophysics, atomic and nuclear physics, biology, condensed matter physics, elementary particle physics, meteorology, and scientific engineering, among others.

BERLIN, Germany, May 9, 2018—The Gauss Centre for Supercomputing (GCS) continues serving the national scientific community by offering large-scale computing time allocations on its petascale high-performance computing (HPC) systems in support of outstanding national research proposals. With the 19th Call for Large-Scale Projects, the GCS steering committee granted a total of more than 1 billion core hours to 17 ambitious research projects. The research teams represent a wide range of scientific disciplines, including astrophysics, atomic and nuclear physics, biology, condensed matter physics, elementary particle physics, meteorology, and scientific engineering, among others. Scientists awarded computing time will have immediate access to the GCS HPC resources for a period of 12 months.

Of the 17 approved large-scale simulation projects, four will be followed with particular interest, as they will be the first large-scale projects to run on the Jülich Supercomputing Centre's (JSC's) new HPC system, the Jülich Wizard for European Leadership Science, or JUWELS. The system, which replaces JSC's IBM Blue Gene/Q system JUQUEEN, will consist of multiple, architecturally diverse but fully integrated modules designed for specific simulation and data science tasks. The first module, built on a versatile cluster architecture based on commodity multi-core CPUs, is currently being set up at JSC. It consists of about 2,550 compute nodes, each with two 24-core Intel Xeon Skylake CPUs, and about 2 percent of the nodes will additionally feature four of the latest-generation NVIDIA Volta GPUs. JUWELS, which will deliver a peak performance of 12 petaflops (10.4 petaflops without GPUs), will go into operation in late June 2018; the allocation window for researchers using this platform therefore runs until June 2019. The supported projects come from fields as diverse as atomic and nuclear physics, condensed matter physics, elementary particle physics, and scientific engineering. “We are excited to further develop and implement the modular architecture concept with JUWELS,” says Professor Thomas Lippert, director of the Jülich Supercomputing Centre. “Our next challenge is to work with our users on their applications to use the new system in the most efficient way.”

The Leibniz Supercomputing Centre (LRZ) in Garching near Munich will deliver 340 million core hours on its SuperMUC system during the 19th GCS call. LRZ will also undergo a major technology shift during the course of this year: while users can continue to leverage the computing power of the current SuperMUC installations until the end of 2019, they can also gradually migrate their applications to LRZ's new supercomputer, SuperMUC-NG ("next generation"), which is based on the Intel Xeon Scalable processor and interconnected by Intel's Omni-Path network. Installation work has already begun at LRZ. SuperMUC-NG is expected to begin operation in late 2018 and will deliver a peak performance of 26.7 petaflops, a five-fold increase in computing power over the current system. The largest allocations on SuperMUC in the current large-scale call support projects in life sciences (75 million core hours), scientific engineering (75 million core hours), and condensed matter physics (60 million core hours).

Of the 1,060 million core hours granted in GCS's 19th large-scale call, more than half will be delivered by Hazel Hen, the Cray XC40 system installed at the High-Performance Computing Center Stuttgart (HLRS). The lion's share of the 580 million core hours allocated on the HLRS supercomputer will support four computationally challenging fluid dynamics projects, a research area in which HLRS has traditionally been very strong. Projects from elementary particle physics (three projects), biology, and astrophysics round out the science supported on Hazel Hen.

The complete list of the 19th GCS Large-Scale Call projects can be found here.

About GCS Large-Scale Projects: In accordance with the mission of the Gauss Centre for Supercomputing, all scientists and researchers in Germany are eligible to apply for computing time on the petascale HPC systems of Germany's leading supercomputing institution. Projects are classified as "large-scale" if they require more than 35 million core hours in one year on a GCS member centre's high-end system. Computing time on the GCS systems is allocated by the GCS Scientific Steering Committee to scientifically leading, ground-breaking projects that deal with complex, demanding, and innovative simulations that would not be possible without the GCS petascale infrastructure. Projects are evaluated via a strict peer-review process on the basis of their scientific and technical excellence.

The application procedure and decision criteria for the GCS Calls for Large-Scale Projects are described in detail here.

About GCS: The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany's Tier-0 supercomputing institution. Together the three centres provide the largest and most powerful supercomputing infrastructure in all of Europe and serve a wide range of industrial and research activities across various disciplines. They also provide top-tier training and education for the national as well as the European High Performance Computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 25 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.

GCS is jointly funded by the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria, and North Rhine-Westphalia. It has its headquarters in Berlin, Germany.


Regina Weigand, GCS Public Relations
+49 711 685-87261

This press release as a PDF file (PDF, 318 kB)

Tags: Large-Scale Project, Computing Time Allocation, Hardware