PRESS RELEASES

GCS Centres Successfully Hold Extreme Scaling Workshops
Press Release 01/2016

BERLIN, Germany, March 18, 2016 – Two GCS centres, JSC (Jülich Supercomputing Centre) and LRZ (Leibniz Supercomputing Centre, Garching near Munich), once again invited their high performance computing (HPC) users to Extreme Scaling Workshops. The prime goal of these workshops, for which more than 20 application teams had qualified, was to improve the computational efficiency of applications by expanding their parallel scalability across the hundreds of thousands of compute cores of the GCS supercomputers JUQUEEN and SuperMUC. Code_Saturne (computational fluid dynamics) and the Seven-League Hydro Code (hydrodynamics in stellar evolution) both demonstrated strong scalability across the entire JUQUEEN Blue Gene/Q system and thereby qualified for membership in JSC's High-Q Club. The VERTEX program (simulation of supernova explosions) emerged as winner of the LRZ challenge on SuperMUC, an IBM System x iDataPlex.

Following the tremendous success of previous years, the GCS centres again set the stage for their most ambitious HPC users to investigate and improve the scalability of their applications. For a limited number of days, the centres' Tier-0 systems were taken out of regular production operation entirely and made available exclusively to the participants of the Extreme Scaling Workshops. Moreover, to enable the best possible results, additional support was provided by hardware and software specialists as well as by the centres' HPC experts. The results achieved were worth the effort:

  • Program VERTEX of the Max Planck Institute for Astrophysics, Garching/Munich, a code used to simulate supernova explosions, had started out on 7,360 compute cores of SuperMUC at a measured 53 seconds per compute step. After successful code optimization, the run time per compute step, now running in parallel on SuperMUC's 144,000 compute cores, was a mere 3 seconds, an almost twentyfold improvement.

  • Code_Saturne of the UK's Daresbury Laboratory, a computational fluid dynamics (CFD) tool chain for billion-cell calculations, scaled two preconditioner+solver configurations to 1.75 million threads on JUQUEEN. An older version of the code using only MPI had scaled to 1.5 million processes; the latest version, combining MPI+OpenMP, is twice as scalable.

  • Seven-League Hydro Code of the Heidelberg Institute for Theoretical Studies (HITS), a code for multidimensional simulations of hydrodynamics in stellar evolution, was scaled beyond its previous maximum of 8 racks to all 28 racks of JUQUEEN with 1.75 million threads.
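The "almost twentyfold improvement" quoted above for VERTEX can be checked with the standard strong-scaling metrics, where ideal scaling would cut the step time in proportion to the increase in core count. A minimal sketch using the figures from this release (the function names are illustrative, not taken from the codes themselves):

```python
def speedup(t_base, t_new):
    """Runtime improvement factor between two measured step times."""
    return t_base / t_new

def strong_scaling_efficiency(t_base, cores_base, t_new, cores_new):
    """Parallel efficiency relative to a baseline run: measured speedup
    divided by the ideal speedup (the ratio of core counts)."""
    return speedup(t_base, t_new) / (cores_new / cores_base)

# VERTEX figures quoted in the release:
# 53 s/step on 7,360 cores before, 3 s/step on 144,000 cores after.
s = speedup(53.0, 3.0)
e = strong_scaling_efficiency(53.0, 7360, 3.0, 144000)
print(f"speedup: {s:.1f}x, efficiency: {e:.0%}")
```

The measured speedup of roughly 17.7x against an ideal of about 19.6x corresponds to a parallel efficiency of around 90 percent, consistent with the "almost twentyfold" phrasing.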

"More and more users jump at the opportunity to fully exploit the massive potential of our petascale HPC infrastructure by making their applications scale across the major parts of our HPC systems or even the entire machines. This is what our infrastructure has been created for. Our systems' capabilities provide our users unique opportunities to run calculations of dimensions that never could have been tackled before", emphasizes Professor Thomas Lippert, GCS Chairman of the Board and Director of the JSC. In addition to this, Professor Arndt Bode, Director of the LRZ, points out: "Software and application developers, too, are recognizing the opportunities offered by our state-of-the-art HPC technologies and are adapting their programs and codes to fully leverage the massive compute power of our Tier-0 infrastructure. This not only results in reduced computing time but also in verifiable factors such as lowered energy consumption per compute run and consequently in reduced operational costs–aspects that must not be neglected given our continued pursuit of improved energy-efficient supercomputing."

Detailed information on the 2016 Extreme Scaling Workshop at JSC can be found at http://www.fz-juelich.de/SharedDocs/Meldungen/IAS/JSC/EN/2016/2016-03-juqueen-extreme-scaling-workshop/, results and further information on the LRZ version of this session are available at https://www.lrz.de/presse/ereignisse/2016-03-03_extreme_scaling/#ExtremeScalingWorkshop.

The third GCS centre, the High Performance Computing Center Stuttgart (HLRS), will offer the users of its Cray XC40 HPC system the same application performance optimization opportunity. The HLRS Extreme Scaling Workshop is scheduled to take place in April this year (http://www.hlrs.de/training/2016/XC40-1/).

Performance improvements similar to those of the winners of the JSC and LRZ workshop challenges were also achieved by the other workshop participants, whose applications span a wide range of scientific fields such as computational fluid dynamics (CFD), meteorology and climate, astrophysics, and life sciences/medicine. In all of these application areas, striking improvements in time to solution are of utmost importance. Time is of the essence, for example, when it comes to natural disasters: simulating a seaquake and predicting the resulting damage not only realistically but also quickly enough is key to issuing timely and accurate warnings. In the medical field, the fast and precise simulation of a patient's aneurysm is indispensable for identifying and initiating life-saving measures within the shortest possible period of time. While such simulations currently require HPC technologies, continuing digital and technological evolution should allow them to be performed on site in hospitals in the years to come. It is the mission of science and research to pave the way for a speedy transfer of cutting-edge technology from science into society and industry.

About GCS: The Gauss Centre for Supercomputing (GCS) combines the three national supercomputing centres HLRS (High Performance Computing Center Stuttgart), JSC (Jülich Supercomputing Centre), and LRZ (Leibniz Supercomputing Centre, Garching near Munich) into Germany's Tier-0 supercomputing institution. Together, the three centres provide the largest and most powerful supercomputing infrastructure in all of Europe, serving a wide range of industrial and research activities in various disciplines. They also provide top-class training and education for the national as well as the European high performance computing (HPC) community. GCS is the German member of PRACE (Partnership for Advanced Computing in Europe), an international non-profit association consisting of 25 member countries, whose representative organizations create a pan-European supercomputing infrastructure, providing access to computing and data management resources and services for large-scale scientific and engineering applications at the highest performance level.

GCS is jointly funded by the German Federal Ministry of Education and Research and the federal states of Baden-Württemberg, Bavaria, and North Rhine-Westphalia.

GCS has its headquarters in Berlin/Germany.

Contact:

Regina Weigand, GCS Public Relations
+49 711 685-87261
r.weigand@gauss-centre.eu

This press release as PDF file: GCS Centres Successfully Complete Extreme Scaling Workshops (PDF, 642 kB)

Tags: Workshop Code Development JSC HLRS LRZ