WALBERLA – A massively parallel framework for multi-physics simulations
Chair for System Simulation, Friedrich-Alexander-Universität Erlangen-Nürnberg
Local Project ID: pr86ma
HPC Platform used: SuperMUC and SuperMUC-NG of LRZ
The open-source massively parallel software framework waLBerla [1,2] (widely applicable lattice Boltzmann from Erlangen) provides a common basis for stencil codes on structured grids with special focus on computational fluid dynamics with the lattice Boltzmann method (LBM). Other codes that build upon the waLBerla core are the particle dynamics module MESA-PD and the finite element framework HYTEG.
Various contributors have used waLBerla to simulate a multitude of applications, such as multiphase fluid flows, electrokinetic flows, phase-field methods, and fluid-particle interaction phenomena. The software design of waLBerla is specifically aimed at exploiting massively parallel computing architectures with the highest efficiency. To simulate real-world scenarios, waLBerla relies on the immense compute power available on modern high-performance computing systems such as LRZ's SuperMUC-NG.
Results and Methods
In the SKAMPY DFG project, we extended waLBerla with phase-field code generation. We developed automatic program-generation technology to create scalable phase-field methods for materials science applications. To simulate the formation of microstructures in metal alloys, we employ an advanced, thermodynamically consistent phase-field method. A state-of-the-art large-scale implementation of this model normally requires extensive, time-consuming manual code optimization to achieve unprecedentedly fine mesh resolutions.
Our new approach starts with an abstract description based on free-energy functionals, which is formally transformed into a continuous PDE and discretized automatically to obtain a stencil-based time-stepping scheme. Subsequently, an automated performance-engineering process generates highly optimized, performance-portable code for CPUs and GPUs. We demonstrate the efficiency for real-world simulations on large-scale GPU-based (Piz Daint) and CPU-based (SuperMUC-NG) supercomputers (Figure 1).
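The pipeline just described (free-energy functional, variational derivative, automatic discretization, kernel generation) can be illustrated with a minimal sketch using plain SymPy and NumPy. This is not waLBerla's actual code-generation toolchain, and all names and parameters below (the double-well density, mobility, grid, and step sizes) are illustrative assumptions:

```python
# Hedged sketch of the generation pipeline: symbolic free energy -> PDE
# right-hand side -> discrete stencil update. Illustrative only; the real
# toolchain generates optimized C/CUDA kernels, not NumPy code.
import sympy as sp
import numpy as np

phi = sp.Symbol('phi')

# 1) Abstract model description: bulk free-energy density (double well)
f = phi**2 * (1 - phi)**2

# 2) Formal transformation: the variational derivative yields the PDE
#    d(phi)/dt = -M * (f'(phi) - kappa * laplace(phi))
df = sp.diff(f, phi)
rhs_bulk = sp.lambdify(phi, -df, 'numpy')   # "generated" bulk kernel

# 3) Automatic discretization: 5-point Laplacian stencil + explicit Euler
def step(field, dt=1e-3, dx=1.0, mob=1.0, kap=1.0):
    lap = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
           np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field) / dx**2
    return field + dt * mob * (rhs_bulk(field) + kap * lap)

# run the generated time-stepping scheme on a slightly perturbed field
rng = np.random.default_rng(0)
field = 0.5 + 0.01 * rng.standard_normal((64, 64))
for _ in range(100):
    field = step(field)
```

Because every stage is symbolic until the final kernel is emitted, swapping the free-energy density or the target hardware changes only the inputs to the generator, not hand-written code; this is the property that makes the approach portable across models and machines.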
Our technique simplifies program development and optimization for a wide class of models. We furthermore outperform existing, manually optimized implementations, as our code can be generated specifically for each phase-field model and hardware configuration.
Simulation study of particulate flows
Using direct numerical simulations to study particulate flows has become a promising alternative to laboratory experiments. Such simulations allow deeper insight into physical properties, can be controlled more easily, and are more cost-efficient for parametric studies. For that reason, we are continually developing the fluid-particle coupling module inside the waLBerla framework. The coupling module has been adapted to also support waLBerla's newly developed particle simulation module MESA-PD, which allows for much more flexibility regarding the particle-interaction algorithms. We used this flexibility to further improve the accuracy of the interaction algorithms and employ these novel techniques to study particulate flow scenarios, such as erosion processes of riverbeds.
To obtain statistically converged results, such systems often require a large number of particles and long run times, for which we employ the SuperMUC-NG supercomputer.
Another topic of our numerical studies is tracking down the source that initiates particle erosion. In collaboration with Bernhard Vowinckel, TU Braunschweig, we revisit their recent study on the erosion of single particles in turbulent channel flow. As shown in Figure 2, a movable layer of spheres is placed on top of a fixed layer. An erosion process is characterized by a sphere leaving the top layer and traveling through the domain. It is presumably triggered either by strong fluid vortices hitting the sphere or by a prior collision with an already moving particle.
Besides the long run time and the fine spatial resolution, the required logging of all particle positions and velocities at all time steps is a major challenge for massively parallel execution.
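The logging requirement translates into an enormous number of small records per time step. A common mitigation, sketched below under the assumption of rank-local output files, is to buffer fixed-size binary records in memory and flush them in large sequential chunks; this is not waLBerla's actual I/O implementation, and the class name and record layout are illustrative:

```python
# Minimal sketch of buffered, binary per-rank particle logging.
# Hypothetical names; NOT waLBerla's actual output mechanism.
import numpy as np

class ParticleLogger:
    """Buffers fixed-size records and flushes them in large chunks."""
    def __init__(self, path, flush_every=1024):
        self.f = open(path, 'wb')          # one file per (hypothetical) rank
        self.buf = []
        self.flush_every = flush_every

    def log(self, step, pid, pos, vel):
        # one record: step, particle id, x, y, z, vx, vy, vz (8 x float64)
        self.buf.append(np.array([step, pid, *pos, *vel], dtype=np.float64))
        if len(self.buf) >= self.flush_every:
            self.flush()

    def flush(self):
        if self.buf:
            np.vstack(self.buf).tofile(self.f)   # one large sequential write
            self.buf.clear()

    def close(self):
        self.flush()
        self.f.close()

# usage: log two particles over three time steps, then read everything back
log = ParticleLogger('particles_rank0.bin')
for step in range(3):
    for pid in range(2):
        log.log(step, pid, (0.1, 0.2, 0.3), (0.0, 0.0, 0.0))
log.close()
records = np.fromfile('particles_rank0.bin', dtype=np.float64).reshape(-1, 8)
```

Buffering turns many tiny writes into a few large sequential ones, which is the access pattern parallel file systems handle well; at full scale one would additionally aggregate ranks before writing.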
This work was presented at the PARTICLES 2019 conference in Barcelona.
Ongoing Research / Outlook
We are currently also preparing time-dependent runs in the TERRANEO project and estimate that, during the next year, these experiments, together with the riverbed experiments, may consume the majority of the remaining compute time.
We are grateful that, during the friendly user phase, the phase-field simulations in particular could be run without core-h being charged to the project, which saved substantial resources on our account.
References and Links
[1] Bauer et al. "waLBerla: A block-structured high-performance framework for multiphysics simulations". In: Computers & Mathematics with Applications (2020). ISSN: 0898-1221. DOI: 10.1016/j.camwa.2020.01.007.
[2] Bauer et al. "Massively parallel phase-field simulations for ternary eutectic directional solidification". In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. ACM, 2015, p. 8.
M. Bauer¹, S. Eibl¹, J. Hönig¹, H. Köstler¹ (PI), N. Kohl¹, C. Rettinger¹, C. Schwarzmeier¹, D. Thönnes¹, B. Vowinckel²
¹Chair for System Simulation (Informatik 10), Friedrich-Alexander-Universität Erlangen-Nürnberg
²Technische Universität Braunschweig
Professor Dr.-Ing. Harald Köstler
Informatik 10 - Computational Engineering
Friedrich-Alexander-Universität Erlangen-Nürnberg
Cauerstr. 11, D-91058 Erlangen (Germany)
e-mail: harald.koestler [@] fau.de
NOTE: This report was first published in the book "High Performance Computing in Science and Engineering – Garching/Munich 2020 (2021)" (ISBN 978-3-9816675-4-7)
Local project ID: pr86ma