Gauss Centre for Supercomputing e.V.

COMPUTATIONAL AND SCIENTIFIC ENGINEERING

Direct Numerical Simulation of Partially–filled Pipe Flow

Principal Investigator:
Michael Manhart

Affiliation:
Professorship of Hydromechanics, Technical University of Munich

Local Project ID:
pn56ci

HPC Platform used:
SuperMUC and SuperMUC-NG of LRZ

Date published:

Introduction

In this project, the flow in partially-filled pipes is investigated. This flow can be seen as a model for rivers and waste-water channels and represents a fundamental flow problem that is not yet fully understood. Neither high-resolution simulations nor well-resolved experiments have been reported in the literature to date for this configuration. In this project, highly resolved 3D simulations are performed that help to further the understanding of narrow open-duct flows. The analysis concentrates on the origin of the mean secondary flow and the role of coherent structures, as well as on the time-averaged and instantaneous wall shear stress.

Results and Methods

For the direct numerical simulations within this project, the flow solver MGLET is employed. It uses a Finite Volume method to solve the incompressible Navier-Stokes equations on Cartesian grids with a staggered arrangement of the variables. Local grid refinement is implemented by adding refined grids in a hierarchical, overlapping manner. An explicit third-order low-storage Runge-Kutta scheme is used for time integration.
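As an illustration, a low-storage Runge-Kutta step of this kind can be sketched as follows. Williamson's classic RK3 coefficients are assumed here; the report does not state which low-storage variant MGLET implements.

```python
# Sketch of an explicit three-stage low-storage Runge-Kutta step.
# Williamson's RK3 coefficients are assumed; only one extra storage
# register q is needed per variable, hence "low storage".

A = (0.0, -5.0 / 9.0, -153.0 / 128.0)     # register-recycling weights
B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)  # stage update weights

def rk3_step(y, dt, f):
    """Advance y by one time step dt for dy/dt = f(y)."""
    q = 0.0
    for a, b in zip(A, B):
        q = a * q + dt * f(y)  # recycle the single storage register
        y = y + b * q
    return y
```

For dy/dt = -y, one hundred steps of size 0.01 reproduce e⁻¹ to about six digits, consistent with third-order accuracy.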

Curved surfaces are represented by an Immersed Boundary Method. MGLET is parallelized by a domain decomposition method using Message Passing Interface (MPI).

Recently, the code has been optimised for massively parallel computing architectures within three successive KONWIHR projects in 2015, 2017 and 2019, whose outcomes are published in [1,2]. This optimisation work was done in close collaboration with experts from the CFDLab at LRZ. In the latest KONWIHR project, we performed a SIMD optimisation of our two pressure solvers. This was motivated by the trend that modern HPC processors are equipped with ever more powerful yet more energy-efficient vectorisation hardware. One important example of such a system for us is SuperMUC-NG at LRZ, which is based on Intel Skylake processors equipped with 512-bit ultrawide vector registers. By exploiting Skylake's extensive SIMD capability, our optimised code shows up to 20% overall performance improvement, demonstrated for up to 3.2×10⁴ processes, and runs with reasonable efficiency up to O(10⁵) MPI processes (see Fig. 1).

A fully-developed turbulent flow was simulated in a straight, partially-filled pipe with various filling ratios and Reynolds numbers ranging from marginally to moderately turbulent, see Table 1.

Periodic boundary conditions were applied in the streamwise direction and no-slip conditions at the side walls; the free surface was approximated by a slip condition, which is valid for low Froude numbers. To achieve converged and domain-independent statistics, all simulations were run for at least 5,000 t·u_b/R within a domain length of 8π, see Fig. 2 for semi-filled pipe flow. At small Reynolds numbers, mixing of the flow is not as strong as at larger Reynolds numbers; hence the simulation time was doubled and the domain length was enlarged by a factor of 1.5.
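The two boundary treatments can be contrasted in a minimal ghost-cell sketch. This assumes a hypothetical Cartesian layout with the boundary halfway between the ghost cell and the first interior cell; MGLET's immersed-boundary treatment of the curved wall is more involved.

```python
def apply_wall_bc(u, kind):
    """Fill the ghost row u[0] next to a boundary from the first interior
    row u[1] of a tangential-velocity field (a list of rows).

    no-slip wall:      u_ghost = -u_interior (velocity vanishes at the wall)
    slip free surface: u_ghost = +u_interior (zero normal gradient)
    """
    if kind == "no-slip":
        u[0] = [-v for v in u[1]]
    elif kind == "slip":
        u[0] = list(u[1])
    else:
        raise ValueError(f"unknown boundary type: {kind}")
    return u
```

With the no-slip fill, the average of ghost and interior values, i.e. the velocity interpolated to the boundary, is exactly zero; with the slip fill, the normal derivative at the boundary vanishes instead.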

A variety of simulations were performed, ranging from the smallest case with 84 cores, 48·10⁶ grid points and a time per cell update of about 2·10⁻⁶ s, to the largest case with 1,056 cores, 750·10⁶ grid points and a time per cell update of about 3·10⁻⁶ s, which led to 80 M core-hours and 3,000 M core-hours per case, respectively.
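The relation between these figures can be expressed by a simple cost model. This is a sketch only: the interpretation of "time per cell update" as core-seconds per cell per time step, and the number of time steps used in the example, are assumptions not stated in the report.

```python
def core_hours(n_cells, n_steps, t_cell_update):
    """Total cost in core-hours, assuming 'time per cell update' means
    core-seconds per grid cell per time step (an interpretation, not
    stated explicitly in the report)."""
    return n_cells * n_steps * t_cell_update / 3600.0

# Hypothetical example: a case with 48e6 cells at 2e-6 s per cell update
# would need roughly 2.7e4 core-hours for an assumed 1e6 time steps.
cost = core_hours(48e6, 1e6, 2e-6)
```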

The main results are summarised in the following. The maximum of the mean streamwise velocity (black cross, Fig. 3) is found at about 70% of the flow depth, i.e. below the free surface. This has already been reported in the literature and is found in other free-surface flows as well. The so-called 'dip phenomenon' can be explained by the mean secondary flow, which consists of a pair of counter-rotating vortices that convect slow fluid at the free surface from the pipe wall towards the centre. Independent of the Reynolds number, the so-called 'inner secondary cell' close to the junction of free surface and wall rotates towards the wall at the free surface, while the 'outer secondary cell' rotates in the opposite sense, see Fig. 3.

The positions of the two vortex centres depend on the Reynolds number: at higher Re, the vortex centres move towards the free surface, see Fig. 4. The size of the inner secondary cell (the wall distance of its centre) scales in wall units, while the distance of the outer secondary cell's centre from the free surface appears to converge at large Reynolds numbers.

Comparing the friction factor λ with measurements and with full pipe flow yields surprising results, see Fig. 5. For laminar flow, the friction factor of semi-filled pipe flow matches that of full pipe flow, as expected. For turbulent flow, however, larger friction factors for the semi-filled pipe have been reported in the literature, whereas our results indicate that semi-filled and full pipe flows have nearly identical friction factors.
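For reference, the textbook Darcy-Weisbach relations underlying such a comparison can be sketched as follows; the exact normalisation used in the report, e.g. the choice of hydraulic diameter for the semi-filled pipe, is an assumption here.

```python
def darcy_laminar(re):
    """Laminar full-pipe friction factor (Hagen-Poiseuille):
    lambda = 64 / Re."""
    return 64.0 / re

def darcy_from_wall_stress(tau_w, rho, u_b):
    """Darcy friction factor from the mean wall shear stress:
    lambda = 8 * tau_w / (rho * u_b**2), with bulk velocity u_b."""
    return 8.0 * tau_w / (rho * u_b ** 2)
```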

Furthermore, we have started to investigate the link between the mean and the instantaneous flow fields by analysing instantaneous snapshots with respect to their spatio-temporal dynamics [3], their coherent structures of different scales and the interaction of these structures. In Fig. 6, for instance, the second invariant of the velocity gradient tensor (Q-criterion) reveals coherent structures such as quasi-streamwise vortices near the wall that grow towards the bulk flow, and vortices attached to the free surface.
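The Q-criterion can be evaluated pointwise from the local velocity gradient tensor; a minimal sketch using the standard definition Q = ½(‖Ω‖² − ‖S‖²) is given below (grid layout and gradient evaluation are omitted).

```python
import numpy as np

def q_criterion(grad_u):
    """Second invariant of the velocity gradient tensor,
    Q = 0.5 * (||Omega||^2 - ||S||^2), for a 3x3 tensor with
    grad_u[i, j] = du_i/dx_j."""
    S = 0.5 * (grad_u + grad_u.T)       # strain-rate tensor (symmetric part)
    Omega = 0.5 * (grad_u - grad_u.T)   # rotation tensor (antisymmetric part)
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))
```

Q > 0 marks regions where rotation dominates strain, i.e. vortex cores, which is how coherent structures such as those in Fig. 6 are typically visualised.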

Currently, a publication is in preparation in which the mean flow and turbulence structures are documented as a function of the Reynolds number. Furthermore, the generation of the secondary flow is analysed in terms of the balance equation of the mean kinetic energy.

References

[1] Y. Sakai, S. Mendez, H. Strandenes, M. Ohlerich, I. Pasichnyk, M. Allalen, M. Manhart, Performance Optimisation of the Parallel CFD Code MGLET across Different HPC Platforms, PASC '19 Proceedings of the Platform for Advanced Scientific Computing Conference, Article No. 6, 2019.

[2] H. Strandenes, M. Manhart, M. Allalen, I. Pasichnyk, W. Schanderl. Improving scalability for the CFD software package MGLET. InSiDE - Innovatives Supercomputing in Deutschland (2), 2016, 48 – 50.

[3] J. Brosda, M. Manhart: Dynamics of secondary currents in marginally turbulent semi-filled pipe flow. 17th European Turbulence Conference, 2019.

Research Team

Julian Brosda, Michael Manhart (PI), Yoshiyuki Sakai, Simon von Wenczowski (all: Professorship of Hydromechanics, Technical University of Munich)

Scientific Contact

Prof. Dr.-Ing. habil. Michael Manhart
Professorship of Hydromechanics
Arcisstraße 21, D-80333 München (Germany)
e-mail: michael.manhart [@] tum.de

NOTE: This report was first published in the book "High Performance Computing in Science and Engineering – Garching/Munich 2020 (2021)" (ISBN 978-3-9816675-4-7)


August 2021

Tags: LRZ TUM