Gauss Centre for Supercomputing e.V.


The Largest Scales in Turbulent Pipe Flow

Principal Investigator:
Christian Bauer

Institute of Aerodynamics and Flow Technology, German Aerospace Center (DLR)

Local Project ID:
pr62zu
HPC Platform used:
SuperMUC and SuperMUC-NG of LRZ

Date published:
October 2020

A large amount of the energy needed to push fluids through pipes worldwide is dissipated by viscous turbulence in the vicinity of solid walls. The study of wall-bounded turbulent flows is therefore not only of theoretical interest but also of practical importance for many engineering applications. In wall-bounded turbulence, the energy of the turbulent fluctuations is distributed among different scales. The largest energetic scales are denoted as superstructures or very-large-scale motions (VLSMs). In this project we carry out direct numerical simulations (DNSs) of turbulent pipe flow aimed at understanding the energy exchange between VLSMs and small-scale coherent structures [1].

While the near-wall small-scale structures scale in viscous units, the outer-flow VLSMs scale in bulk units. Hence, the range of scales increases with the Reynolds number of the flow. In order to study the interaction between these structures, we carried out DNSs at friction Reynolds numbers up to Reτ = 2880, where Reτ = uτR/ν is based on the friction velocity uτ, the pipe radius R, and the kinematic viscosity ν. Besides a large Reynolds number, which is required for large scale separation, a sufficiently long computational domain is needed for VLSMs to settle. In a preliminary study, the required computational domain length was estimated to be L = 42R [2].
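The friction Reynolds number directly quantifies the separation between inner and outer length scales: since Reτ = uτR/ν, the viscous wall unit δν = ν/uτ equals R/Reτ, so the radius spans Reτ wall units. A minimal sketch of this arithmetic (variable names are illustrative):

```python
# Scale separation between inner (viscous) and outer (bulk) units.
# With Re_tau = u_tau * R / nu, the wall unit is delta_nu = nu / u_tau = R / Re_tau,
# so the ratio of outer to inner length scales equals Re_tau itself.

R = 1.0          # pipe radius (outer length scale), nondimensionalised
Re_tau = 2880.0  # friction Reynolds number of the largest simulated case

delta_nu = R / Re_tau            # viscous wall unit, in units of R
scale_separation = R / delta_nu  # number of wall units spanned by the radius
print(f"delta_nu = {delta_nu:.2e} R, scale separation = {scale_separation:.0f}")
```

At Reτ = 180 the same ratio is only 180, which is why the smallest case needs far fewer grid cells and cores.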

Results and Methods

Our numerical method is a fourth-order finite-volume DNS code, parallelised by means of the Message Passing Interface (MPI). The number of required cores ranges from 64 for the smallest case (Reτ = 180) up to 2048 for the largest case (Reτ = 2880). The heart of our Fortran 90 simulation code is the Poisson solver, which solves a three-dimensional elliptic equation. Taking advantage of the homogeneity of the problem in two directions, fast Fourier transforms are performed in the axial (z) and azimuthal (φ) directions, before Nz × Nφ one-dimensional problems are solved by a fast direct tridiagonal matrix solver. Besides flow statistics, which are accumulated on the fly, a number of instantaneous flow-field realisations are written out by the code in the netCDF format. With problem sizes of up to 32.5 billion finite-volume cells, the flow-field snapshots consume most of the project storage of 40 TB. Overall, our computations required 10 million core hours. Of the five simulations at different Reynolds numbers contributing to our analysis, the two with the largest computational requirements were carried out on SuperMUC. An overview of these simulations is given in Table 1.
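The Fourier-plus-tridiagonal structure of such a Poisson solver can be illustrated with a simplified sketch. The snippet below solves a Cartesian, second-order finite-difference analogue (periodic in two directions, homogeneous Dirichlet walls in the third) rather than the actual fourth-order cylindrical discretisation of the production code; all names and the boundary treatment are illustrative assumptions:

```python
import numpy as np

def thomas(a, b, c, d):
    """Direct solver for a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d) via the Thomas algorithm."""
    n = len(b)
    cp = np.empty(n - 1, dtype=complex)
    dp = np.empty(n, dtype=complex)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for m in range(1, n):
        denom = b[m] - a[m - 1] * cp[m - 1]
        if m < n - 1:
            cp[m] = c[m] / denom
        dp[m] = (d[m] - a[m - 1] * dp[m - 1]) / denom
    x = np.empty(n, dtype=complex)
    x[-1] = dp[-1]
    for m in range(n - 2, -1, -1):
        x[m] = dp[m] - cp[m] * x[m + 1]
    return x

def solve_poisson(f, Lz, Lphi, y):
    """Solve  p_zz + p_phiphi + p_yy = f  with periodic z and phi and
    homogeneous Dirichlet walls bounding y (y holds interior points only)."""
    Nz, Nphi, Ny = f.shape
    dy = y[1] - y[0]
    # Step 1: FFT in the two homogeneous directions.
    fh = np.fft.fftn(f, axes=(0, 1))
    kz = 2.0 * np.pi * np.fft.fftfreq(Nz, d=Lz / Nz)
    kp = 2.0 * np.pi * np.fft.fftfreq(Nphi, d=Lphi / Nphi)
    ph = np.empty_like(fh)
    # Step 2: Nz * Nphi independent tridiagonal problems in the third direction.
    lower = np.ones(Ny - 1) / dy**2
    upper = np.ones(Ny - 1) / dy**2
    for i in range(Nz):
        for j in range(Nphi):
            k2 = kz[i]**2 + kp[j]**2
            diag = (-2.0 / dy**2 - k2) * np.ones(Ny)
            ph[i, j] = thomas(lower, diag, upper, fh[i, j])
    # Step 3: transform back to physical space.
    return np.fft.ifftn(ph, axes=(0, 1)).real
```

The key point carried over from the text is the decoupling: after the two FFTs, each wavenumber pair yields an independent one-dimensional problem, which is what makes the MPI parallelisation over Nz × Nφ pencils effective.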

Table 1: Simulation cases on SuperMUC

Case | Reτ | Grid (Nz × Nφ × Nr) | Cores


Resulting instantaneous streamwise velocity fluctuations for Reτ = 1500 are depicted as iso-volumes in Fig. 1. Like their small-scale counterparts, the VLSMs visible in the instantaneous picture occur alternately as high- and low-speed streaky structures. In the azimuthal direction, three low- and high-speed structures are clearly visible. The average extent of VLSMs can be extracted from velocity correlations, as shown in Fig. 2. With a threshold of 0.1, the average streamwise length of VLSMs is measured as approximately 7R. In addition, the velocity correlation shows an inclination towards the wall, a feature that is characteristic of both small-scale motions and VLSMs.

We reported on the scaling and convergence of high-order statistical moments and the contributions of VLSMs to these moments [3]. In particular, very large local wall-normal velocity fluctuations in the vicinity of the wall (so-called velocity spikes) are modulated by outer-flow VLSMs, which is reflected in the wall-normal velocity flatness. Regarding the origin of the kinetic energy of VLSMs, we analysed the turbulent kinetic energy transport equation of the low-pass-filtered velocity field, which essentially consists of VLSMs. By comparing our results with what is known from the small-scale near-wall cycle, we found that VLSMs are fed with energy by the mean velocity field via a turbulent production mechanism similar to that of their small-scale counterparts [4]. Regarding the transfer of energy between different scales, however, VLSMs behave exactly opposite to the small-scale motions: while for the latter the forward energy cascade (energy transfer from larger to smaller scales) correlates with low-speed structures and the backward transfer correlates with high-speed structures, for VLSMs the opposite is the case.
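The correlation-based length measurement can be sketched as follows. Assuming the structure length is taken as twice the separation at which the normalised two-point correlation first drops below the 0.1 threshold (the exact convention used in the analysis may differ), a minimal NumPy version reads:

```python
import numpy as np

def streamwise_correlation(u):
    """Normalised two-point autocorrelation R_uu(dz) of velocity fluctuations
    along the periodic first (axial) axis, averaged over all remaining axes,
    computed via the Wiener-Khinchin theorem."""
    u = u - u.mean(axis=0, keepdims=True)        # remove the mean per profile
    uh = np.fft.fft(u, axis=0)
    R = np.fft.ifft(np.abs(uh) ** 2, axis=0).real  # circular autocorrelation
    R = R.reshape(R.shape[0], -1).mean(axis=1)     # average over remaining axes
    return R / R[0]                                # normalise so that R(0) = 1

def structure_length(R, dz, threshold=0.1):
    """Streamwise structure length, taken here as twice the first separation
    at which the correlation drops below the threshold."""
    half = R[: len(R) // 2]                 # one side of the symmetric correlation
    idx = int(np.argmax(half < threshold))  # first index below the threshold
    return 2.0 * idx * dz
```

As a sanity check: for a synthetic streaky field u = cos(2πz/λ) the correlation is cos(2πΔz/λ), so the measured length approaches 2λ·arccos(0.1)/(2π) ≈ 0.468λ.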

On-going Research / Outlook

Answering the question of why the inter-scale energy transfer towards and away from VLSMs differs so strikingly from the energy transfer related to small-scale motions requires further investigation of turbulent pipe flow. Moreover, most of our analyses of the interactions between outer-flow VLSMs and near-wall small-scale motions involved turbulent pipe flow DNSs at Reτ = 1500. In terms of real-life applications, this Reynolds number is still low, and VLSMs are known to become more dominant with increasing Reynolds number. Consequently, we have already started simulations at Reτ = 2880 during the current project. The statistics obtained from these simulations are, however, not yet fully converged; we therefore plan to continue them in a follow-up project. Moreover, simulations at even higher Reynolds numbers are highly desirable to obtain better comparability with real-life engineering applications. SuperMUC provides both the computational power and the memory required to carry out such large-scale DNSs of turbulent pipe flow.


Christian Bauer1, Claus Wagner1

1Institute of Aerodynamics and Flow Technology, German Aerospace Center (DLR)

References and Links


[2] D. Feldmann, C. Bauer, and C. Wagner. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow. J Turbul 19, 274-295 (2018)

[3] C. Bauer, D. Feldmann, and C. Wagner. On the convergence and scaling of high-order statistical moments in turbulent pipe flow using direct numerical simulation. Phys Fluids 29, 125105 (2017)

[4] C. Bauer, A. von Kameke, and C. Wagner. Kinetic energy budget of the largest scales in turbulent pipe flow. Phys Rev Fluids 4, 064607 (2019)

Scientific Contact

Christian Bauer
German Aerospace Center (DLR)
Institute of Aerodynamics and Flow Technology
Linder Höhe, D-51147 Köln (Germany)
e-mail: christian.bauer [@]

Local project ID: pr62zu

October 2020