Simulating the observable Universe with MillenniumTNG
Principal Investigator:
Prof. Dr. Volker Springel
Affiliation:
Max Planck Institute for Astrophysics, Garching
Local Project ID:
pn34mo
HPC Platform used:
SuperMUC-NG at LRZ
Date published:
The amazing progress in observational cosmology over the last decades has brought many surprises. Perhaps the most stunning is that we live in a Universe where most of the matter (~85%) consists of as yet unidentified collisionless dark matter particles, while the ordinary baryons produced in the Big Bang make up only a subdominant part (~15%). The initial state of the Universe is a hot, nearly featureless soup of dark matter and plasma. But as the cosmos expands and cools, gravity amplifies tiny density perturbations until they collapse and virialize as non-linear dark matter halos. Baryons radiatively cool and settle in these small potential wells, forming rotationally supported disks. The relentless pull of gravity causes further collapse and fragmentation of the cold gas, leading to star formation and thus to the formation of galaxies. Furthermore, over the last 5 billion years or so, a “dark energy” component has progressively come to dominate the matter energy density, driving an accelerated expansion of the Universe. The true physical nature of dark energy, as well as the mass of the neutrinos, which contribute a tiny admixture of “hot” dark matter, are profound and fundamental open questions in physics.
To make further progress, this firmly established standard cosmological model will be subjected to precision tests in the coming years that are far more sensitive than anything done thus far. Forthcoming cosmological galaxy mega-surveys carried out by space missions such as Euclid and Roman, as well as powerful new telescopes on Earth such as Rubin, will map out billions of galaxies across extremely large regions of space. They will primarily use various measures of galaxy clustering and weak gravitational lensing to carry out precision tests of our cosmological model. The primary goals are to detect potential deviations of dark energy from a cosmological constant, signatures of a law of gravity different from general relativity, and new constraints on the masses of the light neutrino flavors.
To take full advantage of this rich data, precise theoretical predictions are needed in regions of space as vast as those probed by the observations. In principle, the tools of choice for this are direct hydrodynamical cosmological simulations of galaxy formation that track how the full physics of cosmic structure formation unfolds over more than 13 billion years of evolution. Particular numerical challenges in these computations arise from the non-linear coupling of a range of different physics, including processes such as star formation, supernova explosions, and accretion of gas onto supermassive black holes, combined with the long-range nature of gravity, which ties the evolution of dark matter and baryonic material tightly together.
The problem, however, is that these calculations have thus far been restricted to comparatively small volumes, and simply carrying them out in the volumes required for the cosmological studies is presently still computationally infeasible. In our MillenniumTNG project [4] we have therefore devised a novel, two-pronged approach to cope with this problem by combining key features of our iconic Millennium [1] and IllustrisTNG [2] simulations.
The idea is to carry out both a very high-resolution dark-matter-only simulation in the Millennium’s original 740 Mpc volume, and a full hydrodynamical simulation using the same initial conditions and the state-of-the-art IllustrisTNG galaxy formation physics model [2, 5]. We then use a comparison of the two calculations to calibrate a so-called semi-analytic galaxy formation model, which can subsequently be applied to much larger dark matter simulations of the Universe, with box sizes beyond 2 Gpc, while still accounting for the impact of baryonic physics on structure formation. This approach is unique in its ability to allow the use of physically based galaxy formation models in the comparison of theory to observations. The realism of the comparison is further boosted by our special strategy of outputting galaxy properties directly on a perfectly seamless past lightcone, i.e. the hypersurface in space-time that we can actually observe. In addition, we produce special mass-shell outputs to facilitate studies of the weak gravitational lensing effect with unprecedented precision. Figure 1 shows an example of such a weak lensing map, which allows us to study this key cosmological probe at an angular resolution of 0.26 arcsec, much finer than possible thus far.
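For orientation, convergence maps such as the one in Figure 1 can be constructed from mass shells of this kind in the standard Born approximation, as a weighted sum over the shells (a textbook expression quoted here for illustration; the notation is ours, not that of the actual project pipeline):

\[
\kappa(\boldsymbol{\theta}) \simeq \frac{3 H_0^2 \Omega_m}{2 c^2} \sum_i \frac{\chi_i \, (\chi_s - \chi_i)}{\chi_s \, a(\chi_i)} \, \delta_i(\chi_i \boldsymbol{\theta}) \, \Delta\chi_i ,
\]

where χ_i and Δχ_i are the comoving distance and thickness of shell i, δ_i the matter overdensity recorded in it, a(χ_i) the scale factor at that distance, and χ_s the comoving distance of the sources.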
Furthermore, we also carry out additional simulations that account for the presence of massive neutrinos as a hot dark matter admixture. Neutrinos amount to only about 1% of the matter density, and their fast, relativistic motions at early times initially prevent them from responding to structure growth, but at late times they slow down and begin to accumulate, first in galaxy clusters and later in less massive structures. This leads to subtle, scale-dependent impacts on how cosmic structures grow. Again, the future observational data will be precise enough to detect these percent-level differences, and thus we need simulations that can accurately predict the corresponding scale- and time-dependent effects.
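The magnitude of this effect can be gauged with the standard linear-theory relations (quoted here for illustration only; they are no substitute for the simulations, precisely because the true effect is scale- and time-dependent and non-linear):

\[
\Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93.14\,\mathrm{eV}}, \qquad f_\nu \equiv \frac{\Omega_\nu}{\Omega_m}, \qquad \left.\frac{\Delta P}{P}\right|_{k \gg k_{\mathrm{fs}}} \approx -8 f_\nu ,
\]

where Σm_ν is the summed neutrino mass, f_ν the neutrino fraction of the matter density, and k_fs the free-streaming scale. A fraction f_ν ≈ 0.01 thus already suppresses the small-scale matter power spectrum by roughly 8%.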
Figure 2: Impact of non-linear cosmic evolution on the baryonic acoustic oscillations on the largest scales. The peak positions serve as cosmological standard rulers and are affected at the percent level, an effect that our simulations can precisely measure for the first time.
For the present project, we have developed a major new version of our GADGET code, which is one of the most widely used cosmological codes in the field. GADGET-4 [3] uses a novel approach to hybrid shared-memory parallelization based on MPI-3, in which all MPI ranks on a node can bypass the MPI stack and directly synchronize their work via shared memory. One MPI task per shared-memory node is set aside to serve incoming communication requests from other nodes, thereby establishing a highly efficient one-sided communication model. The code also contains sophisticated on-the-fly postprocessing functionality, such as a substructure finder and merger-tree construction.
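The following minimal C sketch illustrates this MPI-3 shared-memory mechanism (a generic illustration with made-up buffer sizes and role assignments, not an excerpt from the GADGET-4 sources):

/* Minimal sketch of the MPI-3 shared-memory setup described above.
 * Names and sizes are illustrative, not taken from GADGET-4. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Communicator containing only the ranks that share memory on this node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank, node_size;
    MPI_Comm_rank(node_comm, &node_rank);
    MPI_Comm_size(node_comm, &node_size);

    /* Shared-memory window: every rank on the node can read and write the
       segments of the others directly, bypassing the MPI stack. */
    char *base;
    MPI_Win win;
    MPI_Win_allocate_shared((MPI_Aint)(1 << 20) /* 1 MB per rank, illustrative */,
                            1, MPI_INFO_NULL, node_comm, &base, &win);

    /* Obtain a direct pointer into a neighbour rank's segment. */
    if (node_size > 1) {
        MPI_Aint seg_size;
        int disp_unit;
        char *neighbour;
        MPI_Win_shared_query(win, (node_rank + 1) % node_size,
                             &seg_size, &disp_unit, &neighbour);
        /* 'neighbour' is now dereferenceable like ordinary memory
           (with appropriate synchronization, e.g. MPI_Win_sync). */
        (void)neighbour;
    }

    /* One rank per node is set aside as a communication server; in the real
       code it would loop, probing for requests from other nodes and serving
       them out of the shared window while the other ranks compute. */
    if (node_rank == node_size - 1)
        printf("world rank %d acts as this node's communication server\n",
               world_rank);

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

The design gain is that intra-node exchanges become plain loads and stores, while inter-node traffic is funneled through the dedicated server rank, which answers requests asynchronously without interrupting the computing ranks.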
The other code we have used in this project is our moving-mesh code AREPO, which is particularly well adapted to cosmological hydrodynamical simulations of galaxy formation. We use it to run the hydrodynamical twin of our GADGET-4 dark matter simulations, so that the impact of baryonic physics on cosmic structure formation can be studied in detail, and the semi-analytic galaxy formation code can be calibrated for these effects. By far the most expensive run of MillenniumTNG is the flagship hydrodynamical model in a 740 Mpc box, computed on 2560 SuperMUC-NG nodes with 122,880 cores. It consumed slightly more than 100 million core-hours to completion and is by far the largest high-resolution simulation of galaxy formation carried out to date worldwide. It improves on the volume of TNG300, the previous record holder, by a factor of about 15. The restart files of the calculation alone amount to a gigantic 70 TB. Indeed, a major challenge of the simulation was to fit it into the available memory, and to devise postprocessing strategies that still allow an effective scientific analysis of the hundreds of TB it produced.
But the results are rewarding. For example, we can finally study with precision the impact of mild non-linear evolution on the so-called baryonic acoustic oscillations measurable in the matter and galaxy distributions at different cosmic epochs. The precise locations of these peaks serve as standard rulers that can be used to reconstruct the cosmic expansion history, which in turn gives access to constraints on dark energy. Figure 2 shows a precision measurement from our MillenniumTNG simulations demonstrating that, by the present time, the baryonic acoustic oscillations are slightly affected even in their first peak, and in particular in its precise location. This effect needs to be accounted for in future cosmological inferences.
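For a spatially flat universe, the standard background relations behind this argument read (quoted for orientation):

\[
\theta_{\mathrm{BAO}}(z) = \frac{r_d}{D_M(z)}, \qquad D_M(z) = \int_0^z \frac{c \, \mathrm{d}z'}{H(z')} ,
\]

where r_d ≈ 147 Mpc is the comoving sound horizon at the drag epoch and D_M the comoving angular diameter distance. A percent-level non-linear shift of the measured peak position therefore biases the inferred distance D_M(z), and with it the reconstructed expansion history H(z), at the same level if left uncorrected.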
The MillenniumTNG simulations are now complete and their analysis is in full swing. Their rich data sets, including merger trees that link more than 20 billion galaxies across cosmic time, full matter lightcones of different geometries and redshift depths, as well as galaxy properties such as stellar masses, metallicities, and morphologies, allow precision studies of galaxy clustering and weak gravitational lensing that we expect to be highly informative for theoretical cosmology. The very large data sets we created, nearly 2 PB in size, will be mined for many years to come and will give rise to dozens of scientific publications. We also expect that the new methodologies developed as part of the project, including the GADGET-4 code, the significant scaling improvements of the AREPO code that we had to realize in order to scale successfully beyond 10⁵ MPI ranks on SuperMUC-NG, as well as our postprocessing pipelines, will be instrumental for future simulation projects in the field.
[1] Springel V. et al., Nature 435 (2005) 629–636
[2] Springel V. et al., MNRAS 475 (2018) 676–698
[3] Springel V. et al., MNRAS 506 (2021) 2871–2949
[4] MillenniumTNG project website: wwwmpa.mpa-garching.mpg.de/mtng
[5] IllustrisTNG project website: www.tng-project.org
Prof. Dr. Volker Springel
Max Planck Institute for Astrophysics
Karl-Schwarzschild-Strasse 1
85748 Garching
vspringel@mpa-garching.mpg.de