Testing Neutrino Transport Treatments in 3D Supernova Simulations
Principal Investigator:
Hans-Thomas Janka
Affiliation:
Max-Planck-Institut für Astrophysik, Garching (Germany)
Local Project ID:
pr62za
HPC Platform used:
SuperMUC of LRZ
Date published:
March 2019
Introduction
Supernova explosions (SNe) terminate the lives of stars more massive than about nine solar masses. They are among the most spectacular phenomena in the universe: a SN can become as bright as a whole galaxy for weeks and thereby releases more energy than the sun will radiate during its 13 billion years of life. SNe are the birth sites of neutron stars and black holes, and they play an important role in galaxy evolution, because the matter expelled in the explosions is enriched with the chemical elements that allowed the Earth and the life on it to form.
Only in recent years have three-dimensional (3D) simulations of SNe become feasible, enabled by the growing power of modern supercomputers and the availability of highly parallelized simulation programs. The PI's team participates in this worldwide effort in a leading position, supported by an Advanced Grant of the European Research Council entitled "Modeling Stellar Collapse and Explosion: Evolving Progenitor Stars to Supernova Remnants" [1]. The goal of this project is the consistent 3D modeling of SN explosions from the final phase of convective shell burning through stellar collapse and explosion to the early SN remnant evolution.
One of the central questions in this context concerns the physical mechanism by which the catastrophic collapse of the stellar core to a compact object (a neutron star or black hole) is reversed to the powerful ejection of most of the star's material in the SN blast. For more than 50 years it has been hypothesized that neutrinos are the crucial agents that establish the energy transfer needed to drive this explosion. The energy of less than one percent of all emitted neutrinos is sufficient to do this job, because these elementary particles carry away the gigantic gravitational binding energy of the compact remnant, which exceeds the SN energy more than a hundredfold.
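A rough order-of-magnitude estimate with typical textbook values (not numbers from the simulations reported here, and with structure factors of order unity omitted) illustrates this hierarchy of energies:

\[
E_{\mathrm{bind}} \sim \frac{G M_{\mathrm{NS}}^2}{R_{\mathrm{NS}}} \sim \frac{G\,(1.5\,M_\odot)^2}{12\,\mathrm{km}} \sim \mathrm{few} \times 10^{53}\,\mathrm{erg},
\qquad
E_{\mathrm{SN}} \sim 10^{51}\,\mathrm{erg},
\]

so transferring less than one percent of the energy radiated in neutrinos to the overlying stellar material suffices to power a typical explosion.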
Results and Methods
Only recently have modern 3D neutrino-hydrodynamical simulations, achieved for the first time by the PI and his team and made possible by GCS and PRACE computer-time grants, also on SuperMUC of LRZ, been able to provide quantitative support for this long-standing theoretical scenario [2,3].
SN modeling in 3D with full-fledged neutrino transport and a state-of-the-art treatment of the neutrino interactions is extremely CPU-time consuming: a single explosion run requires more than half a year of continuous parallel computing on over 16,000 SuperMUC processors. Even at this expense, approximations in the neutrino transport have to be accepted, because present-day supercomputers are far from powerful enough to solve the time-dependent Boltzmann transport equation in six-dimensional phase space (i.e., 3D in position and 3D in momentum) for all three flavors of neutrinos and antineutrinos.
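For reference, the equation in question is the Boltzmann equation for the neutrino phase-space distribution function f(\mathbf{r}, \mathbf{p}, t); written schematically (flat spacetime, fluid-velocity terms omitted) it reads

\[
\frac{1}{c}\,\frac{\partial f}{\partial t} + \mathbf{n} \cdot \nabla f = \left( \frac{\delta f}{\delta t} \right)_{\mathrm{coll}},
\]

where \mathbf{n} is the unit vector of the propagation direction and the right-hand side collects all collision terms (emission, absorption, scattering, pair processes). Since f depends on three position coordinates, two propagation angles, the neutrino energy, and time, this is the (6+1)-dimensional problem mentioned above, to be solved for each neutrino species.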
Therefore the Prometheus-Vertex code of the PI's team makes use of the so-called "ray-by-ray plus" (RbR+) approximation, in which the radiation intensity is assumed to be axially symmetric around the radial direction at every point of the spatial polar-coordinate (or axis-free Yin-Yang) grid. Nonradial components of the neutrino fluxes are thereby ignored, on the assumption that they are of minor importance as long as the collapsing stellar core does not become globally deformed by, e.g., centrifugal flattening caused by rapid rotation. The RbR+ approximation thus reduces the computational problem to the time integration of independent 1D transport problems for all angular (latitudinal-azimuthal) grid directions. For this purpose the Prometheus-Vertex code employs a solver for the neutrino two-moment (i.e., energy and momentum) equations with an accurate variable-Eddington closure deduced from a 1D Boltzmann equation. Computing more than 16,000 such 1D transport problems (each depending on radius, neutrino energy, and the polar angle of neutrino propagation) with little communication facilitates a highly efficient parallel implementation.
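Schematically, and with fluid-velocity and general-relativistic terms omitted for clarity, the 1D moment equations solved along each ray take the form

\[
\frac{\partial E}{\partial t} + \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 F\right) = C^{(0)},
\qquad
\frac{1}{c^2}\frac{\partial F}{\partial t} + \frac{\partial P}{\partial r} + \frac{3P - E}{r} = C^{(1)},
\]

for the energy-dependent neutrino energy density E, flux F, and pressure P, with collision source terms C^{(0)} and C^{(1)}. The system is closed by the variable Eddington factor P/E, which Vertex obtains from the solution of a simplified ("model") 1D Boltzmann equation rather than from an algebraic formula.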
Of course, this approximation must be tested. To this end the PI's team developed the ALCAR code, which solves the full multidimensional (FMD) neutrino transport by a two-moment (M1) scheme with an algebraic closure relation [4]. M1 schemes are currently a very popular approximation of neutrino transport, likewise replacing the infeasible rigorous integration of the (6+1)-dimensional Boltzmann equation. They are complementary to RbR+ because nonradial flux components are taken into account. Based on time-dependent axisymmetric (2D) simulations with an M1 scheme and on stationary low-resolution solutions of the Boltzmann equation, RbR+ was criticized in the recent literature for lacking accuracy and for producing artificial SN explosions in 2D.
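The difference between the two closure strategies can be made concrete with a small sketch. The following illustrative Python snippet assumes the widely used Minerbo (maximum-entropy) closure in its common polynomial form; whether this is exactly the closure employed in ALCAR is not specified here. It shows how an algebraic M1 scheme obtains the Eddington factor, and hence the radiation pressure, directly from the evolved moments, with no Boltzmann solve required:

import numpy as np

def eddington_factor_minerbo(f):
    # Algebraic Eddington factor chi(f) of the Minerbo (maximum-entropy)
    # closure, a common choice in M1 neutrino-transport codes (assumed
    # here for illustration). f = |F| / (c E) is the flux factor; chi
    # interpolates between the diffusive limit chi = 1/3 (f = 0) and
    # free streaming chi = 1 (f = 1).
    f = np.clip(f, 0.0, 1.0)
    return 1.0/3.0 + (2.0*f**2/15.0)*(3.0 - f + 3.0*f**2)

def close_moments(E, F, c=2.99792458e10):
    # Closure step of an M1 scheme (1D illustration): the radiation
    # pressure P = chi * E follows algebraically from the evolved
    # moments E (energy density) and F (flux); cgs units assumed.
    flux_factor = np.abs(F) / (c * np.maximum(E, 1e-300))
    return eddington_factor_minerbo(flux_factor) * E

# Example: a moderately forward-peaked radiation field with f = 0.5
E = 1.0e30                      # erg/cm^3 (arbitrary illustrative value)
F = 0.5 * 2.99792458e10 * E     # flux chosen so that f = 0.5
print(close_moments(E, F))      # P = chi(0.5) * E, with chi(0.5) ~ 0.44

In the variable-Eddington-factor approach of Vertex, by contrast, the function chi(f) is replaced by a closure computed from a model Boltzmann solution on each radial ray, at the price of discarding nonradial flux components.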
For this project we performed, for the first time, 3D full-sphere hydrodynamical simulations with the ALCAR code, using both the M1-FMD and the RbR+ approximation for the neutrino transport. We considered progenitor stars of 9 and 20 solar masses, each with low (L) spatial resolution (320, 40, and 80 grid cells in the radial, lateral, and azimuthal directions) and high (H) resolution (640, 80, and 160 cells), with 15 energy groups for the neutrino transport in all runs. A corresponding set of 2D cases was also calculated. To save computer time in these test models, we applied various simplifications of the complex neutrino interaction physics. This allowed us to conduct the project with 30 million core hours, using up to 8,000 cores and up to 1.5 TByte of SCRATCH space per job. In total, 100 TBytes of data were generated.
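To put the resolution labels into perspective: doubling the number of cells in each spatial dimension raises the zone count by a factor of eight and, assuming a CFL-limited time step that shrinks with the radial cell width, roughly doubles the number of time steps, so an "H" run costs on the order of sixteen times as much as the corresponding "L" run:

\[
\frac{640 \times 80 \times 160}{320 \times 40 \times 80} = 8,
\qquad
8 \times 2 = 16 .
\]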
With the chosen physics setup, the 9 solar-mass star explodes, whereas the 20 solar-mass progenitor fails to develop a SN explosion (Fig. 1). Thus we could test the FMD against the RbR+ scheme for both successful and unsuccessful SN cases.
In 3D, the agreement between the FMD and RbR+ results for both the "L" and "H" cases is extremely satisfactory (see Figs. 1–3) [5,6]. This verifies and backs up the published 3D SN results of the Garching group produced with the Prometheus-Vertex code and its RbR+ neutrino transport.
On-going Research and Outlook
Unfortunately, higher-resolution simulations for testing the convergence of the 3D results are currently prohibited by their huge computational demands. ALCAR will be further upgraded with improved neutrino interactions, an axis-free Yin-Yang grid, and a 3D gravity treatment in order to investigate the globally deformed SNe expected for fast rotation.
Researchers
Robert Glas, Hans-Thomas Janka (PI), Oliver Just
Project partner
Astrophysical Big Bang Laboratory, RIKEN (Japan)
References and Links
[1] www.mpa-garching.mpg.de/220337/Modeling-Stellar-Collapse-and-Explosion
[2] H.-Thomas Janka, Tobias Melson, and Alexander Summa. 2016. Physics of Core-Collapse Supernovae in Three Dimensions: A Sneak Preview. Annu. Rev. Nucl. Part. Sci. 66, 1 (2016), 341–375. DOI: https://doi.org/10.1146/annurev-nucl-102115-044747
[3] H.-Thomas Janka. 2018. Zündende Neutrinos [Igniting Neutrinos]. Physik Journal 17, 3 (2018), 47–53. http://www.pro-physik.de/details/physikjournalArticle/10870177/Zuendende_Neutrinos.html
[4] Oliver Just, Martin Obergaulinger, and H.-Thomas Janka. 2015. A New Multidimensional, Energy-dependent Two-moment Transport Code for Neutrino-hydrodynamics. Mon. Not. R. Astron. Soc. 453, 4 (2015), 3386–3413. DOI: https://doi.org/10.1093/mnras/stv1892
[5] Robert Glas, Oliver Just, H.-Thomas Janka, and Martin Obergaulinger. 2018. Three-Dimensional Core-Collapse Supernova Simulations with Multi-Dimensional Neutrino Transport Compared to the Ray-by-Ray-plus Approximation. Astrophys. J., in press. arXiv:1809.10146.
[6] Robert Glas, H.-Thomas Janka, Tobias Melson, Georg Stockinger, and Oliver Just. 2018. Effects of LESA in Three-Dimensional Supernova Simulations with Multi-Dimensional and Ray-by-Ray-plus Neutrino Transport. Astrophys. J., submitted.
Scientific Contact:
Prof. Dr. Hans-Thomas Janka
Max-Planck-Institut für Astrophysik, Garching
Karl-Schwarzschild-Straße 1
D-85748 Garching (Germany)
e-mail: thj [@] MPA-Garching.MPG.DE
NOTE: This report was first published in the book "High Performance Computing in Science and Engineering – Garching/Munich 2018".