Our research highlights serve as a collection of feature articles detailing recent scientific achievements on GCS HPC resources.
Using the Intel OSPRay engine and the VisIt visualization software, experts from LRZ, Intel, and the Australian National University created the largest-ever visualization of astrophysical turbulence. Image credit: LRZ.
Human imagination has long been piqued by images of space. As telescopes, low-frequency arrays, and other technologies evolve, humanity is brought ever closer to vibrant, exotic, and mysterious phenomena happening far from our home solar system.
For astrophysicists and cosmologists, new technology enables far more than just seeing beautiful images of far-off realms. It plays an indispensable role in learning how stars are born, how they die, and how those massive explosions spread elements across the universe, ultimately bringing us closer to understanding how the Earth as we know it came to be.
In the past decade, high-performance computing (HPC) has further accelerated and expanded the scope of astrophysics, enabling increasingly larger, more detailed simulations of supernovae and galaxy formation, among countless other research areas.
Recently, a team of researchers led by Australian National University (ANU) professor Christoph Federrath used HPC resources at the Leibniz Supercomputing Centre (LRZ) in Garching near Munich to run the largest magnetohydrodynamic simulation of astrophysical turbulence ever performed; magnetohydrodynamic simulations include the effects of magnetic fields. The team partnered with visualization experts at LRZ and Intel to turn their massive calculation into a stunning visualization. The visualization, produced with the Intel OSPRay rendering engine and the VisIt software, is a finalist for the Best Scientific Visualization & Data Analytics Showcase award at SC19, the world’s premier supercomputing conference, taking place Nov. 17–22, 2019, in Denver, Colorado (USA).
Turbulent times
To the naked eye, the space between star systems like our own solar system seems to be a vast, featureless expanse, but this space is in reality filled by the interstellar medium (ISM), a gaseous environment that plays a major role in transporting elements across the universe, ultimately shaping galaxies by influencing the formation of new stars. Much of the energy traversing the ISM was ejected from violent supernovae and stellar winds, leading to chaotic, turbulent motions. Turbulence is one of the last major unsolved problems in physics, and understanding turbulent motions on such a massive scale requires large-scale computing resources.
“The general challenge with astrophysics simulations is that they are all multiscale problems,” said Dr. Salvatore Cielo, researcher at LRZ and leader of the visualization efforts surrounding the ANU-Intel-LRZ collaboration. “In order to understand star formation, we need to model objects often as large as galaxies, but with a resolution orders of magnitude higher to properly account for the physics of the ‘building blocks’ of stars, such as turbulence. Turbulence is especially demanding to simulate, as it tends not to be localized, but rather volume-filling; there is no symmetry or geometric construct the physicist can take advantage of.”
The simulation, recently produced by Prof. Federrath, reached a resolution of more than 10,000 grid cells along each spatial dimension, for a total of more than 1,000 billion resolution elements, making it the largest such simulation ever performed. This resolution was necessary to capture the transition scale between supersonic turbulence (which is self-sustaining via shocks) and subsonic turbulence (which decays more rapidly).
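A quick back-of-the-envelope sketch shows how these figures fit together. The grid size and bytes-per-cell breakdown below are illustrative assumptions, not numbers from the team's paper; only the "more than 10,000" cells per dimension and the 23 terabytes per snapshot mentioned below come from this article.

    # Back-of-the-envelope sketch (illustrative only): how >10,000 cells per
    # dimension yields "more than 1,000 billion" resolution elements, and what
    # that implies per snapshot. Grid size and field count are assumptions.
    cells_per_dim = 10_000            # "more than 10,000" per the article
    total_cells = cells_per_dim ** 3  # 1.0e12 cells, i.e. 1,000 billion

    snapshot_bytes = 23 * 1024**4     # ~23 TB per snapshot (from the article)
    bytes_per_cell = snapshot_bytes / total_cells

    print(f"total cells:    {total_cells:.2e}")    # ~1e12
    print(f"bytes per cell: {bytes_per_cell:.0f}") # ~25, i.e. a few 4-byte fields

The result of roughly 25 bytes per cell is consistent with storing a handful of single-precision variables (such as density, velocity, and magnetic field components) per grid cell, though the actual on-disk layout is an assumption here.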
Besides enabling the first-ever measurement of this scale, the simulation very accurately reproduces the complex internal structure of real star-forming regions, as shown in the visualization.
Each snapshot required more than 23 terabytes of disk space, creating an enormous amount of data to visualize. Using the OSPRay engine and VisIt, the team was able to take advantage of nearly all of SuperMUC-NG’s 6,336 nodes. The team used ray tracing, a more realistic and, in turn, more computationally demanding method of rendering graphics that lends itself well to parallelization on HPC resources.
This novel method gives researchers direct access to SuperMUC-NG’s compute power for the calculations, visualization, and post-processing alike, rather than requiring data transfers to dedicated systems equipped with a traditional graphical environment.
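As a rough illustration of what such a GUI-free, scripted rendering workflow can look like, below is a minimal sketch using VisIt's Python command-line interface. The snapshot file name, variable name, and output settings are hypothetical stand-ins, and selecting the OSPRay backend (which varies by VisIt version) is omitted; this is not the team's actual pipeline.

    # Minimal sketch of a headless VisIt rendering script, e.g. launched with
    # "visit -nowin -cli -s render.py" on the compute nodes. The snapshot file
    # and variable names are hypothetical stand-ins for the team's data.
    OpenDatabase("turbulence_snapshot.silo")  # hypothetical snapshot file
    AddPlot("Volume", "density")              # volume-render a scalar field

    s = SaveWindowAttributes()
    s.format = s.PNG
    s.fileName = "turbulence_frame"
    s.width, s.height = 3840, 2160            # arbitrary 4K output size
    SetSaveWindowAttributes(s)

    DrawPlots()   # triggers the (parallel) rendering
    SaveWindow()  # writes the image to disk

Because a script like this runs entirely in batch mode, frames can be rendered on the same nodes that hold the simulation output, which is what avoids moving tens of terabytes per snapshot to a separate graphics system.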
The team will present its visualization work on Wednesday, Nov. 20, as part of the Scientific Visualization & Data Analytics Showcase at SC19. More information on the team’s presentation can be found here. The team’s technical paper can be found here.
-Eric Gedenk