
10 Years of Supercomputing Development

Supercomputers, or High Performance Computing (HPC) systems, provide the platform for tackling the most challenging scientific and engineering problems. Whilst today’s technologies provide the tools to conduct studies and obtain results that would have been impossible just one decade ago, scientists eagerly await the availability of even more powerful HPC systems to continue solving major world problems.

Moore’s Law, which observes that the number of transistors on a chip doubles roughly every two years, has long served as shorthand for the pace of computing progress. Supercomputers sit at the pinnacle of this computational innovation, and their blistering speed keeps climbing exponentially. In 2002, Japan’s Earth Simulator was the fastest machine in the world, capable of 36 trillion calculations per second. By mid-2013, China’s Tianhe-2 supercomputer had reached a theoretical peak of 55 quadrillion calculations per second, a 55 followed by 15 zeros.
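As a rough sanity check, one can extrapolate the 2002 figure forward under a naive reading of Moore's Law in which peak performance doubles every two years. The sketch below takes the machine figures from the text; the doubling model itself is the stated assumption, and the comparison suggests supercomputers actually outpaced that simple projection:

```python
# Back-of-envelope check: does supercomputer performance track a naive
# "doubling every two years" extrapolation? (Illustrative model only.)

def moores_law_projection(base_flops, base_year, target_year, doubling_years=2.0):
    """Project performance forward assuming one doubling per `doubling_years`."""
    return base_flops * 2 ** ((target_year - base_year) / doubling_years)

earth_simulator_2002 = 36e12   # ~36 trillion calculations/s (2002)
tianhe2_2013 = 55e15           # ~55 quadrillion calculations/s peak (2013)

projected_2013 = moores_law_projection(earth_simulator_2002, 2002, 2013)
print(f"Projected 2013 peak: {projected_2013:.2e} calc/s")
print(f"Actual Tianhe-2 peak: {tianhe2_2013:.2e} calc/s")
print(f"Actual / projected:  {tianhe2_2013 / projected_2013:.0f}x")
```

Under this naive model the 2002 figure projects to only about 1.6 quadrillion calculations per second by 2013, a factor of roughly 34 below Tianhe-2's actual peak.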

[Figure: Performance Development of Supercomputers]

Pure speed is not the only metric of supercomputing success, though. As machines get faster and larger, they require ever more intricate infrastructure for cooling, data storage, and data analysis. A well-developed and energy-efficient infrastructure is essential if supercomputers are to continue solving major world problems too large to study by experiment alone.

Over the last ten years, Germany has drawn on technological and infrastructural innovation to position itself as the leader in European supercomputing. The 2007 inauguration of the Gauss Centre for Supercomputing (GCS) organizationally combined the three main German supercomputing centers in Garching (LRZ), Jülich (JSC), and Stuttgart (HLRS), giving Germany a complementary portfolio of centers whose expertise spans fundamental research, the life sciences, and scientific engineering. Each GCS centre hosts a supercomputer well beyond the 1-petaflops performance mark (1 petaflops = 1 quadrillion floating-point operations per second, a 1 followed by 15 zeros), placing all three institutions amongst the most powerful computing centres worldwide.

Supercomputing in Science and Research

Scientists rely on sophisticated computer simulations as highly effective tools for their research and development activities, and whilst today’s supercomputers provide them with tools of superior capability, their demand for faster and more powerful HPC systems is far from being met.



Supercomputing in Astrophysics

When massive stars die, they collapse on themselves before exploding. This event, called a core-collapse supernova, has challenged computational astrophysicists for decades: the complex dynamics and microphysical processes are impossible to measure through observation except during rare events close to Earth, and they require massive computing power to simulate. Scientists hope that a better understanding of supernovae will help explain how heavy elements have been created and spread throughout the universe.

In 2012, Dr. Ewald Müller and a team of researchers from the Max Planck Institute for Astrophysics and the Leibniz Supercomputing Center took a major step forward in understanding supernovae by creating the first fully three-dimensional simulation of a core-collapse supernova. The team was able to simulate several hours of the initial explosion. Astrophysics codes routinely push even the most powerful computers to their limits, and as computer scientists race toward the exaflop horizon, three orders of magnitude faster than current petaflop machines, computational astrophysicists will be scaling up their codes to gain even more insight into these complex phenomena.


Supercomputing in Scientific Engineering

[Image: 10 Years of SC - Scientific Engineering. © Courtesy of the Behr GmbH & Co. KG]

One of the largest impediments to purely electric vehicles is the battery: though performance has improved, electric cars still cannot travel long distances without stopping to charge. Improving battery design in simulation, in turn, demands better-performing supercomputers. Research led by Dr. Jenny Kremser, project manager at the Automotive Simulation Center Stuttgart (ASC-S), uses computational fluid dynamics (CFD) simulations on HLRS’ HPC systems to model lithium-ion batteries. Even with steady performance gains in CFD codes, the complexity of simulating a 2–3 hour electrothermal test leaves the researchers wanting still more computing power, so that fast, accurate simulations can guide more efficient battery designs for vehicles.


Supercomputing in Climate Research

In the last 10 years, models of the Earth’s weather and climate have become sharper, more accurate, and able to run larger and longer experiments. Despite these advances, computational climate science has a lot of room to grow. A decade ago, Professor Pier Luigi Vidale, now at the United Kingdom’s University of Reading, was working on a climate model for Europe that divided the map into 50 kilometer “grid boxes”. His simulations could provide such moderate-resolution results over a limited area for a 30-year period.

Today, Prof. Vidale and his team can not only simulate the entire globe, rather than just Europe, but can also sharpen the resolution, using 25- or even 12-kilometer grid boxes, and make use of ensembles (many simulations run at the same time). They can now also include the stratosphere and the physical processes that take place in it, up to a height of 85 kilometers. Simulations this large require between 4,000 and 12,000 processing cores each, and though there have been considerable advances in efficiency and scalability, there is still plenty of room to grow. Computational climate scientists are looking to the exascale horizon to use grid boxes finer than 1 kilometer, helping to accurately simulate clouds and allowing researchers to simulate changes in both climate and weather patterns.
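A back-of-envelope sketch helps show why finer grid boxes are so expensive: halving the horizontal spacing roughly quadruples the number of boxes, and the shorter timestep needed for numerical stability adds another factor, so cost grows roughly with the cube of the refinement. The Earth surface area and the cubic cost model below are illustrative assumptions, not figures from the text:

```python
# Rough cost scaling for refining a global atmosphere grid.
# Assumes cost ~ (number of grid boxes) x (number of timesteps),
# with the timestep shrinking in proportion to the grid spacing.

EARTH_SURFACE_KM2 = 510e6  # approximate surface area of the Earth

def grid_boxes(spacing_km):
    """Approximate number of horizontal grid boxes covering the globe."""
    return EARTH_SURFACE_KM2 / spacing_km ** 2

def relative_cost(coarse_km, fine_km):
    """Cost multiplier going from coarse to fine spacing:
    (refinement)^2 more boxes x (refinement) more timesteps."""
    return (coarse_km / fine_km) ** 3

for spacing in (50, 25, 12, 1):
    print(f"{spacing:>3} km grid: ~{grid_boxes(spacing):,.0f} boxes, "
          f"~{relative_cost(50, spacing):,.0f}x the cost of a 50 km grid")
```

Under these assumptions, moving from 50-kilometer to 1-kilometer boxes multiplies the cost by roughly 125,000, which is why sub-kilometer global simulation is tied to the exascale horizon.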



Supercomputing in Life Sciences

[Image: 10 Years of SC - Life Sciences. © HLRS]

Researchers increasingly turn to supercomputers to aid in developing medical technology. HLRS computational scientist Ralf Schneider and a team of researchers employ supercomputing to simulate bone implants, particularly for the femur, to increase the number of successful surgeries. In the past, even supercomputers were unable to accurately simulate the complex physical processes associated with bone material and the “load trajectories” that come from walking.

Schneider’s current simulations can accurately model femur bones, implants, and the load trajectories acting on the healing bone. As computing power continues to increase, Schneider hopes these simulations can be brought down to the commercial level, allowing cluster-based computers to take CT data and accurately simulate individual femurs for more personalized treatment options. The next step, though, is to further develop algorithms that take the material makeup of individual bones into account. Including massive amounts of statistical data on bone material makes the simulations dramatically more complex.



Supercomputing in Materials Sciences

Nanoscience is a relatively young field, and supercomputing has provided a major catalyst for studying atomic and molecular behavior. By gaining a better understanding of molecular and atomic interactions, researchers can make big advances in chemistry, materials science, combustion, and climate science, among other research fields. Professor Jürg Diemand of the University of Zürich uses supercomputing technology to study the nanoscale processes involved in nucleation and phase transitions. Understanding how vapors change into liquids at the molecular level can help researchers studying cloud formation and has practical implications for a number of industrial processes.

Molecular simulations such as Diemand’s require massive computing power: he and his team simulate up to 8 billion atoms interacting over millions of timesteps. Simulations on this scale would have been impossible ten years ago; even five years ago, the number of molecules simulated would have been closer to 10,000. Most nucleation research relies on laboratory experiments, but the new large-scale simulations now allow direct comparison with them, because they resolve nucleation for the first time at the same temperatures, densities, and nucleation rates as laboratory conditions. As supercomputing power continues to increase, Diemand hopes to expand his molecular simulations to cover entire macroscopic systems, such as micrometer-sized droplets or ice crystals in clouds, and to take on more complex chemistry. –by Eric Gedenk
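To get a feel for the scale of such runs, a back-of-envelope estimate of the memory needed just to hold the state of an 8-billion-atom simulation is sketched below; the 64-bytes-per-atom figure is an illustrative assumption, not a number from the text:

```python
# Back-of-envelope memory footprint for an 8-billion-atom molecular
# dynamics run. Per-atom storage is a rough assumption: 3 position
# + 3 velocity components as 8-byte doubles, plus ~16 bytes overhead.

N_ATOMS = 8_000_000_000
BYTES_PER_ATOM = 3 * 8 + 3 * 8 + 16  # = 64 bytes per atom

total_bytes = N_ATOMS * BYTES_PER_ATOM
print(f"~{total_bytes / 1e12:.1f} TB just to hold the particle state")

# And the work: millions of timesteps means quadrillions of per-atom updates.
TIMESTEPS = 1_000_000
updates = N_ATOMS * TIMESTEPS
print(f"~{updates:.0e} particle updates over a million steps")
```

Even before counting the force calculations between atoms, the particle state alone runs to roughly half a terabyte, far beyond a single workstation and one reason such simulations need a distributed supercomputer.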

Article as PDF file: Flyer: 10 Years of Supercomputing (PDF, 4 MB)