Hadronic corrections to the muon magnetic moment; Isospin breaking effects in QCD

Principal Investigator:
Prof. Szabolcs Borsanyi

Bergische Universität Wuppertal

Local Project ID:

HPC Platform used:
SuperMUC-NG at LRZ

Date published:


The muon is an elementary particle, a short-lived cousin of the electron. For many years the calculation of its magnetic moment has disagreed with its measurement, suggesting that a not-yet-known particle or force perturbs the muon. Such a discovery would have profound consequences for our understanding of Nature.

An experiment at the Fermi National Accelerator Laboratory (FNAL) near Chicago recently confirmed this puzzling discrepancy [3], in an announcement that received significant media attention.

On the same day, our ab-initio calculation was published in Nature [1], challenging previous computations and bringing the theoretical prediction closer to the experimental value: no new forces or particles may be needed to explain the FNAL measurement.

Ours is the most precise ab-initio calculation so far, with uncertainties comparable to those of the measurement and of the reference data-driven computations. These computations were carried out, among other resources, using two allocations on SuperMUC-NG, pn68ne and pn56bu.

Project overview

In the above context, ab-initio calculation means lattice quantum field theory. A space-time grid is introduced, and at every point of it the time evolution of various quantum operators is determined (to be more specific, a path-integral formalism is applied to that end). In some sense it reminds us of meteorology: there, too, a three-dimensional grid is introduced, temperatures, pressures and wind velocities are measured, and the underlying equations determine the time evolution. In both cases it is a heroic effort.

In this long-term project, which lasted six years and had two stages, we computed the leading-order hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon, aLOHVP, using lattice quantum field theory.

In the first stage [1] we reached a relative precision of 2.7%. The results were published in Physical Review Letters, where they were highlighted as an Editors’ Suggestion.

In the second stage [2] we improved the precision by several means and finally obtained the result:

aLO−HVP = 707.5 (2.3)(5.0) [5.5] ,   (1)

with statistical, systematic and total errors, respectively. The relative uncertainty is 0.8%, which is far less than that of other lattice determinations and is comparable to the errors of dispersion-relation-based computations. The result is published in Nature [2] and shown in the figure below.

The figure above shows a comparison of recent results for the leading-order, hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon. Green squares are lattice results: the result obtained in the current project is denoted by BMWc’20 [2], whereas the result of the first stage of this project is denoted by BMWc’17 [1].

For comparison, the results of other lattice collaborations are also listed: Mainz, Fermilab-HPQCD-MILC and European Twisted Mass, all from 2019, and RIKEN-Brookhaven-Columbia from 2018.

Red circles were obtained using the phenomenology-based R-ratio method. The blue shaded region is the value that aLOHVP would have to have to explain the experimental measurement of the magnetic moment, assuming no new physics.

Key aspects of the computation

To achieve this improvement in precision, we addressed several important problems:


    • The largest uncertainty in the first-stage result [1] originated from the finite-volume correction. In the present accounting period we finished dedicated simulations to determine the size of this contribution.
    • We reduced the statistical uncertainty of the light connected contribution by applying low-mode averaging and increasing the number of configurations.
    • For the first time in the literature, we included all leading-order QED and strong isospin-breaking corrections in our calculations. These are essential in order to reach sub-percent accuracy.
    • In order to reach sub-percent precision, the lattice spacing has to be known to a few per-mil accuracy; a dedicated scale-setting computation served this purpose.

The first two of these were carried out in the framework of the “pn68ne: Hadronic corrections to the muon magnetic moment” application, the third using the “pn56bu: Isospin breaking effects in QCD” application. The computer time for the last ingredient (scale setting) was provided by PRACE on a supercomputer outside Germany.

The methods developed here will be useful to continue improving the accuracy of the standard model prediction, as will be required to pursue the search for new physics in ongoing and future experiments designed to measure the magnetic moment of the muon.

Finite volume effects

Here we computed the finite-size correction that is to be added to our results obtained in a box with spatial extent Lref = 6.272 fm and temporal extent Tref = 1.5 Lref = 9.408 fm. We call this the reference box.

In the present grant period we finished the dedicated runs with the 4HEX action, designed to have small taste violations. One set of runs was performed on 56 × 84 lattices with the reference box size and another on 96 × 96 lattices with box size Lbig = Tbig = 10.752 fm. We call this the “big” box; it is much larger than what is used in contemporary lattice field theory simulations.

Using supercomputer resources we measured aLOHVP on these two volumes, and for the difference we find:

aµ(big) − aµ(ref) = 18.1 (2.0)(1.4) ,

where the first error is statistical and the second systematic, the latter being our estimate of the size of lattice artefacts in this quantity.

The remaining, tiny difference between the big box and the infinite volume is computed analytically in next-to-next-to-leading-order chiral perturbation theory (XPT). The complete finite-volume correction, which is to be added to the result obtained in the reference box, is then the sum

[aµ(big) − aµ(ref)]4HEX + [aµ(∞) − aµ(big)]XPT

for which we find 18.7(2.5), where the number in parentheses is the combined statistical and systematic uncertainty of this correction. Compared to our work from the first stage [1], we reduced the uncertainty of the finite-size correction by more than a factor of five.
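As a quick cross-check, the quoted combined error is consistent with adding the individual uncertainties in quadrature. In the minimal sketch below, the uncertainty assigned to the XPT term (0.5) is our illustrative assumption, not a number quoted in [2]:

```python
import math

def quad(*errs):
    """Combine independent uncertainties in quadrature."""
    return math.sqrt(sum(e * e for e in errs))

# Published errors of a_mu(big) - a_mu(ref): 2.0 (statistical), 1.4 (systematic)
lattice_err = quad(2.0, 1.4)             # about 2.44

# The XPT-term uncertainty (0.5 here) is an illustrative assumption;
# combined in quadrature it reproduces the quoted total of about 2.5.
total_err = quad(lattice_err, 0.5)
print(round(total_err, 1))               # 2.5
```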

Noise reduction

In the present grant period we finished the evaluation of the current correlator on our ensembles using the noise-reduction technique that utilizes the low modes of the Dirac operator.

We project out the lowest eigenmodes up to around half the strange-quark mass. In the reference box of around 6 fm this means the lowest 1000 eigenvectors, whereas on the lattice of around 11 fm the number of modes required is around 6000.

The figure above shows a comparison of the conventional random-source-based technique, as applied in the first stage of this project [1], and the low-mode technique of the second stage [2], for the upper and lower bounds on aLOHVP.

As a consequence of increasing the number of configurations and applying the noise-reduction techniques, the statistical error on aLOHVP was reduced by about a factor of three: from 7.5 units in the first stage [1] to 2.3 units in the second stage [2].
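As a rough consistency check of how much the noise reduction matters: statistical errors scale like 1/sqrt(N_conf), so achieving the full factor-of-three reduction from statistics alone would have required roughly an order of magnitude more configurations. A back-of-envelope sketch:

```python
# Statistical errors scale like 1/sqrt(N_conf): halving the error
# costs four times the statistics.
err_stage1, err_stage2 = 7.5, 2.3        # statistical errors quoted in [1] and [2]

gain = err_stage1 / err_stage2           # ~3.3x overall error reduction
configs_factor = gain ** 2               # ~10.6x more configurations if done by brute force
print(round(gain, 1), round(configs_factor, 1))   # 3.3 10.6
```

In practice the gain came from both the larger ensembles and the low-mode averaging, which is far cheaper than a tenfold increase in statistics.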

Isospin breaking

The most important goal is, of course, to reach an accuracy compatible with the expected experimental errors. Only then are the findings of experiments costing several hundred million dollars fully exploited, and only with this accuracy can we decide whether new physics is present, and of what sort. Concretely, a sub-percent error is needed.

Electromagnetic and strong isospin-breaking (IB) effects arise from: 

  1. the presence of the electromagnetic interaction,
  2. the mass difference between up and down quarks.

The most prominent consequence of these effects is the mass difference of the neutron and the proton.

IB effects are at the percent level in general, so any result aiming at sub-percent accuracy must include them. This is a very hard task: the electromagnetic interaction is weak and long-ranged, whereas the strong interaction is strong and short-ranged. Keeping both of them in one system is more than just challenging.

In our work [2] IB effects are implemented by taking derivatives of QCD+QED expectation values with respect to the bare parameters, electromagnetic coupling and quark mass difference, and computing the resulting observables on isospin-symmetric configurations. The rationale behind this choice is the possibility to optimally distribute the computing resources among the various IB contributions.
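Schematically, writing e for the electromagnetic coupling and δm for the up-down quark-mass difference, this derivative expansion around the isospin-symmetric point can be sketched as follows (our notation, not taken verbatim from [2]; the first derivative in e vanishes by charge-conjugation symmetry, so the leading QED term is of order e²):

```latex
\langle O \rangle_{e,\,\delta m}
  \;=\; \langle O \rangle_{0}
  \;+\; \frac{e^{2}}{2}\,
        \left.\frac{\partial^{2}\langle O \rangle}{\partial e^{2}}\right|_{0}
  \;+\; \delta m\,
        \left.\frac{\partial \langle O \rangle}{\partial(\delta m)}\right|_{0}
  \;+\; \cdots
```

Each derivative on the right-hand side is itself an expectation value that can be measured on the isospin-symmetric ensembles, which is what allows the computing resources to be distributed independently among the various IB contributions.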

IB effects are included in all the observables that enter our analysis: current-current correlators, meson masses needed to fix the physical point, and scale setting. Not only do we account for QED and strong isospin-breaking effects in our results, we also perform a separation of isospin-symmetric and isospin-breaking contributions. This separation is scheme-dependent and requires a convention.

The figure above shows the various contributions whose sum gives the final value of aLOHVP, together with examples of the corresponding Feynman diagrams. Solid lines are quarks and curly lines are photons; gluons are not shown explicitly, and internal quark loops are shown only if they are attached to photons. Dots represent coordinates in position space, and a box indicates the mass insertion relevant for strong isospin breaking.

The contributions in the first line of the figure are isospin-symmetric; the other Feynman diagrams correspond to the isospin-breaking terms. There are three different types of electromagnetic IB contributions, depending on whether the photon connects two valence-quark lines, a valence-quark and a sea-quark line, or two sea-quark lines. There is also a term representing strong IB effects. Each of the IB terms comes in two varieties, a connected and a disconnected one, the latter referring to contributions in which a sea-quark loop carries a single insertion of the external current.

The numbers give our result for each contribution; they correspond to our reference system size with Lref = 6.272 fm spatial and Tref = 9.408 fm temporal lattice extents. We also explicitly compute the finite-size corrections that must be added to these results; these are given separately in the lower right panel. The first error is the statistical and the second the systematic uncertainty, except for the contributions where only a single, total error is given.


Interestingly, our computation, Equation (1), disagrees with previous phenomenological results, while it seems to require no new physics to explain the magnetic moment of the muon, in contrast with earlier belief. This discrepancy between our result and previous determinations was intensively discussed at many conferences, workshops and round tables.

The current status has been pointedly summed up by Nobel Prize Laureate Frank Wilczek at the Lattice conference in July 2021: “The theoretical community is going to have to get its act together in coming years, to see which is correct, [the phenomenological or our lattice result], if either.”

Let us also cite here Edward Witten, another prominent theoretical physicist, from the December 2021 edition of the CERN Courier magazine: “I think it is very important to improve the lattice gauge theory estimates of the hadronic contribution to the muon moment, in order to clarify whether the fantastically precise measurements that are now available are really in disagreement with the SM.”

To this end we started to improve our previous lattice QCD determination [2]. In the current grant period of pn56bu we took the first steps to move the IB computations towards finer lattices.


[1] Borsanyi et al., Phys. Rev. Lett. 121 (2018) 2

[2] Borsanyi et al., Nature 593 (2021) 7857, 51-55

[3] Muon g-2 Collaboration, Phys. Rev. Lett. 126 (2021) 14