Gauss Centre for Supercomputing e.V.


How Large is the Color Magnetic Field in a Proton?

Principal Investigator:
Andreas Schäfer

Institute for Theoretical Physics, Regensburg University

Local Project ID:
pn69ma
HPC Platform used:
SuperMUC and SuperMUC-NG of LRZ

Date published:
November 2020

With the advent of high-luminosity, spin-polarized particle accelerators, exploring the spin structure of hadrons, i.e. bound states of quarks and gluons, became a major research topic. In the beginning, experimental results contradicted naïve expectations to such an extent that this physics was often summarized under the title “spin crisis”. To be precise, this term referred to the fact that the distribution of the total proton or neutron (collectively called nucleon) spin of ½ among quark and gluon spin as well as quark and gluon orbital angular momentum was hotly disputed.

Over the years, however, to a large extent thanks to lattice calculations, a quite detailed understanding was developed, such that today more subtle questions have moved into the focus of attention. Quantum Chromodynamics (QCD), the theory of quarks and gluons, has strong similarities with Quantum Electrodynamics (QED), the quantum theory of the electromagnetic interaction, but there are also fundamental differences. As in QED, spin couples to the (color) magnetic field, and (color) charged quarks experience a (color) Lorentz force in such a (color) magnetic field. In contrast to QED, however, all quarks and gluons in a nucleon are “confined”, i.e. they cannot be extracted in isolation from a nucleon, making access to the color analogs of well-known electromagnetic effects far more difficult. One can only observe certain color-neutral correlations of quark and gluon fields, which are classified by abbreviations like d2, the correlation we are concerned with in this project. This correlation is judged so important that it has motivated several large-scale experiments and many theory investigations.

Figure 1 shows an up-to-date compilation, with various model predictions to the right and various experimental data as well as our old lattice results [3] to the left. Solid symbols denote results for the proton and open ones those for the neutron. The most important fact presented in this figure is that d2 is small. (Naively one could have expected values anywhere in the range from -0.02 to +0.02, as illustrated by the scatter of the model predictions.) Because d2 can be related to the color Lorentz force on quarks in a color magnetic field, which in turn is oriented along the spin vector of the studied hadron, this implies that this force is weaker than one could have expected.

Another remarkable feature of this plot is that our old lattice results [3] (plotted in green) are displayed together with experimental data, illustrating the status lattice QCD calculations have meanwhile gained in hadron physics. In contrast, model predictions of various sorts are depicted by the symbols on the right. They scatter widely over the phenomenologically plausible region.

While this kind of plotting is flattering for us, it also made us feel a bit uneasy: at the time (2005), lattice QCD techniques were far less developed, and the error analysis leading to the displayed error bars does not meet our present standards. In particular, the most problematic uncertainty, caused by the required extrapolation to zero lattice constant, i.e. to the physical space-time continuum, could not be determined at the time. (In lattice QCD, continuous space-time is replaced by a four-dimensional hypercubic lattice of constant lattice spacing.) This we have now improved.

Results and Methods

The problem of controlling the continuum limit is not only one of the required computer resources (simulations with smaller lattice constants need far more lattice points; their number scales like the fourth power of the inverse lattice spacing) but also a fundamental theoretical one. QCD has different topological sectors, and all of them have to be sampled to obtain correct results. For lattice constants below 0.05 fm this becomes unfeasible, a problem known as “topological freezing”. On the other hand, for lattice spacings above 0.1 fm discretization errors become large, such that the typical lever arm for a continuum extrapolation is only about a factor of two. (The radius of a proton is roughly 0.7 fm, so one needs fine lattices to resolve its structure.) Within the very large CLS collaboration of collaborations we have implemented an algorithmic formulation which circumvents this problem. Figure 2 and Figure 3 show some of the preliminary results we obtained.
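The fourth-power cost scaling mentioned above can be illustrated with a few lines of Python; the box size below is a made-up example value, not one of the actual CLS lattices:

```python
# Illustrative sketch: for a fixed physical volume, the number of lattice
# sites grows like the fourth power of the inverse lattice spacing.

def lattice_sites(box_fm: float, spacing_fm: float) -> int:
    """Number of sites of a hypercubic lattice of physical extent box_fm^4."""
    n = round(box_fm / spacing_fm)  # sites per direction
    return n ** 4

box = 4.8  # fm, hypothetical physical extent
for a in (0.1, 0.05):
    print(f"a = {a} fm -> {lattice_sites(box, a):,} sites")
```

Halving the lattice spacing here multiplies the number of sites by 2^4 = 16, before even accounting for the growing cost per site of the solvers.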

Such calculations proceed in two steps. First, one generates large ensembles of field configurations (typically several thousand configurations per ensemble), which contain information on the structure of all hadrons, i.e. bound states of quarks and gluons. From these one then extracts information on specific aspects of specific hadrons, like d2 of the proton.

The main results of this project are updates of the green lattice points in Figure 1. Looking at our results in Figures 2 and 3, the most important observation is that the lattice-constant dependence is significant for the proton, making this a showcase example of how important control of the continuum limit is and thus adding to the justification of the whole CLS effort.

Although our results, apart from all other systematic uncertainties, also contain the uncertainty of the continuum extrapolation, the resulting error bars are significantly smaller than for the old simulations. In particular, the result for the proton is now distinctly non-zero. (Its size actually fits phenomenological expectations better now.)

Our results provide, as of today, the most precise calculation of the color Lorentz force, i.e. the force exerted by the color magnetic field on a quark moving transversely to the field direction, which in turn is oriented along the hadron spin.

On-going Research / Outlook

While our result for the proton is not incompatible with the experimental data, there is a certain degree of tension, which adds to the motivation for future experiments and lattice studies. Along the lines of the standard Operator Product Expansion techniques used in this work, it will be very demanding to substantially improve on the precision reached. However, A. Schäfer is also part of a large-scale US-Chinese-German collaboration called LPC, which for a number of years has pursued a completely different approach in lattice QCD, going under the name of quasi- or pseudo-distributions, which was shown in [4] to be most promising for studying d2 and related quantities on the lattice. It would be most interesting to perform such simulations on the same configurations used in the present study and to compare the results. This project, which is expected to simultaneously provide much more novel information on nucleon structure, would probably require a comparable amount of computer resources. While SuperMUC-NG is an ideal HPC platform for our purposes, the number of core-hours available for our projects is the factor that matters most to us.

The software we use is largely public domain within the lattice QCD community. It goes under the name of Chroma and is written primarily in C++. It is based on the QDP++ library [5]. The main computational effort in lattice simulations goes into the inversion of huge sparse matrices, for which one can choose from a large variety of highly optimized codes using different algorithms (e.g. multigrid techniques), usually tuned down to the machine level. We did not optimize these solvers but used them as black boxes.
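To illustrate the nature of this solver workload — a minimal sketch, not Chroma's actual implementation — here is a plain conjugate-gradient iteration on a toy symmetric positive-definite system. Production codes apply far more sophisticated, preconditioned variants of such Krylov methods to lattice Dirac operators with billions of rows and columns.

```python
# Conjugate gradient for A x = b, given only the matrix-vector product,
# as used (in heavily optimized form) for the sparse inversions in
# lattice QCD simulations.

def cg(matvec, b, tol=1e-12, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A."""
    x = [0.0] * len(b)
    r = b[:]            # residual b - A x for the starting guess x = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy sparse SPD matrix: tridiagonal "discrete Laplacian plus mass term".
def matvec(v):
    n = len(v)
    return [4 * v[i] - (v[i - 1] if i else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

x = cg(matvec, [1.0] * 8)
```

The key structural point carried over to the real codes is that the matrix is never stored explicitly; only its action on a vector is needed, which is what makes the huge sparse systems of lattice QCD tractable.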

The configurations analyzed were generated by the CLS collaboration (partly by us) in a large and long-term effort; their generation was not part of this project. These data are stored in Regensburg, Mainz and Berlin. For data handling we primarily used HDF5. The project was funded by LRZ with 33 million core-hours of computing time and was additionally supported with an extra allocation of computing time granted to frequent LRZ users in the start-up phase of SuperMUC-NG. The underlying theory and algorithms were well understood from the very beginning; the project required the processing of 237.4 terabytes of data.

References and Links

[1] W. Armstrong et al. (SANE Collaboration), Physical Review Letters 122 (2019) 022002


[3] M. Göckeler et al., Physical Review D 72 (2005) 054507

[4] S. Bhattacharya et al., arXiv:2004.04130

[5] R. G. Edwards and B. Joó, Nuclear Physics B Proceedings Supplements 140 (2005) 832

Research Team

Gunnar Bali, Jacques Bloch, Simon Bürger, Sara Collins, Meinulf Göckeler, Marius Löffler, Andreas Schäfer (PI), Thomas Wurm. All: Institute for Theoretical Physics, Regensburg University

Scientific Contact

Prof. Dr. Andreas Schäfer
Institute for Theoretical Physics, Regensburg University
D-93040 Regensburg/Germany
e-mail: andreas.schaefer [@]

LRZ project ID: pn69ma

November 2020

Tags: LRZ Universität Regensburg EPP