Worlds between theory and experiment
Once Isaac Newton showed that a single gravitational law plus his laws of motion could reproduce the planetary orbits Johannes Kepler had described, explain the tides on Earth, and predict that a comet that had passed by once would return, physicists considered Newtonian mechanics and gravitation to be validated. After these successful tests, they didn’t wait to check every other prediction of Newton’s ideas before treating them as legitimate.
When Jean Perrin and others carefully measured Brownian motion and extracted Avogadro’s number in the early 20th century, they helped cement the kinetic theory of gases and the statistical mechanics that Ludwig Boltzmann and Josiah Willard Gibbs had developed. As with Newtonian mechanics, physicists didn’t require every consequence of kinetic theory to be rechecked from scratch; they considered them all fully and equally legitimate from then on.
Similarly, in 1886–1889, Heinrich Hertz produced and detected electromagnetic waves in the laboratory, measured their speed and other physical properties, and showed that they behaved exactly as James Clerk Maxwell had predicted based on his famous equations. Hertz’s experiments didn’t test every possible configuration of charges and fields that Maxwell’s equations allowed, yet what they did test and confirm sufficed to convince physicists that Maxwell’s theory could be treated as the correct classical theory of electromagnetism.
In all these cases, a theory won broad acceptance after scientists validated only a small (yet robust) subset of its predictions. They didn’t have to validate every single prediction in distinct experiments.
However, there are many ideas in high-energy particle physics that, even though they follow from theoretical constructs already tested to extreme precision, physicists insist on testing anew. Why do they go to this trouble?
“High-energy particle physics” is a four-word label for something you’ve likely already heard of: the physics of the search for subatomic particles like the Higgs boson and the effort to pin down their properties.
In this enterprise, many scientific ideas follow from theories that have been validated by very large amounts of experimental data. Yet physicists want to test them at every single step because of the way such theories are built and the way unknown effects can hide inside their structures.
The overarching theory that governs particle physics is called, simply, the Standard Model. It’s a quantum field theory, i.e. a theory that combines the precepts of quantum mechanics and special relativity*. Because the Standard Model is set up this way, it predicts relations between different observable quantities, e.g. between the mass of a subatomic particle called the W boson and a parameter extracted from the decay of other particles called muons. Some of these relations connect measured quantities with others that have not yet been probed, e.g. the mass of the muon with the rate at which Higgs bosons decay to pairs of muons. (Yes, it’s all convoluted.) These ‘extra’ relations often depend on assumptions that go beyond the domains experiments have already explored. New particles and new interactions between them can change particular parts of this structure while leaving other parts nearly unchanged.
(* Quantum field theory gives physicists a single, internally consistent framework in which they can impose both the rules of quantum theory and the requirements of special relativity, such as the rules that information or matter can’t travel faster than light and that energy and momentum are conserved together. However, quantum field theory does not unify quantum theory with general relativity; that monumental task remains the still-unfinished goal of quantum gravity research.)
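To make the W boson example concrete, here is the textbook tree-level version of that relation (a simplified form; the experiments’ real analyses also include quantum corrections to it):

$$ m_W^2 \sin^2\theta_W = \frac{\pi \alpha}{\sqrt{2}\, G_F} $$

Here $m_W$ is the W boson’s mass, $\alpha$ is the fine-structure constant, $\theta_W$ is the weak mixing angle, and $G_F$ is the Fermi constant, which physicists extract from how quickly muons decay. Measure any three of these quantities and the theory fixes the fourth, so a sharper measurement of one tightens the constraints on all the others.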
For a more intricate example, consider the gauge sector of the Standard Model, i.e. the parts of the Model involving the gluons, W and Z bosons, and photons, their properties, and their interactions with other particles. The gauge sector has been thoroughly tested in experiments and is well understood. But the gauge sector also interacts with the Higgs sector, and the Higgs sector interacts with other sectors. The result is a set of new possibilities, involving the properties of the Higgs boson, their implications for the gauge sector, and so on, that need to be tested separately even though physicists have tested the gauge sector itself. The reason is that none of these possibilities follows directly from the gauge sector’s basic principles.
The search for ‘new physics’ also drives this attitude. ‘New physics’ refers to measurable entities and physical phenomena that lie beyond what the Standard Model can currently describe. For instance, most physicists believe a substance called dark matter exists (it would explain several anomalous astronomical observations), but they haven’t been able to confirm what kind of particle it’s made of. One popular proposal is that dark matter is made of hitherto unknown entities called weakly interacting massive particles (WIMPs). The Standard Model in its current form has no room for WIMPs, so the search for WIMPs is a search for new physics.
Physicists have also proposed many ways to ‘extend’ the Standard Model to accommodate new kinds of particles that ‘repair’ the cracks left by the existing crop. Some of these extensions predict changes to the Model that are most pronounced in sectors currently poorly pinned down by data, which means even a sizeable deviation from the Model’s structure in such a sector would still be compatible with all current measurements. This is another important reason physicists want to collect more data, and with ever-greater precision.
Earlier experience also plays an important role. Physicists may make assumptions that seem safe in a given year, only for data collected over the following decades to reveal they were mistaken. For instance, physicists long believed neutrinos were massless, like photons, because that idea was consistent with many existing datasets. Yet dedicated experiments contradicted this belief (and won their leaders the 2015 physics Nobel Prize).
(Aside: High-energy particle physicists use large machines called particle colliders to coerce subatomic particles into configurations where they interact with each other, then record data about those interactions. Operating these instruments demands hundreds of people working together, using sophisticated technologies and substantial computing resources. Because the instruments are so expensive, these collaborations aim to collect as much data as possible, then maximise the amount of information they extract from each dataset.)
Thus, when a theory like the Standard Model predicts a specific process, that process becomes something to test. Even if the prediction seems simple or obvious, actually measuring it can rule out whole families of rival theories offering to explain the same process. It also sharpens physicists’ estimates of the theory’s basic parameters, which in turn makes other predictions more precise and helps plan the next round of experiments. This is why, in high-energy physics, even predictions that follow from other, well-tested parts of a theory are expected to face experimental tests of their own. Each test either shrinks the space in which new physics can hide or, if the result deviates from the prediction, reveals it.
A study published in Physical Review Letters on December 3 offers an apt new example of testing a prediction of a theory whose other parts have already survived testing. Until recently, tests at the Large Hadron Collider (LHC), the world’s largest and most powerful particle collider, had only weakly constrained the Higgs boson’s interaction with second-generation leptons (a particle type that includes muons). The new study provides strong, direct evidence for this coupling and significantly narrows that gap.
The LHC operates by accelerating two beams of protons in opposite directions to nearly the speed of light and smashing them head-on. Its operation is divided into segments called ‘runs’. Between runs, the teams that manage the machine and its detectors conduct maintenance and repair work and, sometimes, upgrades.
One of the LHC’s most prominent detectors is named ATLAS. To probe the interactions between Higgs bosons and leptons, the ATLAS collaboration collected and analysed data from the LHC’s run 2 and run 3. The goal was to obtain direct evidence of the Higgs boson’s coupling to muons and to measure its strength. In the December 3 paper, the collaboration reported coupling parameters consistent with the Standard Model’s predictions.
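A rough back-of-the-envelope calculation (using the textbook tree-level Standard Model formula, not the collaboration’s far more detailed analysis) shows why this measurement is so demanding. The Standard Model predicts that the Higgs boson’s coupling to a fermion is proportional to the fermion’s mass:

$$ y_\mu = \frac{\sqrt{2}\, m_\mu}{v} \approx \frac{\sqrt{2} \times 0.106\ \mathrm{GeV}}{246\ \mathrm{GeV}} \approx 6 \times 10^{-4} $$

Here $m_\mu$ is the muon’s mass and $v \approx 246$ GeV is the Higgs field’s vacuum expectation value. Because the muon is so light, this coupling is tiny: in the Standard Model only about two in every 10,000 Higgs bosons decay to a muon pair, which is why data from two full runs was needed to see the signal directly.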
So that’s one more patch of the Standard Model that has passed a test, and one more door to ‘new physics’ that has closed a little more.
Featured image: A view of the Large Hadron Collider inside its tunnel. Credit: CERN.