Mathematical ‘Hocus-Pocus’ Saved Particle Physics

In the 1940s, trailblazing physicists stumbled upon the next layer of reality. Particles were out, and fields—expansive, undulating entities that fill space like an ocean—were in. One ripple in a field would be an electron, another a photon, and interactions between them seemed to explain all electromagnetic events.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

There was just one problem: The theory was glued together with hopes and prayers. Only by using a technique dubbed “renormalization,” which involved carefully concealing infinite quantities, could researchers sidestep bogus predictions. The process worked, but even those developing the theory suspected it might be a house of cards resting on a tortured mathematical trick.

“It is what I would call a dippy process,” Richard Feynman later wrote. “Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent.”

Justification came decades later from a seemingly unrelated branch of physics. Researchers studying magnetization discovered that renormalization wasn’t about infinities at all. Instead, it spoke to the universe’s separation into kingdoms of independent sizes, a perspective that guides many corners of physics today.

Renormalization, writes David Tong, a theorist at the University of Cambridge, is “arguably the single most important advance in theoretical physics in the past 50 years.”

A Tale of Two Charges

By some measures, field theories are the most successful theories in all of science. The theory of quantum electrodynamics (QED), which forms one pillar of the standard model of particle physics, has made theoretical predictions that match up with experimental results to an accuracy of one part in a billion.

But in the 1930s and 1940s, the theory’s future was far from assured. Approximating the complex behavior of fields often gave nonsensical, infinite answers that made some theorists think field theories might be a dead end.

Feynman and others sought whole new perspectives—perhaps even one that would return particles to center stage—but came back with a hack instead. The equations of QED made respectable predictions, they found, if patched with the inscrutable procedure of renormalization.

The exercise goes something like this. When a QED calculation leads to an infinite sum, cut it short. Stuff the part that wants to become infinite into a coefficient—a fixed number—in front of the sum. Replace that coefficient with a finite measurement from the lab. Finally, let the newly tamed sum go back to infinity.
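In symbols, the shuffle looks something like this (a schematic sketch, not the actual QED calculation; the coefficient $c$, mass scale $m$, and cutoff $\Lambda$ are illustrative stand-ins). A raw calculation produces a prediction proportional to a "bare" charge $e_0$ times a sum that grows without bound once the cutoff $\Lambda$ is removed:

$$ P \;=\; e_0^2\left(1 + c\,\ln\frac{\Lambda}{m}\right)\times(\text{finite part}). $$

Renormalization simply declares the divergent combination to be the charge measured in the lab,

$$ e_\text{meas}^2 \;\equiv\; e_0^2\left(1 + c\,\ln\frac{\Lambda}{m}\right), $$

so that $P = e_\text{meas}^2 \times (\text{finite part})$. Written in terms of $e_\text{meas}$, the prediction stays finite as $\Lambda \to \infty$; the infinity has been buried inside the unobservable $e_0$.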

To some, the prescription felt like a shell game. “This is just not sensible mathematics,” wrote Paul Dirac, a groundbreaking quantum theorist.

The core of the problem—and a seed of its eventual solution—can be seen in how physicists dealt with the charge of the electron.

In the scheme above, the electric charge comes from the coefficient—the value that swallows the infinity during the mathematical shuffling. To theorists puzzling over the physical meaning of renormalization, QED hinted that the electron had two charges: a theoretical charge, which was infinite, and the measured charge, which was not. Perhaps the core of the electron held infinite charge. But in practice, quantum field effects (which you might visualize as a virtual cloud of positive particles) cloaked the electron so that experimentalists measured only a modest net charge.

Two physicists, Murray Gell-Mann and Francis Low, fleshed out this idea in 1954. They connected the two electron charges with one “effective” charge that varied with distance. The closer you get (and the more you penetrate the electron’s positive cloak), the more charge you see.
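In modern language, their result is the "running" of the fine-structure constant. At lowest order, a textbook formula (quoted here for illustration rather than drawn from the article) gives the effective coupling at momentum scale $Q$ as

$$ \alpha(Q) \;\approx\; \frac{\alpha(m_e)}{1 - \dfrac{\alpha(m_e)}{3\pi}\ln\dfrac{Q^2}{m_e^2}}, $$

where $m_e$ is the electron mass and $\alpha(m_e) \approx 1/137$ is the familiar long-distance value. The logarithm grows as $Q$ increases, that is, as you probe shorter distances, so the effective charge climbs. (With the loops of all charged particles included, the measured coupling at the energies of the Z boson is about $1/127$.)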

Their work was the first to link renormalization with the idea of scale. It hinted that quantum physicists had hit on the right answer to the wrong question. Rather than fretting about infinities, they should have focused on connecting tiny with huge.

Renormalization is “the mathematical version of a microscope,” says Astrid Eichhorn, a physicist at the University of Southern Denmark who uses renormalization to search for theories of quantum gravity. “And conversely you can start with the microscopic system and zoom out. It’s a combination of a microscope and a telescope.”

Magnets Save the Day

A second clue emerged from the world of condensed matter, where physicists were puzzling over how a rough magnet model managed to nail the fine details of certain transformations. The Ising model consisted of little more than a grid of atomic arrows that could each point only up or down, yet it predicted the behaviors of real-life magnets with improbable perfection.
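The model is simple enough to simulate in a few lines. Here is a minimal sketch (the lattice size, temperature, and update count are illustrative choices, not from the article) that evolves a grid of up/down spins with the standard Metropolis algorithm:

```python
# A minimal sketch of the 2D Ising model: a grid of spins (+1 up, -1 down)
# evolved with the Metropolis algorithm. Grid size, temperature, and step
# count are illustrative choices, not values from the article.
import math
import random

N = 32            # lattice is N x N with periodic boundaries
T = 2.27          # temperature near the 2D critical point (J = 1, k_B = 1)
STEPS = 100_000   # number of single-spin Metropolis updates

# Start from a random configuration of up/down "atomic arrows."
spins = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def neighbor_sum(i, j):
    """Sum of the four nearest-neighbor spins (periodic boundaries)."""
    return (spins[(i + 1) % N][j] + spins[(i - 1) % N][j]
            + spins[i][(j + 1) % N] + spins[i][(j - 1) % N])

for _ in range(STEPS):
    i, j = random.randrange(N), random.randrange(N)
    # Energy cost of flipping spin (i, j): dE = 2 * s * (sum of neighbors).
    dE = 2 * spins[i][j] * neighbor_sum(i, j)
    # Metropolis rule: always accept downhill moves; accept uphill moves
    # with probability exp(-dE / T).
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i][j] *= -1

# Magnetization per spin: the order parameter that changes at the transition.
m = sum(sum(row) for row in spins) / N**2
print(f"magnetization per spin: {m:+.3f}")
```

Run well below the critical temperature, a sketch like this settles into a mostly aligned (magnetized) grid; near the critical point it develops fluctuations at every size, the behavior that made the model such an improbably good stand-in for real magnets.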
