April 13, 2024

Nineteenth-century physics seemed complete. Kelvin thought otherwise

Key Takeaways

  • In the late 19th century, physicists tackled electricity and magnetism.

  • Other discoveries surrounding the atom led some to believe they were close to understanding the “great underlying principles” of physics in their entirety.

  • However, Lord Kelvin and others noticed two “clouds” hovering on the horizon of physics.

This was adapted from QUANTA and FIELDS: The Biggest Ideas in the Universe by Sean Carroll with permission from Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. © 2024 by Sean Carroll

As the 19th century drew to a close, physicists could have been forgiven for hoping they were on the right track to understanding everything. The universe, according to this provisional picture, was made of particles that were pushed around by fields.

The idea of fields filling space took off throughout the 19th century. Earlier, Isaac Newton had presented a beautiful and convincing theory of motion and gravity, and Pierre-Simon Laplace showed how we could reformulate that theory in terms of a gravitational field that extends between all objects in the universe. A field is just something that has a value at every point in space. The value can be a simple number, or a vector, or something more complicated, but whatever it is, a field exists everywhere in space.

But if all that mattered was gravity, the field would seem optional—a point of view you could choose to adopt or not, depending on your preferences. It was equally acceptable to think like Newton, directly in terms of the force exerted on one object by the gravitational pull of others, with nothing extending between them.

This changed in the 19th century, when physicists tackled electricity and magnetism. Electrically charged objects exert forces on each other, which it is natural to attribute to the existence of an electric field that extends between them. Experiments by Michael Faraday showed that a moving magnet could induce an electric current in a wire without actually touching it, pointing to the existence of a separate magnetic field, and James Clerk Maxwell managed to combine these two types of fields into a single theory of electromagnetism, published in 1873. This was an enormous triumph of unification, explaining a diverse set of electrical and magnetic phenomena in a single compact theory. “Maxwell’s equations” plague undergraduate physics students to this day.

One of the triumphant implications of Maxwell’s theory was an understanding of the nature of light. Rather than being a distinct type of substance, light is a wave propagating in electric and magnetic fields, also known as electromagnetic radiation. We think of electromagnetism as a “force,” and it is, but Maxwell taught us that fields that carry forces can vibrate, and in the case of electric and magnetic fields, those vibrations are what we perceive as light. Light quanta are particles called photons, so we will sometimes say, “Photons carry the electromagnetic force.” But for the moment we are still thinking classically.

Take a single charged particle, like an electron. Left alone, it will have an electric field around it, with lines of force pointing toward the electron. The strength of that field falls off according to an inverse-square law, just as in Newtonian gravity.
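
To make the falloff concrete (a formula that is standard, though not written out in this excerpt), the magnitude of the electric field a distance r from a charge q is

E(r) = \frac{|q|}{4\pi\varepsilon_0 r^2},

with the same 1/r² dependence as the gravitational field of a point mass in Newton’s theory.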

If we move the electron, two things happen. First, a moving charge creates a magnetic field in addition to its electric field. Second, the existing electric field adjusts its orientation in space so that it remains pointed at the particle. Together, these two effects (a small magnetic field, a small shift in the existing electric field) ripple outward, like ripples from a stone thrown into a lake.

Maxwell discovered that the speed of these ripples is precisely the speed of light, because it is light. Light of any wavelength, from radio waves to X-rays and gamma rays, is a vibration propagating in electric and magnetic fields. Almost all the light you see around you right now has its origin in a charged particle being shaken somewhere, whether it’s in the filament of a light bulb or on the surface of the Sun.
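
In modern notation (a relation that is standard, though not spelled out in this excerpt), the wave speed that comes out of Maxwell’s equations is fixed by the two constants governing electricity and magnetism,

c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}} \approx 3 \times 10^{8} \ \text{m/s},

which matched the measured speed of light and clinched the identification.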

Meanwhile, over the course of the 19th century, the role of particles also became clear. Chemists, led by John Dalton, championed the idea that matter was made of individual atoms, with a specific type of atom associated with each chemical element. Physicists came to the game late, catching on when they realized that thinking of gases as collections of bouncing atoms could explain things like temperature, pressure, and entropy.

But the term atom, borrowed from the ancient Greek idea of an indivisible elementary unit of matter, turned out to be a bit premature. Although they are the building blocks of the chemical elements, modern atoms are not indivisible. A quick and dirty overview, with details to be filled in later: atoms consist of a nucleus made of protons and neutrons, surrounded by orbiting electrons. Protons have a positive electric charge, neutrons have zero charge, and electrons have a negative charge. We can form a neutral atom if we have equal numbers of protons and electrons, since their electric charges will cancel each other out.
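
As a quick check of that bookkeeping (an illustrative example, not from the excerpt): a neutral helium atom has two protons, two neutrons, and two electrons, so its total charge is

q = 2(+e) + 2(0) + 2(-e) = 0.

Strip away the two electrons and what remains is the positively charged helium nucleus, the “alpha particle” that appears below.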

Protons and neutrons have approximately the same mass, with the neutron slightly heavier, while electrons are much lighter, about 1/1,800 the mass of a proton. Therefore, most of the mass of a person or any other macroscopic object comes from protons and neutrons. The light electrons are much more able to move around, and are therefore responsible for chemical reactions as well as the flow of electricity. Nowadays we know that protons and neutrons are themselves made of smaller particles called quarks, which are held together by gluons, but there was no evidence of this at the beginning of the 20th century.
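
For the curious (numbers not quoted in the excerpt), the measured masses are roughly m_p ≈ 938.3 MeV/c², m_n ≈ 939.6 MeV/c², and m_e ≈ 0.511 MeV/c², so

\frac{m_e}{m_p} \approx \frac{0.511}{938.3} \approx \frac{1}{1836},

consistent with the “about 1/1,800” quoted above.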

This picture of atoms was assembled gradually. Electrons were discovered in 1897 by the British physicist J.J. Thomson, who measured their charge-to-mass ratio and concluded that they were much lighter than atoms. So somehow there must be two components to an atom: the light, negatively charged electrons and a heavier, positively charged piece. A few years later, Thomson suggested a picture in which tiny electrons floated inside a larger, positively charged volume. This came to be called the “plum pudding model,” with the electrons playing the role of the plums.

The plum pudding model did not flourish for long. A famous experiment by Ernest Rutherford, Hans Geiger, and Ernest Marsden fired alpha particles (now known to be the nuclei of helium atoms) at a thin gold foil. The expectation was that most of them would pass straight through, their trajectories deflected only slightly if they happened to pass through an atom and interact with the electrons (the plums) or the diffuse blob of positive charge (the pudding). Electrons are too light to disturb the trajectories of alpha particles, and positive charge spread throughout the atom would be too diffuse to have much effect.

But what happened was that, although most of the particles did pass through unaffected, some ricocheted at wild angles, even backwards. This could only happen if there was something heavy and substantial for the particles to bounce off of. In 1911, Rutherford correctly explained this result by postulating that the positive charge was concentrated in a massive central nucleus. When an incoming alpha particle was lucky enough to hit the small but heavy nucleus directly, it would be deflected at a sharp angle, which is what was observed. In 1920 Rutherford proposed the existence of protons (which were just hydrogen nuclei, and so had already been discovered), and in 1921 he theorized the existence of neutrons (which were eventually discovered in 1932).

So far, so good, thinks our imaginary end-of-the-century physicist. Matter is made of particles, particles interact through forces, and those forces are carried by fields. The whole mechanism works according to rules set by the framework of classical physics. For particles this is quite familiar: we specify the positions and momenta of all the particles, then use one of our classical techniques (Newton’s laws or an equivalent) to describe their dynamics. Fields work essentially the same way, except that a field’s “position” is its value at each point in space, and its “momentum” is how quickly that value changes at each point. The general classical framework applies in both cases.
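
As a sketch of that dictionary (standard classical field theory, written in modern notation rather than taken from the excerpt): for a single scalar field φ(x, t), the role of “position” is played by the field value φ(x) at each point, the role of “momentum” by its rate of change π(x) = ∂φ/∂t, and the classical energy takes the familiar kinetic-plus-potential form

H = \int d^3x \left[ \tfrac{1}{2}\pi^2 + \tfrac{1}{2}(\nabla\varphi)^2 \right],

with one pair (φ, π) for every point of space instead of one pair (x, p) for every particle.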

The suspicion that physics was close to being fully understood was not entirely unreasonable at the time. Albert Michelson, at the opening of a new physics laboratory at the University of Chicago in 1894, proclaimed: “It seems probable that most of the great underlying principles [of physics] have been firmly established.”

He was completely wrong.

But he was also in the minority. Other physicists, starting with Maxwell himself, recognized that the known behavior of collections of particles and waves did not always conform to our classical expectations. William Thomson, Lord Kelvin, is often the victim of a misattributed quote: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” His real view was the opposite. In a lecture in 1900, Thomson highlighted the presence of two “clouds” hovering over physics, one of which would eventually be dispersed by the formulation of the theory of relativity, the other by quantum mechanics.
