In the Beginning
by Neil deGrasse Tyson
From Natural History Magazine, September 2003
Winner, American Institute of Physics, 2005 Science Writing Prize.
Physics describes the behavior of matter, energy, space, and time, and the interplay among them in the universe. From what scientists have been able to determine, all biological and chemical phenomena are ruled by what those four characters in our cosmic drama do to one another. And so everything fundamental and familiar to us earthlings begins with the laws of physics.
In almost any area of scientific inquiry, but especially in physics, the frontier of discovery lives at the extremes of measurement. At the extremes of matter, such as the neighborhood of a black hole, you find gravity badly warping the surrounding space-time continuum. At the extremes of energy, you sustain thermonuclear fusion in the ten-million degree cores of stars. And at every extreme imaginable, you get the outrageously hot, outrageously dense conditions that prevailed during the first few moments of the universe.
Daily life, we're happy to report, is wholly devoid of extreme physics. On a normal morning, you get out of bed, wander around the house, eat something, dash out the front door. And, by day's end your loved ones fully expect you to look no different than you did when you left, and to return home in one piece. But imagine arriving at the office, walking into an overheated conference room for an important 10:00 a.m. meeting, and suddenly losing all your electrons—or worse yet, having every atom of your body fly apart. Or suppose you're sitting in your office trying to get some work done by the light of your desk lamp, and somebody flicks on the overhead light, causing your body to bounce randomly from wall to wall until you're jack-in-the-boxed out the window. Or what if you went to a sumo wrestling match after work and saw the two spherical gentlemen collide, disappear, then spontaneously become two beams of light?
If those scenes played out daily, then modern physics wouldn't look so bizarre, knowledge of its foundations would flow naturally from our life experience, and our loved ones probably would never let us go to work. Back in the early minutes of the universe, though, that stuff happened all the time. To envision it, and understand it, one has no choice but to establish a new form of common sense, an altered intuition about how physical laws apply to extremes of temperature, density, and pressure.
Enter the world of E = mc².
Albert Einstein first published a version of this famous equation in 1905 in a seminal research paper titled "On the Electrodynamics of Moving Bodies." Better known as the special theory of relativity, the concepts advanced in that paper forever changed our notions of space and time. Einstein, then just twenty-six years old, offered further details about his tidy equation in a separate, remarkably short paper published later the same year: "Does the Inertia of a Body Depend on Its Energy Content?" To save you the effort of digging up the original article, designing an experiment, and testing the theory, the answer is yes. As Einstein wrote:

"If a body gives off the energy E in the form of radiation, its mass diminishes by E/c² . . . . The mass of a body is a measure of its energy-content; if the energy changes by E, the mass changes in the same sense."

Uncertain as to the truth of his statement, he then suggested:

"It is not impossible that with bodies whose energy-content is variable to a high degree (e.g. with radium salts) the theory may be successfully put to the test."
There it is. The algebraic recipe for all occasions when you want to convert matter into energy or energy into matter. In those simple sentences, Einstein unwittingly gave astrophysicists a computational tool, E = mc², that extends their reach from the universe as it now is, all the way back to infinitesimal fractions of a second after its birth.
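As a back-of-the-envelope illustration (my numbers, not the essay's), here is the arithmetic the equation encodes, worked for a single gram of matter:

```python
# E = mc^2 worked for one gram of matter (illustrative values).
c = 2.998e8          # speed of light, meters per second
m = 1.0e-3           # one gram, expressed in kilograms
E = m * c**2         # energy in joules

print(f"One gram of matter is worth {E:.2e} joules")  # ~9.0e13 J
```

That is roughly ninety trillion joules, comparable to the yield of a twenty-kiloton nuclear explosion, which is why so little mass goes such a long way.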
The most familiar form of energy is the photon, a massless, irreducible particle of light. You are forever bathed in photons: from the Sun, the Moon, and the stars to your stove, your chandelier, and your night light. So why don't you experience E = mc² every day? The energy of visible light photons falls far below that of the least massive subatomic particles. There is nothing else those photons can become, and so they live happy, relatively uneventful lives.
Want to see some action? Start hanging around gamma-ray photons that have some real energy—at least 200,000 times more than that of visible photons. You'll quickly get sick and die of cancer, but before that happens you'll see pairs of electrons—one matter, the other antimatter; one of many dynamic duos in the particle universe—pop into existence where photons once roamed. As you watch, you will also see matter-antimatter pairs of electrons collide, annihilating each other and creating gamma-ray photons once again. Increase the light's energy by a factor of another 2,000, and you now have gamma rays with enough energy to turn susceptible people into the Hulk. But pairs of these photons now have enough energy to spontaneously create the more massive neutrons, protons, and their antimatter partners. High-energy photons don't hang out just anywhere. But the place needn't be imaginary. For gamma rays, almost any environment hotter than a few billion degrees will do just fine.
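The two thresholds in that paragraph can be checked against standard rest energies; the 2.5-electron-volt figure for a typical visible photon is my assumption:

```python
# Pair-production thresholds, using standard rest energies in electron-volts.
electron_rest_eV = 0.511e6   # rest energy of an electron
proton_rest_eV   = 938.3e6   # rest energy of a proton
visible_eV       = 2.5       # a typical visible-light photon (assumed value)

# A photon needs at least an electron's rest energy to help make an electron-positron pair.
gamma_vs_visible = electron_rest_eV / visible_eV       # ~2e5, the essay's "200,000 times"

# And roughly a proton's rest energy to help make nucleon-antinucleon pairs.
proton_vs_electron = proton_rest_eV / electron_rest_eV  # ~1.8e3, the essay's "factor of another 2,000"
```

Both of the essay's round factors drop out of simple division; the real bookkeeping is slightly subtler since pair production needs two photons or a nearby nucleus, but the energy scales are right.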
The cosmological significance of particles and energy packets transmuting into each other is staggering. Currently the temperature of our expanding universe, calculated from measurements of the microwave bath of light that pervades all of space, is a mere 2.73 degrees Kelvin. (On the Kelvin scale, zero is the temperature at which molecules have the lowest possible energy, room temperature is about 295 degrees, and water boils at 373 degrees.) Like the photons of visible light, microwave photons are too cool to have any realistic ambitions to become a particle via E = mc²; in fact, there are no known particles they can spontaneously become. Yesterday, however, the universe was a little bit smaller and a little bit hotter. The day before, it was smaller and hotter still. Roll the clocks backward some more—say, 13.7 billion years—and you land squarely in the primordial soup of the big bang, a time when the temperature of the cosmos was high enough to be astrophysically interesting.
The way space, time, matter, and energy behaved as the universe expanded and cooled from the beginning is one of the greatest stories ever told. But to explain what went on in that cosmic crucible, you must find a way to merge the four forces of nature into one, and find a way to reconcile two incompatible branches of physics: quantum mechanics (the science of the small) and general relativity (the science of the large). Spurred by the successful marriage of quantum mechanics and electromagnetism in the mid twentieth century, physicists set off on a race to blend quantum mechanics and general relativity (into a theory of quantum gravity). Although we haven't yet reached the finish line, we know exactly where the high hurdles are: during the Planck era. That's the phase up to 10⁻⁴³ seconds (one ten-million-trillion-trillion-trillionths of a second) after the beginning, and before the universe grew to 10⁻³⁵ meters (one hundred billion trillion-trillionths of a meter) across. The German physicist Max Planck, after whom these unimaginably small quantities are named, introduced the idea of quantized energy in 1900 and is generally credited with being the father of quantum mechanics.
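For the curious, Planck's quantities follow from combining three constants of nature; a quick sketch with standard, rounded values (the essay's powers of ten are these numbers to the nearest order of magnitude):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
G    = 6.674e-11    # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s

# The only time and length you can build from these three constants alone:
t_planck = math.sqrt(hbar * G / c**5)   # ~5.4e-44 seconds
l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 meters
```

Below these scales, gravity and quantum mechanics both matter at once, which is exactly the regime where, as the essay says, no known laws apply with confidence.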
Not to worry, though. The clash between gravity and quantum mechanics poses no practical problem for the contemporary universe. Astrophysicists apply the tenets and tools of general relativity and quantum mechanics to very different classes of problems. But in the beginning, during the Planck era, the large was small, and there must have been a kind of shotgun wedding between the two. Alas, the vows exchanged during that ceremony continue to elude us, and so no (known) laws of physics describe with any confidence the behavior of the universe during the brief interregnum.
At the end of the Planck era, however, gravity wriggled loose from the other, still-unified forces of nature, achieving an independent identity nicely described by our current theories. As the universe aged through 10⁻³⁵ seconds it continued to expand and cool, and what remained of the unified forces split into the electroweak and the strong nuclear forces. Later still, the electroweak force split into the electromagnetic and the weak nuclear forces, laying bare the four distinct forces we have come to know and love—with the weak force controlling radioactive decay, the strong force binding the nucleus, the electromagnetic force binding molecules, and gravity binding bulk matter.
By now, the universe was a mere trillionth of a second old. Yet its transmogrified forces and other critical episodes had already imbued our universe with fundamental properties each worthy of its own book.
While the universe dragged on for its first trillionth of a second, the interplay of matter and energy was incessant. Shortly before, during, and after the strong and electroweak forces parted company, the universe was a seething ocean of quarks, leptons, and their antimatter siblings, along with bosons, the particles that enable their interactions. None of these particle families is thought to be divisible into anything smaller or more basic. Fundamental though they are, each comes in several species. The ordinary visible-light photon is a member of the boson family. The leptons most familiar to the nonphysicist are the electron and perhaps the neutrino; and the most familiar quarks are . . . well, there are no familiar quarks. Each species has been assigned an abstract name that serves no real philological, philosophical, or pedagogical purpose except to distinguish it from the others: up and down, strange and charmed, and top and bottom.
Bosons, by the way, are simply named after the Indian scientist Satyendranath Bose. The word lepton derives from the Greek leptos, meaning "light" or "small." Quark, however, has a literary and far more imaginative origin. The physicist Murray Gell-Mann, who in 1964 proposed the existence of quarks, and who at the time thought the quark family had only three members, drew the name from a characteristically elusive line in James Joyce's Finnegans Wake: "Three quarks for Muster Mark!" One thing quarks do have going for them: all their names are simple—something chemists, biologists, and geologists seem incapable of achieving when naming their own stuff.
Quarks are quirky beasts. Unlike protons, each with an electric charge of +1, and electrons, with a charge of –1, quarks have fractional charges that come in thirds. And you'll never catch a quark all by itself; it will always be clutching on to other quarks nearby. In fact, the force that keeps two (or more) of them together actually grows stronger the more you separate them—as if they were attached by some sort of subnuclear rubber band. Separate the quarks enough, and the rubber band snaps; the stored energy summons E = mc² to create a new quark at each end, leaving you back where you started.
But during the quark-lepton era the universe was dense enough for the average separation between unattached quarks to rival the separation between attached quarks. Under those conditions, allegiance between adjacent quarks could not be unambiguously established, and they moved freely among themselves, in spite of being collectively bound to each other. The discovery of this state of matter, a kind of quark soup, was reported for the first time in 2002 by a team of physicists at Brookhaven National Laboratory.
Strong theoretical evidence suggests that an episode in the very early universe, perhaps during one of the force splits, endowed the universe with a remarkable asymmetry, in which particles of matter barely outnumbered particles of antimatter by a billion-and-one to a billion. That small difference in population hardly got noticed amid the continuous creation, annihilation, and re-creation of quarks and antiquarks, electrons and antielectrons (better known as positrons), and neutrinos and antineutrinos. The odd man out had plenty of opportunities to find someone to annihilate with, and so did everybody else. But not for much longer. As the cosmos continued to expand and cool, it became the size of the solar system, with a temperature dropping rapidly past a trillion degrees Kelvin. A millionth of a second had passed since the beginning.
This tepid universe was no longer hot enough or dense enough to cook quarks, and so they all grabbed dance partners, creating a permanent new family of heavy particles called hadrons (from the Greek hadros, meaning "thick"). That quark-to-hadron transition soon resulted in the emergence of protons and neutrons as well as other, less familiar heavy particles, all composed of various combinations of quark species. The slight matter-antimatter asymmetry afflicting the quark-lepton soup now passed to the hadrons, but with extraordinary consequences.
As the universe cooled, the amount of energy available for the spontaneous creation of basic particles dropped. During the hadron era, ambient photons could no longer invoke E = mc² to manufacture quark-antiquark pairs. Not only that, the photons that emerged from all the remaining annihilations lost energy to the ever-expanding universe and dropped below the threshold required to create hadron-antihadron pairs. For every billion annihilations—leaving a billion photons in their wake—a single hadron survived. Those loners would ultimately get to have all the fun: serving as the source of galaxies, stars, planets, and people.
Without the billion-and-one to a billion imbalance between matter and antimatter, all mass in the universe would have annihilated, leaving a cosmos made of photons and nothing else—the ultimate let-there-be-light scenario.
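The bookkeeping behind that billion-and-one-to-a-billion claim is simple enough to spell out. I follow the essay's round numbers; a real annihilation yields at least two photons, so this tracks only the orders of magnitude:

```python
matter     = 1_000_000_001   # the essay's billion and one particles of matter
antimatter = 1_000_000_000   # the essay's billion particles of antimatter

survivors     = matter - antimatter   # the lone leftover particle
annihilations = antimatter            # every antiparticle finds a partner

photons_per_survivor = annihilations / survivors   # ~1e9 photons per surviving particle
```

That billion-to-one ratio of photons to surviving matter particles is, in essence, what astronomers still measure today when they compare the microwave background to the matter in galaxies.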
By now, one second of time has passed.
The universe has grown to a few light-years across, about the distance from the Sun to its closest neighboring stars. At a billion degrees, it's still plenty hot—and still able to cook electrons, which, along with their positron counterparts, continue to pop in and out of existence. But in the ever-expanding, ever-cooling universe, their days (seconds, really) are numbered. What was true for hadrons is true for electrons: eventually only one electron in a billion survives. The rest get annihilated, together with their antimatter sidekicks the positrons, in a sea of photons.
Right about now, one electron for every proton has been "frozen" into existence. As the cosmos continues to cool—dropping below a hundred million degrees—protons fuse with protons as well as with neutrons, forming atomic nuclei and hatching a universe in which 90 percent of these nuclei are hydrogen and 10 percent are helium, along with trace amounts of deuterium, tritium, and lithium.
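Those 90/10 figures count nuclei; converting a number fraction into a mass fraction is a one-liner (the helper name and round inputs are mine; with the more precise split of roughly 92/8 by number, helium's share of the mass comes out near the measured one-quarter):

```python
def helium_mass_fraction(helium_number_fraction, m_h=1.0, m_he=4.0):
    """Convert helium's share of nuclei (by count) into its share of the mass.

    Uses atomic masses of ~1 for hydrogen and ~4 for helium.
    """
    f_he = helium_number_fraction
    f_h = 1.0 - f_he
    return f_he * m_he / (f_h * m_h + f_he * m_he)

round_numbers = helium_mass_fraction(0.10)  # essay's 10% by number -> ~31% by mass
measured_like = helium_mass_fraction(0.08)  # ~8% by number -> ~26% by mass
```

The point is that helium, four times heavier per nucleus, carries far more of the mass than its headcount suggests.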
Two minutes have now passed since the beginning.
Not for another 380,000 years does much happen to our particle soup. Throughout these millennia the temperature remains hot enough for electrons to roam free among the photons, batting them to and fro. But all this freedom comes to an abrupt end when the temperature of the universe falls below 3,000 degrees Kelvin (about half the temperature of the Sun's surface), and all the electrons combine with free nuclei. The marriage leaves behind a ubiquitous bath of visible-light photons, completing the formation of particles and atoms in the primordial universe.
As the universe continues to expand, its photons continue to lose energy, dropping from visible light to infrared to microwaves.
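Wien's displacement law makes that cooling concrete: a glowing body's peak wavelength is inversely proportional to its temperature, so the 3,000-degree glow from the era of atom formation has since been stretched about 1,100-fold. The sketch below uses the standard Wien constant; the helper name is mine:

```python
WIEN_B = 2.898e-3   # Wien displacement constant, meter-kelvins

def peak_wavelength_m(temperature_K):
    """Wavelength at which a blackbody of the given temperature glows brightest."""
    return WIEN_B / temperature_K

stretch  = 3000.0 / 2.73              # ~1,100: expansion factor since atoms formed
lam_then = peak_wavelength_m(3000.0)  # ~9.7e-7 m, at the red edge of the visible
lam_now  = peak_wavelength_m(2.73)    # ~1.1e-3 m, squarely in the microwave band
```

The same factor of 1,100 describes both the stretching of each photon's wavelength and the growth of the universe itself since that moment.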
Today, everywhere astrophysicists look, we find an indelible fingerprint of 2.73-degree microwave photons, whose pattern on the sky retains a memory of the distribution of matter just before atoms formed. From this we can deduce many things, including the age and shape of the universe. And although atoms are now part of daily life, Einstein's equation still has plenty of work to do—in particle accelerators, where matter-antimatter particle pairs are created routinely from energy fields; in the core of the Sun, where 4.4 million tons of matter are converted into energy every second; and in the cores of every other star.
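That 4.4-million-ton figure can be sanity-checked against the Sun's measured power output. I read "tons" as metric tons; the comparison value of about 3.8 × 10²⁶ watts is the standard solar luminosity:

```python
c = 2.998e8                            # speed of light, m/s
mass_rate_kg_per_s = 4.4e6 * 1000.0    # the essay's 4.4 million metric tons per second

power_watts = mass_rate_kg_per_s * c**2   # ~4e26 W, close to the Sun's measured luminosity
```

The near-match between a mass-loss rate and a measured wattage is E = mc² doing routine duty, second after second, for billions of years.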
It also manages to occupy itself near black holes, just outside their event horizons, where particle-antiparticle pairs can pop into existence at the expense of the black hole's formidable gravitational energy. Stephen Hawking first described that process in 1975, showing that the mass of a black hole can slowly evaporate by this mechanism. In other words, black holes are not entirely black. Today the phenomenon is known as Hawking radiation, and is a reminder of the continued fertility of E = mc².
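Hawking's result comes with a temperature formula (the standard textbook form, not quoted in the essay); plugging in a star-sized mass shows why such evaporation proceeds imperceptibly slowly today:

```python
import math

hbar  = 1.0546e-34   # reduced Planck constant, J*s
c     = 2.998e8      # speed of light, m/s
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30     # one solar mass, kg

def hawking_temperature_K(mass_kg):
    """Temperature of a black hole's Hawking radiation (standard result)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T = hawking_temperature_K(M_SUN)   # ~6e-8 K, far colder than the 2.73-degree microwave background
```

A solar-mass black hole is so much colder than the microwave background that, for now, it absorbs more energy than it radiates; evaporation cannot win until the universe cools much further.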
But what happened before all this? What happened before the beginning?
Astrophysicists have no idea. Or, rather, our most creative ideas have little or no grounding in experimental science. Yet a certain type of religious person tends to assert, with a tinge of smugness, that something must have started it all: a force greater than all others, a source from which everything issues. A prime mover. In the mind of such a person, that something is, of course, God.
But what if the universe was always there, in a state or condition we have yet to identify—a multiverse, for instance? Or what if the universe, like its particles, just popped into existence from nothing?
Such replies usually satisfy nobody. Nonetheless, they remind us that ignorance is the natural state of mind for a research scientist on the ever-shifting frontier. People who believe they are ignorant of nothing have neither looked for, nor stumbled upon, the boundary between what is known and unknown in the cosmos. And therein lies a fascinating dichotomy.
"The universe always was" goes unrecognized as a legitimate answer to "What was around before the beginning?" But for many religious people, the answer "God always was" is the obvious and pleasing answer to "What was around before God?"
No matter who you are, engaging in the quest to discover where and how things began tends to induce emotional fervor—as if knowing the beginning bestows upon you some form of fellowship with, or perhaps governance over, all that comes later. So what is true for life itself is no less true for the universe: knowing where you came from is no less important than knowing where you are going.
Neil deGrasse Tyson, an astrophysicist, is Director of New York City's Hayden Planetarium and author of the forthcoming book Origins: Fourteen Billion Years of Cosmic Evolution.