I talked a bit on Twitter last night about the Past Hypothesis and the low entropy of the early universe. Responses reminded me that there are still some significant misconceptions about the universe (and the state of our knowledge thereof) lurking out there. So I’ve decided to quickly list, in Tweet-length form, some true facts about cosmology that might serve as a useful corrective. I’m also putting the list on Twitter itself, and you can see comments there as well.
The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.
The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)
People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.
The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.
Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)
The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.
The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.
If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.
The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.
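A minimal sketch of the point about photons (my own illustration, not from the post): as the universe expands by a scale factor a, each photon's wavelength stretches with a, so its energy falls as 1/a, while the number of photons in a comoving volume stays fixed. The total photon energy in that volume therefore shrinks.

```python
# Track the energy of CMB-like photons in a comoving box as the universe
# expands by a scale factor a. Energy per photon ~ 1/a; photon number fixed.

def total_photon_energy(E_initial, a):
    """Total photon energy in a comoving volume after expansion by factor a."""
    return E_initial / a

E_now = 1.0                                   # arbitrary units
E_later = total_photon_energy(E_now, a=2.0)   # universe doubles in size
# The photons have lost half their total energy: it simply isn't a
# conserved quantity in an expanding spacetime.
```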
The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.
Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.
The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.
Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.
We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.
Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.
Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.
We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.
We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.
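The size of the mismatch can be sketched numerically (my own illustration; the rough numbers are standard textbook values, not given in the post). A naive quantum-field-theory estimate puts the vacuum energy scale near the Planck energy, while the observed scale is around a milli-electron-volt, and energy density goes as the fourth power of the energy scale.

```python
# Order-of-magnitude sketch of the cosmological constant problem.
# In natural units, energy density scales as (energy scale)^4.
E_planck_eV = 1.22e28   # Planck energy, ~1.22 x 10^19 GeV
E_obs_eV    = 2e-3      # observed vacuum-energy scale, roughly a few meV

ratio = (E_planck_eV / E_obs_eV) ** 4
# ratio is ~10^120 or more: the naive estimate overshoots by
# well over a hundred orders of magnitude.
```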
Neither dark matter nor dark energy is anything like the nineteenth-century idea of the aether.
Feel free to leave suggestions for more misconceptions. If they’re ones that I think many people actually have, I might add them to the list.
For a change of pace this year, I went to Twitter and asked for suggestions for what to give thanks for in this annual post. There were a number of good suggestions, but two stood out above the rest: @etandel suggested Noether’s Theorem, and @OscarDelDiablo suggested the moons of Jupiter. Noether’s Theorem, according to which symmetries imply conserved quantities, would be a great choice, but in order to actually explain it I should probably first explain the principle of least action. Maybe some other year.
And to be precise, I’m not going to bother to give thanks for all of Jupiter’s moons. 78 Jovian satellites have been discovered thus far, and most of them are just lucky pieces of space debris that wandered into Jupiter’s gravity well and never escaped. It’s the heavy hitters — the four Galilean satellites — that we’ll be concerned with here. They deserve our thanks, for at least three different reasons!
Reason One: Displacing Earth from the center of the Solar System
Galileo discovered the four largest moons of Jupiter — Io, Europa, Ganymede, and Callisto — back in 1610, and wrote about his findings in Sidereus Nuncius (The Starry Messenger). They were the first celestial bodies to be discovered using that new technological advance, the telescope. But more importantly for our present purposes, it was immediately obvious that these new objects were orbiting around Jupiter, not around the Earth.
All this was happening not long after Copernicus had published his heliocentric model of the Solar System in 1543, offering an alternative to the prevailing Ptolemaic geocentric model. Both models were pretty good at fitting the known observations of planetary motions, and both required an elaborate system of circular orbits and epicycles — the realization that planetary orbits should be thought of as ellipses didn’t come along until Kepler published Astronomia Nova in 1609. As everyone knows, the debate over whether the Earth or the Sun should be thought of as the center of the universe was a heated one, with the Roman Catholic Church prohibiting Copernicus’s book in 1616, and the Inquisition putting Galileo on trial in 1633.
Strictly speaking, the existence of moons orbiting Jupiter is equally compatible with a heliocentric or geocentric model. After all, there’s nothing wrong with thinking that the Earth is the center of the Solar System, but that other objects can have satellites. However, the discovery brought about an important psychological shift. Sure, you can put the Earth at the center and still allow for satellites around other planets. But a big part of the motivation for putting Earth at the center was that the Earth wasn’t “just another planet.” It was supposed to be the thing around which everything else moved. (Remember that we didn’t have Newtonian mechanics at the time; physics was still largely an Aristotelian story of natures and purposes, not a bunch of objects obeying mindless differential equations.)
The Galilean moons changed that. If other objects have satellites, then Earth isn’t that special. And if it’s not that special, why have it at the center of the universe? Galileo offered up other arguments against the prevailing picture, from the phases of Venus to mountains on the Moon, and of course once Kepler’s ellipses came along the whole thing made much more mathematical sense than Ptolemy’s epicycles. Thus began one of the great revolutions in our understanding of our place in the cosmos.
Reason Two: Measuring the speed of light
Time is what clocks measure. And a clock, when you come right down to it, is something that does the same thing over and over again in a predictable fashion with respect to other clocks. That sounds circular, but it’s a nontrivial fact about our universe that it is filled with clocks. And some of the best natural clocks are the motions of heavenly bodies. As soon as we knew about the moons of Jupiter, scientists realized that they had a new clock to play with: by accurately observing the positions of all four moons, you could work out what time it must be. Galileo himself proposed that such observations could be used by sailors to determine their longitude, a notoriously difficult problem.
Danish astronomer Ole Rømer noted a puzzle when trying to use eclipses of Io to measure time: despite the fact that the orbit should be an accurate clock, the actual timings seemed to change with the time of year. Being a careful observational scientist, he deduced that the period between eclipses was longer when the Earth was moving away from Jupiter, and shorter when the two planets were drawing closer together. An obvious explanation presented itself: the light wasn’t traveling instantaneously from Jupiter and Io to us here on Earth, but rather took some time. By figuring out exactly how the period between eclipses varied, we could then deduce what the speed of light must be.
Rømer’s answer was that light traveled at about 220,000 kilometers per second. That’s pretty good! The right answer is 299,792 km/sec, about 36% greater than Rømer’s value. For comparison purposes, when Edwin Hubble first calculated the Hubble constant, he derived a value of about 500 km/sec/Mpc, whereas now we know the right answer is about 70 km/sec/Mpc. Using astronomical observations to determine fundamental parameters of the universe isn’t easy, especially if you’re the first one to do it.
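Rømer’s logic can be sketched with rough numbers (my own illustration; the ~22-minute figure is the commonly quoted historical value, not stated in the post). Over half of Earth’s orbit, the extra distance light must cross grows by the orbit’s diameter, about 2 AU, and the eclipses of Io accumulate a corresponding delay.

```python
# Estimate the speed of light from the cumulative delay in Io's eclipses,
# following the spirit of Rømer's 1676 argument.
AU_km = 1.496e8          # astronomical unit in kilometers
delay_s = 22 * 60        # cumulative eclipse delay, ~22 minutes (historical figure)

c_estimate = 2 * AU_km / delay_s   # km/s
# ~227,000 km/s -- in the same ballpark as Rømer's ~220,000 km/s
```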
Reason Three: Looking for life
Here in the present day, Jupiter’s moons have not lost their fascination or importance. As we’ve been able to study them in greater detail, we’ve learned a lot about the history and nature of the Solar System more generally. And one of the most exciting prospects is that one or more of these moons might harbor life.
It used to be common to think about the possibilities for life outside Earth in terms of a “habitable zone,” the region around a star where temperatures allowed planets to have liquid water. (Many scientists think that liquid water is a necessity for life to exist — but maybe we’re just being parochial about that.) In our Solar System, Earth is smack-dab in the middle of the habitable zone, and Mars just sneaks in. Both Venus and Jupiter are outside, on opposite ends.
But there’s more than one way to have liquid water. It turns out that both Europa and Ganymede, as well as Saturn’s moons Titan and Enceladus, are plausible homes for large liquid oceans. Europa, in particular, is thought to possess a considerable volume of liquid water underneath an icy crust — approximately two or three times as much water as in all the oceans on Earth. The point is that solar radiation isn’t the only way to heat up water and keep it at liquid temperatures. On Europa, it’s likely that heat is generated by the tidal pull from Jupiter, which stretches and distorts the moon’s crust as it moves along its slightly eccentric orbit.
Does that mean there could be life there? Maybe! Nobody really knows. Smart money says that we’re more likely to find life on a wet environment like Europa than a dry one like Mars. And we’re going to look — the Europa Clipper mission is scheduled for launch by 2025.
If you can’t wait for then, go back and watch the movie Europa Report. And while you do, give thanks to Galileo and his discovery of these fascinating celestial bodies.
Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.
Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.
In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, etc.; and inversely if it were smaller. It’s the number that sets the overall strength of QED interactions between electrons and photons, as calculated from Feynman diagrams. As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.
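As a quick sanity check on that 1/137 (my own illustration, using standard CODATA values), the fine-structure constant in SI units is α = e²/(4πε₀ħc):

```python
import math

# Fine-structure constant from measured SI constants (CODATA values).
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 299792458.0        # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
# 1/alpha comes out to approximately 137.036
```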
But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.
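The running can be made concrete with a leading-log sketch (my own illustration: one-loop QED with only the electron in the loop, which is a deliberate oversimplification):

```python
import math

# One-loop, electron-only running of the QED coupling:
#   alpha(Q) = alpha0 / (1 - (alpha0 / 3 pi) * ln(Q^2 / m_e^2))
alpha0  = 1 / 137.036        # low-energy value
m_e_GeV = 0.000511           # electron mass
M_Z_GeV = 91.19              # Z boson mass, a typical "high" energy scale

def alpha_run(Q_GeV):
    log = math.log(Q_GeV**2 / m_e_GeV**2)
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * log)

# 1/alpha_run(M_Z_GeV) comes out near 134.5. The measured value at the
# Z mass is closer to 1/128, because heavier virtual particles contribute too.
```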
This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s an absolutely knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should make us skeptical that he’s on the right track, but it’s conceivable.
More important, I think, is the fact that α isn’t really fundamental at all. The simple Feynman diagrams are only the lowest-order contributions; for any given process there are also much more complicated diagrams, with loops of virtual particles.
And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.
Most importantly, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e²/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.
But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θW relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sinθW. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
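To make that concrete, here’s an illustrative inversion of e = g sin θW (my own sketch, using rough MS-bar values near the Z mass; the specific numbers are standard but not from the post):

```python
import math

# Recover the electroweak couplings g and g' from the measured
# electromagnetic coupling and the weak mixing angle, via e = g * sin(theta_W).
alpha_MZ    = 1 / 128.0      # running fine-structure constant near the Z mass
sin2_thetaW = 0.2312         # sin^2 of the weak mixing angle

e  = math.sqrt(4 * math.pi * alpha_MZ)   # "elementary charge" in natural units
sW = math.sqrt(sin2_thetaW)
cW = math.sqrt(1 - sin2_thetaW)

g      = e / sW    # SU(2) weak-isospin coupling, ~0.65
gprime = e / cW    # U(1) weak-hypercharge coupling, ~0.36
# e sits "downstream" of g, g', and theta_W -- it's an output, not an input.
```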
Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.
These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula, it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.
Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.
For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!
I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.
As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.
We’ve already had a bunch of cool guests; check these out:
White House Press Secretary Sarah Sanders went to have dinner at a local restaurant the other day. The owner, who is adamantly opposed to the policies of the Trump administration, politely asked her to leave, and she did. Now (who says human behavior is hard to predict?) an intense discussion has broken out concerning the role of civility in public discourse and our daily life. The Washington Post editorial board, in particular, called for public officials to be allowed to eat in peace, and people have responded in volume.
I don’t have a tweet-length response to this, as I think the issue is more complex than people want to make it out to be. I am pretty far out to one extreme when it comes to the importance of engaging constructively with people with whom we disagree. We live in a liberal democracy, and we should value the importance of getting along even in the face of fundamentally different values, much less specific political stances. Not everyone is worth talking to, but I prefer to err on the side of trying to listen to and speak with as wide a spectrum of people as I can. Hell, maybe I am even wrong and could learn something.
On the other hand, there is a limit. At some point, people become so odious and morally reprehensible that they are just monsters, not respected opponents. It’s important to keep in our list of available actions the ability to simply oppose those who are irredeemably dangerous/evil/wrong. You don’t have to let Hitler eat in your restaurant.
This raises two issues that are not so easy to adjudicate. First, where do we draw the line? What are the criteria by which we can judge someone to have crossed over from “disagreed with” to “shunned”? I honestly don’t know. I tend to err on the side of not shunning people (in public spaces) until it becomes absolutely necessary, but I’m willing to have my mind changed about this. I also think the worry that this particular administration exhibits authoritarian tendencies that could lead to a catastrophe is not a completely silly one, and is at least worth considering seriously.
More importantly, if the argument is “moral monsters should just be shunned, not reasoned with or dealt with constructively,” we have to be prepared to be shunned ourselves by those who think that we’re moral monsters (and those people are out there). There are those who think, for what they take to be good moral reasons, that abortion and homosexuality are unforgivable sins. If we think it’s okay for restaurant owners who oppose Trump to refuse service to members of his administration, we have to allow staunch opponents of e.g. abortion rights to refuse service to politicians or judges who protect those rights.
The issue becomes especially tricky when the category of “people who are considered to be morally reprehensible” coincides with an entire class of humans who have long been discriminated against, e.g. gays or transgender people. In my view it is bigoted and wrong to discriminate against those groups, but there exist people who find it a moral imperative to do so. A sensible distinction can probably be drawn here: as a society we have decided that certain groups are worthy of protection and equal treatment regardless of any individual’s moral code. So it’s at least consistent to allow restaurant owners to refuse to serve specific people because of some policy they advocate, while still requiring that they serve members of groups whose behaviors they find objectionable.
The only alternative, as I see it, is to give up on the values of liberal toleration, and to simply declare that our personal moral views are unquestionably the right ones, and everyone should be judged by them. That sounds wrong, although we do in fact enshrine certain moral judgments in our legal codes (murder is bad) while leaving others up to individual conscience (whether you want to eat meat is up to you). But it’s probably best to keep that moral core that we codify into law as minimal and widely-agreed-upon as possible, if we want to live in a diverse society.
This would all be simpler if we didn’t have an administration in power that actively works to demonize immigrants and non-straight-white-Americans more generally. Tolerating the intolerant is one of the hardest tasks in a democracy.