This NASA/ESA Hubble Space Telescope image shows a massive galaxy cluster, PLCK_G308.3–20.2, glowing brightly in the darkness. This is what huge swaths of the distant Universe look like. But how far does the Universe as we know it, including the unobservable part, go on? (ESA/HUBBLE & NASA, RELICS; ACKNOWLEDGEMENT: D. COE ET AL.)

If we know how big the observable Universe is, why can’t we figure out how big the unobservable part is?
13.8 billion years ago, the Big Bang occurred. The Universe was filled with matter, antimatter, and radiation, existing in an ultra-hot, ultra-dense, but expanding-and-cooling state. By today, the volume containing our observable Universe has expanded to 46 billion light years in radius, with the light that’s first arriving at our eyes today corresponding to the limit of what we can measure. But what lies beyond? What about the unobservable Universe? That’s what Gray Bryan wants to know, as he asks:
We know the size of the Observable Universe since we know the age of the Universe (at least since the phase change) and we know that light radiates. […] My question is, I guess, why doesn’t the math involved in making the CMB and other predictions, in effect, tell us the size of the Universe? We know how hot it was and how cool it is now. Does scale not affect these calculations?
Oh, if only it were so easy.
The history of the Universe, as far back as we can see using a variety of tools and telescopes, has been well-determined. But our observations can only, tautologically, provide us with evidence about the observable parts. Everything else must be inferred, and those inferences are only as good as the assumptions which underlie them. (SLOAN DIGITAL SKY SURVEY)
The Universe is cold and clumpy today, but it’s also expanding and gravitating. When we look to greater and greater distances, we see things as they were not only far away, but also back in time, owing to the finite speed of light. The more distant Universe is less clumpy and more uniform, having had less time to form larger, more complicated structures that require more time for gravity’s effects to take place.
The early, distant Universe was also hotter. The expanding Universe causes all the light that travels through the Universe to stretch in wavelength. As the wavelength stretches, it loses energy, becoming cooler. This means the Universe was hotter in the distant past, a fact we’ve confirmed through observations of distant features in the Universe.
A 2011 study (red points) has given the best evidence to date that the CMB used to be higher in temperature in the past. The spectral and temperature properties of distant light confirms that we live in expanding space. (P. NOTERDAEME, P. PETITJEAN, R. SRIANAND, C. LEDOUX AND S. LÓPEZ, (2011). ASTRONOMY & ASTROPHYSICS, 526, L7)
We can measure the temperature of the Universe as it is today, 13.8 billion years after the Big Bang, by looking at the leftover radiation from that hot, dense, early state. Today, this shows up in the microwave portion of the spectrum, and is known as the Cosmic Microwave Background. Coming in with a blackbody spectrum and a temperature of 2.725 K, it’s easy to confirm that these observations match, with an incredible precision, the predictions that arise from the Big Bang model of our Universe.
The Sun’s actual light (yellow curve, left) versus a perfect blackbody (in grey), showing that the Sun is more of a series of blackbodies due to the thickness of its photosphere; at right is the actual perfect blackbody of the CMB as measured by the COBE satellite. Note that the “error bars” on the right are an astounding 400 sigma. The agreement between theory and observation here is historic. (WIKIMEDIA COMMONS USER SCH (L); COBE/FIRAS, NASA / JPL-CALTECH (R))
Moreover, we know how this radiation evolves in energy as the Universe expands. A photon’s energy is directly proportional to the inverse of its wavelength. When the Universe was half its size, the photons from the Big Bang had double the energy, while when the Universe was 10% of its current size, those photons had ten times the energy. If we’re willing to go back to when the Universe was just 0.092% its present size, we’ll find a Universe that’s 1089 times hotter than it is today: around 3000 K. At these temperatures, the Universe is hot enough to ionize all the atoms in it. Instead of solid, liquid, or gas, all the matter in the entire Universe was in the form of an ionized plasma.
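The scaling described above fits in a couple of lines of code; here's a minimal sketch, using only the T ∝ 1/a relation and the measured 2.725 K present-day temperature quoted earlier:

```python
# The CMB temperature scales inversely with the Universe's size: T ∝ 1/a.
T_TODAY = 2.725  # K, the measured CMB temperature today

def cmb_temperature(scale_factor):
    """CMB temperature when the Universe was `scale_factor` times its
    present size (scale_factor = 1.0 means today)."""
    return T_TODAY / scale_factor

# Half the present size: photons carry double the energy, so 2x the temperature.
print(cmb_temperature(0.5))      # 5.45 K

# 0.092% of the present size (redshift ~1089):
print(cmb_temperature(0.00092))  # ≈ 2960 K, hot enough to ionize atoms
```

Running the scaling backwards to 0.092% of today's size recovers the roughly 3000 K temperature at which the Universe was still a plasma.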
A Universe where electrons and protons are free and collide with photons transitions to a neutral one that’s transparent to photons as the Universe expands and cools. Shown here is the ionized plasma (L) before the CMB is emitted, followed by the transition to a neutral Universe (R) that’s transparent to photons. (AMANDA YOHO)
The way we arrive at the size of the Universe today is through understanding three things in tandem:
How quickly the Universe is expanding today, something we can measure via a number of methods,
How hot the Universe is today, which we know from looking at the radiation of the Cosmic Microwave Background,
and what the Universe is made out of, including matter, radiation, neutrinos, antimatter, dark matter, dark energy, and more.
By taking the Universe we have today, we can extrapolate back to the earliest stages of the hot Big Bang, and arrive at a figure for both the age and the size of the Universe together.
The size of the Universe, in light years, versus the amount of time that’s passed since the Big Bang. This is presented on a logarithmic scale, with a number of momentous events annotated for clarity. This only applies to the observable Universe. (E. SIEGEL)
From the full suite of observations available, including the cosmic microwave background but also including supernova data, large-scale structure surveys, and baryon acoustic oscillations, among others, we get our Universe. 13.8 billion years after the Big Bang, it’s now 46.1 billion light years in radius. That’s the limit of what’s observable. Any farther than that, and even something moving at the speed of light since the moment of the hot Big Bang will not have had sufficient time to reach us. As time goes on, the age and the size of the Universe will increase, but there will always be a limit to what we can observe.
Artist’s logarithmic scale conception of the observable universe. Note that we’re limited in how far we can see back by the amount of time that’s occurred since the hot Big Bang: 13.8 billion years, or (including the expansion of the Universe) 46 billion light years. Anyone living in our Universe, at any location, would see almost exactly the same thing from their vantage point. (WIKIPEDIA USER PABLO CARLOS BUDASSI)
So what can we say about the part of the Universe that’s beyond the limits of our observations? We can only make inferences based on the laws of physics as we know them, and the things we can measure within our observable Universe. For example, we observe that the Universe is spatially flat on the largest scales: it’s neither positively nor negatively curved, to a precision of 0.25%. If we assume that our current laws of physics are correct, we can set limits on how large, at least, the Universe must be before it curves back on itself.
The magnitudes of the hot and cold spots, as well as their scales, indicate the curvature of the Universe. To the best of our capabilities, we measure it to be perfectly flat. Baryon acoustic oscillations provide a different method to constrain this, but with similar results.(SMOOT COSMOLOGY GROUP / LBL)
Observations from the Sloan Digital Sky Survey and the Planck satellite are where we get the best data. They tell us that if the Universe does curve back in on itself and close, the part we can see is so indistinguishable from “uncurved” that it must be at least 250 times the radius of the observable part.
This means the unobservable Universe, assuming there’s no topological weirdness, must be at least 23 trillion light years in diameter, and contain a volume of space that’s over 15 million times as large as the volume we can observe. If we’re willing to speculate, however, we can argue quite compellingly that the unobservable Universe should be significantly bigger even than that.
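Those numbers follow directly from the 250× constraint; a quick back-of-the-envelope check, using the 46.1-billion-light-year radius from earlier in the article:

```python
# Lower bound on the full Universe's size from the flatness constraint.
R_OBSERVABLE = 46.1e9   # light years, radius of the observable Universe
MIN_RATIO = 250         # minimum radius ratio allowed by curvature limits

diameter_min = 2 * MIN_RATIO * R_OBSERVABLE  # ≈ 2.3e13 ly: ~23 trillion ly
volume_ratio = MIN_RATIO ** 3                # ≈ 15.6 million times our volume

print(f"{diameter_min:.2e} light years; {volume_ratio:,}x our volume")
```

Because volume scales as the cube of the radius, even a modest factor of 250 in radius becomes a factor of more than 15 million in volume.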
The observable Universe might be 46 billion light years in all directions from our point of view, but there’s certainly more, unobservable Universe, perhaps even an infinite amount, just like ours beyond that. Over time, we’ll be able to see a bit, but not a lot, more of it. (FRÉDÉRIC MICHEL AND ANDREW Z. COLVIN, ANNOTATED BY E. SIEGEL)
The hot Big Bang might mark the beginning of the observable Universe as we know it, but it doesn’t mark the birth of space and time itself. Before the Big Bang, the Universe underwent a period of cosmic inflation. Instead of being filled with matter and radiation, and instead of being hot, the Universe was:
filled with energy inherent to space itself,
expanding at a constant, exponential rate,
and creating new space so quickly that the smallest physical length scale, the Planck length, would be stretched to the size of the presently observable Universe every 10^-32 seconds.
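To get a feel for how extreme that stretching is, here's a rough sketch. The Planck length and observable-Universe radius in meters are standard values I'm supplying, not figures from the article:

```python
import math

L_PLANCK = 1.616e-35    # meters, the Planck length (standard value)
R_OBSERVABLE = 4.4e26   # meters, ~46 billion light years (standard value)

stretch = R_OBSERVABLE / L_PLANCK  # ~2.7e61: Planck length -> Universe size
e_folds = math.log(stretch)        # ~141 e-folds of exponential expansion
doublings = math.log2(stretch)     # ~204 doublings, in a mere ~1e-32 seconds

print(f"stretch factor {stretch:.1e}: {e_folds:.0f} e-folds, "
      f"{doublings:.0f} doublings")
```

Roughly 200 doublings of the Universe's size in a hundred-billionth of a trillionth of a trillionth of a second is what "exponential expansion" means in practice.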
Inflation causes space to expand exponentially, which can very quickly result in any pre-existing curved or non-smooth space appearing flat. If the Universe is curved, it has a radius of curvature that is at minimum hundreds of times larger than what we can observe. (E. SIEGEL (L); NED WRIGHT’S COSMOLOGY TUTORIAL (R))
How big was the region of the Universe, post-inflation, that created our hot Big Bang?
Is the idea of “eternal inflation,” where the Universe inflates eternally into the future in at least some regions, correct?
And, finally, how long did inflation go on prior to its end and the resultant hot Big Bang?
It’s possible that the region where inflation occurred barely attained a size larger than what we can observe. It’s possible that, any year now, the evidence for an “edge” to where inflation happened will materialize. But it’s also possible that the Universe is googols of times larger than what we can observe. Until we can answer these questions, we may never know.
A huge number of separate regions where Big Bangs occur are separated by continuously inflating space in eternal inflation. But we have no idea how to test, measure or access what’s out there beyond our own observable Universe. (OZYTIVE — PUBLIC DOMAIN)
Beyond what we can see, we strongly suspect that there’s plenty more Universe out there just like ours, with the same laws of physics, the same types of physical, cosmic structures, and the same chances at complex life. There should also be a finite size and scale to the “bubble” in which inflation ended, and an exponentially huge number of such bubbles contained within the larger, inflating spacetime. But as inconceivably large as that entire Universe — or Multiverse, if you prefer — may be, it might not be infinite. In fact, unless inflation went on for a truly infinite amount of time, or the Universe was born infinitely large, the Universe ought to be finite in extent.
As vast as our observable Universe is and as much as we can see, it’s only a tiny fraction of what must be out there. (NASA, ESA, R. WINDHORST, S. COHEN, AND M. MECHTLEY (ASU), R. O’CONNELL (UVA), P. MCCARTHY (CARNEGIE OBS), N. HATHI (UC RIVERSIDE), R. RYAN (UC DAVIS), & H. YAN (TOSU))
The biggest problem of all, though, is that we don’t have enough information to definitively answer the question. We only know how to access the information available inside our observable Universe: those 46 billion light years in all directions. The answer to the biggest of all questions, of whether the Universe is finite or infinite, might be encoded in the Universe itself, but we can’t access enough of it to know. Until we either figure it out, or come up with a clever scheme to expand what we know physics is capable of, all we’ll have are the possibilities.
Artist’s concept of KIC 8462852, which has experienced unusual changes in luminosity over the past few years. (NASA / JPL-CALTECH)

The most unusual star known has finally had its dimming scientifically explained. Here’s the unusual, dusty resolution.
The science of planet-hunting has truly taken off in the 21st century, with the transit method leading the way. When a planet passes in front of its parent star, relative to our line-of-sight, some of the star’s light will disappear for a short while. These transits are a prolific method for exoplanet hunters to search for worlds around other stars. As of today, we know of thousands of stars with worlds around them, and most of them were discovered by transit.
When you design a mission optimized to look for planets, you expect that the technique is going to uncover a few oddities. But nothing prepared astronomers for the oddball that is Tabby’s star, whose flux dims by a tremendous amount, without any regularly repeating signals. After years of speculation involving scenarios ranging from comet storms to alien megastructures, scientists have finally solved the mystery. Dust, in an entirely new way, looks to be the culprit.
The infrared (L) and ultraviolet (R) emissions from Tabby’s star: KIC 8462852. They show no evidence of a great many of the natural explanations for the flux dips observed. (INFRARED: IPAC/NASA (2MASS), AT LEFT; ULTRAVIOLET: STSCI (GALEX), AT RIGHT)
NASA’s Kepler mission changed the game, surveying over 100,000 stars for a period of many years. Of all the stars that NASA’s Kepler spacecraft observed, one stands out as the most unusual. KIC 8462852 — known colloquially as either Tabby’s/Boyajian’s star (after Tabetha Boyajian, the discoverer of its interesting behavior) or the WTF? (for “where’s the flux?”) star — has a combination of properties that make it entirely unique. All at once, it:
exhibits huge drops in its flux, by up to 22% (while most planets cause <1% dips),
where the overall brightness fluctuates around the dips (rather than the smooth decrease-and-increase seen for planets),
but with no infrared emission (which all other stars with large flux dips possess).
This created a huge puzzle.
A large number of protoplanetary systems have been imaged, but the state-of-the-art infrared imager designed for exoplanet disk pictures is SPHERE, which routinely obtains resolutions of ~10", or less than 0.003 degrees per pixel. KIC 8462852 does not have these properties or this infrared emission. (SHINE (SPHERE INFRARED SURVEY FOR EXOPLANETS) COLLABORATION / ARTHUR VIGAN)
It couldn’t be planets, because no planet is large enough to block that much light from its star. Even if you envision a planet with an enormous ringed system, like a super-Saturn, those flux dips would be both periodic and exhibit a smooth pattern with a plateau. This contradicts the available data.
Artist’s conception of the extrasolar ring system circling the young giant planet or brown dwarf J1407b. Worlds with extraordinary ringed systems could produce large flux dips, but those dips would be periodic and contain a planet-like component, which is not observed.(RON MILLER)
This could have been a very young star, with planetesimals, a proto-planetary disk, and an extremely dusty environment. We’ve seen stars with large flux dips around them, and they’ve all fallen into this category.
But Boyajian’s star is much too old to have a protoplanetary disk: many hundreds of millions of years too old. It also, most importantly, doesn’t exhibit the infrared emission that a star with a protoplanetary disk ought to have. This is why the star was originally named the “WTF?” (for “where’s the flux?”) star.
Artist’s impression of a young star surrounded by a protoplanetary disk. There are many unknown properties about protoplanetary disks around Sun-like stars, but they all exhibit infrared radiation. Tabby’s star has none. (ESO/L. CALÇADA)
It could be a series of cometary events, where large amounts of dust are kicked up and emitted as comets infall onto the inner portion of the solar system in question. This could, as was shown relatively recently, explain the short-term flux dips that have been seen.
An illustration of a storm of comets around a star near our own, called Eta Corvi. The comet scenario is one explanation for the dimming around Tabby’s star, one that a high-quality astronomical spectrum has now ruled out. (NASA / JPL-CALTECH)
But there’s another phenomenon that this proposed solution cannot account for: the long-term dimming of the star. This star isn’t called “Tabby’s star” or “Boyajian’s star” because it was discovered by that particular scientist, but because she led the scientific investigation concerning its interesting and important new behavior.
But this star has been known for over a century, and observations indicate a long-term fading, which this model cannot account for. Cometary dust gets blown off on the timescales of months; it would take a near-continuous bombardment of comets to sustain a reduced flux over the timescale of over a century. Many comets in a similar orbit would be required, a configuration we don’t know how to produce.
The Harvard light curve of star KIC 8462852, along with two other stars whose flux hasn’t changed. (BRADLEY E. SCHAEFER, VIA ARXIV.ORG/ABS/1601.03256)
So, what possible explanations remained? One popular idea that was advanced was that of alien megastructures: that a civilization far ahead of humanity, technologically, was constructing an apparatus that periodically (or aperiodically) blocked a large percentage of the star’s light. As the structure became more and more complete, the amount of light it blocked would increase. The significant dimming of this star’s light over the past century could then be explained by the structure’s progressive completion.
It’s a compelling, if out-of-the-box, idea.
A partially-obscured star could be due to an alien megastructure that is not yet complete, and could potentially be detectable by the Gaia spacecraft. However, that is not what’s occurring around KIC 8462852. The spectral evidence rules that out. (KEVIN MCGILL / FLICKR)
But thanks to a myriad of follow-up observations, we know that it’s wrong. The reason? An object like an alien megastructure would be completely opaque: light would be unable to pass through it. This is equally true of things like planets, moons, or any other “solid” objects you can imagine.
From over 19,000 images taken over the past three years, in four different wavelength bands from blue light all the way to infrared light, we’ve learned that blue light is preferentially blocked in all dimming events: from the short-term flux dips to the long-term fading of the star. There’s one known thing that can cause bluer light to be blocked while redder light is preferentially transmitted: dust particles down to at least a certain, minimal size.
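One way to see why small grains discriminate by color: a grain's scattering strength is governed by the dimensionless size parameter x = 2πa/λ. This illustrative sketch (the specific wavelengths chosen are my assumptions, not values from the paper) shows that a ~0.1 micron grain sits near x ≈ 1 for blue light but well below it for infrared light:

```python
import math

def size_parameter(grain_radius_um, wavelength_um):
    """Dimensionless size parameter x = 2*pi*a / lambda. Grains with
    x << 1 scatter light only weakly; x ~ 1 or larger scatter efficiently."""
    return 2 * math.pi * grain_radius_um / wavelength_um

a = 0.1  # micron: roughly the grain size implied by the chromatic extinction
print(size_parameter(a, 0.45))  # blue light (450 nm): x ≈ 1.4, blocked
print(size_parameter(a, 2.2))   # near-infrared (2.2 um): x ≈ 0.29, passes
```

Grains of this size straddle the two regimes, which is exactly why blue light is preferentially extinguished while redder and infrared light sails through.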
Visible (left) and infrared (right) views of the dust-rich Bok globule, Barnard 68. The infrared light is not blocked nearly as much, as the dust grains are too small to interact with the long-wavelength light. (ESO)
It must, therefore, be dust. Whatever is causing the flux dips, as well as whatever’s causing the long-term fading, must both have a dusty origin. The Kepler dips and the “secular dimming” are caused by the same phenomenon. According to the new paper itself:
This chromatic extinction implies dust particle sizes going down to ~0.1 micron, suggesting that this dust will be rapidly blown away by stellar radiation pressure, so the dust clouds must have formed within months. The modern infrared observations were taken at a time when there was at least 12.4% ± 1.3% dust coverage (as part of the secular dimming), and this is consistent with dimming originating in circumstellar dust.
This is where the evidence points: to dust. But this is still a bit mysterious.
An illustration of a complex, dusty region around a star, superimposed with recent data from Tabetha Boyajian (2018, via Twitter) showing some recent flux dips. The dust could not be on the surface of the star, as illustrated here. KIC 8462852, an F-class star, is too hot for this to be plausible. (T. BOYAJIAN / TWITTER)
After all, Boyajian’s star is a combination of things that we wouldn’t expect to find together.
It’s consistent with having a large amount of circumstellar dust, which normally indicates an extremely young star still in the formative stages.
The star itself is brighter, hotter, and more massive than the Sun: it gives off more than four times the amount of light our Sun does.
The star is old: hundreds of millions of years old, burning stably on the main sequence by all accounts.
In other words, the dust we see should last only months, given the properties of the star itself. There must be some way for the star to replenish its dust. As far as we know, there are two possibilities that make sense: either there’s an external dust ring containing dense dust clouds or experiencing infalling bombardment events, or there’s something else external to the star that leads to this blocking of the starlight.
The leading idea, at present, is that a disk of dusty debris should exist around this star. If so, it’s incredibly serendipitous that the plane is so perfectly aligned with our line-of-sight: a remarkable and unlikely occurrence if true. Even if the odds of such an alignment are as great as 1%, it would be a puzzle that we haven’t seen the other, similar stars (the other 99%) that lack such an alignment. (NASA / JPL-CALTECH)
The declining brightness that’s been observed since 1890 appears to continue through the current 2018 data, but it’s not steady. In addition, there are long-period dips lasting months, and shorter dips lasting a day or less superimposed on top of them. It’s definitely due to dust particles, down to maybe about 100 nanometers in size. The ratio of how the light dims in different wavelengths/colors demonstrates that and rules out other hypotheses.
But where does that dust come from? To help narrow this down, the scientists involved calculated how much dust must be involved to explain the past 100+ years of dimming and dipping events. For what’s merely in the transiting plane defined by our point-of-view alone, we need to have an amount of dust equal to about the mass of the Moon.
Originally, a scenario of a shattered comet was considered to explain Tabby’s star. Instead, a series of long-period comet-like objects with massive dust halos could cause these temporary, transient flux dips, but a very large amount of mass that isn’t in the form of opaque objects must exist to do it. (NASA/JPL-CALTECH)
This could either replace or be in addition to the circumstellar dust’s presence. As far as a disk of material around the star goes, the disk is a bare minimum. There could be a large amount of dust that isn’t just in the plane we observe, but also outside of it: perhaps in a halo. We simply don’t know, but we do know that if it does exist, it cannot be close enough to emit infrared radiation. Comets, too, should create infrared radiation; the James Webb Space Telescope should be able to tell, when the flux dips occur, whether the comet hypothesis is in or out.
A dusty debris disk either around the star itself or the planets that orbit it close in would emit infrared radiation, where none is seen. If there is a dust ring (or halo) farther out, however, that could explain these observations. (ESA, NASA, AND L. CALCADA (ESO FOR STSCI))
If a gas giant planet — say, the size of Uranus — were devoured by this star, it could be the culprit. An inspiral of a planet or a series of planetary bodies a long time ago, perhaps centuries or even many millennia ago, could have caused a temporary brightening, from which the star is now returning to its original, stable state. The flux dips we observe, then, could be due to planetary debris from an earlier disruption, or evaporation and outgassing of smaller bodies.
An artist’s impression of HD 189733 b, a Hot Jupiter so close to its host that its atmosphere is being boiled off into space. If a gas giant was recently swallowed by KIC 8462852, it could potentially be ‘belching’ dust particles that might cause the observed dimming.(NASA / GSFC)
Regardless of the mechanism in question, we can be certain of one conclusion: the dimming of Boyajian’s star is due to dust. This is normal, particulate dust, with particle sizes down to about 100 nanometers, smaller than the wavelength of visible light. The same dust that causes the short, day-or-less dips also causes the dips that last many months, and also causes the decline that’s lasted more than a century. It’s all due to plain, normal dust.
The big, open question that now remains is where this dust came from. It’s not because the star is young or still forming, and there are incredible constraints on the star having an unseen companion. It cannot all come from interstellar dust. Was a planet devoured? Is there something even more unusual afoot? The only way to know will be with more — and better — science on this object. But one thing’s for certain: even if alien megastructures exist somewhere, they aren’t here.
Thanks to Jason Wright for his comments and recommendations in constructing this article.
In this artistic rendering, a blazar is accelerating protons that produce pions, which produce neutrinos and gamma rays. Neutrinos are always the result of a hadronic reaction such as the one displayed here. Gamma rays can be produced in both hadronic and electromagnetic interactions. (ICECUBE/NASA)

In 1987, we detected neutrinos from a supernova in another galaxy. After a 30 year wait, we’ve found something even better.
One of the great mysteries in science is determining not only what’s out there, but what creates the signals we detect here on Earth. For over a century, we’ve known that zipping through the Universe are cosmic rays: high-energy particles originating from within our galaxy and far beyond it. While some sources for these particles have been identified, the overwhelming majority of them, including the ones that are most energetic, remain a mystery.
As of today, all of that has changed. The IceCube collaboration, on September 22, 2017, detected an ultra-high-energy neutrino that arrived at the South Pole, and was able to identify its source. When a series of gamma-ray telescopes looked at that same position, they not only saw a signal, they identified a blazar, which happened to be flaring at that very moment. At last, humanity has discovered at least one source that creates these ultra-energetic cosmic particles.
When black holes feed on matter, they create an accretion disk and a bipolar jet perpendicular to it. When a jet from a supermassive black hole points at us, we call it either a BL Lacertae object or a blazar. This is now thought to be a major source of both cosmic rays and high-energy neutrinos. (NASA/JPL)
The Universe, everywhere we look, is full of things to look at and interact with. Matter clumps together into galaxies, stars, planets, and even people. Radiation streams through the Universe, covering the entirety of the electromagnetic spectrum. And in every cubic centimeter of space, hundreds of ghostly, tiny-massed particles known as neutrinos can be found.
At least, they could be found, if they interacted with any appreciable frequency with the normal matter we know how to manipulate. Instead, a neutrino would have to pass through a light year of lead to have a 50/50 shot of colliding with a particle inside. For decades after its proposal in 1930, we were unable to detect the neutrino.
The experimental nuclear reactor RA-6 (República Argentina 6), in operation, showing the characteristic Cherenkov radiation from emitted particles traveling faster than light does in water. The neutrinos (or more accurately, antineutrinos) first hypothesized by Pauli in 1930 were detected from a similar nuclear reactor in 1956. (CENTRO ATOMICO BARILOCHE, VIA PIECK DARÍO)
In 1956, we first detected them by setting up detectors right outside of nuclear reactors, mere feet away from where neutrinos are produced. In the 1960s, we built large enough detectors — underground, shielded from other contaminating particles — to find the neutrinos produced by the Sun and by cosmic ray collisions with the atmosphere.
Then, in 1987, it was only serendipity that gave us a supernova so close to home that we could detect neutrinos from it. Experiments running for entirely unrelated purposes detected the neutrinos from SN 1987A, ushering in the era of multi-messenger astronomy. Neutrinos, as far as we could tell, traveled across the Universe at energies indistinguishable from the speed of light.
The remnant of supernova 1987a, located in the Large Magellanic Cloud some 165,000 light years away. The fact that neutrinos arrived hours before the first light signal taught us more about how long it takes light to propagate through the layers of the exploding star than it did about the speed neutrinos travel at, which was indistinguishable from the speed of light. Neutrinos, light, and gravitational waves all appear to travel at the same speed. (NOEL CARBONI & THE ESA/ESO/NASA PHOTOSHOP FITS LIBERATOR)
For some 30 years, the neutrinos from that supernova were the only neutrinos that we had ever confirmed to be from outside of our own Solar System, much less our home galaxy. But that doesn’t mean we weren’t receiving more distant neutrinos; it simply meant that we couldn’t robustly identify them with any known source in the sky. Although neutrinos interact only very weakly with matter, they’re more likely to interact if they’re higher in energy.
The IceCube observatory, the first neutrino observatory of its kind, is designed to observe these elusive, high-energy particles from beneath the Antarctic ice. (EMANUEL JACOBI, ICECUBE/NSF)
Deep within the South Pole ice, IceCube encloses a cubic kilometer of solid material, searching for these nearly-massless neutrinos. When neutrinos pass through the Earth, there’s a chance of having an interaction with a particle in there. An interaction will lead to a shower of particles, which should leave unmistakable signatures in the detector.
In this illustration, a neutrino has interacted with a molecule of ice, producing a secondary particle — a muon — that moves at relativistic speed in the ice, leaving a trace of blue light behind it. (NICOLLE R. FULLER/NSF/ICECUBE)
In the six years that IceCube has been running, it has detected more than 80 high-energy cosmic neutrinos with energies over 100 TeV: more than ten times the highest energies achieved by any particles at the LHC. Some of them have even crested the PeV scale, achieving energies thousands of times greater than what’s needed to create even the heaviest of the known fundamental particles.
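The comparisons in that paragraph are easy to verify; a minimal sketch, where the 6.5 TeV LHC per-proton beam energy and the 173 GeV top-quark mass are standard figures I'm assuming, not numbers stated in the article:

```python
# Putting IceCube's neutrino energies in context (all energies in eV).
GeV = 1e9
TeV = 1e12
PeV = 1e15

E_icecube = 100 * TeV   # threshold quoted for IceCube's cosmic neutrinos
E_lhc = 6.5 * TeV       # assumed: LHC per-proton beam energy
m_top = 173 * GeV       # assumed: top quark, heaviest fundamental particle

print(E_icecube / E_lhc)  # ~15: "more than ten times" the LHC's energies
print(PeV / m_top)        # ~5800: thousands of top-quark rest energies
```

Even the entry-level IceCube events dwarf anything a terrestrial accelerator can produce, which is what makes identifying their sources so valuable.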
Yet despite all these neutrinos of cosmic origin arriving on Earth, we have never matched them up with a source on the sky that offers a definitive location. Detecting these neutrinos is a tremendous feat, but unless we can correlate them with an actual, observed object in the Universe — for example, one that’s also observable in some form of electromagnetic light — we have no clue as to what creates them.
When a neutrino interacts in the clear Antarctic ice, it produces secondary particles that leave a trace of blue light as they travel through the IceCube detector. (NICOLLE R. FULLER/NSF/ICECUBE)
Theorists have had no problem coming up with ideas, including:
hypernovae, the most superluminous of all the supernovae,
gamma ray bursts,
flaring black holes,
or quasars, the largest, active black holes in the Universe.
But it would take evidence to decide.
An example of a high-energy neutrino event detected by IceCube: a 4.45 PeV neutrino striking the detector back in 2014. (ICECUBE SOUTH POLE NEUTRINO OBSERVATORY / NSF / UNIVERSITY OF WISCONSIN-MADISON)
IceCube has been tracking, and issuing releases for, every ultra-high-energy neutrino it’s found. On September 22, 2017, another such event was seen: IceCube-170922A. In the release that went out, they stated the following:
On 22 Sep, 2017 IceCube detected a track-like, very-high-energy event with a high probability of being of astrophysical origin. The event was identified by the Extremely High Energy (EHE) track event selection. The IceCube detector was in a normal operating state. EHE events typically have a neutrino interaction vertex that is outside the detector, produce a muon that traverses the detector volume, and have a high light level (a proxy for energy).
Cosmic rays shower particles by striking protons and atoms in the atmosphere, but they also emit light due to Cherenkov radiation. By observing both cosmic rays from the sky and neutrinos that strike the Earth, we can use coincidences to uncover the origins of both. (SIMON SWORDY (U. CHICAGO), NASA)
This endeavor is interesting not just for neutrinos, but for cosmic rays in general. Despite detecting millions of high-energy cosmic rays over more than a century, we do not understand where most of them originate. This is true for protons, nuclei, and neutrinos created both at the source and via cascades/showers in the atmosphere.
And that led observers, attempting to perform follow-up observations across the electromagnetic spectrum, to this object.
Artist’s impression of an active galactic nucleus. The supermassive black hole at the center of the accretion disk sends a narrow high-energy jet of matter into space, perpendicular to the disc. A blazar about 4 billion light years away is the origin of these cosmic rays and neutrinos. (DESY, SCIENCE COMMUNICATION LAB)
This is a blazar: a supermassive black hole that’s currently in the active state, feeding on matter and accelerating it to tremendous speeds. Blazars are just like quasars, but with one important difference. While quasars can be oriented in any direction, a blazar will always have one of its jets pointed directly at Earth. They’re called blazars because they “blaze” right at you.
About 20 observatories on Earth and in space made follow-up observations of the location where IceCube observed last September’s neutrino, which allowed identification of what scientists deem to be a source of very high energy neutrinos and, thus, of cosmic rays. Besides neutrinos, the observations made across the electromagnetic spectrum included gamma-rays, X-rays, and optical and radio radiation. (NICOLLE R. FULLER/NSF/ICECUBE)
Not only that, but when the neutrinos arrived, the blazar was found to be in a flaring state, corresponding to the most active outflows such an object experiences. Since outflows peak and ebb, researchers affiliated with IceCube went through a decade’s worth of records prior to the September 22, 2017 flare, and searched for any neutrino events that would originate from the position of TXS 0506+056.
The immediate find? Neutrinos arrived from this object in multiple bursts, spanning many years. By combining neutrino observations with electromagnetic ones, we’ve robustly been able to establish that high-energy neutrinos are produced by blazars, and that we have the capability to detect them, even from such a great distance. TXS 0506+056, if you were curious, is located some 4 billion light years away.
Blazar TXS 0506+056 is the first identified source of high-energy neutrinos and cosmic rays. This illustration, based on an image of Orion by NASA, shows the location of the blazar, situated in the night sky just off the left shoulder of the constellation Orion. The source is about 4 billion light-years from Earth. (ICECUBE/NASA/NSF)
A tremendous amount can be learned just from this one multi-messenger observation.
Blazars have been demonstrated to be at least one source of cosmic rays.
To produce neutrinos, you need decaying pions, and those are produced by accelerated protons.
This provides the first definitive evidence of proton acceleration by black holes.
This also demonstrates that the blazar TXS 0506+056 is one of the most luminous sources in the Universe.
Finally, from the accompanying gamma rays, we can be certain that cosmic neutrinos and cosmic rays, at least sometimes, have a common origin.
Cosmic rays produced by high-energy astrophysics sources can reach Earth’s surface. When a cosmic ray collides with a particle in Earth’s atmosphere, it produces a shower of particles that we can detect with arrays on the ground. At last, we’ve uncovered a major source of them. (ASPERA COLLABORATION / ASTROPARTICLE ERANET)
According to Francis Halzen, principal investigator of the IceCube neutrino observatory,
It is interesting that there was a general consensus in the astrophysics community that blazars were unlikely to be sources of cosmic rays, and here we are… The ability to marshal telescopes globally to make a discovery using a variety of wavelengths and coupled with a neutrino detector like IceCube marks a milestone in what scientists call “multi-messenger astronomy.”
The era of multi-messenger astronomy is officially here, and now we have three completely independent and complementary ways of looking at the sky: with light, with neutrinos, and with gravitational waves. We’ve learned that blazars, once considered an unlikely candidate for generating high-energy neutrinos and cosmic rays, in fact create both.
This is an artist’s impression of a distant quasar 3C 279. The bipolar jets are a common feature, but it’s extremely uncommon for such a jet to be pointed directly at us. When that occurs, we have a blazar, now confirmed to be a source of both high-energy cosmic rays and the ultra-high-energy neutrinos we’ve been seeing for years. (ESO/M. KORNMESSER)
A new scientific field, that of high-energy neutrino astronomy, officially launches with this discovery. Neutrinos are no longer a by-product of other interactions, nor a cosmic curiosity that barely extends beyond our Solar System. Instead, we can use them as a fundamental probe of the Universe and of the basic laws of physics itself. One of the major goals in building IceCube was to identify the sources of high-energy cosmic neutrinos. With the identification of the blazar TXS 0506+056 as the source for both these neutrinos and of gamma rays, that’s one cosmic dream that’s at last been achieved.
High-energy collisions of particles can create matter-antimatter pairs or photons, while matter-antimatter pairs annihilate to produce photons as well. At the inception of the hot Big Bang, the Universe is filled with particles, antiparticles, and photons, which interact, annihilate, and produce new particles, all as the Universe expands and cools. (BROOKHAVEN NATIONAL LABORATORY / RHIC)
Immediately after the Big Bang, the Universe was more energetic than ever. What was it like?
When we look out at the Universe today, we see that it’s full of stars and galaxies, in all directions and at all locations in space. The Universe isn’t static, though; the distant galaxies are bound together in groups and clusters, with those groups and clusters speeding away from one another as part of the expanding Universe. As the Universe expands, it gets not only sparser, but cooler, as the individual photons shift to redder wavelengths as they travel through space.
But this means if we look back in time, the Universe was not only denser, but also hotter. If we go all the way back to the earliest moments where this description applies, to the first moments of the Big Bang, we come to the Universe as it was at its absolute hottest. Here’s what it was like to live back then.
The quarks, antiquarks, and gluons of the standard model have a color charge, in addition to all the other properties like mass and electric charge. All of these particles, to the best we can tell, are truly point-like, and come in three generations. At higher energies, it is possible that still additional types of particles will exist. (E. SIEGEL / BEYOND THE GALAXY)
In today’s Universe, particles obey certain rules. Most of them have masses, corresponding to the total amount of internal energy inherent to that particle’s existence. They can either be matter (for the fermions), antimatter (for the anti-fermions), or neither (for the bosons). Some of the particles are massless, which demands that they move at the speed of light.
Whenever corresponding matter/antimatter pairs collide with one another, they can spontaneously annihilate, generally producing two massless photons. And when you smash together any two particles at all with large enough amounts of energy, there’s a chance that you can spontaneously create new matter/antimatter particle pairs. So long as there’s enough energy, according to Einstein’s E = mc², we can turn energy into matter, and vice versa.
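The energy bookkeeping here is simple enough to sketch in a few lines of illustrative Python; the only physical input is the electron’s rest-mass energy of 0.511 MeV, and the function name is our own scaffolding:

```python
# Illustrative sketch of the E = mc^2 threshold for pair creation.
# The only physical input is the electron's rest-mass energy (0.511 MeV).

M_ELECTRON_MEV = 0.511  # electron (and positron) rest-mass energy, in MeV

def pair_threshold_mev(rest_energy_mev: float) -> float:
    """Minimum energy needed to create a particle/antiparticle pair at rest."""
    return 2.0 * rest_energy_mev

print(f"e-/e+ pair threshold: {pair_threshold_mev(M_ELECTRON_MEV):.3f} MeV")
```

Any collision carrying at least that much available energy can, in principle, produce the pair; with less, the reaction is simply forbidden.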
The production of matter/antimatter pairs (left) from pure energy is a completely reversible reaction (right), with matter/antimatter annihilating back to pure energy. This creation-and-annihilation process, which obeys E = mc², is the only known way to create and destroy matter or antimatter. (DMITRI POGOSYAN / UNIVERSITY OF ALBERTA)
Well, things sure were different early on! At the extremely high energies we find in the earliest stages of the Big Bang, every particle in the Standard Model was massless. The Higgs symmetry, which gives particles masses when it breaks, is completely restored at these temperatures. It’s too hot not only to form atoms and bound atomic nuclei, but even individual protons and neutrons are impossible; the Universe is a hot, dense plasma filled with all the particles and antiparticles that can exist.
Energies are so high that even the most ghostly known particles and antiparticles of all, neutrinos and antineutrinos, smash into other particles more frequently than at any other time. Every particle smacks into another countless trillions of times per microsecond, all moving at the speed of light.
The early Universe was full of matter and radiation, and was so hot and dense that it prevented protons and neutrons from stably forming for the first fraction-of-a-second. Once they do, however, and the antimatter annihilates away, we wind up with a sea of matter and radiation particles, zipping around close to the speed of light. (RHIC COLLABORATION, BROOKHAVEN)
In addition to the particles we know, there may well be additional particles (and antiparticles) that we don’t know about today. The Universe back then was far hotter and more energetic than anything we can view on Earth: a million times greater than the highest-energy cosmic rays, and trillions of times greater than the energies reached at the LHC. If there are additional particles to produce in the Universe, including:
particles predicted by Grand Unified Theories,
particles accessible via large or warped extra dimensions,
smaller particles that make up the ones we now think are fundamental,
heavy, right-handed neutrinos,
or a great variety of dark matter candidate particles,
the young, post-Big Bang Universe would have created them.
The photons, particles and antiparticles of the early Universe. It was filled with both bosons and fermions at that time, plus all the antifermions you can dream up. If there are additional, high energy particles we haven’t yet discovered, they likely existed in these early stages, too. (BROOKHAVEN NATIONAL LABORATORY)
What’s remarkable is that despite these incredible energies and densities, there’s a limit. The Universe never was arbitrarily hot and dense, and we have the observational evidence to prove it. Today, we can observe the Cosmic Microwave Background: the leftover glow of radiation from the Big Bang. While this is a uniform 2.725 K everywhere and in all directions, there are tiny fluctuations in it: fluctuations of only tens or hundreds of microkelvin. Thanks to the Planck satellite, we’ve mapped this out to extraordinary precision, with an angular resolution that goes down to just 0.07 degrees.
The fluctuations in the Cosmic Microwave Background were first measured accurately by COBE in the 1990s, then more accurately by WMAP in the 2000s and Planck (above) in the 2010s. This image encodes a huge amount of information about the early Universe, including its composition, age, and history. The fluctuations are only tens to hundreds of microkelvin in magnitude. (ESA AND THE PLANCK COLLABORATION)
The spectrum and magnitude of these fluctuations teach us something about the maximum temperature the Universe could have achieved during the earliest, hottest stages of the Big Bang: it has an upper limit. In physics, the highest possible energies of all are at the Planck scale, which is around 10¹⁹ GeV, where a GeV is the energy an electron gains when accelerated through a potential difference of one billion volts. Beyond those energies, the laws of physics no longer make sense.
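That 10¹⁹ GeV figure follows directly from the fundamental constants; a quick sketch (CODATA constant values, with the code itself being our own illustration) reproduces it:

```python
import math

# Sketch: the Planck energy E_P = sqrt(hbar * c^5 / G), computed from
# CODATA constants and converted from joules to GeV.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
J_PER_GEV = 1.602176634e-10  # joules per GeV

E_planck_GeV = math.sqrt(hbar * c**5 / G) / J_PER_GEV
print(f"Planck energy: {E_planck_GeV:.2e} GeV")  # ~1.22e19 GeV
```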
The objects we’ve interacted with in the Universe range from very large, cosmic scales down to about 10^-19 meters, with the newest record set by the LHC. There’s a long, long way down (in size) and up (in energy) to the Planck scale, however. (UNIVERSITY OF NEW SOUTH WALES / SCHOOL OF PHYSICS)
But given the map of the fluctuations we have in the Cosmic Microwave Background, we can conclude those temperatures were never achieved. The maximum temperature that our Universe ever could have achieved, as shown by the fluctuations in the cosmic microwave background, is only ~10¹⁶ GeV, or a factor of 1,000 smaller than the Planck scale. The Universe, in other words, had a maximum temperature it could have reached, and it’s significantly lower than the Planck scale.
These fluctuations do more than tell us about the highest temperature the hot Big Bang achieved; they tell us what seeds were planted in the Universe to grow into the cosmic structure we have today.
Regions of space that are slightly denser than average will create larger gravitational potential wells to climb out of, meaning the light arising from those regions appears colder by the time it arrives at our eyes. Vice versa, underdense regions will look like hot spots, while regions with perfectly average density will have perfectly average temperatures. (E. SIEGEL / BEYOND THE GALAXY)
The cold spots are cold because the light has a slightly greater gravitational potential well to climb out of, corresponding to a region of greater-than-average density. The hot spots, correspondingly, come from regions with below-average densities. Over time, the cold spots will grow into galaxies, groups and clusters of galaxies, and will help form the great cosmic web. The hot spots, on the other hand, will give up their matter to the denser regions, becoming great cosmic voids over billions of years. The seeds for structure were there from the Big Bang’s earliest, hottest stages.
As the fabric of the Universe expands, the wavelengths of any light/radiation sources will get stretched as well. Many high-energy processes occur spontaneously in the very early stages of the Universe, but will cease occurring when the temperature of the Universe drops below a critical value owing to the expansion of space. (E. SIEGEL / BEYOND THE GALAXY)
What’s more, once the Universe reaches the maximum temperature achievable in its early stages, that temperature immediately begins to plummet. Just like a balloon expands when you fill it with hot air, because the molecules have lots of energy and push out against the balloon walls, the fabric of space expands when you fill it with hot particles, antiparticles, and radiation.
And whenever the Universe expands, it also cools. Radiation, remember, has its energy inversely proportional to its wavelength: the amount of distance it takes a wave to complete one oscillation. As the fabric of space stretches, the wavelength stretches too, bringing that radiation to lower and lower energies. Lower energies correspond to lower temperatures, and hence the Universe gets not only less dense, but less hot, too, as time goes on.
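Those scalings can be sketched directly: wavelength stretches with the scale factor, so the radiation temperature falls as its inverse. This is a toy illustration (the helper function is our own), anchored only to the measured 2.725 K value:

```python
# Toy sketch, assuming the standard scalings: a photon's wavelength stretches
# with the scale factor a, so its energy E = hc/lambda, and therefore the
# radiation temperature, both fall as 1/a.

T_CMB_TODAY_K = 2.725  # measured temperature of the CMB today (a = 1)

def radiation_temperature(scale_factor: float) -> float:
    """Radiation temperature when the Universe was `scale_factor` times its present size."""
    return T_CMB_TODAY_K / scale_factor

# When the Universe was ~1/1000th its present size, the same radiation
# background was ~1000x hotter: hot enough to keep hydrogen ionized.
print(f"{radiation_temperature(1e-3):.0f} K")
```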
There is a large suite of scientific evidence that supports the picture of the expanding Universe and the Big Bang. The entire mass-energy of the Universe was released in an event lasting less than 10^-30 seconds in duration; the most energetic thing ever to occur in our Universe’s history. (NASA / GSFC)
At the inception of the hot Big Bang, the Universe reaches its hottest, densest state, and is filled with matter, antimatter, and radiation. The imperfections in the Universe — nearly perfectly uniform but with inhomogeneities of 1-part-in-30,000 — tell us how hot it could have gotten, and also provide the seeds from which the large-scale structure of the Universe will grow. Immediately, the Universe begins expanding and cooling, becoming less hot and less dense, and making it more difficult to create anything requiring a large store of energy. E = mc² means that without enough energy, you can’t create a particle of a given mass.
Over time, the expanding and cooling Universe will drive an enormous number of changes. But for one brief moment, everything was symmetric, and as energetic as possible. Somehow, over time, these initial conditions created the entire Universe.
A neutrino event, identifiable by the rings of Cherenkov radiation that show up along the photomultiplier tubes lining the detector walls, showcases the successful methodology of neutrino astronomy. This image shows multiple events. (SUPER KAMIOKANDE COLLABORATION)
Before there were gravitational waves, multi-messenger astronomy got its start with the neutrino.
Sometimes, the best-designed experiments fail. The effect you’re looking for might not even occur, meaning that a null result should always be a possible outcome you’re prepared for. When that happens, the experiment is often dismissed as a failure, even though you never would have known the results without performing it.
Yet, every once in a while, the apparatus you build might be sensitive to something else entirely. When you do science in a new way, at a new sensitivity, or under new, unique conditions, that’s often where the most surprising, serendipitous discoveries are made. In 1987, a failed experiment for detecting proton decay detected neutrinos, for the first time, from beyond not only our Solar System, but from outside the Milky Way. This is how neutrino astronomy was born.
The conversion of a neutron to a proton, an electron, and an anti-electron neutrino is how Pauli hypothesized resolving the energy non-conservation problem in beta decay. (JOEL HOLDSWORTH)
The neutrino is one of the great success stories in all the history of theoretical physics. Back in the early 20th century, three types of radioactive decay were known:
Alpha decay, where a larger atom emits a helium nucleus, jumping two elements down the periodic table.
Beta decay, where an atomic nucleus emits a high-energy electron, moving one element up the periodic table.
Gamma decay, where an atomic nucleus emits an energetic photon, remaining in the same location on the periodic table.
In any reaction, under the laws of physics, whatever the total energy and momentum of the initial reactants are, the energy and momentum of the final products need to match. For alpha and gamma decays, they always did. But for beta decays? Never. Energy was always lost.
The V-shaped track in the center of the image is likely a muon decaying to an electron and two neutrinos. The high-energy track with a kink in it is evidence of a mid-air particle decay. This decay, if the (undetected) neutrino is not included, would violate energy conservation. (THE SCOTTISH SCIENCE & TECHNOLOGY ROADSHOW)
In 1930, Wolfgang Pauli proposed a new particle that could solve the problem: the neutrino. This small, neutral particle could carry both energy and momentum, but would be extremely difficult to detect. It wouldn’t absorb or emit light, and would only interact with atomic nuclei extremely rarely.
Upon its proposal, rather than confident and elated, Pauli felt ashamed. “I have done a terrible thing, I have postulated a particle that cannot be detected,” he declared. But despite his reservations, the theory was vindicated by experiment.
The RA-6 (República Argentina 6) experimental nuclear reactor, in operation, showing the characteristic Cherenkov radiation from emitted particles that travel faster than light does in water. The neutrinos (or more accurately, antineutrinos) first hypothesized by Pauli in 1930 were detected from a similar nuclear reactor in 1956. (CENTRO ATOMICO BARILOCHE, VIA PIECK DARÍO)
In 1956, neutrinos (or more specifically, antineutrinos) were first directly detected as part of the products of a nuclear reactor. When neutrinos interact with an atomic nucleus, two things can result:
they either scatter and cause a recoil, like a billiard ball knocking into other billiard balls,
or they cause the emission of new particles, which have their own energies and momenta.
Either way, you can build specialized particle detectors around where you expect the neutrinos to interact, and look for them. This was how the first neutrinos were detected: by building particle detectors sensitive to neutrino signatures at the edges of nuclear reactors. If you reconstruct the entire energy of the products, including neutrinos, energy turns out to be conserved after all.
Schematic illustration of nuclear beta decay in a massive atomic nucleus. Only if the (missing) neutrino energy and momentum is included can these quantities be conserved. (WIKIMEDIA COMMONS USER INDUCTIVELOAD)
In theory, neutrinos should be produced wherever nuclear reactions take place: in the Sun, in stars and supernovae, and whenever an incoming high-energy cosmic ray strikes a particle from Earth’s atmosphere. By the 1960s, physicists were building neutrino detectors to look for both solar (from the Sun) and atmospheric (from cosmic ray) neutrinos.
A large amount of material, with mass designed to interact with the neutrinos inside of it, would be surrounded by this neutrino detection technology. In order to shield the neutrino detectors from other particles, they were placed far underground: in mines. Only neutrinos should make it into the mines; the other particles should be absorbed by the Earth. By the end of the 1960s, solar and atmospheric neutrinos had both successfully been found.
The Homestake Gold Mine sits wedged in the mountains in Lead, South Dakota. It began operation over 123 years ago, producing 40 million ounces of gold from the 8,000 foot deep underground mine and mill. In 1968, the first Solar neutrinos were detected at an experiment here, devised by John Bahcall and Ray Davis. (Jean-Marc Giboux/Liaison)
The particle detection technology that was developed for both neutrino experiments and high-energy accelerators was found to be applicable to another phenomenon: the search for proton decay. While the Standard Model of particle physics predicts that the proton is absolutely stable, in many extensions — such as Grand Unification Theories — the proton can decay into lighter particles.
In theory, whenever a proton does decay, it will emit lower-mass particles at very high speeds. If you can detect the energies and momenta of those fast-moving particles, you can reconstruct what the total energy is, and see if it came from a proton.
High-energy particles can collide with others, producing showers of new particles that can be seen in a detector. By reconstructing the energy, momentum, and other properties of each one, we can determine what initially collided and what was produced in this event.(FERMILAB)
If protons decay, their lifetime must be extremely long. The Universe itself is 10¹⁰ years old, but the proton’s lifetime must be much longer. How much longer? The key is to look not at one proton, but at an enormous number. If a proton’s lifetime is 10³⁰ years, you can either take a single proton and wait that long (a bad idea), or take 10³⁰ protons and wait 1 year to see if any decay.
A liter of water contains a little over 10²⁵ molecules in it, where each molecule contains two hydrogen atoms: a proton orbited by an electron. If the proton is unstable, a large enough tank of water, with a large set of detectors around it, should allow you to either measure or constrain its stability/instability.
In Japan, construction began in 1982 on a large underground detector in the Kamioka mines. The detector was named KamiokaNDE: the Kamioka Nucleon Decay Experiment. It was large enough to hold over 3,000 tons of water, with around a thousand detectors optimized to detect the radiation that fast-moving particles would emit.
By 1987, the detector had been running for years, without a single instance of proton decay. With around 10³³ protons in that tank, this null result completely eliminated the most popular model among Grand Unified Theories. The proton, as far as we could tell, doesn’t decay. KamiokaNDE’s main objective was a failure.
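The arithmetic behind that null result is worth sketching. Using rough numbers from the text (and treating "tons" as metric tonnes, an assumption for illustration), the exponential decay law reduces to N·t/τ when the observing time t is far shorter than the lifetime τ:

```python
# Order-of-magnitude sketch of why ~3,000 tons of water constrains the
# proton lifetime. Numbers follow the text; the code is illustrative only.
AVOGADRO = 6.022e23

mass_g = 3.0e6 * 1.0e3                # ~3,000 metric tonnes of water, in grams
molecules = mass_g / 18.0 * AVOGADRO  # water's molar mass is ~18 g/mol
protons = molecules * 10              # 10 protons per H2O: 8 in oxygen, 2 in hydrogen

tau_years = 1.0e30                    # hypothetical proton lifetime
decays_per_year = protons / tau_years # N * t / tau, valid for t << tau

print(f"protons in the tank: {protons:.1e}")       # ~1e33
print(f"expected decays per year: {decays_per_year:.0f}")
```

With roughly a thousand decays expected per year under that hypothesis, several years of silence decisively rule it out.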
A supernova explosion enriches the surrounding interstellar medium with heavy elements. The outer rings are caused by previous ejecta, long before the final explosion. This explosion also emitted a huge variety of neutrinos, some of which made it all the way to Earth. (ESO / L. CALÇADA)
But then something unexpected happened. 165,000 years earlier, in a satellite galaxy of the Milky Way, a massive star reached the end of its life and exploded in a supernova. On February 23, 1987, that light reached Earth for the first time.
But a few hours before that light arrived, something remarkable happened at KamiokaNDE: a total of 12 neutrinos arrived within a span of about 13 seconds. Two bursts — the first containing 9 neutrinos and the second containing 3 — demonstrated that the nuclear processes that create neutrinos occur in great abundance in supernovae.
Three different detectors observed the neutrinos from SN 1987A, with KamiokaNDE the most robust and successful. The transformation from a nucleon decay experiment to a neutrino detector experiment would pave the way for the developing science of neutrino astronomy. (INSTITUTE FOR NUCLEAR THEORY / UNIVERSITY OF WASHINGTON)
For the first time, we had detected neutrinos from beyond our Solar System. The science of neutrino astronomy had just begun. Over the next few days, the light from that supernova, now known as SN 1987A, was observed in a huge variety of wavelengths by a number of ground-based and space-based observatories. Based on the tiny difference in the time-of-flight of the neutrinos and the arrival time of the light, we learned that neutrinos:
traveled that 165,000 light-years at a speed indistinguishable from the speed of light,
have a mass no more than 1/30,000th the mass of an electron,
and aren’t slowed down as they travel from the core of the collapsing star to its photosphere, the way that light is.
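That mass limit can be estimated from the timing alone. For a highly relativistic particle, 1 − v/c ≈ (mc²)²/(2E²), so an arrival spread Δt over a travel time T bounds the mass at mc² ≲ E·√(2Δt/T). Here is a back-of-the-envelope sketch with our own rough inputs (an assumed ~20 MeV typical neutrino energy), not the published analysis:

```python
import math

# Back-of-the-envelope sketch: bounding the neutrino mass from the SN 1987A
# arrival spread. The ~20 MeV typical energy is an assumed, illustrative value.
SECONDS_PER_YEAR = 3.156e7

T = 165_000 * SECONDS_PER_YEAR  # travel time: ~165,000 years, in seconds
dt = 13.0                       # observed arrival spread, in seconds
E_eV = 20.0e6                   # assumed typical neutrino energy, in eV

# 1 - v/c ~ (m c^2)^2 / (2 E^2)  =>  m c^2 <~ E * sqrt(2 * dt / T)
m_limit_eV = E_eV * math.sqrt(2 * dt / T)
print(f"m c^2 <~ {m_limit_eV:.0f} eV")  # tens of eV (electron: 511,000 eV)
```

Tens of eV against the electron’s 511,000 eV is the same order as the 1/30,000th bound quoted above.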
Even today, more than 30 years later, we can examine this supernova remnant and see how it’s evolved.
The outward-moving shockwave of material from the 1987 explosion continues to collide with previous ejecta from the formerly massive star, heating and illuminating the material when collisions occur. A wide variety of observatories continue to image the supernova remnant today. (NASA, ESA, AND R. KIRSHNER (HARVARD-SMITHSONIAN CENTER FOR ASTROPHYSICS AND GORDON AND BETTY MOORE FOUNDATION) AND P. CHALLIS (HARVARD-SMITHSONIAN CENTER FOR ASTROPHYSICS))
The scientific importance of this result cannot be overstated. It marked the birth of neutrino astronomy, just as the first direct detection of gravitational waves from merging black holes marked the birth of gravitational wave astronomy. It was the birth of multi-messenger astronomy, marking the first time that the same object had been observed in both electromagnetic radiation (light) and via another method (neutrinos).
It showed us the potential of using large, underground tanks to detect cosmic events. And it causes us to hope that, someday, we might make the ultimate observation: an event where light, neutrinos, and gravitational waves all come together to teach us all about the workings of the objects in our Universe.
The ultimate event for multi-messenger astronomy would be a merger of either two white dwarfs or two neutron stars that was close enough. If such an event occurred in near-enough proximity to Earth, neutrinos, light, and gravitational waves could all be detected. (NASA, ESA, AND A. FEILD (STSCI))
Most cleverly, it resulted in a renaming of KamiokaNDE. The Kamioka Nucleon Decay Experiment was a total failure, so KamiokaNDE was out. But the spectacular observation of neutrinos from SN 1987A gave rise to a new observatory: KamiokaNDE, the Kamioka Neutrino Detector Experiment! Over the past 30+ years, this has now been upgraded many times, and multiple similar facilities have popped up all over the world.
If a supernova were to go off today, in our own galaxy, we would be treated to upwards of 10,000 neutrinos arriving in our detector. All of them, combined, have further constrained the lifetime of the proton to now be greater than around 10³⁵ years, but that’s not why we build them. Whenever a high-energy cataclysm occurs, neutrinos speed through the Universe. With our detectors online, neutrino astronomy is alive, well, and ready for whatever the cosmos sends our way.
The giant elliptical galaxy at the center of galaxy cluster Abell S1063 is much larger and more luminous than the Milky Way is, but many other galaxies, even smaller ones, will outshine it. (NASA, ESA, and J. Lotz (STScI))
The brightest galaxies of all neither have the most stars nor the biggest black holes. Here’s how to solve the mystery.
With some 400 billion stars burning steadily, the Milky Way is just a typical galaxy in the Universe.
The SDSS view in the infrared — with APOGEE — of the Milky Way galaxy as viewed towards the center. The galaxy contains some 400 billion stars; infrared wavelengths are the best for viewing as many of them as possible, since they pass through the otherwise light-blocking dust. (Sloan Digital Sky Survey)
Many galaxies are larger, containing tens or hundreds of times as many stars.
The giant elliptical near the center of the Coma Cluster, NGC 4874 (at right), is typical of the largest, brightest galaxies found at the centers of the most massive galaxy clusters. Its stars are primarily older and redder, with only a few populations of bluer stars found sparsely inside. (ESA/Hubble & NASA)
But there are galaxies that are intrinsically brighter because they’re active, irrespective of their size.
Galaxies undergoing massive bursts of star formation can outshine even much larger, typical galaxies. M82, the Cigar Galaxy, is gravitationally interacting with its neighbor (not pictured), causing this burst of active, new star formation, making it much brighter than a typical galaxy of its size and mass. (NASA, ESA, and The Hubble Heritage Team (STScI/AURA))
When new stars form en masse, the most massive ones can shine up to millions of times brighter than a Sun-like star.
The largest group of newborn stars in our Local Group of galaxies, cluster R136, contains the most massive stars we’ve ever discovered: over 250 times the mass of our Sun for the largest. The brightest of the stars found here are more than 8,000,000 times as luminous as our Sun. (NASA, ESA, and F. Paresce, INAF-IASF, Bologna, R. O’Connell, University of Virginia, Charlottesville, and the Wide Field Camera 3 Science Oversight Committee)
Galactic mergers trigger new waves of star formation, and can also activate supermassive black holes at the centers of these galaxies.
The most distant X-ray jet in the Universe, from quasar GB 1428, is approximately the same distance and age, as viewed from Earth, as quasar S5 0014+81, which houses possibly the largest known black hole in the Universe. These distant behemoths are thought to be activated by mergers or other gravitational interactions. (X-ray: NASA/CXC/NRC/C.Cheung et al; Optical: NASA/STScI; Radio: NSF/NRAO/VLA)
An active, supermassive black hole will accelerate nearby matter to relativistic speeds, creating bright jets of multiwavelength light.
An illustration of an active black hole, one that accretes matter and accelerates a portion of it outwards in two perpendicular jets, is an outstanding descriptor of how quasars work. (Mark A. Garlick)
The brightest ones, the quasars, are all thought to be housed in galaxies, though many remain unobserved.
This artist’s rendering shows a galaxy being cleared of interstellar gas, the building blocks of new stars. Winds driven by a central black hole are responsible for this, and may be at the heart of what’s driving a number of active, ultra-distant galaxies. (ESA/ATG Medialab)
This artist’s concept depicts the current record holder for the most luminous galaxy in the universe. The galaxy, named WISE J224607.57–052635.0, is erupting with light equal to more than 300 trillion suns. It was discovered by NASA’s Wide-Field Infrared Survey Explorer, or WISE. The galaxy is smaller than the Milky Way, yet puts out 10,000 times more energy. (NASA / JPL-Caltech)
Many galaxies, particularly young and dusty ones, emit most of their energy in the infrared portion of the spectrum. If we want to find the brightest galaxies of all, we’ll need a next-generation infrared space telescope. The Fireworks galaxy, from NASA’s Spitzer space telescope, is a local example of a predominantly infrared galaxy.(NASA / JPL-Caltech / SSC / R. Kennicutt et al.)
This artist’s rendering shows a galaxy called W2246–0526, the most luminous galaxy known. New research suggests there is turbulent gas across its entirety, much of which is being expelled. This is the first example of its kind. (NRAO/AUI/NSF; Dana Berry / SkyWorks; ALMA (ESO/NAOJ/NRAO))
Although its light comes to us from when the Universe was just 10% of its current age, and the galaxy is even smaller than our own, it outshines them all.
An ultra-distant quasar showing plenty of evidence for a supermassive black hole at its center. How that black hole got so massive so quickly is a topic of contentious scientific debate, but mergers of smaller black holes formed in early generations of stars might create the necessary seeds. Many quasars even outshine the most luminous galaxies of all. (X-ray: NASA/CXC/Univ of Michigan/R.C.Reis et al; Optical: NASA/STScI)
This image shows SDSS J0100+2802 (center), the brightest quasar in the early Universe. Its light comes to us from when the Universe was only 0.9 billion years old, versus the 13.8 billion year age we have today. (Sloan Digital Sky Survey)
Mostly Mute Monday tells the astronomical story of an object, class, or phenomenon in the Universe in visuals, images, and no more than 200 words. Talk less, smile more.
A planet that is a candidate for being inhabited will no doubt experience catastrophes and extinction events on it. If life is to survive and thrive on a world, it must possess the right intrinsic and environmental conditions to allow it to be so. (NASA Goddard Space Flight Center)How ‘special’ are we for life to have survived and thrived the way it did?
With hundreds of billions of stars in our galaxy, many hosting Earth-sized planets at the right distance for liquid water on their surfaces, the chances for life all throughout the Milky Way appear to be great. At least, that’s what we assume. But isn’t it possible that the conditions we have at our location make us very special as far as life surviving and thriving the way it has here on Earth? That’s what Tayte Taliaferro wants to know, as she asks:
[W]hat would happen if our solar system had formed a little farther up the arm of the galaxy? What would happen if we were at the tip of the arm? What if, theoretically, instead of the humongous black hole in the center of our galaxy, our solar system was there? Would there be major climate difference[s]? Would we be able to survive?
Let’s look at how different things would be.
An illustration of a protoplanetary disk, where planets and planetesimals form first, creating ‘gaps’ in the disk when they do. The outer disk provides the material that winds up creating the mantles, crusts, atmospheres and oceans of planets like ours. (NAOJ)
Here in our Solar System, we’re relatively well-informed about how it all broke down over the past 4.5 billion years. A molecular cloud of gas with a certain amount of enrichment — containing about 2% heavy elements by mass, along with ~28% helium and ~70% hydrogen — collapsed, giving rise to new stars. One of those would be our Sun, which formed a protoplanetary disk around it, like practically all stars do.
Over millions to tens-of-millions of years, the hot Sun boiled off the material in the inner disk, while the outer, cooler material then fell in and accreted around the pre-existing cores. The most massive, giant worlds held onto large amounts of the lightest elements (hydrogen and helium), while the smaller, rocky worlds did not. Gravitational interactions did the rest, and determined the Solar System we arrived at today.
There are many properties about Earth and our Solar System that appear special, but they may not be necessary for life. Unlike all the other rocky planets in our Solar System, Earth has a giant moon, which causes the tides and keeps our axial tilt stable. Unlike many other Solar Systems, ours has a large gas giant — Jupiter — just slightly beyond the location where our asteroid belt exists. And unlike the majority of stars in the galaxy, we’re located on the outskirts of a spur of a spiral arm, some 25,000 light years away from the galactic center.
The structure of our Milky Way has been mapped out fairly well, including the position of our Sun. It is presently unknown which stars and regions of the galaxy are capable of supporting life. (NASA/JPL-Caltech/R. Hurt; Wikimedia Commons user Cmglee)
For 4.5 billion years on Earth, life has continued to survive and evolve, developing more complexity and diversity, with more information encoded in its DNA. We’ve made it through a large number of mass extinction events, most of which have unknown or only speculatively known causes. Although anywhere from 30% to perhaps 70% of the species on our world have been wiped out at various times, most recently from a giant asteroid strike just 65 million years ago, life on Earth has never faltered. As time has continued on, so has the presence of biological activity on our planet.
The percentage of species that have gone extinct during a variety of time intervals. The largest known extinction is the Permian-Triassic boundary some 250 million years ago, whose cause is still unknown. The most recent extreme mass extinction, 65 million years ago, caused about 30% of the world’s species to go extinct. (Wikimedia Commons user Smith609, with data from Raup & Smith (1982) and Rohde and Muller (2005))
Of all the traits that Earth has, though, which ones are absolutely necessary for life? And which ones would lead to a planet where the history of life told a different story than ours, but where everything was still possible?
Until we actually find life beyond Earth, in planets that lie beyond our Solar System, questions like this will inevitably be based in speculation. But this isn’t mere guesswork; these are the theoretical statements we can make using the best science we have available to us at the present time. Based on everything we know, it seems like the conditions that make life possible are a lot more diverse and flexible than most people would expect.
When the Earth’s north pole is maximally tilted away from the Sun, it’s maximally tilted towards the full Moon, on the opposite side of the Earth. The Moon stabilizes our orbit but also slows the Earth’s rotation. It is unknown whether such a moon is required for a planet to develop or sustain life. (National Astronomical Observatory ROZHEN)
Take Earth’s large moon, for example. The gravitational forces from it keep our planet rotating on the same axis over time. Our present axial tilt is 23.5 degrees, but this will vary over very long timescales between 22.1° and 24.5°. A world like Mars, on the other hand, has almost the same axial tilt as Earth: around 25°. But over tens of millions of years, this will vary by ten times as much as it does on Earth: from a minimum of 13° to a maximum of 40°.
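The “ten times as much” comparison can be checked with quick arithmetic, using only the obliquity ranges quoted above:

```python
# Obliquity (axial tilt) ranges quoted in the text, in degrees:
earth_tilt_range = 24.5 - 22.1   # Earth varies between 22.1 and 24.5 degrees
mars_tilt_range = 40.0 - 13.0    # Mars varies between 13 and 40 degrees

# How many times larger is Mars's variation than Earth's?
ratio = mars_tilt_range / earth_tilt_range
```

The ratio works out to a little over 11, consistent with the "ten times as much" figure in the text.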
This results in huge variations in the climate at various latitudes on Mars, far bigger than any ice age will deliver on Earth. But so long as life can either survive long-term temperature changes or migrate to more temperate climate zones, this shouldn’t be a dealbreaker. Interestingly, the tidal forces from our Moon have also slowed Earth’s rotation, lengthening our day from ~8 hours to 24 hours over the past four billion years. This doesn’t seem to have affected life at all.
The asteroids present in the main belt and the Trojan asteroids around Jupiter may be shepherded by the giant planet, but it is still unknown whether Jupiter causes more or fewer asteroids to cross Earth’s path than a solar system without such a gas giant would see.(Nature)
This is a very similar deal to what happens with Jupiter in our Solar System. Sure, the conventional wisdom is that Jupiter “clears out” the asteroid belt, making it far less likely that an asteroid will collide with Earth. But there’s actually a lot of debate surrounding this issue. For example, consider the following question: does the gravitational presence of Jupiter increase or decrease the probability that an asteroid will be sent our way? Jupiter acts like a perturbing force, randomly imparting an additional velocity to whatever comes near it. Many asteroids will be kicked out, but many otherwise stable asteroids could become potentially hazardous. In the cosmic equation for life, we’re actually unsure whether this is a net positive or negative.
A map of star density in the Milky Way and surrounding sky, clearly showing the Milky Way, large and small Magellanic Clouds, and if you look more closely, NGC 104 to the left of the SMC, NGC 6205 slightly above and to the left of the galactic core, and NGC 7078 slightly below. All told, the Milky Way contains some 200–400 billion stars over its disk-like extent, with the Sun located some 25,000 light years from the center. (ESA/GAIA)
In addition, there is a huge debate about which stars can support life. Not only must they not be too massive and short-lived, but they may also need to be more massive than a certain threshold. Most stars — about 80% of them — are red dwarf stars. With low luminosities, they tidally lock their planets quickly, and emit frequent, large flares. Is life possible around them, or do we need a more massive, Sun-like star?
And what about our position in the galaxy? Some things we can speak sensibly about, like the presence and abundance of heavy elements. In order to have rocky planets with ingredients conducive to life on them, we believe that enough heavy elements need to be present. Without those elements, we could only have gas giants, and we couldn’t have the diversity of carbon-based compounds we need to create life.
A multiwavelength view of the galactic center shows stars, gas, radiation and black holes, among other sources. There is a tremendous amount of material there, including the heavy elements and organic compounds that are the necessary precursors to life. They must exist in great enough abundances, however, or life will be an impossibility. (NASA/ESA/SSC/CXC/STScI)
But what is the threshold there? Do we need the full abundance of heavy elements to make it work? Would half the abundance still get the job done? What about 10%? 1%? We can map out the abundance of heavy elements — what astronomers call metallicity — relative to a star’s position in our galaxy. What we find, perhaps surprisingly, is that as long as the stars lie close to the plane of the Milky Way’s disk and not too close or too far from the center, they’ll be more-or-less like we are. The right balance of heavy elements, assuming you need to cross a certain threshold to make life possible, is actually found in most of the stars being created in the Milky Way today.
The relationship between where stars are located in the Milky Way and their metallicity, or the presence of heavy elements. Stars within about 3000 light years of the central disk of the Milky Way, over a distance range of tens of thousands of light years, have extremely Solar System-like abundances of heavy elements. (Zeljko Ivezic/University of Washington/SDSS-II Collaboration)
Still, there ought to be locations where conditions are simply too violent to support life. A star that’s too massive, perhaps 50% more massive than our Sun, wouldn’t live long enough for life to reach the complexity it’s reached here on Earth. An inhabited planet that’s too close to a violent cataclysmic event — like a supernova or gamma-ray burst — could have life on it extinguished, although this is debatable, as life could potentially survive it. And a place where the density of stars was too great would mean that a planet might get ejected from its home Solar System or otherwise have its orbit catastrophically disturbed. The odds of such an ejection are very low where we are, but would be enormously increased at the galactic center.
At the centers of galaxies, there exist stars, gas, dust, and (as we now know) black holes, all of which orbit and interact with the central supermassive presence in the galaxy. The masses here not only respond to curved space, they also curve space themselves, and mutual gravitational interactions, along with ejected stars and planets, are extremely common. (ESO/MPE/Marc Schartmann)
In order for life to succeed over a time period of billions of years, we believe we need three major ingredients: for life to begin, for a planet’s conditions to remain stable enough for life to continue, and for the planet to avoid events that result in 100% extinction. It’s very easy to imagine a planet, like Mars, perhaps, where life began. But if the planetary conditions changed to make it unsuitable for life, or if a catastrophe came along to cause the extinction of every living thing, we couldn’t have an Earth-like world.
Cataclysmic events occur throughout the galaxy and across the Universe, from supernovae to active black holes to merging neutron stars and more. These may make some dense environments full of stars harsh on planets that may deign to develop life, but to rule out the possibility completely would require some evidence beyond what we have at present. (J. Wise/Georgia Institute of Technology and J. Regan/Dublin City University)
Yet, for all of it, the odds are only against the densest and sparsest galactic environments. The galactic center is populated with young, massive stars where life is most threatened; the sparsest galactic outskirts are where life is unlikely to begin in the first place. To the best of our knowledge, once life begins on a world and starts to get under a planet’s skin, it becomes very difficult to extinguish.
We are absolutely certain that the conditions we’ve had on Earth since its inception have led to a thriving biosphere, but we speculate that very different conditions could still have led to a similar outcome. In the great cosmic equation, don’t bet against life thriving and surviving even in a tremendous diversity of environments. After all, a rainforest, a hydrothermal vent, and the snows of Antarctica are all teeming with life. An alien planet might not be right for humans, but it might be just right for the aliens that grew up there.
One of the most intriguing, and least resource-intensive, ideas for searching for life in Enceladus’ ocean is to fly a probe through the geyser-like eruption, collecting samples and analyzing them for molecules that are the products of life. (NASA / Cassini-Huygens mission / Imaging Science Subsystem)Finding the ingredients for life is a very different prospect than finding the products of life.
Perhaps the greatest quest in science today is to find life that originated beyond Earth. While searches for extraterrestrial intelligence have all come up empty, and our astronomical capabilities have not quite advanced to the stage where we can sniff it out in the atmospheres of planets around other stars, there’s a close-to-home possibility to consider. If one of the worlds in our Solar System contains life — past or present — we can discover it with today’s technology.
Many possibilities abound for where life might exist today, including beneath the surface of Mars, in the cloud-tops of Venus, and in the sub-surface ocean of a world like Jupiter’s moon, Europa. But one world in the Solar System stands out: Saturn’s moon Enceladus. With a liquid water ocean beneath its ice and geysers that shoot off hundreds of miles above the surface, the possibility of encountering alien life has never been more accessible.
This is a false-color image of jets (blue areas) in the southern hemisphere of Enceladus taken with the Cassini spacecraft narrow-angle camera on Nov. 27, 2005. (NASA/JPL/Space Science Institute)
Recently, Enceladus made headlines because complex, organic molecules were discovered in its geyser plumes, leading many to speculate that there was life deep in its sub-surface ocean. This speculation may have some merit to it, after all. The carbon compounds that make up the raw ingredients for life exist in many locations in the Solar System: on Earth, in comets and asteroids, on other planets, and on a large number of moons.
The elements that life requires exist in great abundance, but Enceladus is also special for having copious amounts of liquid water. In addition, its close proximity to Saturn means that the tidal forces on Enceladus are huge: there should be energy released at the bottom of the ocean. Those three ingredients — the carbon compounds essential to life, liquid water, and a heat source — are all life requires to thrive at the bottom of Earth’s oceans.
Deep under the sea, around hydrothermal vents, where no sunlight reaches, life still thrives on Earth. How to create life from non-life is one of the great open questions in science today, but if life can exist down here, perhaps undersea on Europa or Enceladus, there’s life, too. (NOAA/PMEL Vents Program)
So if it happened here on Earth, why couldn’t it happen on another world as well? The answer, of course, is that it could happen there, and it could have happened there billions of years ago. Life could very easily be surviving and thriving beneath the icy crust of this distant, Saturnian moon.
But that’s not what we found. We didn’t find molecules that indicate they’re the products of life processes; the molecules we found are the raw ingredients for life. There’s a tremendous difference between the two, and finding raw ingredients on Enceladus no more means there’s life on that world than finding sugar, flour, eggs, milk, and butter in your house means there’s a successfully baked cake there.
Artist’s impression of a young star surrounded by a protoplanetary disk. There are many unknown properties about protoplanetary disks around Sun-like stars, including the elemental segregation of various types of atoms. (ESO/L. Calçada)
In fact, it’s arguable that a bigger surprise would be not finding the ingredients for life! Recent discoveries have given us a solid end-to-end picture for how planets and moons form in our Solar System. We fully expect that our Sun, like all stars, formed with a protoplanetary disk around it. Early imperfections in the disk may have formed the cores of the major worlds, while the extreme temperatures blew off most of the gaseous materials. A few million years later, the cooler, less dense material from the outer Solar System migrated inwards, beefing up the planets and growing them accordingly. The Earth’s mantle, representing some 84% of the Earth by mass, is thought to surround a heavy metal core for exactly this reason.
Although the Earth, by radius, may be mostly composed of its inner and outer cores, the mantle, formed later and composed of similar material to the majority of what’s elsewhere in the Solar System, makes up over 80% of our world by both mass and volume.(Wikimedia Commons user CharlesC)
The same elements, molecules and compounds that the majority of Earth is made of also compose the Moon, Mars, asteroids, and many other rocky bodies in our Solar System. When meteorites fall to Earth and survive, we can examine them and see what’s inside. One classic example is the Murchison meteorite, a large rock which fell in Australia nearly 50 years ago. When we sliced it open and examined what it was made of, we found a whole slew of complex, organic molecules inside. In addition to the 20 amino acids useful in life processes here on Earth, we found more than 60 other unique ones. Not only are the ingredients for life common elsewhere in the Solar System and Universe, but potential ingredients that aren’t found in life here abound wherever we look.
Scores of amino acids not found in nature are found in the Murchison Meteorite, which fell to Earth in Australia in the 20th century. The fact that 80+ unique types of amino acids exist in just a plain old space rock might indicate that the ingredients for life, or even life itself, began not on a planet at all. (Wikimedia Commons user Basilicofresco)
There’s a whole family of meteorites that are Murchison-like, indicating that it’s not a one-off. Many of these same organic compounds — in addition to others like sugars, ethyl formate, and polycyclic aromatic hydrocarbons — are also found in nebulae, neutral gas clouds, as outflows from young stars, and in interstellar space.
The ingredients for life, including but not limited to the ones we got so excited about finding welling up from within Enceladus, are literally found everywhere that we know to look.
The chemical pathways of various carbon-based, organic cations discovered by Cassini in the geyser plumes of Enceladus. These are closely related to the organic molecules found everywhere we look in the galaxy. (F. Postberg et al., Nature, 558, L564 (2018))
What we found on Enceladus is remarkable in a number of ways. It truly is the first time we found a world beyond our own with the organic materials thought to be required for life’s beginning, embedded in a liquid water ocean, where an energy source from within the world’s interior can catalyze and perhaps even start biochemical processes. While we fully expect that other worlds, like Europa and perhaps even Triton and Pluto, will have similar stories, Enceladus is the first such world to get there.
Illustration of the interior of Saturn’s moon Enceladus showing a global liquid water ocean between its rocky core and icy crust. Thickness of layers shown here is not to scale, but for the first time, the plumes of water emanating from Enceladus have revealed traces of the ingredients for life. (NASA / JPL-Caltech)
But we haven’t found life, nor have we found chemical evidence for the products of biological processes. All we’ve truly found is a combination of basic, organic chemical ingredients that are under no obligation to have a biological origin. Not every cup of flour will become a cake, and not every carbon-based molecule will lead to the origin of life. As spectacular as it would be if we discovered alien life, what we’ve found on Enceladus isn’t even a hint that it exists. All we have is evidence of another opportunity for life as we know it. To take that next great leap, we’ll need to venture into those plumes themselves, and find whether the ingredients have created something wonderful, or whether our search for alien life beyond Earth will need to take us to another destination.
An activist with a mask of Kim Jong-un (L), and another with a mask of U.S. President Donald Trump (R), march with a model of a nuclear rocket during a demonstration against nuclear weapons in Berlin, Germany. The event was organized by peace advocacy organizations including the International Campaign to Abolish Nuclear Weapons (ICAN), which won the Nobel Prize for Peace in 2017. (Adam Berry/Getty Images)North Korea’s statements, actions, and the physics of how to do it all point to the same terrifying conclusion.
There are few things in this world that have the capability to destroy as much as a nuclear bomb. While history looks back on the 1945 bombings of Hiroshima and Nagasaki with horror, it’s vital to remember that, in terms of energy yields, these fission bombs were less than 0.1% as powerful as modern hydrogen bombs.
During the 21st century, North Korea has performed six separate nuclear tests, all verified by the incontrovertible science of seismology. The most recent, in 2017, yielded enough energy to kill more than 2 million people if it were detonated in a populous area like Seoul, South Korea. Despite multiple promises to denuclearize over the years, the nuclear threat looms larger than ever. Worst of all, there’s now a clear path for North Korea to develop a hydrogen bomb.
Nuclear weapon test Mike (yield 10.4 Mt) on Enewetak Atoll. The test was part of the Operation Ivy. Mike was the first hydrogen bomb ever tested. North Korea may have H-bomb capabilities by the end of 2019 if nothing is done to mitigate the current ongoing developments.(National Nuclear Security Administration / Nevada Site Office)
As the weaponisation of nuclear weapons has been verified, it is not necessary for us to conduct any more nuclear tests or test launches of mid- and long range missiles or ICBMs.
This is basically an admission of what scientific observation has already taught us: in addition to their ballistic missile technology, we know that the seismic events that have been occurring at the Earth’s surface in North Korea are, in fact, nuclear bombs.
Thanks to the sensitivity of the monitoring stations, the depth, magnitude, and location of the blast that caused the Earth to shake on January 6th, 2016, can be well-established. All of the six North Korean quakes from 2006–2017 are consistent with a nuclear explosion.(United States Geological Survey, via http://earthquake.usgs.gov/earthquakes/eventpage/us10004bnm#general_map)
North Korea has conducted six nuclear tests, the first in 2006. All were conducted in the depths of Mount Mantap, a nondescript granite peak in the remote and heavily forested Hamgyong mountain range. Since North Korea is the only country in the world that still conducts nuclear weapons tests, its Punggye-ri site underneath Mount Mantap is also the world’s only active nuclear testing site. The letters on the screen, broadcast in South Korea, read: “Hydrogen bomb test.” (AP Photo/Ahn Young-joon)
Continued work at the Yongbyon facility should not be seen as having any relationship to North Korea’s pledge to denuclearise. The North’s nuclear cadre can be expected to proceed with business as usual until specific orders are issued from Pyongyang.
Unfortunately, with the stage that North Korea has reached in weapons development, a hydrogen bomb (or fusion bomb) is feasibly right on their technological horizon.
Once uranium has been extracted from naturally-occurring ore, it contains less than 1% U-235. A photo of yellow cake uranium, a solid form of uranium oxide produced from uranium ore. Yellow cake must be processed further to become reactor-grade uranium, which is 3–5% U-235. Weapons-grade requires approximately 85%+ U-235. (Nuclear Regulatory Commission / US Government)
There are two paths to a fission bomb: through enriched uranium and through the production of plutonium. Enriching uranium is difficult and expensive, and involves a type of effort that is normally measured in quantities called separative work units (SWUs). Put simply, uranium comes in a couple of different varieties (or isotopes), and you have to separate the fissile uranium (U-235, which is the minority of uranium) from the non-fissile uranium (U-238: the majority).
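The separative work units mentioned here follow from a standard textbook formula: each stream of material (feed, product, tails) carries a "value" V(x) = (2x − 1)·ln(x/(1 − x)) that depends on its U-235 fraction x, and the SWU requirement is the value of what comes out minus the value of what went in. A minimal sketch, assuming typical published assay figures (0.711% U-235 natural feed, 0.3% tails) rather than numbers specific to any one program:

```python
import math

def value_fn(x):
    """Separative potential V(x) for a U-235 assay fraction 0 < x < 1."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

def swu_per_kg_product(x_product, x_feed=0.00711, x_tails=0.003):
    """SWUs and feed mass needed per kilogram of enriched product.

    Uses the standard two-stream mass balance; the default feed and
    tails assays are common textbook values, labeled here as assumptions.
    """
    feed = (x_product - x_tails) / (x_feed - x_tails)  # kg feed per kg product
    tails = feed - 1.0                                 # kg tails per kg product
    swu = (value_fn(x_product)
           + tails * value_fn(x_tails)
           - feed * value_fn(x_feed))
    return swu, feed

# Reactor-grade (~4% U-235) vs. weapons-grade (~90% U-235):
swu_reactor, feed_reactor = swu_per_kg_product(0.04)
swu_weapons, feed_weapons = swu_per_kg_product(0.90)
```

The asymmetry is striking: a kilogram of reactor-grade product takes only a few SWUs, while a kilogram of weapons-grade material takes roughly 200 SWUs and well over 200 kilograms of natural uranium feed, which is why enrichment capacity is so closely monitored.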
Natural uranium is under 1% U-235, even after refinement. Reactor-enriched uranium rises to ~3–4%. But weapons-grade requires ~90% U-235, which the U.S. achieved via a cascade of gas centrifuges, as shown in this 1984 photo. (U.S. Department of Energy)
The experimental nuclear reactor RA-6 (Republica Argentina 6), in operation, showing the characteristic Cherenkov radiation from emitted particles that travel faster than light does in water. The reactions also produce copious amounts of antineutrinos, but most worryingly, the heavy hydrogen isotopes produced as by-products could be used for extremely nefarious purposes. (Centro Atomico Bariloche, via Pieck Darío)
North Korea has a nuclear reactor, so we can assume that the standard process of creating the 3–5% enriched uranium is at play there. For those who want the specifics, that means:
mine the uranium ore,
extract the uranium from the ore,
convert the uranium into uranium hexafluoride,
enrich the uranium-containing compound to reactor levels,
and run your nuclear reactor.
This process won’t get you anywhere near the 85% you need to make a uranium bomb. But there is a second path to a fission bomb: through the production of plutonium. And an unmonitored, running nuclear reactor can produce exactly that.
Uncapped fuel stored underwater in K-East Basin. This is spent nuclear fuel at the Hanford site. Potentially, if the fuel was run for short amounts of time, this could be processed into reactor-grade Plutonium… or even something more. (U.S. Department of Energy)
After U-235 is fissioned in a reactor, a slew of additional products comes out, including a number of highly radioactive elements not found in nature. Four of the products are different isotopes of plutonium: Pu-238, Pu-239, Pu-240, and Pu-241. If you’re concerned about a nuclear weapon, it’s the Pu-239 that you have to worry about.
Unfortunately, Pu-239 is also the first new thing you produce when you run a uranium-based nuclear reactor. Nuclear fission of U-235 creates free neutrons, and if U-238 (the majority of uranium) absorbs one, it quickly becomes Pu-239 via two successive beta decays. So long as you produce a large relative amount of Pu-239 to Pu-240 (which requires a second neutron capture), you can create the material you need for a fission bomb.
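Written out explicitly, the chain by which reactor neutrons turn U-238 into Pu-239 looks like this:

```latex
% Neutron capture on U-238, followed by two beta decays:
^{238}\mathrm{U} + n \;\rightarrow\; {}^{239}\mathrm{U}
  \;\xrightarrow{\beta^{-},\; t_{1/2}\,\approx\, 23.5\ \text{min}}\; {}^{239}\mathrm{Np}
  \;\xrightarrow{\beta^{-},\; t_{1/2}\,\approx\, 2.36\ \text{d}}\; {}^{239}\mathrm{Pu}
```

Both intermediate half-lives are short on reactor timescales, which is why Pu-239 builds up in any uranium-fueled reactor almost from the moment it starts running.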
By simply adding neutrons to U-238, an inevitable consequence of leaving your uranium fuel in a nuclear reactor, many isotopes of heavy elements are produced, including Pu-239. If Pu-240 is produced in small enough amounts, this process could be used iteratively to create super-weapons-grade plutonium. (JWB at English Wikipedia)
Even though there’s no way to separate out different isotopes of plutonium, you can separate plutonium out of the other elements, such as uranium and curium. Run your uranium-based reactor for a short time, separate out the mostly Pu-239 plutonium from the rest of the fuel, put the uranium back in the reactor, repeat, etc., and you’ll wind up with a store of highly enriched plutonium. If you have less than 7% Pu-240 in your plutonium, it’s weapons-grade material; if it’s less than 3%, it’s super-weapons-grade.
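The grading thresholds quoted above can be captured in a small helper; the function name and structure here are purely illustrative:

```python
def plutonium_grade(pu240_fraction):
    """Classify plutonium by its Pu-240 content, using the thresholds
    quoted in the text: under 3% Pu-240 is super-weapons-grade, under 7%
    is weapons-grade, and anything above that is reactor-grade."""
    if pu240_fraction < 0.03:
        return "super-weapons-grade"
    if pu240_fraction < 0.07:
        return "weapons-grade"
    return "reactor-grade"
```

For example, plutonium containing 5% Pu-240 would count as weapons-grade, while a typical long-burnup reactor output (20% or more Pu-240) would not.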
A photo of Kim Jong-Un, released just weeks before the 2016 North Korean nuclear detonation. It shows the nation’s leader in an undisclosed location in North Korea. (KNS/AFP/Getty Images)
Although we have no proof of this, the recent nuclear tests indicate that North Korea has at least weapons-grade material, and possibly super-weapons-grade. To build a hydrogen (fusion) bomb, all you need is a fission bomb that properly surrounds and compresses a pellet of fusible material when it detonates. The fusible material usually consists of two different isotopes of hydrogen: deuterium and tritium.
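The deuterium-tritium reaction at the heart of the fusion stage releases about 17.6 MeV per event, split between the helium nucleus and the neutron:

```latex
% D-T fusion, the energy source of the secondary stage:
^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\;
  {}^{4}\mathrm{He}\ (3.5\ \text{MeV}) + n\ (14.1\ \text{MeV})
```

Per unit mass of fuel, this dwarfs the energy release of fission, which is why fusion weapons can reach yields thousands of times those of the first fission bombs.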
Frighteningly, arguably the best way to produce tritium is to run a water-cooled nuclear reactor. North Korea has one; it has undergone testing already this year and is potentially slated for activation in 2019. This method of creating a fusion bomb has been around since the 1950s, and represents one of the single greatest existential threats to all of humanity.
The 1961 Tsar Bomba explosion was the largest nuclear detonation ever to take place on Earth, and is perhaps the most famous example of a fusion weapon ever created, with a yield far surpassing any other ever developed. (Andy Zeigert / flickr)
Although North Korea no longer has their longstanding nuclear test site available to them, they have all the ingredients and infrastructure to create a very powerful fission bomb, and have demonstrably done so in recent years. They are just one ingredient — an artificial and unstable isotope of hydrogen — away from having everything necessary for a hydrogen bomb: the most powerful destructive force ever unleashed by humanity.
“The letter that we are signing is very comprehensive, and I think both sides will be very impressed with the results . . . We’re going to take care of a very big and very dangerous problem for the world.”
Despite such assurances, there are no tangible results to point to that are leading us away from this foreseeable disaster. There is a clear scientific path to developing a nuclear fusion bomb, and North Korea has already shown they are 80% of the way there. It’s time to call upon our leaders to put a stop to the remaining steps before it’s too late.
There is a large suite of scientific evidence that supports the picture of the expanding Universe and the Big Bang. The entire mass-energy of the Universe was released in an event lasting less than 10^-30 seconds; the most energetic thing ever to occur in our Universe’s history. (NASA / GSFC)

13.8 billion years ago, our Universe as-we-know-it came into existence. Here’s what it was like.
Looking out at our Universe today, we not only see a huge variety of stars and galaxies both nearby and far away, we also see a curious relationship: the farther away a distant galaxy is, the faster it appears to move away from us. In cosmic terms, the Universe is expanding, with all the galaxies and clusters of galaxies getting more distant from one another over time. In the past, therefore, the Universe was hotter, denser, and everything in it was closer together.
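This distance-velocity relation is Hubble’s law: v = H0 × d. A minimal sketch in Python, assuming an illustrative present-day value of H0 ≈ 70 km/s/Mpc (a representative figure, not one quoted in this article):

```python
# Hubble's law: recession velocity grows linearly with distance.
# The value of H0 below is an assumed, representative figure.
H0 = 70.0  # Hubble constant, in km/s per megaparsec (Mpc)

def recession_velocity(distance_mpc: float) -> float:
    """Apparent recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

for d in (10, 100, 1000):
    print(f"{d:>5} Mpc  ->  {recession_velocity(d):>8.0f} km/s")
# 10 Mpc -> 700 km/s; 1000 Mpc -> 70000 km/s, already a sizable
# fraction of the speed of light.
```

The linearity is the whole point: double the distance, and the apparent recession speed doubles too, which is exactly the signature of uniform expansion rather than motion away from any special center.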
If we extrapolate back as far as possible, we’d come to a time before the first galaxies formed; before the first stars ignited; before neutral atoms or atomic nuclei or even stable matter could exist. The earliest moment at which we can describe our Universe as hot, dense, and uniformly full-of-stuff is known as the Big Bang. Here’s how it first began.
If you look farther and farther away, you also look farther and farther into the past. The earlier you go, the hotter and denser, as well as less-evolved, the Universe turns out to be. The earliest signals can even, potentially, tell us about what happened prior to the moments of the hot Big Bang. (NASA / STScI / A. Feild (STScI))
Some of you are going to read that last sentence and be confused. You might ask, “isn’t the Big Bang the birth of time and space?” Sure; that’s how it was originally conceived. Take something that’s expanding and of a certain size and age today, and you can extrapolate back to a time when it was arbitrarily small and dense. Go all the way down to a single point, and you’ll arrive at a singularity: the birth of space and time. But the modern picture adds an earlier chapter: a phase of cosmic inflation that preceded and set up the hot Big Bang.
An illustration of the early Universe as consisting of quantum foam, where quantum fluctuations are large, varied, and important on the smallest of scales. During inflation, these fluctuations get stretched across all scales in the Universe, reaching arbitrarily larger ones over time. (NASA/CXC/M.Weiss)
During inflation, the Universe is completely empty. There are no particles, no matter, no photons; just empty space itself. That empty space has a huge amount of energy in it, with the exact amount of energy slightly fluctuating over time. Those fluctuations get stretched to larger scales, while new, small-scale fluctuations are created on top of that. (We described what the Universe looked like during inflation previously.)
This continues as long as inflation goes on. But inflation will come to an end randomly, and not in all locations at once. In fact, if you lived in an inflating Universe, you’d likely see inflation come to an end in a nearby region, while the space between you and that region expanded exponentially. For a brief instant, you’d glimpse what happens at the start of a Big Bang before that region disappeared from view.
In an inflating Universe, the grid-like space you would visualize has tiny quantum fluctuations superimposed atop it, but is uniform and nondescript, simply expanding exponentially. When inflation ends, there should be a brief ‘window’ into a new Universe where the hot Big Bang takes place. (Pixabay user JohnsonMartin)
In an initially small region, perhaps no bigger than a soccer ball but perhaps much larger, the energy inherent to space gets converted into matter and radiation. The conversion process is relatively fast, taking roughly 10^-33 seconds, but not instantaneous. As the energy bound up in space itself gets converted into particles, antiparticles, photons and more, the temperature starts to rapidly rise.
Because the amount of energy that gets converted is so large, everything will be moving close to the speed of light. It will all behave as radiation; whether the particles are massless or massive doesn’t matter. This conversion process is known as reheating, and signifies when inflation comes to an end and the stage known as the hot Big Bang begins.
The analogy of a ball sliding over a high surface corresponds to inflation persisting, while the structure crumbling and releasing energy represents the conversion of energy into particles. (E. Siegel)
In terms of the expansion speed, you’ll witness a tremendous change. In an inflationary Universe, space expands exponentially, with more distant regions accelerating away as time goes on. But once inflation ends, the Universe reheats, and the hot Big Bang starts, more distant regions will recede from you more slowly as time goes on. From an outside perspective, the part of the Universe where inflation ends sees the expansion rate there drop, while the inflating regions surrounding it see no such drop.
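The contrast between the two expansion histories can be made concrete. As a toy sketch (arbitrary units, with an assumed inflationary rate H = 1, not figures from the article), take an exponential scale factor a(t) = e^(Ht) during inflation and a decelerating, radiation-dominated a(t) ∝ t^(1/2) after reheating. The relative expansion rate a′/a stays constant in the first case but falls with time in the second:

```python
H = 1.0  # assumed inflationary expansion rate, arbitrary units

def hubble_rate_inflation(t: float) -> float:
    """a(t) = exp(H*t) during inflation, so a'/a = H: constant forever."""
    return H

def hubble_rate_radiation(t: float) -> float:
    """a(t) = t**0.5 after reheating, so a'/a = 1/(2t): drops with time."""
    return 1.0 / (2.0 * t)

for t in (0.5, 1.0, 2.0, 4.0):
    print(f"t={t}: inflating a'/a = {hubble_rate_inflation(t):.3f}, "
          f"post-Big-Bang a'/a = {hubble_rate_radiation(t):.3f}")
```

A constant a′/a is exactly why still-inflating regions race away from any region where a hot Big Bang has begun: the latter’s expansion rate only ever drops.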
High-energy collisions of particles can create matter-antimatter pairs or photons, while matter-antimatter pairs annihilate to produce photons as well. Immediately after inflation ends, the Universe is filled with particles, antiparticles, and photons, which interact, annihilate, produce new particles, all as the Universe expands and cools. (Brookhaven National Laboratory / RHIC)
Probability-wise, it’s extremely likely that from the perspective of whatever region of inflating space you’re in prior to the Big Bang, you’ll see inflation end in nearby regions many times. These locations where inflation ends will quickly fill with matter, antimatter, and radiation, and expand more slowly than the still-inflating regions do.
These regions will expand away from all the other locations where inflation still goes on exponentially, meaning they will very quickly recede from view. In the standard inflationary picture, because of this expansion rate change, there’s virtually no chance that any two Universes, where separate hot Big Bangs occur, will ever collide or interact.
An illustration of multiple, independent Universes, causally disconnected from one another in an ever-expanding cosmic ocean, is one depiction of the Multiverse idea. In a region where the Big Bang begins and inflation ends, the expansion rate will drop, while inflation continues in between two such regions, forever separating them. (Ozytive / Public domain)
Finally, the region where we will come to live gets cosmically lucky, and inflation comes to an end for us. The energy that was inherent to space itself gets converted to a hot, dense, and almost uniform sea of particles. The only imperfections, and the only departures from uniformity, correspond to the quantum fluctuations that existed (and were stretched across the Universe) during inflation. The positive fluctuations correspond to initially overdense regions, while the negative fluctuations get converted into initially underdense regions.
The overdense, average density, and underdense regions that existed when the Universe was just 380,000 years old now correspond to cold, average, and hot spots in the CMB. (E. Siegel / Beyond The Galaxy)
We cannot observe these density fluctuations today as they were when the Universe first underwent the hot Big Bang. There are no visual signatures we can access from that early on; the first ones we’ve ever accessed come from 380,000 years later, after they’d undergone countless interactions. Even so, we can extrapolate back what the initial density fluctuations were, and we find something extremely consistent with the story of cosmic inflation. The temperature fluctuations imprinted on the first picture of the Universe, the cosmic microwave background, give us confirmation of how the Big Bang began.
The final prediction of cosmic inflation is the existence of primordial gravitational waves. It is the only one of inflation’s predictions to not be verified by observation… yet. (National Science Foundation (NASA, JPL, Keck Foundation, Moore Foundation, related) — Funded BICEP2 Program; modifications by E. Siegel)
What might be observable to us, however, are the gravitational waves left over from the end of inflation and the start of the hot Big Bang. The gravitational waves that inflation generates move at the speed of light in all directions, but unlike the visual signatures, no interactions can slow them down. They will arrive continuously, from all directions, passing through our bodies and our detectors. All we need to do, if we want to understand how our Universe got its start, is find a way to observe these waves either directly or indirectly. While many ideas and experiments abound, none have returned a successful detection so far.
The quantum fluctuations that occur during inflation get stretched across the Universe, and when inflation ends, they become density fluctuations. This leads, over time, to the large-scale structure in the Universe today, as well as the fluctuations in temperature observed in the CMB. (E. Siegel, with images derived from ESA/Planck and the DoE/NASA/ NSF interagency task force on CMB research)
Once inflation comes to an end, and all the energy that was inherent to space itself gets converted into particles, antiparticles, photons, etc., all the Universe can do is expand and cool. Everything smashes into one another, sometimes creating new particle/antiparticle pairs, sometimes annihilating pairs back into photons or other particles, but always dropping in energy as the Universe expands.
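That cooling follows a simple scaling: the radiation temperature falls inversely with the scale factor, T ∝ 1/a. A quick sketch, using the measured present-day CMB temperature of 2.725 K and the standard factor of roughly 1100 by which the Universe has expanded since the CMB was emitted (both well-known figures, but assumptions not quoted in this article):

```python
T_CMB_TODAY = 2.725  # kelvin: measured present-day CMB temperature

def radiation_temperature(a: float) -> float:
    """Radiation temperature when the Universe was a fraction `a` of its
    present size (a = 1 today); T scales as 1/a as the Universe expands."""
    return T_CMB_TODAY / a

# When the CMB was emitted (~380,000 years in), the Universe was
# roughly 1100 times smaller than it is today:
print(radiation_temperature(1 / 1100))  # ~3000 K: hot enough to ionize hydrogen
```

Run the same scaling further back, to ever-smaller values of a, and the temperature climbs without bound in the extrapolation; inflation is what cuts that extrapolation off at a finite, though enormous, maximum temperature.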
The Universe never reaches infinitely high temperatures or densities, but still attains energies that are perhaps a trillion times greater than anything the LHC can ever produce. The tiny seed overdensities and underdensities will eventually grow into the cosmic web of stars and galaxies that exist today. 13.8 billion years ago, the Universe as-we-know-it had its beginning. The rest is our cosmic history.