The quantum fluctuations that occur during inflation get stretched across the Universe, and when inflation ends, they become density fluctuations. This leads, over time, to the large-scale structure in the Universe today, as well as the fluctuations in temperature observed in the CMB. These new predictions are essential for demonstrating the validity of a fine-tuning mechanism. (E. SIEGEL, WITH IMAGES DERIVED FROM ESA/PLANCK AND THE DOE/NASA/NSF INTERAGENCY TASK FORCE ON CMB RESEARCH)

Some claim that inflation isn’t science, but it sure has made some incredibly successful scientific predictions.

So, you want to know how the Universe began? You’re not alone. Every other curious member of humanity, for as long as recorded history exists (and probably much longer), has wondered about exactly this question, “where does all this come from?” In the 20th century, science advanced to the point where a large suite of evidence pointed to a singular answer: the hot Big Bang.

Yet a number of puzzles arose that the Big Bang was unable to solve, and a theoretical add-on to the Big Bang was proposed as the ultimate cosmic solution: inflation. This December will mark 40 years since inflation was proposed by Alan Guth, and Paul Erlich wants to know how well inflation has stood the test of time, asking:

To what margin of error or what level of statistical significance would you say inflation has been verified?

The short answer is “better than most people think.” The long answer is even more compelling.

The redshift-distance relationship for distant galaxies. The points that don’t fall exactly on the line owe the slight mismatch to the differences in peculiar velocities, which offer only slight deviations from the overall observed expansion. The original data from Edwin Hubble, first used to show the Universe was expanding, all fit in the small red box at the lower-left. (ROBERT KIRSHNER, PNAS, 101, 1, 8–13 (2004))

The Big Bang is an incredibly successful theory. It began from just two simple starting points, and made an extrapolation from there. First, it insisted that the Universe be consistent with General Relativity, which is the theory of gravity we should use as our framework for building any realistic model of the Universe. Second, it demanded that we take seriously the astronomical observations that galaxies, on average, appear to be receding from us with speeds that are in direct proportion to their distances from us.

The simplest way to proceed is to let the data guide you. In the context of General Relativity, if you allow the Universe to be evenly (or roughly evenly) filled with matter, radiation, or other forms of energy, it will not remain static, but must either expand or contract. The observed redshift-distance relation can be directly explained if the fabric of space itself is expanding as time goes on.

The balloon/coin analogy of the expanding Universe. The individual structures (coins) don’t expand, but the distances between them do in an expanding Universe. This can be very confusing if you insist on attributing the apparent motion of the objects we see to their relative velocities through space. In reality, it’s the space between them that’s expanding. (E. SIEGEL / BEYOND THE GALAXY)

If this is the picture of the Universe you put together, it can carry some enormous consequences along for the ride. As the Universe expands, the total number of particles within it remains the same, but the volume increases. As a result, it gets less dense. Gravity pulls things into progressively larger-scale clumps with the passage of more time. And radiation — whose energy is defined by its wavelength — sees its wavelength stretch as the Universe expands; hence, it becomes cooler in temperature and lower in energy.

The huge idea of the Big Bang is to extrapolate this idea backwards in time, to higher energies, higher temperatures, greater densities, and a more uniform state.
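
To put rough numbers on that backward extrapolation, here is a minimal sketch (my own illustration, not something from the article; the present-day temperature and the example redshifts are assumed values) of the standard scaling: because every wavelength stretches along with space, the radiation temperature was higher in the past by a factor of (1 + z).

```python
# A minimal sketch (illustrative values assumed, not from the article): radiation
# cools as the Universe expands because every wavelength stretches with space, so
# running the expansion backwards, the temperature at redshift z was (1 + z) times
# higher than it is today.

T_TODAY_K = 2.725  # assumed present-day temperature of the leftover radiation

def radiation_temperature(z, t_today=T_TODAY_K):
    """Radiation temperature at redshift z, given its temperature today."""
    return t_today * (1.0 + z)

print(radiation_temperature(1))     # ~5.5 K: when the Universe was half its present scale
print(radiation_temperature(1100))  # ~3000 K: roughly the era when neutral atoms formed
```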

After the Big Bang, the Universe was almost perfectly uniform, and full of matter, energy and radiation in a rapidly expanding state. The Universe’s evolution at all times is determined by the energy density of what’s inside it. If it’s expanding and cooling today, however, it must have been denser and hotter in the distant past. (NASA / WMAP SCIENCE TEAM)

This led to three new predictions, in addition to the expanding Universe (which had already been observed). They were as follows:

  1. The earliest, hottest, densest times should allow for a period of nuclear fusion early on, predicting a specific set of abundance ratios for the lightest elements and isotopes even before the first stars form.
  2. As the Universe cools further, it should form neutral atoms for the first time, with the leftover radiation from those early times traveling unimpeded and continuing to redshift until the present, where it should be just a few degrees above absolute zero.
  3. And finally, whatever initial density imperfections are present should grow into a vast cosmic web of stars, galaxies, galaxy clusters, and cosmic voids separating them over the billions of years that have passed since those early stages.

All three predictions have been verified, and that’s why the Big Bang stands alone among theories of the Universe’s origins.

A visual history of the expanding Universe includes the hot, dense state known as the Big Bang and the growth and formation of structure subsequently. The full suite of data, including the observations of the light elements and the cosmic microwave background, leaves only the Big Bang as a valid explanation for all we see. As the Universe expands, it also cools, enabling ions, neutral atoms, and eventually molecules, gas clouds, stars, and finally galaxies to form. (NASA / CXC / M. WEISS)

But that doesn’t mean the Big Bang explains everything. If you extrapolate all the way back to arbitrarily high temperatures and densities — all the way back to a singularity — you wind up with a number of predictions that don’t pan out in reality.

We don’t see a Universe with different temperatures in different directions. But we should, since a region of space tens of billions of light-years to your left and another one tens of billions of light-years to your right should never have had time to exchange information since the Big Bang.

We don’t see a Universe with leftover particles that are relics from some arbitrarily hot time, like magnetic monopoles, despite the fact that they should have been produced in great abundance.

And we don’t see a Universe with any measurable degree of spatial curvature, despite the fact that the Big Bang has no mechanism to exactly balance energy density and spatial curvature from an extremely early time.

If the Universe had just a slightly higher density (red), it would have recollapsed already; if it had just a slightly lower density, it would have expanded much faster and become much larger. The Big Bang, on its own, offers no explanation as to why the initial expansion rate at the moment of the Universe’s birth balances the total energy density so perfectly, leaving no room for spatial curvature at all. Our Universe appears perfectly spatially flat. (NED WRIGHT’S COSMOLOGY TUTORIAL)

The Big Bang, on its own, offers no solution to these puzzles. It succeeds if we extrapolate back to a hot, dense, almost-perfectly-uniform early state, but it doesn’t explain any more than that. To go beyond these limitations requires a new scientific idea that supersedes the Big Bang.

But superseding the Big Bang isn’t easy at all. To do so, a new theory would have to do all three of the following:

  1. Reproduce all of the successes of the Big Bang, including the creation of an expanding, hot, dense, almost-perfectly uniform Universe.
  2. Provide a mechanism for explaining those three puzzles — the temperature uniformity, the lack of high-energy relics, and the flatness problem — that the Big Bang has no solution for.
  3. Finally, and perhaps most importantly, it must make new, testable predictions that are different from those of the standard Big Bang that it’s attempting to supersede.

The idea of inflation, and the hope that it could do so, began in late 1979, when Alan Guth wrote the idea down in his notebook.

It was the consideration of a number of finely-tuned scenarios that led Alan Guth to conceive of cosmic inflation, the leading theory of the Universe’s origin. (ALAN GUTH’S 1979 NOTEBOOK)

What inflation specifically hypothesized is that the Big Bang wasn’t the beginning, but rather was set up by a prior stage of the Universe. In this early state — dubbed an inflationary state by Guth — the dominant form of energy wasn’t in matter or radiation, but was inherent to the fabric of space itself, and possessed a very large energy density.

This would cause the Universe to expand both rapidly and relentlessly, driving any pre-existing matter apart. The Universe would be stretched so large it would be indistinguishable from flat. All the parts that an observer (like us) would be able to access would now have the same uniform properties everywhere, since they originated from a previously-connected state in the past. And since there would be a maximum temperature the Universe achieved when inflation ended, and the energy inherent to space transitioned into matter, antimatter, and radiation, we could avoid the production of leftover, high-energy relics.

In the top panel, our modern Universe has the same properties (including temperature) everywhere because they originated from a region possessing the same properties. In the middle panel, the space that could have had any arbitrary curvature is inflated to the point where we cannot observe any curvature today, solving the flatness problem. And in the bottom panel, pre-existing high-energy relics are inflated away, providing a solution to the high-energy relic problem. This is how inflation solves the three great puzzles that the Big Bang cannot account for on its own. (E. SIEGEL / BEYOND THE GALAXY)

All at once, all three of those puzzles that the Big Bang couldn’t explain were solved. This was truly a watershed moment for cosmology, and immediately led to a deluge of scientists working to correct Guth’s original model in order to reproduce all of the Big Bang’s successes. Guth’s idea was published in 1981, and by 1982, two independent teams — Andrei Linde and the duo of Paul Steinhardt and Andy Albrecht — had done it.

The key was to picture inflation as a slowly-rolling ball atop a hill. As long as the ball remained atop the plateau, inflation would continue to stretch the fabric of space. But when the ball rolls down the hill, inflation comes to an end. As the ball rolls into the valley below, energy inherent to space gets transferred into matter, antimatter and radiation, leading to a hot Big Bang, but with a finite temperature and energy.

When cosmic inflation occurs, the energy inherent in space is large, as it is at the top of this hill. As the ball rolls down into the valley, that energy converts into particles. This provides a mechanism for not only setting up the hot Big Bang, but for both solving the problems associated with it and making new predictions as well. (E. SIEGEL)

At last, not only did we have a solution to all of the problems that the Big Bang couldn’t resolve, but we could reproduce all of its successes. The key, then, would be to make new predictions that could then be tested.

The 1980s were full of such predictions. Most of them were very general, occurring in practically all viable models of inflation that one could construct. In particular, we realized that inflation had to be a quantum field, and that when you have this rapid, exponential expansion occurring with an extremely high energy inherent to space itself, these quantum effects can have impacts that translate onto cosmological scales.

The fluctuations in the cosmic microwave background, as measured by COBE (on large scales), WMAP (on intermediate scales), and Planck (on small scales), are all consistent with not only arising from a scale-invariant set of quantum fluctuations, but of being so low in magnitude that they could not possibly have arisen from an arbitrarily hot, dense state. The horizontal line represents the initial spectrum of fluctuations (from inflation), while the wiggly one represents how gravity and radiation/matter interactions have shaped the expanding Universe in the early stages. (NASA / WMAP SCIENCE TEAM)

In brief, the six most generic predictions were:

  1. There should be an upper-limit to the maximum temperature the Universe achieves post-inflation; it cannot approach the Planck scale of ~10¹⁹ GeV.
  2. Super-horizon fluctuations, or fluctuations on scales larger than light could have traversed since the Big Bang, should exist.
  3. The quantum fluctuations during inflation should produce the seeds of density fluctuations, and they should be 100% adiabatic and 0% isocurvature. (Where adiabatic and isocurvature are the two allowed classes.)
  4. These fluctuations should be almost perfectly scale-invariant, but should have slightly greater magnitudes on larger scales than smaller ones.
  5. The Universe should be nearly, but not quite, perfectly flat, with quantum effects producing curvature only at the 0.01% level or below.
  6. And the Universe should be filled with primordial gravitational waves, which should imprint on the cosmic microwave background as B-modes.

The magnitudes of the hot and cold spots, as well as their scales, indicate the curvature of the Universe. To the best of our capabilities, we measure it to be perfectly flat. Baryon acoustic oscillations and the CMB, together, provide the best methods of constraining this, down to a combined precision of 0.4%. (SMOOT COSMOLOGY GROUP / LBL)

It’s now 2019, and the first four predictions have been observationally confirmed. The fifth has been tested down to the ~0.4% level and is consistent with inflation, but we haven’t yet reached the critical ~0.01% level that would decisively test the prediction. Only the sixth point has not been tested at all, with a famous false-positive detection from the BICEP2 collaboration appearing earlier this decade.

The maximum temperature has been verified, by looking at the cosmic microwave background, to be no greater than about 10¹⁶ GeV.

Super-horizon fluctuations have been seen from the polarization data provided by both WMAP and Planck, and are in perfect agreement with what inflation predicts.

The latest data from structure formation indicates that these early, seed fluctuations are at least 98.7% adiabatic and no more than 1.3% isocurvature, consistent with inflation’s predictions.

But the best test — and what I’d call the most significant confirmation of inflation — has come from measuring the spectrum of the initial fluctuations.

Correlations between certain aspects of the magnitude of temperature fluctuations (y-axis) as a function of decreasing angular scale (x-axis) show a Universe that is consistent with a scalar spectral index of 0.96 or 0.97, but not 0.99 or 1.00. (P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION)

Inflation is very particular when it comes to what sorts of structure should form on different scales. We have a quantity that we use to describe how much structure forms on large cosmic scales versus smaller ones: n_s. If you formed the same amount of structure on all scales, n_s would equal 1 exactly, with no variations.

What inflation generically predicts, however, is that we will have an n_s that’s almost, but slightly less than, 1. The amount by which we depart from 1 is determined by the specific inflationary model. When inflation was first proposed, the standard assumption was that n_s would be exactly equal to 1. It wouldn’t be until the 2000s that we became capable of testing this, through both the fluctuations in the cosmic microwave background and the signature of baryon acoustic oscillations.

As of today, n_s is approximately 0.965 or so, with an uncertainty of around 0.008. This means there’s about a 4-to-5 sigma certainty that n_s is truly less than 1, a remarkable confirmation of inflation.
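
As a quick sanity check on that statement, here is a naive back-of-the-envelope estimate (my own sketch; the actual Planck analysis uses the full likelihood rather than this simple Gaussian division): the departure from scale invariance divided by the quoted uncertainty gives the approximate significance.

```python
# A rough significance estimate (a simple Gaussian sketch, not the full analysis):
# how many standard deviations the measured scalar spectral index lies below 1.

n_s_best_fit = 0.965       # approximate measured value quoted above
n_s_uncertainty = 0.008    # approximate 1-sigma uncertainty quoted above
n_s_scale_invariant = 1.0  # the "equal structure on all scales" value

significance = (n_s_scale_invariant - n_s_best_fit) / n_s_uncertainty
print(f"n_s < 1 at roughly {significance:.1f} sigma")  # ~4.4 sigma, i.e. 4-to-5 sigma
```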

Our entire cosmic history is theoretically well-understood, but only qualitatively. It’s by observationally confirming and revealing various stages in our Universe’s past that must have occurred, like when the first stars and galaxies formed, and how the Universe expanded over time, that we can truly come to understand our cosmos. The relic signatures imprinted on our Universe from an inflationary state before the hot Big Bang give us a unique way to test our cosmic history. (NICOLE RAGER FULLER / NATIONAL SCIENCE FOUNDATION)

The Big Bang became our theory of the Universe when the leftover glow was discovered in the form of the cosmic microwave background. As early as 1965, the critical evidence had come in, enabling the Big Bang to succeed where its competitors failed. Over the subsequent years and decades, measurements of the cosmic microwave background’s spectrum, the abundance of the light elements, and the formation of structure only strengthened the Big Bang. Although alternatives persist, they cannot stand up to the scientific scrutiny that the Big Bang does.

Inflation has literally met every threshold that science demands, with clever new tests becoming possible with improved observations and instrumentation. Whenever the data has been capable of being collected, inflation’s predictions have been verified. Although it’s perhaps more palatable and fashionable to be a contrarian, inflation is the leading theory for the best reason of all: it works. If we ever make a critical observation that disagrees with inflation, perhaps that will be the harbinger of an even more revolutionary theory of how it all began.

Send in your Ask Ethan questions to startswithabang at gmail dot com!

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Ask Ethan: How Well Has Cosmic Inflation Been Verified? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

A young, star-forming region found within our own Milky Way. Note how the material around the stars gets ionized, and over time becomes transparent to all forms of light. Until that happens, however, the surrounding gas absorbs the radiation, emitting light of its own at a variety of wavelengths. In the early Universe, it took hundreds of millions of years for the Universe to fully become transparent to light. (NASA, ESA, AND THE HUBBLE HERITAGE (STSCI/AURA)-ESA/HUBBLE COLLABORATION; ACKNOWLEDGMENT: R. O’CONNELL (UNIVERSITY OF VIRGINIA) AND THE WFC3 SCIENTIFIC OVERSIGHT COMMITTEE)

Depending on how you measure it, there are two different answers that could be right.

If you want to see what’s out there in the Universe, you first have to be able to see. We take for granted, today, that the Universe is transparent to light, and that the light from distant objects can travel unimpeded through space before reaching our eyes. But it wasn’t always this way.

In fact, there are two ways that the Universe can stop light from propagating in a straight line. One is to fill the Universe with free, unbound electrons. The light will then scatter off of those electrons, bouncing away in randomly-determined directions. The other is to fill the Universe with neutral atoms that can clump and cluster together. The light will then be blocked by this matter, the same way that most solid objects are opaque to light. Our actual Universe does both of these, and won’t become transparent until both obstacles are overcome.

Neutral atoms were formed just a few hundred thousand years after the Big Bang. The very first stars began ionizing those atoms once again, but it took hundreds of millions of years of forming stars and galaxies until this process, known as reionization, was completed. (THE HYDROGEN EPOCH OF REIONIZATION ARRAY (HERA))

In the earliest stages of the Universe, the atoms that make up everything we know of weren’t bound together in neutral configurations, but rather were ionized: in the state of a plasma. When light travels through a dense-enough plasma, it will scatter off of the electrons, being absorbed and re-emitted in a variety of unpredictable directions. So long as there are enough free electrons, the photons streaming through the Universe will continue to be kicked around at random.

There’s a competing process occurring, however, even during these early stages. This plasma is made of electrons and atomic nuclei, and it’s energetically favorable for them to bind together. Occasionally, even at these early times, they do exactly that, and once they do, only the input of a sufficiently energetic photon can split them apart once again.

As the fabric of the Universe expands, the wavelengths of any radiation present get stretched as well. This causes the Universe to become less energetic, and makes many high-energy processes that occur spontaneously at early times impossible at later, cooler epochs. It requires hundreds of thousands of years for the Universe to cool enough so that neutral atoms can form. (E. SIEGEL / BEYOND THE GALAXY)

As the Universe expands, however, it not only gets less dense, but the particles within it get less energetic. Because the fabric of space itself is what’s expanding, it affects every photon traveling through that space. And because a photon’s energy is determined by its wavelength, as that wavelength gets stretched, the photon gets shifted — redshifted — to lower energies.

It’s only a matter of time, then, until all the photons in the Universe drop below a critical energy threshold: the energy required to knock an electron off of the individual atoms that exist in the early Universe. It takes hundreds of thousands of years after the Big Bang for photons to lose enough energy to make the formation of neutral atoms even possible.
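
To make that threshold concrete, here is a minimal sketch (my own illustration; the specific wavelengths are assumptions chosen for the example) of how a photon's energy follows from its wavelength, and how stretching the wavelength by a factor of (1 + z) lowers the energy by the same factor.

```python
# A minimal sketch (example values assumed, not from the article) of photon energy
# versus wavelength, and of how redshift degrades that energy over time.

HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV * nm

def photon_energy_eV(wavelength_nm):
    """Energy of a photon with the given wavelength, in electron-volts."""
    return HC_EV_NM / wavelength_nm

def energy_after_redshift(emitted_wavelength_nm, z):
    """Photon energy after its wavelength has been stretched by a factor of (1 + z)."""
    return photon_energy_eV(emitted_wavelength_nm * (1.0 + z))

# It takes 13.6 eV to knock the electron off a hydrogen atom, corresponding to light
# shortward of ~91 nm. Stretch such a photon's wavelength tenfold and its energy
# falls far below that threshold, leaving any atom it meets intact.
print(photon_energy_eV(91.2))              # ~13.6 eV
print(energy_after_redshift(91.2, z=9.0))  # ~1.4 eV
```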

At early times (left), photons scatter off of electrons and are high-enough in energy to knock any atoms back into an ionized state. Once the Universe cools enough, and is devoid of such high-energy photons (right), they cannot interact with the neutral atoms. Instead, they simply free-stream through space indefinitely, since they have the wrong wavelength to excite these atoms to a higher energy level. (E. SIEGEL / BEYOND THE GALAXY)

Many cosmic events happen during this time: the earliest unstable isotopes radioactively decay; matter becomes more energetically important than radiation; gravitation begins pulling matter into clumps as the seeds of structure start growing. As the photons become more and more redshifted, another barrier to neutral atoms appears: the photons emitted when electrons bind to protons for the first time. Every time an electron successfully binds with an atomic nucleus, it does two things:

  1. It emits an ultraviolet photon, because atomic transitions always cascade down in energy levels in a predictable fashion.
  2. It gets bombarded by other particles, including the billion-or-so photons that exist for every electron in the Universe.

Every time you form a stable, neutral atom, it emits an ultraviolet photon. Those photons then continue on, in a straight line, until they encounter another neutral atom, which they then ionize.

When free electrons recombine with hydrogen nuclei, the electrons cascade down the energy levels, emitting photons as they go. In order for stable, neutral atoms to form in the early Universe, they have to reach the ground state without producing an ultraviolet photon that could potentially ionize another identical atom. (BRIGHTERORANGE & ENOCH LAU/WIKIMEDIA COMMONS)

There’s no net addition of neutral atoms through this mechanism, and hence the Universe cannot become transparent to light through this pathway alone. There’s another effect that comes in, instead, that dominates. It’s extremely rare, but given all the atoms in the Universe and the more-than-100,000 years it takes for atoms to finally and stably become neutral, it’s an incredible and intricate part of the story.

Most times, in a hydrogen atom, when you have an electron occupying the first excited state, it simply drops down to the lowest-energy state, emitting an ultraviolet photon of a specific energy: a Lyman alpha photon. But about 1 time in 100 million transitions, the drop-down will occur through a different path, instead emitting two lower-energy photons. This is known as a two-photon decay or transition, and is what is primarily responsible for the Universe becoming neutral.

When you transition from an “s” orbital to a lower-energy “s” orbital, you can on rare occasion do it through the emission of two photons of equal energy. This two-photon transition occurs even between the 2s (first excited) state and the 1s (ground) state, about one time out of every 100 million transitions. (R. ROY ET AL., OPTICS EXPRESS 25(7):7960 · APRIL 2017)

When you emit a single photon, it almost always collides with another hydrogen atom, exciting it and eventually leading to its reionization. But when you emit two photons, it’s extraordinarily unlikely that both will hit an atom at the same time, meaning that you net one additional neutral atom.

This two-photon transition, rare though it is, is the process by which neutral atoms first form. It takes us from a hot, plasma-filled Universe to an almost-equally-hot Universe filled with 100% neutral atoms. Although we say that the Universe formed these atoms 380,000 years after the Big Bang, this was actually a slow, gradual process that took about 100,000 years on either side of that figure to complete. Once the atoms are neutral, there is nothing left for the Big Bang’s light to scatter off of. This is the origin of the CMB: the Cosmic Microwave Background.

A Universe where electrons and protons are free and collide with photons transitions to a neutral one that’s transparent to photons as the Universe expands and cools. Shown here is the ionized plasma (L) before the CMB is emitted, followed by the transition to a neutral Universe (R) that’s transparent to photons. The scattering between electrons and electrons, as well as electrons and photons, can be well-described by the Dirac equation, but photon-photon interactions, which occur in reality, are not. (AMANDA YOHO)

This marks the first time that the Universe becomes transparent to light. The leftover photons from the Big Bang, now long in wavelength and low in energy, can finally travel freely through the Universe. With the free electrons gone — bound up into stable, neutral atoms — the photons have nothing to stop them or slow them down.

But the neutral atoms are now everywhere, and they serve an insidious purpose. While they may make the Universe transparent to these low-energy photons, these atoms will clump together into molecular clouds, dust, and collections of gas. Neutral atoms in these configurations might be transparent to low-energy light, but the higher-energy light, like that emitted by stars, gets absorbed by them.

An illustration of the first stars turning on in the Universe. Without metals to cool down the stars, only the largest clumps within a large-mass cloud can become stars. Until enough time has passed for gravity to affect larger scales, only the small-scales can form structure early on, and the stars themselves will see their light unable to penetrate very far through the opaque Universe. (NASA)

Once all of the atoms in the Universe are neutral, they do an amazingly good job of blocking starlight. The same long-awaited configuration that we required to make the Universe transparent now makes it opaque again to photons of a different wavelength: the ultraviolet, optical, and near-infrared light produced by stars.

In order to make the Universe transparent to this other type of light, we’ll need to ionize those atoms all over again. This means that we need enough high-energy light to kick the electrons off of the atoms they’re bound to, which requires an intense source of ultraviolet emission.

In other words, the Universe needs to form enough stars to successfully reionize the atoms within it, rendering the tenuous, low-density intergalactic medium transparent to starlight.

This four-panel view shows the Milky Way’s central region in four different wavelengths of light, with the longer (submillimeter) wavelengths at top, going through the far-and-near infrared (2nd and 3rd) and ending in a visible-light view of the Milky Way. Note that the dust lanes and foreground stars obscure the center in visible light, but not so much in the infrared. (ESO/ATLASGAL CONSORTIUM/NASA/GLIMPSE CONSORTIUM/VVV SURVEY/ESA/PLANCK/D. MINNITI/S. GUISARD ACKNOWLEDGEMENT: IGNACIO TOLEDO, MARTIN KORNMESSER)

We see this even in our own galaxy: the galactic center cannot be seen in visible light. The galactic plane is rich in neutral dust and gas, which is extremely successful at blocking the higher-energy ultraviolet and visible light, but infrared light goes clear through. This explains why the cosmic microwave background won’t get absorbed by neutral atoms, but starlight will.

Thankfully, the stars that we form can be massive and hot, with the most massive ones far hotter and more luminous than even our Sun. Early stars can be tens, hundreds, or even a thousand times as massive as our own Sun, meaning they can reach surface temperatures of tens of thousands of degrees and luminosities millions of times that of our Sun. These behemoths are the biggest threat to the neutral atoms spread throughout the Universe.

The first stars in the Universe will be surrounded by neutral atoms of (mostly) hydrogen gas, which absorbs the starlight. The hydrogen makes the Universe opaque to visible, ultraviolet, and a large fraction of infrared light, but long wavelength light, such as radio-light, can transmit unimpeded. (NICOLE RAGER FULLER / NATIONAL SCIENCE FOUNDATION)

What we need to happen is for enough stars to form that they can flood the Universe with a sufficient number of ultraviolet photons. If they can ionize enough of this neutral matter filling the intergalactic medium, they can clear a path in all directions for starlight to travel unimpeded. Moreover, it has to occur in sufficient amounts that the ionized protons and electrons can’t get back together again. There is no room for Ross-and-Rachel style shenanigans in the effort to reionize the Universe.

The first stars make a small dent in this, but the earliest star clusters are small and short-lived. For the first few hundred million years of our Universe, all the stars that form can barely make a dent in how much of the matter in the Universe remains neutral. But that begins to change when star clusters merge together, forming the first galaxies.

An illustration of CR7, the first galaxy detected that was thought to house Population III stars: the first stars ever formed in the Universe. JWST will reveal actual images of this galaxy and others like it, and will be able to make measurements of these objects even where reionization has not yet completed. (ESO/M. KORNMESSER)

As large clumps of gas, stars, and other matter merge together, they trigger a tremendous burst of star formation, lighting up the Universe as never before. As time goes on, a slew of phenomena take place all at once:

  • the regions with the largest collections of matter attract even more early stars and star clusters towards them,
  • the regions that haven’t yet formed stars can begin to,
  • and the regions where the first galaxies are made attract other young galaxies,

all of which serves to increase the overall star formation rate.

If we were to map out the Universe at this time, what we’d see is that the star formation rate increases at a relatively constant rate for the first few billion years of the Universe’s existence. In some favorable regions, enough of the matter gets ionized early enough that we can see through the Universe before most regions are reionized; in others, it may take as long as two or three billion years for the last neutral matter to be blown away.

If you were to map out the Universe’s neutral matter from the start of the Big Bang, you would find that it starts to transition to ionized matter in clumps, but you’d also find that it took hundreds of millions of years to mostly disappear. It does so unevenly, and preferentially along the locations of the densest parts of the cosmic web.

Schematic diagram of the Universe’s history, highlighting reionization. Before stars or galaxies formed, the Universe was full of light-blocking, neutral atoms. While most of the Universe doesn’t become reionized until 550 million years afterwards, some regions will achieve full reionization earlier and others won’t achieve it until later. The first major waves of reionization begin happening at around 250 million years of age, while a few fortunate stars may form just 50-to-100 million years after the Big Bang. With the right tools, like the James Webb Space Telescope, we may begin to reveal the earliest galaxies. (S. G. DJORGOVSKI ET AL., CALTECH DIGITAL MEDIA CENTER)

On average, it takes 550 million years from the inception of the Big Bang for the Universe to become reionized and transparent to starlight. We see this from observing ultra-distant quasars, which continue to display the absorption features that only neutral, intervening matter causes. But reionization doesn’t happen everywhere at once; it reaches completion at different times in different directions and at different locations. The Universe is uneven, and so are the stars and galaxies and clumps of matter that form within it.

The Universe became transparent to the light left over from the Big Bang when it was roughly 380,000 years old, and remained transparent to long-wavelength light thereafter. But it was only when the Universe reached about half a billion years of age that it became fully transparent to starlight, with some locations experiencing transparency earlier and others experiencing it later.

To probe beyond these limits requires a telescope that goes to longer and longer wavelengths. With any luck, the James Webb Space Telescope will finally open our eyes to the Universe as it was during this in-between era, where it’s transparent to the Big Bang’s glow but not to starlight. When it opens its eyes on the Universe, we may finally learn just how the Universe grew up during these poorly-understood dark ages.


When Did The Universe Become Transparent To Light? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Hubble Space Telescope, as imaged during its last and final servicing mission. The only way it can point itself is from the internal spinning devices that allow it to change its orientation and hold a stable position. But what it can see is determined by its instruments, mirror, and design limitations. It has reached those ultimate limits; to go beyond them, we’ll need a better telescope. (NASA)

The world’s greatest observatory can go no further with its current instrument set.

The Hubble Space Telescope has provided humanity with our deepest views of the Universe ever. It has revealed fainter, younger, less-evolved, and more distant stars, galaxies, and galaxy clusters than any other observatory. More than 29 years after its launch, Hubble is still the greatest tool we have for exploring the farthest reaches of the Universe. Wherever astrophysical objects emit starlight, no observatory is better equipped to study them than Hubble.

But there are limits to what any observatory can see, even Hubble. It’s limited by the size of its mirror, the quality of its instruments, its temperature and wavelength range, and the most universal limiting factor inherent to any astronomical observation: time. Over the past few years, Hubble has released some of the greatest images humanity has ever seen. But it’s unlikely to ever do better; it’s reached its absolute limit. Here’s the story.

The Hubble Space Telescope (left) is our greatest flagship observatory in astrophysics history, but is much smaller and less powerful than the upcoming James Webb (center). Of the four proposed flagship missions for the 2030s, LUVOIR (right) is by far the most ambitious. By probing the Universe to fainter objects, higher resolution, and across a wider range of wavelengths, we can improve our understanding of the cosmos in unprecedented ways. (MATT MOUNTAIN / AURA)

From its location in space, approximately 540 kilometers (336 mi) up, the Hubble Space Telescope has an enormous advantage over ground-based telescopes: it doesn’t have to contend with Earth’s atmosphere. The moving particles making up Earth’s atmosphere provide a turbulent medium that distorts the path of any incoming light, while simultaneously containing molecules that prevent certain wavelengths of light from passing through it entirely.

While ground-based telescopes at the time could achieve practical resolutions no better than 0.5–1.0 arcseconds, where 1 arcsecond is 1/3600th of a degree, Hubble — once the flaw with its primary mirror was corrected — immediately delivered resolutions down to the theoretical diffraction limit for a telescope of its size: 0.05 arcseconds. Almost instantly, our views of the Universe were sharper than ever before.

This composite image of a region of the distant Universe (upper left) uses optical (upper right) and near-infrared (lower left) data from Hubble, along with far-infrared (lower right) data from Spitzer. The Spitzer Space Telescope is nearly as large as Hubble: more than a third of its diameter, but the wavelengths it probes are so much longer that its resolution is far worse. The number of wavelengths that fit across the diameter of the primary mirror is what determines the resolution. (NASA/JPL-CALTECH/ESA)

Sharpness, or resolution, is one of the most important factors in discovering what’s out there in the distant Universe. But there are three others that are just as essential:

  • the amount of light-gathering power you have, needed to view the faintest objects possible,
  • the field-of-view of your telescope, enabling you to observe a larger number of objects,
  • and the wavelength range you’re capable of probing, as the observed light’s wavelength depends on the object’s distance from you.

Hubble may be great at all of these, but it also possesses fundamental limits for all four.

When you look at a region of the sky with an instrument like the Hubble Space Telescope, you are not simply viewing the light from distant objects as it was when that light was emitted, but also as the light is affected by all the intervening material and the expansion of space, that it experiences along its journey. Although Hubble has taken us farther back than any other observatory to date, there are fundamental limits to it, and reasons why it will be incapable of going farther. (NASA, ESA, AND Z. LEVAY, F. SUMMERS (STSCI))

The resolution of any telescope is determined by the number of wavelengths of light that can fit across its primary mirror. Hubble’s 2.4 meter (7.9 foot) mirror enables it to obtain that diffraction-limited resolution of 0.05 arcseconds. This is so good that only in the past few years have Earth’s most powerful telescopes, often more than four times as large and equipped with state-of-the-art adaptive optics systems, been able to compete.
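
For reference, that 0.05 arcsecond figure follows directly from the standard diffraction-limit relation, where the smallest resolvable angle is roughly 1.22 times the wavelength divided by the mirror diameter. Here is a minimal sketch of the arithmetic (my own; the 500 nanometer observing wavelength is an assumed, representative value for visible light).

```python
# A minimal sketch of the diffraction limit: theta ~ 1.22 * wavelength / diameter.
# The 500 nm observing wavelength is an assumption representing visible light.

ARCSEC_PER_RADIAN = 206265.0

def diffraction_limit_arcsec(wavelength_m, mirror_diameter_m):
    """Approximate best possible angular resolution, in arcseconds."""
    return 1.22 * wavelength_m / mirror_diameter_m * ARCSEC_PER_RADIAN

print(diffraction_limit_arcsec(500e-9, 2.4))  # Hubble's 2.4 m mirror: ~0.05 arcsec
```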

To improve upon the resolution of Hubble, there are really only two options available:

  1. use shorter wavelengths of light, so that a greater number of wavelengths can fit across a mirror of the same size,
  2. or build a larger telescope, which will also enable a greater number of wavelengths to fit across your mirror.

Hubble’s optics are designed to view ultraviolet light, visible light, and near-infrared light, with sensitivities ranging from approximately 100 nanometers to 1.8 microns in wavelength. It can do no better with its current instruments, which were installed during the final servicing mission back in 2009.

This image shows Hubble Servicing Mission 4 astronauts practicing on a Hubble model underwater at the Neutral Buoyancy Lab in Houston, under the watchful eyes of NASA engineers and safety divers. The final servicing mission to Hubble was successfully completed 10 years ago; Hubble has not had its equipment or instruments upgraded since, and is now running up against its fundamental limitations. (NASA)

Light-gathering power is simply about collecting more and more light over a greater period of time, and Hubble has been mind-blowing in that regard. Without the atmosphere to contend with or the Earth’s rotation to worry about, Hubble can simply point to an interesting spot in the sky, apply whichever color/wavelength filter is desired, and take an observation. These observations can then be stacked — or added together — to produce a deep, long-exposure image.

Using this technique, we can see the distant Universe to unprecedented depths and faintnesses. The Hubble Deep Field was the first demonstration of this technique, revealing thousands of galaxies in a region of space where zero were previously known. At present, the eXtreme Deep Field (XDF) is the deepest ultraviolet-visible-infrared composite, revealing some 5,500 galaxies in a region covering just 1/32,000,000th of the full sky.

The Hubble eXtreme Deep Field (XDF) may have observed a region of sky just 1/32,000,000th of the total, but was able to uncover a whopping 5,500 galaxies within it: an estimated 10% of the total number of galaxies actually contained in this pencil-beam-style slice. The remaining 90% of galaxies are either too faint or too red or too obscured for Hubble to reveal, and observing for longer periods of time won’t improve this issue by very much. Hubble has reached its limits. (HUDF09 AND HXDF12 TEAMS / E. SIEGEL (PROCESSING))

Of course, it took 23 days of total data taking to collect the information contained within the XDF. To reveal objects with half the brightness of the faintest objects seen in the XDF, we’d have to continue observing for a total of 92 days: four times as long. There’s a severe trade-off if we were to do this, as it would tie up the telescope for months and would only teach us marginally more about the distant Universe.
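
The "four times as long" figure follows from the fact that, for faint sources, the signal-to-noise ratio grows only as the square root of the exposure time. Here is a minimal sketch of that scaling (an idealized assumption on my part; real observations carry additional overheads and systematic limits).

```python
# A minimal sketch, assuming background-limited imaging in which signal-to-noise
# grows as the square root of exposure time. Real observations add overheads.

def time_to_reach(baseline_days, relative_brightness):
    """Exposure time needed to detect sources `relative_brightness` times as bright
    as the faintest sources reachable in `baseline_days`, at equal signal-to-noise."""
    return baseline_days / relative_brightness**2

print(time_to_reach(23, 0.5))   # half as bright: ~92 days
print(time_to_reach(23, 0.25))  # a quarter as bright: ~368 days
```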

Instead, an alternative strategy for learning more about the distant Universe is to survey a targeted, wide-field area of the sky. Individual galaxies and larger structures like galaxy clusters can be probed with deep but large-area views, revealing a tremendous level of detail about what’s present at the greatest distances of all. Instead of using our observing time to go deeper, we can still go very deep, but cast a much wider net.

This, too, comes with a tremendous cost. The deepest, widest view of the Universe ever assembled by Hubble took over 250 days of telescope time, and was stitched together from nearly 7,500 individual exposures. While this new Hubble Legacy Field is great for extragalactic astronomy, it still only reveals 265,000 galaxies over a region of sky smaller than that covered by the full Moon.

Hubble was designed to go deep, but not to go wide. Its field of view is extremely narrow, which makes a larger, more comprehensive survey of the distant Universe all but prohibitive. It’s truly remarkable how far Hubble has taken us in terms of resolution, survey depth, and field-of-view, but Hubble has truly reached its limit on those fronts.

In the big image at left, the many galaxies of a massive cluster called MACS J1149+2223 dominate the scene. Gravitational lensing by the giant cluster brightened the light from the newfound galaxy, known as MACS 1149-JD, some 15 times. At upper right, a partial zoom-in shows MACS 1149-JD in more detail, and a deeper zoom appears to the lower right. This is correct and consistent with General Relativity, and independent of how we visualize (or whether we visualize) space. (NASA/ESA/STSCI/JHU)

Finally, there are the wavelength limits as well. Stars emit a wide variety of light, from the ultraviolet through the optical and into the infrared. It’s no coincidence that this is what Hubble was designed for: to look for light of the same variety and wavelengths that we know stars emit.

But this, too, is fundamentally limiting. You see, as light travels through the Universe, the fabric of space itself is expanding. This causes the light, even if it’s emitted with intrinsically short wavelengths, to have its wavelength stretched by the expansion of space. By the time it arrives at our eyes, it’s redshifted by a particular factor that’s determined by the expansion rate of the Universe and the object’s distance from us.

Hubble’s wavelength range sets a fundamental limit to how far back we can see: to when the Universe is around 400 million years old, but no earlier.
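
Here is a rough sketch of where that limit comes from (my own illustration; the exact cutoff depends on the galaxy's spectrum and the instrument used). A distant galaxy's light blueward of the Lyman-alpha line, at a rest wavelength of about 121.6 nanometers, is absorbed by intervening hydrogen, so the galaxy only shows up redward of that line's observed wavelength. Once expansion pushes that wavelength past Hubble's ~1.8 micron cutoff, the galaxy is effectively invisible to Hubble.

```python
# A rough sketch (illustrative only) of Hubble's redshift reach, based on where the
# redshifted Lyman-alpha line lands relative to Hubble's long-wavelength cutoff.

LYMAN_ALPHA_NM = 121.6
HUBBLE_RED_CUTOFF_NM = 1800.0  # ~1.8 microns, the red limit quoted above

def observed_wavelength_nm(rest_wavelength_nm, z):
    """Wavelength after being stretched by the expansion from redshift z to today."""
    return rest_wavelength_nm * (1.0 + z)

for z in (7.0, 11.1, 14.0):
    w = observed_wavelength_nm(LYMAN_ALPHA_NM, z)
    status = "within" if w <= HUBBLE_RED_CUTOFF_NM else "beyond"
    print(f"z = {z}: Lyman-alpha arrives at {w / 1000:.2f} microns ({status} Hubble's range)")

# At z ~ 11 (GN-z11's redshift), the light arrives near 1.5 microns: just inside the
# cutoff. A galaxy at z ~ 14 would fall outside Hubble's wavelength range entirely.
```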

The most distant galaxy ever discovered in the known Universe, GN-z11, has its light come to us from 13.4 billion years ago: when the Universe was only 3% its current age: 407 million years old. But there are even more distant galaxies out there, and we all hope that the James Webb Space Telescope will discover them. (NASA, ESA, AND G. BACON (STSCI))

The most distant galaxy ever discovered by Hubble, GN-z11, is right at this limit. Discovered in one of the deep-field images, it has everything imaginable going for it.

  • It was observed across all the different wavelength ranges Hubble is capable of, with only its ultraviolet-emitted light showing up in the longest-wavelength infrared filters Hubble can measure.
  • It was gravitationally lensed by a nearby galaxy, magnifying its brightness to raise it above Hubble’s naturally-limiting faintness threshold.
  • It happens to be located along a line-of-sight that experienced a high (and statistically-unlikely) level of star-formation at early times, providing a clear path for the emitted light to travel along without being blocked.

No other galaxy has been discovered and confirmed at even close to the same distance as this object.

Only because this distant galaxy, GN-z11, is located in a region where the intergalactic medium is mostly reionized, can Hubble reveal it to us at the present time. To see further, we require a better observatory, optimized for these kinds of detection, than Hubble. (NASA, ESA, AND A. FEILD (STSCI))

Hubble may have reached its limits, but future observatories will take us far beyond what Hubble’s limits are. The James Webb Space Telescope is not only larger — with a primary mirror diameter of 6.5 meters (as opposed to Hubble’s 2.4 meters) — but operates at far cooler temperatures, enabling it to view longer wavelengths.

At these longer wavelengths, up to 30 microns (as opposed to Hubble’s 1.8), James Webb will be able to see through the light-blocking dust that hampers Hubble’s view of most of the Universe. Additionally, it will be able to see objects with much greater redshifts and earlier lookback times: seeing the Universe when it was a mere 200 million years old. While Hubble might reveal some extremely early galaxies, James Webb might reveal them as they’re in the process of forming for the very first time.

The viewing area of Hubble (top left) as compared to the area that WFIRST will be able to view, at the same depth, in the same amount of time. The wide-field view of WFIRST will allow us to capture a greater number of distant supernovae than ever before, and will enable us to perform deep, wide surveys of galaxies on cosmic scales never probed before. It will bring a revolution in science, regardless of what it finds, and provide the best constraints on how dark energy evolves over cosmic time. (NASA / GODDARD / WFIRST)

Other observatories will take us to other frontiers in realms where Hubble is only scratching the surface. NASA’s proposed flagship of the 2020s, WFIRST, will be very similar to Hubble, but will have 50 times the field-of-view, making it ideal for large surveys. Telescopes like the LSST will cover nearly the entire sky, with resolutions comparable to what Hubble achieves, albeit with shorter observing times. And future ground-based observatories like GMT or ELT, which will usher in the era of 30-meter-class telescopes, might finally surpass Hubble in terms of practical resolution.

At the limits of what Hubble is capable of, it’s still extending our views into the distant Universe, and providing the data that enables astronomers to push the frontiers of what is known. But to truly go farther, we need better tools. If we truly value learning the secrets of the Universe, including what it’s made of, how it came to be the way it is today, and what its fate is, there’s no substitute for the next generation of observatories.


We Have Now Reached The Limits Of The Hubble Space Telescope was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

During the Cambrian explosion, some 550–600 million years ago, the first complex, differentiated, macroscopic, multicellular, sexually-reproducing animals came to dominate the oceans. Over the next half a billion years, evolution would take life in many different directions. By the time the asteroid eliminating the dinosaurs arrived, 65 million years ago, mammals had diversified in a number of directions, with the earliest primates splitting off just before that great event. Modern lemurs likely bear a strong resemblance to those early primates. (GETTY)

The history of life on Earth took many meandering turns before it gave rise to us.

When Earth first formed, all the raw ingredients for life — atoms, molecules, a potentially habitable planet at the right distance from its star — were already in place. While life itself arose relatively quickly, it took billions of years for that life to become complex, differentiated, and macroscopic. The four key developments that took us there were:

  • horizontal gene transfer, enabling an organism to gain useful genetic sequences from other species,
  • eukaryotic cells, whereby individual cells can have their own specialized organelles, enabling the performance of unique functions,
  • multicellularity, allowing further specialization and differentiation,
  • and sexual reproduction, enabling slowly-reproducing organisms to have dramatically different DNA sequences and physical traits from their parents.

All of this, in tandem, led to the Cambrian explosion some 550–600 million years ago. But the rise of warm-blooded mammals to prominence would take nearly another half-a-billion years.

The earliest complex plants and animals arose in the sea at the beginning of the Cambrian explosion, exhibiting many physical traits that were absent on Earth for the first 4 billion years of our planet’s history. After the Cambrian explosion, life evolved in a multitude of ways, but it would take another half a billion years for mammals to rise to a prominent position in our natural world. (GETTY)

Biologically, we classify organisms by their genetic and evolutionary traits. Approximately 1.5 billion years ago, eukaryotic life diverged into multiple kingdoms, with separate kingdoms eventually giving rise to modern plants, animals, and fungi. While life can mutate and evolve to become competitive in a variety of ecological niches, it’s very difficult to displace an already-established organism that successfully occupies it.

From an evolutionary perspective, what life often needs as a catalyst for change is an extinction event. This can come from any event, internal to Earth or external to it, that leads to the demise of a large percentage of species.

While the Snowball Earth scenario may be controversial, it is the details that are in doubt, not the overall effect. In the distant past, the evidence is overwhelming that Earth’s tropical latitudes were largely covered in ice. The Huronian Glaciation may have been the greatest mass extinction in Earth’s history, while a more recent glaciation, occurring some 600–700 million years ago, may have paved the way for the Cambrian explosion. (KEVIN GILL / FLICKR)

While a snowball Earth scenario, caused by photosynthetic organisms poisoning their environment with oxygen, may have played a critical role more than 2 billion years ago, the emergence from a later snowball Earth (or a severe, widespread glaciation) may have led directly to the Cambrian explosion.

Some critical stages that occurred in the millions of years just preceding the Cambrian explosion include:

  • the development of bilateral symmetry, leading to animals with tops and bottoms as well as fronts and backs; worms, dating to around 600 million years ago, may have been first.
  • Deuterostomes (which includes all animals with spinal cords) and protostomes (which includes all of the insects, crustaceans, and arachnids) appear for the first time some 580 million years ago.
  • and the first animal trails, suggesting that they move under their own power, came into being some 565 million years ago.

At the start of the Cambrian explosion, jellyfish, starfish, arthropods and mollusks were the dominant forms of life.

The Burgess Shale fossil deposit, dating to the mid-Cambrian, is arguably the most famous and well-preserved fossil deposit on Earth dating back to such early times. At least 280 species of complex, differentiated plants and animals have been identified, signifying one of the most important epochs in Earth’s evolutionary history: the Cambrian explosion. This diorama shows a model-based reconstruction of what the living organisms of the time might have looked like in true color. (JAMES ST. JOHN / FLICKR)

It was only a short time later, by around 540 million years ago, that the first true vertebrates arose. These early chordates mark the first appearance of the human phylum: Chordata. The earliest fossils with spinal columns looked like lampreys, hagfish, and eels. Everything from sharks to tortoises to peacocks to humans can trace their ancestry back to these more primitive creatures.

Over the next 10 million years, a great diversity of body types finally starts to appear in the fossil record, along with the first appearance of trilobites. These invertebrates, which looked like enormous, 70 cm (a little over two feet) long lice, would remain the dominant form of life in the ocean for the next 200 million years.

Fossilised trilobite Calymene from the upper Ordovician Period (460 million years ago) of the Anti-Atlas Region of Morocco. These arthropod-like organisms were one of the dominant forms of oceanic life on Earth for approximately 300 million years, arising first during the Cambrian explosion and persisting until the end-Permian extinction. (GETTY)

But life didn’t remain in the ocean. Approximately 500 million years ago, the first animals began exploring the land. 470 million years ago, plants followed suit, quickly colonizing the entire surface. 460 million years ago, fish split off into bony fish (like salmon, trout, tuna, and most of the fish with scales) and cartilaginous fish (like sharks, with cartilage-based skeletons instead of bone).

Ocean life remained dominant, even after the great end-Ordovician mass extinction 440 million years ago, theorized to be a rapid ice age, which wiped out some 86% of all species. The surviving fish split into lobe-finned fishes (with bones in their fins), which would evolve into amphibians, reptiles, dinosaurs, birds, and mammals, and the ray-finned fishes, which evolved into most modern fish. So-called living fossils, like coelacanths and lungfish, evolved 420 million years ago from the lobe-finned fishes. They remain in a largely unchanged form today.

The coelacanth was believed to have become extinct during the Cretaceous Period, after having arisen more than 400 million years ago. A surprise discovery of a living example in 1938 changed that story; coelacanths are now considered by many to be ‘living fossils,’ but more detailed studies have shown notable evolutionary changes among specimens over time. (GETTY)

Meanwhile, an enormous set of changes gets set into motion about 400 million years ago. The first insects evolve, and land plants begin to develop woody stems. Almost simultaneously, the first four-legged animals evolve, moving from freshwater habitats onto land. These tetrapods were the first vertebrates to establish themselves on land, and they were never successfully displaced, despite all the extinction events that would subsequently occur.

Trees must have developed shortly thereafter, as the oldest known fossilized tree presently dates to 385 million years ago. Everything was going extremely well for life until about 375 million years ago, when the next great mass extinction occurred: the late Devonian extinction. It’s hypothesized that a series of algal blooms sucked the oxygen out of the ocean, suffocating some 75% of marine species.

A life-like restoration of a transitional fossil known as Tiktaalik roseae, which provides a so-called missing link between fish and tetrapods, dating back to the late Devonian period of North America. (ZINA DERETSKY, NATIONAL SCIENCE FOUNDATION)

But great extinction events are almost always followed by life resurging in quantity, biomass, and diversity. 340 million years ago, the amphibians rose to prominence. Dimetrodon, a large, carnivorous reptile, became the dominant apex predator on land at around the same time.

310 million years ago, there was an important evolutionary split between the sauropsids, which would become the modern reptiles, dinosaurs, and birds, and the synapsids, which were reptiles with distinctive jaws. These latter reptiles would eventually evolve into all the mammals ever to populate Earth. Dimetrodon-like animals and their close cousins, the therapsids (which arose 275 million years ago), are the dominant synapsid land animals at this time.

A restoration of Dimetrodon, one of the dominant land animals of the Permian period. Dimetrodon, despite its similarities to dinosaurs, is actually a synapsid, more closely related to modern mammals than to the dinosaurs it superficially resembles. (PUBLIC DOMAIN / GOODFREEPHOTOS)

And then the biggest mass extinction ever known on our planet occurred: the end-Permian extinction. 250 million years ago, from an unknown cause, a whopping 96% of species on Earth cease to exist. The last of the trilobites, debilitated by the prior mass extinction, are driven out of existence. Dimetrodon and its relatives are wiped out; some therapsids barely survive.

But the sauropsids, previously living in the shadows of the synapsids, rose to dominate the world. The explosion of sauropsids heralds the rise of the dinosaurs and the large ocean-dwelling reptiles, with the synapsids — our mammalian ancestors — surviving as small, nocturnal creatures. The cynodonts, a form of therapsid, first arose just before the end-Permian extinction, around 260 million years ago. The cynodonts developed dog-like teeth, and their descendants became warm-blooded approximately 200 million years ago. The end-Triassic extinction, roughly concurrent with this development, wiped out 80% of species; it has no confirmed cause at present.

One of the more recently-discovered cynodonts from the late triassic period, Bonacynodon, was a small animal with many mammal-like anatomical features. It was carnivorous, about 10 cm (4 inches) in length, and may be closely related to the ancestor of all extant mammals today. (JORGE BLANCO, MARTINELLI AG, SOARES MB, SCHWANKE C)

On land, the dinosaurs became the dominant form of animal life around this time, roughly 200 million years ago. Shortly thereafter, the first bird-like features began appearing among them: bird-like footprints, evidence of feathering, and rudimentary wings that helped running animals balance. Large crocodiles evolved, eliminating the giant amphibians.

Cynodont-descended mammals continued to survive while most other synapsids went extinct. 180 million years ago, the monotreme (egg-laying) mammals like the duck-billed platypus and echidna split off; 140 million years ago, so did the marsupials and placental mammals.

Koalas are perhaps the dumbest and least-evolved marsupials on the planet, having a smaller brain-to-body-size ratio than any other extant mammal. Marsupials first split off from the placental mammals some 140 million years ago. Modern marsupials may thrive in Australia, but they originated in southeast Asia, migrated through the Americas and then Antarctica, and only afterwards arrived in Australia. (ROBERT MICHAEL/PICTURE ALLIANCE VIA GETTY IMAGES)

In the plant world, conifers begin this era as the dominant form of tree, but angiosperms and other flowering plants arise some 130 million years ago, eventually dominating the Cretaceous. In the oceans, the great marine reptiles — the plesiosaurs — rose to prominence, along with ichthyosaurs, ammonites, squids and octopi.

By the time we get to 100 million years ago, the largest, most famous dinosaurs dominate the landscape, and the world is filled with flying birds, deciduous trees, pterosaurs, insects, and the legendary predators and herbivores common during the Cretaceous. The world starts cooling at around this time, leading to a slow decline and a decrease in size among many of these animals. Many birds become smaller and occupy a diversity of ecological niches. But perhaps the most interesting developments occur in our mammalian ancestors.

A small rodent known as a nutria, photographed here feeding among the wet grasses, is perhaps typical of the kinds of mammals that existed in great abundance during the very late Cretaceous period, just before the arrival of the asteroid that would clear out all the large reptiles, dinosaurs, birds and more that had dominated the oceans and land for the past 100+ million years. (LISA DUCRET/DPA/GETTY IMAGES)

Some 95 million years ago, an evolutionary split occurs among the placental mammals, giving rise to the laurasiatheres (horses, pigs, dogs, bats, etc.), the xenarthra (like anteaters and armadillos), the afrotheres (such as elephants and aardvarks), and the euarchontoglires (including primates, rodents, and lagomorphs). 75 million years ago, another split occurred, as the ancestors of modern primates split off from the remaining euarchontoglires; the rodents will become the most successful, eventually making up 40% of all modern mammals.

70 million years ago, the first grasses evolve, followed another 5 million years later by the most catastrophic event of the past 100 million years: the end-Cretaceous extinction, likely triggered by an enormous asteroid strike whose crater now lies buried beneath the Yucatan peninsula, on the edge of the Gulf of Mexico.

A large, rapidly moving mass that strikes the Earth would certainly be capable of causing a mass extinction event. However, such events appear to be relatively rare. Even though asteroid and comet strikes are frequent, one large enough to cause a mass extinction may be rare enough that no additional ones will occur for billions of years, even though the last one occurred a mere 65 million years ago. (DON DAVIS (WORK COMMISSIONED BY NASA))

Although the Deccan traps and other volcanic activity certainly played a role in the steady decline of dinosaurs during the late Cretaceous, the arrival of a massive asteroid left a telltale layer of catastrophe all over the world. This giant impact triggers an extinction event that wipes out entire classes of animals: 75% of all species in total.

Abruptly, 65 million years ago, the fossil record ceases to show all non-avian dinosaurs, pterosaurs, ichthyosaurs and plesiosaurs. The giant reptiles are all gone. The ammonites are wiped out; the nautilus is their closest surviving relative. With the exception of a few animals like leatherback sea turtles and crocodiles, no creature weighing more than 55 pounds (about 25 kg) survives.

A measure of biodiversity, and changes in the number of genera that exist at any given time, to identify the most major extinction events in the past 500 million years. They are not periodic, and only the most recent one (from 65 million years ago) has a cause that is known for certain. Note the explosion in biodiversity following such a mass extinction event. (WIKIMEDIA COMMONS USER ALBERT MESTRE, WITH DATA FROM ROHDE, R.A., AND MULLER, R.A.)

But the small mammals did, and would go on to dominate the world. As is often the case, a large extinction event clears the way for new species to develop and grow to prominence. Having thoroughly diversified to occupy a variety of niches already, the mammals were poised to make that enormous leap.

65 million years ago, 99.5% of the Universe’s history had already unfolded, and yet the ancestors of modern humans were no better developed than a modern-day lemur. Complex, differentiated animals had already existed for half-a-billion years, but it seems to be mere chance that led to the rise of an intelligent, technologically-advanced species like us. We do not yet know what secrets other planets hold as far as life goes, but here on Earth, the most remarkable story of all was just getting truly interesting.

Further reading on what the Universe was like when:

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

What Was It Like When Mammals Evolved And Rose To Prominence? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

This large, fuzzy-looking galaxy is so diffuse that astronomers call it a “see-through” galaxy because they can clearly see distant galaxies behind it. The ghostly object, catalogued as NGC 1052-DF2, doesn’t have a noticeable central region, or even spiral arms and a disk, typical features of a spiral galaxy. But it doesn’t look like an elliptical galaxy, either, as its velocity dispersion is all wrong. Even its globular clusters are oddballs: they are twice as large as typical stellar groupings seen in other galaxies. All of these oddities pale in comparison to the weirdest aspect of this galaxy: NGC 1052-DF2 is very controversial because of its apparent lack of dark matter. This could solve an enormous cosmic puzzle. (NASA, ESA, AND P. VAN DOKKUM (YALE UNIVERSITY))

Dark matter, dark energy, inflation and the Big Bang are real, and the alternatives all fail spectacularly.

If you keep up with the latest science news, you’re probably familiar with a large number of controversies concerning the nature of the Universe itself. Dark matter, thought to outweigh normal atomic matter by a 5-to-1 ratio, could be unnecessary, and replaced by a modification to our law of gravity. Dark energy, making up two-thirds of the Universe, is responsible for the accelerated expansion of space, but the expansion rate itself isn’t even agreed upon. And cosmic inflation has recently been derided by some as unscientific, as some of its detractors claim it can predict anything, and therefore predicts nothing.

If you add them all together, as philosopher Bjørn Ekeberg did in his recent piece for Scientific American, you might think cosmology was in crisis. But if you’re a scrupulous scientist, exactly the opposite is true. Here’s why.

If you look farther and farther away, you also look farther and farther into the past. The earlier you go, the hotter and denser, as well as less-evolved, the Universe turns out to be. The earliest signals can even, potentially, tell us about what happened prior to the moments of the hot Big Bang. (NASA / STSCI / A. FEILD (STSCI))

Science is more than just a collection of facts, although it certainly relies on the full suite of data and information we’ve collected about the natural world. Science is also a process, where the prevailing theories and frameworks are confronted with as many novel tests as possible, seeking to either validate or refute the consequential predictions of our most successful ideas.

This is where the frontiers of science lie: at the edges of the validity of our leading theories. We make predictions, we go out and test them experimentally and observationally, and then we constrain, revise, or extend our ideas to accommodate whatever new information we obtained. The ultimate dream of many is to revolutionize the way we conceive of our world, and to replace our current theories with something even more successful and profound.

Long before the data from BOOMERanG came back, the measurement of the spectrum of the CMB, from COBE, demonstrated that the leftover glow from the Big Bang was a perfect blackbody. One potential alternative explanation was that of reflected starlight, as the quasi-steady-state model predicted, but the difference in spectral intensity between what was predicted and observed showed that this alternative could not explain what was seen. (E. SIEGEL / BEYOND THE GALAXY)

But it’s not such an easy task to reproduce the successes of our leading scientific theories, much less to go beyond their present limitations. People who are enamored with ideas that conflict with robust observations have had notoriously difficult times letting go of their preferred conclusions. This has been a recurring theme throughout the history of science, and includes:

  • Fred Hoyle refusing to accept the Big Bang for nearly 40 years after the discovery of the Cosmic Microwave Background,
  • Halton Arp insisting that quasars are not distant objects, despite decades of data demonstrating that their redshifts are not quantized,
  • Hannes Alfven and his later followers insisting that gravitation does not dominate the Universe on large scales, and that plasmas determine the large-scale structure of the Universe, even after countless observations have refuted the idea.

Although science itself may be unbiased, scientists are not. We can fall prey to the same cognitive biases that anyone else can. Once we choose our preferred conclusions, we frequently fool ourselves through the fallacious practice of motivated reasoning.

Schematic diagram of the Universe’s history, highlighting reionization. Before stars or galaxies formed, the Universe was full of light-blocking, neutral atoms. While most of the Universe doesn’t become reionized until 550 million years afterwards, with the first major waves happening at around 250 million years, a few fortunate stars may form just 50-to-100 million years after the Big Bang, and with the right tools, we may reveal the earliest galaxies. (S. G. DJORGOVSKI ET AL., CALTECH DIGITAL MEDIA CENTER)

This is where the famous aphorism that “physics advances one funeral at a time” comes from. The notion was originally put forth by Max Planck with the following statement:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

The big problem that many non-scientists (and even some scientists) will never realize is this: you can always contort your theoretical ideas to force them to be viable, and consistent with what’s been observed. That’s why the key, for any theory, is to make robust predictions ahead of time: before the critical observation or measurement is performed. This way, you can be certain you’re testing your theory, rather than tinkering with parameters after-the-fact.

According to the tired light hypothesis, the number of photons-per-second we receive from each object drops proportional to the square of its distance, while the number of objects we see increases as the square of the distance. Objects should be redder, but should emit a constant number of photons-per-second as a function of distance. In an expanding universe, however, we receive fewer photons-per-second as time goes on because they have to travel greater distances as the Universe expands, and the energy is also reduced by the redshift. Even factoring in galaxy evolution results in a changing surface brightness that’s fainter at great distances, consistent with what we see. (WIKIMEDIA COMMONS USER STIGMATELLA AURANTIACA)

As it turns out, this is exactly how we wound up with the leading cosmological model we have today, in pretty much every regard.

The notion of the expanding Universe was theoretically predicted by Alexander Friedmann in 1922, when he derived what I have called the most important equation in the Universe. The observations of Vesto Slipher, Edwin Hubble and Milton Humason confirmed this only a few years later, leading to the modern notion of the expanding Universe.

According to the original observations of Penzias and Wilson, the galactic plane emitted some astrophysical sources of radiation (center), while a near-perfect, uniform background of radiation existed above and below that plane. The temperature and spectrum of this radiation has now been measured, and the agreement with the Big Bang’s predictions are extraordinary. (NASA / WMAP SCIENCE TEAM)

Many competing explanations for the Universe’s origin then emerged, with the Big Bang having four explicit cornerstones:

  1. the expanding Universe,
  2. the predicted abundances of the light elements, created during the hot, dense, early stage of the Big Bang,
  3. a leftover glow of photons just a few degrees above absolute zero,
  4. and the formation of large-scale structure, with structures that must evolve as we look to greater distances (and hence to earlier times).

All four of these have now been observed, with the latter three occurring after the Big Bang was first proposed. In particular, the discovery of the leftover glow of photons in the mid-1960s was the tipping point. As no other framework can account for these four observations, there are now no viable alternatives to the Big Bang.

The fluctuations in the CMB, the formation of and correlations between large-scale structure, and modern observations of gravitational lensing, among many others, all point towards the same picture: an accelerating Universe full of dark matter and dark energy. Alternatives that offer differing observable predictions must be considered as well, but they must be compared with the full suite of observational evidence out there. (CHRIS BLAKE AND SAM MOORFIELD)

With an expanding, cooling Universe that began from a hot, dense, matter-and-radiation-filled state, all governed by Einstein’s General Relativity, there are a number of possibilities for how the Universe could have unfolded, but not an infinite number. There are relationships between what’s in the Universe and how its expansion rate evolves, and that tremendously constrains what’s possible.

This is the only statement that is unequivocally correct in Ekeberg’s piece.

Once you accept the Big Bang and a Universe governed by General Relativity, there is an enormous suite of evidence that points to the existence of dark matter and dark energy. This is not a new suite, either, but one that’s been mounting since the 1970s. Dark energy’s main competitor fell away some 15 years ago, leaving only a Universe with dark matter and dark energy as a viable cosmology to explain the full suite of evidence.

Constraints on dark energy from three independent sources: supernovae, the CMB and BAO (which are a feature in the Universe’s large-scale structure.) Note that even without supernovae, we’d need dark energy, and that only 1/6th of the matter found can be normal matter; the rest must be dark matter. (SUPERNOVA COSMOLOGY PROJECT, AMANULLAH, ET AL., AP.J. (2010))

That’s the key that’s so often overlooked: you have to examine the full suite of evidence in evaluating the success or failure of your theory or framework. Sure, you can always find individual observations that pose a difficulty for your theory to explain, but that doesn’t mean you can just replace it with something that does successfully explain that one observation.

You have to account for everything, plus the new observation, plus new phenomena that have not yet been observed.

This is the problem with every alternative. Every alternative to the expanding Universe, to the Big Bang, to dark matter, dark energy, or inflation, all fail to even account for whatever’s been already observed, much less the rest of it. That’s why practically every working scientist considers these proposed alternatives to be mere sandboxing, rather than a serious challenge to the mainstream consensus.

The Carina dwarf galaxy, very similar in size, star distribution, and morphology to the Draco dwarf galaxy, exhibits a very different gravitational profile from Draco. This can be cleanly explained with dark matter if it can be heated up by star formation, but not by modified gravity. (ESO/G. BONO & CTIO)

There are indeed galaxies out there without dark matter, but this is predicted by theory. In fact, nearly a decade ago, a prominent contrarian noted the lack of galaxies without dark matter and claimed it falsified the dark matter model. When these galaxies without dark matter were discovered, that same scientist immediately claimed they were consistent with modified gravity. But only dark matter explains the full suite of evidence concerning the Universe.

There is, indeed, a discrepancy between two different classes of measurement of the expansion rate of the Universe. The difference is 9%, and could represent a fundamental error in one set of techniques. More excitingly, it could be a sign that dark energy or some other aspect of the Universe is more complex than our naive assumptions. But dark energy is still necessary either way; the only “crisis” is artificially manufactured.

A plot of the apparent expansion rate (y-axis) vs. distance (x-axis) is consistent with a Universe that expanded faster in the past, but where distant galaxies are accelerating in their recession today. This is a modern version of, extending thousands of times farther than, Hubble’s original work. Note the fact that the points do not form a straight line, indicating the expansion rate’s change over time. The fact that the Universe follows the curve it does is indicative of the presence, and late-time dominance, of dark energy. (NED WRIGHT, BASED ON THE LATEST DATA FROM BETOULE ET AL. (2014))

Finally, there’s cosmic inflation, the phase of the Universe that occurred prior to the hot Big Bang, setting up the initial conditions our Universe was born with. Although it’s often derided by many, inflation was never intended to be the ultimate, final answer, but rather as a framework to solve puzzles that the Big Bang cannot explain and to make new predictions describing the early Universe.

On these accounts, it is spectacularly successful. Inflation:

  1. successfully reproduces all the predictions of the hot Big Bang,
  2. solves the horizon, flatness, and monopole puzzles that plagued the non-inflationary Big Bang,
  3. and made six novel predictions that were distinct from the old-style Big Bang’s, with at least four of them now confirmed.

The quantum fluctuations that occur during inflation get stretched across the Universe, and when inflation ends, they become density fluctuations. This leads, over time, to the large-scale structure in the Universe today, as well as the fluctuations in temperature observed in the CMB. These new predictions are essential for demonstrating the validity of a fine-tuning mechanism. (E. SIEGEL, WITH IMAGES DERIVED FROM ESA/PLANCK AND THE DOE/NASA/ NSF INTERAGENCY TASK FORCE ON CMB RESEARCH)

To say that cosmology has some interesting puzzles is compelling; to say it has big problems is not something that most cosmologists would agree with. Ekeberg discusses the inflationary Big Bang with dark matter and dark energy as follows:

This well-known story is usually taken as a self-evident scientific fact, despite the relative lack of empirical evidence — and despite a steady crop of discrepancies arising with observations of the distant universe.

To argue that there’s a lack of empirical evidence for this completely misunderstands what science is or how science works, in general and specifically in this particular field, where data is abundant and high in quality. To point to “a steady crop of discrepancies” is a disingenuous — and I daresay deliberate — misreading of the evidence, used by Ekeberg to push forth a solipsistic, philosophically empty, anti-science agenda.

Many nearby galaxies, including all the galaxies of the local group (mostly clustered at the extreme left), display a relationship between their mass and velocity dispersion that indicates the presence of dark matter. NGC 1052-DF2 is the first known galaxy that appears to be made of normal matter alone. (DANIELI ET AL. (2019), ARXIV:1901.03711)

We should always be aware of the limitations of and assumptions inherent to any scientific hypothesis we put forth. Every theory has a range of established validity, and a range where we extend our predictions past the known frontiers. A theory is only as good as the verifiable predictions it can make; pushing to new observational or experimental territory is where we must look if we ever hope to supersede our present understanding.

But we mustn’t forget or throw out the existing successes of General Relativity, the expanding Universe, the Big Bang, dark matter, dark energy, or inflation. Going beyond our current theories includes — as a mandatory requirement — encompassing and reproducing their triumphs. Until a robust alternative can reach that threshold, all pronouncements of “big problems” with the prevailing paradigm should be treated for what they are: ideologically-driven diatribes without the requisite scientific merit to back them up.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Cosmology’s Only Big Problems Are Manufactured Misunderstandings was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

On May 6, 1968, Neil Armstrong ejected safely from Lunar Landing Research Vehicle #1 as it began listing beyond recovery. This made Neil Armstrong the first pilot forced to exit the vehicle while in flight. The gusting winds on Earth were a contributing factor, but there were multiple faulty components, including a sensor that failed to detect and warn him of a fuel imbalance. Armstrong emerged uninjured, but the vehicle was destroyed. (NASA HISTORY DIVISION / G. J. MATRANGA, C. W. OTTINGER, AND C. R. JARVIS)

The small step that one man took would never have happened without this narrow escape.

On July 20, 1969, history was made as humanity set foot on the Moon for the first time.

On July 20, 1969, the Apollo 11 astronauts landed on the Moon and began performing the first mission ever to take place with human beings on another world. The year prior, Neil Armstrong, holding the camera here, was almost killed in a test flight accident. (NASA / APOLLO 11)

With his “one giant leap for mankind,” Neil Armstrong achieved one of the most ambitious dreams ever attempted by humans.

Neil Armstrong on the surface of the Moon, where we learned so much about the origin of Earth’s only natural satellite. (NASA / APOLLO 11)

But Armstrong almost didn’t make it, narrowly escaping death the year prior.

Neil Armstrong with his birthday cake in August 1969 in the United States. This was the first birthday ever celebrated by a human being after having walked on the surface of another world. (GETTY)

Softly landing on the Moon, with no horizontal motion and only slight vertical motions, was a tremendous problem facing NASA.

By 1965, NASA scientists had determined what an optimal trajectory would look like for safely landing on the Moon. Only the Lunar Landing Research Vehicle (LLRV), built explicitly for this purpose, was capable of simulating such a trajectory here on Earth. (NASA HISTORY DIVISION / G. J. MATRANGA, C. W. OTTINGER, AND C. R. JARVIS)

There were no computerized guidance systems and no high-resolution maps of the lunar landing site.

Using data from NASA’s Lunar Reconnaissance Orbiter (LRO) and its narrow angle camera (LROC), we can now construct 3D models of the surface of the Moon and simulate any potential landing sites for missions. This was not possible given the technology and data sets available in the 1960s. (NASA / SVS / LROC)

The eventual lunar module pilot would have to navigate the touchdown manually.

From the Command/Service Module, Apollo 9 pilot David Scott photographs the Lunar Module in its landing configuration. Lunar surface probes can be seen extending from the ends of the landing gear foot pads. The preparatory tests of the Lunar Landing Research Vehicle (LLRV) were designed to mimic the conditions that the Apollo Lunar Module would experience on the Moon, with Buzz Aldrin eventually serving as the Lunar Module pilot for Apollo 11. (NASA / DAVID SCOTT)

Armstrong was training in Lunar Landing Research Vehicle #1 on May 6, 1968, when something went horribly awry.

The Lunar Landing Research Vehicle (LLRV) was one of the most important tools that the Apollo astronauts trained on. It was the best opportunity they had to simulate an actual landing on the lunar surface here on Earth. (NASA HISTORY DIVISION / G. J. MATRANGA, C. W. OTTINGER, AND C. R. JARVIS)

During his 22nd LLRV test flight, he lost control.

Earth’s surface gravity is six times as powerful as the Moon’s, meaning that to simulate landing on the Moon, a special vehicle would need to be designed. The Lunar Landing Research Vehicle (LLRV) had a special gimbaled engine, which could maintain effective approximate lunar gravity, enabling the pilot to tilt the vehicle and test its responsiveness under conditions that would simulate landing on the Moon. (NASA HISTORY DIVISION / G. J. MATRANGA, C. W. OTTINGER, AND C. R. JARVIS)

The reserve attitude thrusters, which should have engaged when needed, were non-responsive.

This photograph shows Lunar Landing Research Vehicle #2 (LLRV-2) being moved from Armstrong Flight Research Center for display at the Air Force Test Flight Museum at Edwards Air Force Base. It is almost identical to the vehicle that almost killed Neil Armstrong in 1968. (NASA)

200 feet above the ground, with no noticeable on-board warnings, Armstrong unilaterally decided to eject.

On May 6, 1968, Neil Armstrong was piloting Lunar Landing Research Vehicle #1 when he lost the ability to successfully orient the aircraft. Using his own decision-making power, he ejected from the vehicle (L); four seconds later, the craft struck the ground, where it burst into flame less than a second after impact (R). (NASA)

A loss of helium pressure caused the depletion of the hydrogen peroxide propellant, which in turn caused the reserve attitude thrusters to fail.

Immediately following the crash, Armstrong returned to his desk, continuing his normal work.

The Lunar Module was successfully deployed on its first in-orbit test flight during Apollo 9. Here you can see the landing gear out, demonstrating the potential for landing on the Moon. The return engines have not yet been fired. This mission occurred in February of 1969, nine months after Armstrong’s crash and just four months after the problem that caused his crash was resolved. (NASA / APOLLO 9 ROLL 21/B)

Engineers corrected the problem, with test landings resuming that October.

This is one of the final official appearances of all three Apollo 11 astronauts: Buzz Aldrin, Michael Collins, and Neil Armstrong. If it weren’t for his cool-headed actions and survival during his disastrous 1968 test flight, Neil Armstrong never would have been the first human to set foot on the Moon. (NASA / GETTY IMAGES NORTH AMERICA)

Tonight, wink at the Moon for Neil.

This was the first photo, safely back in the Lunar Module, that was ever taken of Neil Armstrong after his historic first steps on the surface of the Moon. (NASA / APOLLO 11 / BUZZ ALDRIN)

Mostly Mute Monday tells an astronomical or scientific story in images, visuals, and no more than 200 words. Talk less; smile more.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Today Marks The Anniversary Of Neil Armstrong’s Near-Fatal Lunar Landing Vehicle Crash was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Allen Telescope Array is potentially capable of detecting a strong radio signal from Proxima b, or any other star system with strong enough radio transmissions. It has successfully worked in concert with other radio telescopes across extremely long baselines to resolve the event horizon of a black hole: arguably its crowning achievement. (WIKIMEDIA COMMONS / COLBY GUTIERREZ-KRAYBILL)

It’s made up of scores of telescopes at many different sites across the world. But it acts like one giant telescope. Here’s how.

If you want to observe the Universe more deeply and at higher resolution than ever before, there’s one tactic that everyone agrees is ideal: build as big a telescope as possible. But the highest resolution image we’ve ever constructed in astronomy doesn’t come from the biggest telescope, but rather from an enormous array of modestly-sized telescopes: the Event Horizon Telescope. How is that possible? That’s what our Ask Ethan questioner for this week, Dieter, wants to know, stating:

I’m having difficulty understanding why the EHT array is considered as ONE telescope (which has the diameter of the earth).
When you consider the EHT as ONE radio telescope, I do understand that the angular resolution is very high due to the wavelength of the incoming signal and earth’s diameter. I also understand that time syncing is critical.
But it would help very much to explain why the diameter of the EHT is considered as ONE telescope, considering there are about 10 individual telescopes in the array.

Constructing an image of the black hole at the center of M87 is one of the most remarkable achievements we’ve ever made. Here’s what made it possible.

The brightness distance relationship, and how the flux from a light source falls off as one over the distance squared. The Earth has the temperature that it does because of its distance from the Sun, which determines how much energy-per-unit-area is incident on our planet. Distant stars or galaxies have the apparent brightness they do because of this relationship, which is demanded by energy conservation. Note that the light also spreads out in area as it leaves the source. (E. SIEGEL / BEYOND THE GALAXY)

The first thing you need to understand is how light works. When you have any light-emitting object in the Universe, the light it emits will spread out in a sphere upon leaving the source. If all you had was a photo-detector that was a single point, you could still detect that distant, light-emitting object.

But you wouldn’t be able to resolve it.

When light (i.e., a photon) strikes your point-like detector, you can register that the light arrived; you can measure the light’s energy and wavelength; you can know what direction the light came from. But you wouldn’t be able to know anything about that object’s physical properties. You wouldn’t know its size, shape, physical extent, or whether different parts were different colors or brightnesses. This is because you’re only receiving information at a single point.

Nebula NGC 246 is better known as the Skull Nebula, for the presence of its two glowing eyes. The central eye is actually a pair of binary stars, and the smaller, fainter one is responsible for the nebula itself, as it blows off its outer layers. It’s only 1,600 light-years away, in the constellation of Cetus. Seeing this as more than a single object requires the ability to resolve these features, dependent on the size of the telescope and the number of wavelengths of light that fit across its primary mirror. (GEMINI SOUTH GMOS, TRAVIS RECTOR (UNIV. ALASKA))

What would it take to know whether you were looking at a single point of light, such as a star like our Sun, or multiple points of light, like you’d find in a binary star system? For that, you’d need to receive light at multiple points. Instead of a point-like detector, you could have a dish-like detector, like the primary mirror on a reflecting telescope.

When the light comes in, it’s not striking a point anymore, but rather an area. The light that had spread out in a sphere now gets reflected off of the mirror and focused to a point. And light that comes from two different sources, even if they’re close together, will be focused to two different locations.

Any reflecting telescope is based on the principle of reflecting incoming light rays via a large primary mirror which focuses that light to a point, where it’s then either broken down into data and recorded or used to construct an image. This specific diagram illustrates the light-paths for a Herschel-Lomonosov telescope system. Note that two distinct sources will have their light focused to two distinct locations (blue and green paths), but only if the telescope has sufficient capabilities. (WIKIMEDIA COMMONS USER EUDJINNIUS)

If your telescope mirror is large enough compared to the separation of the two objects, and your optics are good enough, you’ll be able to resolve them. If you build your apparatus right, you’ll be able to tell that there are multiple objects. The two sources of light will appear to be distinct from one another. Technically, there’s a relationship between three quantities:

  • the angular resolution you can achieve,
  • the diameter of your mirror,
  • and the wavelength of light you’re looking in.

If your sources are closer together, or your telescope mirror is smaller, or you look using a longer wavelength of light, it becomes more and more challenging to resolve whatever you’re looking at. It makes it harder to resolve whether there are multiple objects or not, or whether the object you’re viewing has bright-and-dark features. If your resolution is insufficient, everything appears as nothing more than a blurry, unresolved single spot.
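To make that relationship concrete, here is a minimal sketch (in Python) using the standard Rayleigh criterion, θ ≈ 1.22 λ/D; the wavelength and dish sizes below are illustrative assumptions rather than any observatory’s official specifications:

    import math

    def rayleigh_limit_microarcsec(wavelength_m, diameter_m):
        """Diffraction-limited angular resolution (Rayleigh criterion), in microarcseconds."""
        theta_rad = 1.22 * wavelength_m / diameter_m
        return theta_rad * (180 / math.pi) * 3600 * 1e6  # radians -> arcsec -> microarcsec

    # A single 10-metre dish observing 1.3 mm radio waves:
    print(rayleigh_limit_microarcsec(1.3e-3, 10.0))      # ~33 arcseconds (about 3.3e7 microarcsec)

    # An Earth-spanning "dish" roughly 12,700 km across, at the same wavelength:
    print(rayleigh_limit_microarcsec(1.3e-3, 1.27e7))    # ~26 microarcseconds

With perfect optics, stretching the effective diameter from a single dish to the diameter of the Earth improves the resolution by a factor of about a million, landing in the same ballpark as the ~22.5 microarcsecond figure quoted later for the released M87 image.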

The limits of resolution are determined by three factors: the diameter of your telescope, the wavelength of light you’re viewing in, and the quality of your optics. If you have perfect optics, you can resolve all the way down to the Rayleigh limit, which grants you the highest resolution allowed by physics. (SPENCER BLIVEN / PUBLIC DOMAIN)

So that’s the basics of how any large, single-dish telescope works. The light comes in from the source, with every point in space — even different points originating from the same object — emitting its own light with its own unique properties. The resolution is determined by the number of wavelengths of light that can fit across our primary mirror.

If our detectors are sensitive enough, we’ll be able to resolve all sorts of features on an object. Hot-and-cold regions of a star, like sunspots, can appear. We can make out features like volcanoes, geysers, icecaps and basins on planets and moons. And the extent of light-emitting gas or plasma, along with their temperatures and densities, can be imaged as well. It’s a fantastic achievement that only depends on the physical and optical properties of your telescope.

The second-largest black hole as seen from Earth, the one at the center of the galaxy M87, is shown in three views here. At the top is optical from Hubble, at the lower-left is radio from NRAO, and at the lower-right is X-ray from Chandra. These differing views have different resolutions dependent on the optical sensitivity, wavelength of light used, and size of the telescope mirrors used to observe them. The Chandra X-ray observations provide exquisite resolution despite having an effective 8-inch (20 cm) diameter mirror, owing to the extremely short-wavelength nature of the X-rays it observes. (TOP, OPTICAL, HUBBLE SPACE TELESCOPE / NASA / WIKISKY; LOWER LEFT, RADIO, NRAO / VERY LARGE ARRAY (VLA); LOWER RIGHT, X-RAY, NASA / CHANDRA X-RAY TELESCOPE)

But maybe you don’t need the entire telescope. Building a giant telescope is expensive and resource intensive, and building one so large actually serves two purposes.

  1. The larger your telescope, the better your resolution, based on the number of wavelengths of light that fit across your primary mirror.
  2. The larger your telescope’s collecting area, the more light you can gather, which means you can observe fainter objects and finer details than you could with a lower-area telescope.

If you took your large telescope mirror and started darkening out some spots — as though you were applying a mask to your mirror — you’d no longer be able to receive light from those locations. As a result, your sensitivity to faint objects would decrease, in proportion to the surface area (light-gathering area) that remains. But the resolution would still be set by the maximum separation between the unmasked portions of the mirror.

Meteor, photographed over the Atacama Large Millimeter/sub-millimeter Array, 2014. ALMA is perhaps the most advanced and most complex array of radio telescopes in the world, is capable of imaging unprecedented details in protoplanetary disks, and is also an integral part of the Event Horizon Telescope. (ESO/C. MALIN)

This is the principle on which arrays of telescopes are based. There are many sources out there, particularly in the radio portion of the spectrum, that are extremely bright, so you don’t need all that collecting area that comes with building an enormous, single dish.

Instead, you can build an array of dishes. Because the light from a distant source will spread out, you want to collect light over as large an area as possible. You don’t need to invest all your resources in constructing an enormous dish with supreme light-gathering power, but you still need that same superior resolution. And that’s where the idea of using a giant array of radio telescopes comes from. With a linked array of telescopes all over the world, we can resolve some of the radio-brightest but smallest angular-size objects out there.

This diagram shows the location of all of the telescopes and telescope arrays used in the 2017 Event Horizon Telescope observations of M87. Only the South Pole Telescope was unable to image M87, as it is located on the wrong part of the Earth to ever view that galaxy’s center. Every one of these locations is outfitted with an atomic clock, among other pieces of equipment. (NRAO)

Functionally, there is no difference between thinking about the following two scenarios.

  1. The Event Horizon Telescope is a single mirror with a lot of masking tape over portions of it. The light gets collected and focused from all these disparate locations across the Earth into a single point, and then synthesized together into an image that reveals the differing brightnesses and properties of your target in space, up to your maximal resolution.
  2. The Event Horizon Telescope is itself an array of many different individual telescopes and individual telescope arrays. The light gets collected, timestamped with an atomic clock (for syncing purposes), and recorded as data at each individual site. That data is then stitched-and-processed together appropriately to create an image that reveals the brightnesses and properties of whatever you’re looking at in space.

The only difference is in the techniques you have to use to make it happen, but that’s why we have the science of VLBI: very long-baseline interferometry.

In VLBI, the radio signals are recorded at each of the individual telescopes before being shipped to a central location. Each data point that’s received is stamped with an extremely accurate, high-frequency atomic clock alongside the data in order to help scientists get the synchronization of the observations correct. (PUBLIC DOMAIN / WIKIPEDIA USER RNT20)
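As a toy illustration of why that precise synchronization matters, the sketch below (in Python, a drastic simplification of a real VLBI correlator, with invented numbers) recovers the relative arrival-time offset between two stations by cross-correlating their recorded data streams:

    import numpy as np

    rng = np.random.default_rng(42)
    n_samples, true_offset = 4096, 37           # arbitrary illustrative values

    sky_signal = rng.normal(size=n_samples)     # the signal both stations record
    station_a = sky_signal + 0.1 * rng.normal(size=n_samples)
    station_b = np.roll(sky_signal, true_offset) + 0.1 * rng.normal(size=n_samples)

    # Cross-correlate the two recordings and find the lag where they match best
    correlation = np.correlate(station_b, station_a, mode="full")
    recovered_offset = int(correlation.argmax()) - (n_samples - 1)
    print(recovered_offset)                     # 37: the offset we built in

In the real experiment, the atomic-clock timestamps provide the absolute timing reference, and the correlation is performed over enormous volumes of recorded data rather than a few thousand samples.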

You might immediately start thinking of wild ideas, like launching a radio telescope into deep space and using that, networked with the telescopes on Earth, to extend your baseline. It’s a great plan, but you must understand that there’s a reason we didn’t just build the Event Horizon Telescope with two well-separated sites: we want that incredible resolution in all directions.

We want to get full two-dimensional coverage of the sky, which means ideally we’d have our telescopes arranged in a large ring to get those enormous separations. That’s not feasible, of course, on a world with continents and oceans and cities and nations and other borders, boundaries and constraints. But with eight independent sites across the world (seven of which were useful for the M87 image), we were able to do incredibly well.

The Event Horizon Telescope’s first released image achieved resolutions of 22.5 microarcseconds, enabling the array to resolve the event horizon of the black hole at the center of M87. A single-dish telescope would have to be 12,000 km in diameter to achieve this same sharpness. Note the differing appearances between the April 5/6 images and the April 10/11 images, which show that the features around the black hole are changing over time. This helps demonstrate the importance of syncing the different observations, rather than just time-averaging them. (EVENT HORIZON TELESCOPE COLLABORATION)

Right now, the Event Horizon Telescope is limited to Earth, limited to the dishes that are presently networked together, and limited by the particular wavelengths it can measure. If it could be modified to observe at shorter wavelengths, and could overcome the atmospheric opacity at those wavelengths, we could achieve higher resolutions with the same equipment. In principle, we might be able to see features three-to-five times as sharp without needing a single new dish.

By making these simultaneous observations all across the world, the Event Horizon Telescope really does behave as a single telescope. It only has the light-gathering power of the individual dishes added together, but it can achieve the resolution set by the separation between the dishes, along the direction in which they are separated.

By spanning the diameter of Earth with many different telescopes (or telescope arrays) simultaneously, we were able to obtain the data necessary to resolve the event horizon.

The Event Horizon Telescope behaves like a single telescope because of the incredible advances in the techniques we use and the increases in computational power and novel algorithms that enable us to synthesize this data into a single image. It’s not an easy feat, and took a team of over 100 scientists working for many years to make it happen.

But optically, the principles are the same as using a single mirror. We have light coming in from different spots on a single source, all spreading out, and all arriving at the various telescopes in the array. It’s just as though they’re arriving at different locations along an extremely large mirror. The key is in how we synthesize that data together, and use it to reconstruct an image of what’s actually occurring.

Now that the Event Horizon Telescope team has successfully done exactly that, it’s time to set our sights on the next target: learning as much as we can about every black hole we’re capable of viewing. Like all of you, I can hardly wait.

Send in your Ask Ethan questions to startswithabang at gmail dot com!

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Ask Ethan: How Does The Event Horizon Telescope Act Like One Giant Mirror? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The expanding Universe, full of galaxies and the complex structure we observe today, arose from a smaller, hotter, denser, more uniform state. It took thousands of scientists working for hundreds of years for us to arrive at this picture, and yet the lack of a consensus on what the expansion rate actually is tells us that either something is dreadfully wrong, we have an unidentified error somewhere, or there’s a new scientific revolution just on the horizon. (C. FAUCHER-GIGUÈRE, A. LIDZ, AND L. HERNQUIST, SCIENCE 319, 5859 (47))

How fast is the Universe expanding? The results might be pointing to something incredible.

If you want to know how something in the Universe works, all you need to do is figure out how some measurable quantity will give you the necessary information, go out and measure it, and draw your conclusions. Sure, there will be biases and errors, along with other confounding factors, and they might lead you astray if you’re not careful. The antidote for that? Make as many independent measurements as you can, using as many different techniques as you can, to determine those natural properties as robustly as possible.

If you’re doing everything right, every one of your methods will converge on the same answer, and there will be no ambiguity. If one measurement or technique is off, the others will point you in the right direction. But when we try to apply this technique to the expanding Universe, a puzzle arises: we get one of two answers, and they’re not compatible with each other. It’s cosmology’s biggest conundrum, and it might be just the clue we need to unlock the biggest mysteries about our existence.

The redshift-distance relationship for distant galaxies. The points that don’t fall exactly on the line owe the slight mismatch to the differences in peculiar velocities, which offer only slight deviations from the overall observed expansion. The original data from Edwin Hubble, first used to show the Universe was expanding, all fit in the small red box at the lower-left. (ROBERT KIRSHNER, PNAS, 101, 1, 8–13 (2004))

We’ve known since the 1920s that the Universe is expanding, with the rate of expansion known as the Hubble constant. Ever since, it’s been a quest for the generations to determine “by how much?”

Early on, there was only one class of technique: the cosmic distance ladder. This technique was incredibly straightforward, and involved just four steps.

  1. Choose a class of object whose properties are intrinsically known, where if you measure something observable about it (like its period of brightness fluctuation), you know something inherent to it (like its intrinsic brightness).
  2. Measure the observable quantity, and determine what its intrinsic brightness is.
  3. Then measure the apparent brightness, and use what you know about cosmic distances in an expanding Universe to determine how far away it must be.
  4. Finally, measure the redshift of the object in question.

The farther a galaxy is, the faster it expands away from us, and the more its light appears redshifted. A galaxy moving with the expanding Universe will be an even greater number of light years away, today, than the number of years (multiplied by the speed of light) that it took the light emitted from it to reach us. But how fast the Universe is expanding is something that astronomers using different techniques cannot agree on. (LARRY MCNISH OF RASC CALGARY CENTER)

The redshift is what ties it all together. As the Universe expands, any light traveling through it will also stretch. Light, remember, is a wave, and has a specific wavelength. That wavelength determines what its energy is, and every atom and molecule in the Universe has a specific set of emission and absorption lines that only occur at specific wavelengths. If you can measure at what wavelength those specific spectral lines appear in a distant galaxy, you can determine how much the Universe has expanded from the time that light left the object until it arrived at your eyes.

Combine the redshift and the distance for a variety of objects all throughout the Universe, and you can figure out how fast it’s expanding in all directions, as well as how the expansion rate has changed over time.
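A minimal sketch of that logic (in Python), using invented numbers for a handful of hypothetical standard candles, a Type Ia supernova-like absolute magnitude, and the low-redshift approximation v ≈ cz, might look like this:

    import numpy as np

    C_KM_S = 299_792.458                       # speed of light in km/s

    def distance_mpc(m_apparent, m_absolute):
        """Distance modulus m - M = 5*log10(d / 10 pc), returned in megaparsecs."""
        d_parsecs = 10 ** ((m_apparent - m_absolute) / 5 + 1)
        return d_parsecs / 1e6

    M_ABSOLUTE = -19.3                         # roughly a Type Ia supernova at peak
    # (apparent magnitude, redshift) pairs -- invented purely for illustration
    candles = [(14.2, 0.0117), (15.7, 0.0234), (17.2, 0.0467)]

    distances = np.array([distance_mpc(m, M_ABSOLUTE) for m, _ in candles])
    velocities = np.array([C_KM_S * z for _, z in candles])   # v ~ cz for z << 1

    # Hubble's law, v = H0 * d, fit as a one-parameter least-squares problem:
    H0 = np.sum(velocities * distances) / np.sum(distances ** 2)
    print(f"H0 ~ {H0:.0f} km/s/Mpc")           # ~70 with these invented numbers

The real analysis must propagate calibration uncertainties up every rung of the ladder, but the core idea is exactly this: redshifts plus independently determined distances yield the expansion rate.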

The history of the expanding Universe, including what it’s composed of at present. It is only by measuring how light redshifts as it travels through the expanding Universe that we can come to understand it as we do, and that requires a large series of independent measurements. (ESA AND THE PLANCK COLLABORATION (MAIN), WITH MODIFICATIONS BY E. SIEGEL; NASA / WIKIMEDIA COMMONS USER 老陳 (INSET))

All throughout the 20th century, scientists used this technique to try and determine as much as possible about our cosmic history. Cosmology — the scientific study of what the Universe is made of, where it came from, how it came to be the way it is today, and what its future holds — was derided by many as a quest for two parameters: the current expansion rate and how the expansion rate evolved over time. Until the 1990s, scientists couldn’t even agree on the first of these.

They were all using the same technique, but made different assumptions. Some groups used different types of astronomical objects from one another; others used different instruments with different measurement errors. Some classes of object turned out to be more complicated than we originally thought they’d be. But many problems still showed up.

Standard candles (L) and standard rulers (R) are two different techniques astronomers use to measure the expansion of space at various times/distances in the past. Based on how quantities like luminosity or angular size change with distance, we can infer the expansion history of the Universe. Using the candle method is part of the distance ladder, yielding 73 km/s/Mpc. Using the ruler is part of the early signal method, yielding 67 km/s/Mpc. (NASA / JPL-CALTECH)

If the Universe were expanding too quickly, there wouldn’t have been enough time to form planet Earth. If we can find the oldest stars in our galaxy, we know the Universe has to be at least as old as the stars within it. And if the expansion rate evolved over time, because there was something other than matter or radiation in it — or a different amount of matter than we’d assumed — that would show up in how the expansion rate changed over time.

Resolving these early controversies was the primary scientific motivation for building the Hubble Space Telescope. Its key project was to make this measurement, and it was tremendously successful. The rate it obtained was 72 km/s/Mpc, with just a 10% uncertainty. This result, published in 2001, resolved a controversy as old as Hubble’s law itself. Alongside the discoveries of dark matter and dark energy, it seemed to give us a fully accurate and self-consistent picture of the Universe.

The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each “step” carries along its own uncertainties, especially the Cepheid variable and supernovae steps; it also would be biased towards higher or lower values if we lived in an underdense or overdense region. There are enough independent methods used to construct the cosmic distance ladder that we can no longer reasonably fault one ‘rung’ on the ladder as the cause of our mismatch between different methods. (NASA, ESA, A. FEILD (STSCI), AND A. RIESS (STSCI/JHU))

The distance ladder approach has grown far more sophisticated over the intervening time. There are now an incredibly large number of independent ways to measure the expansion history of the Universe:

  • using distant gravitational lenses,
  • using supernova data,
  • using rotational and dispersion properties of distant galaxies,
  • or using surface brightness fluctuations from face-on spirals,

and they all yield the same result. Regardless of whether you calibrate them with Cepheid variable stars, RR Lyrae stars, or red giant stars about to undergo helium fusion, you get the same value: ~73 km/s/Mpc, with uncertainties of just 2–3%.

The Variable Star RS Puppis, with its light echoes shining through the interstellar clouds. Variable stars come in many varieties; one of them, Cepheid variables, can be measured both within our own galaxy and in galaxies up to 50–60 million light years away. This enables us to extrapolate distances from our own galaxy to far more distant ones in the Universe. Other classes of individual star, such as a star at the tip of the AGB or an RR Lyrae variable, can be used instead of Cepheids, yielding similar results and the same cosmic conundrum over the expansion rate. (NASA, ESA, AND THE HUBBLE HERITAGE TEAM)

It would be a tremendous victory for cosmology, except for one problem. It’s now 2019, and there’s a second way to measure the expansion rate of the Universe. Instead of looking at distant objects and measuring how the light they’ve emitted has evolved, we can use relics from the earliest stages of the Big Bang. When we do, we get values of ~67 km/s/Mpc, with a claimed uncertainty of just 1–2%. These two numbers differ from one another by about 9%, and the uncertainties do not overlap.
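
To make the size of that mismatch concrete, here is a minimal Python sketch, not either team’s actual analysis, that treats the two quoted values as independent Gaussian measurements with assumed percent-level uncertainties and computes the fractional gap along with a rough tension in standard deviations.

```python
# Minimal sketch: quantify the mismatch between the two quoted expansion rates.
# The central values come from the article; the uncertainties below are assumed
# percent-level figures, treated as Gaussian and independent for illustration.

h0_late, err_late = 73.0, 73.0 * 0.025    # distance-ladder value, ~2.5% uncertainty (assumed)
h0_early, err_early = 67.0, 67.0 * 0.015  # early-relic value, ~1.5% uncertainty (assumed)

fractional_gap = (h0_late - h0_early) / h0_early      # roughly 0.09, i.e. ~9%
combined_sigma = (err_late**2 + err_early**2) ** 0.5  # errors added in quadrature
tension = (h0_late - h0_early) / combined_sigma       # rough number of standard deviations

print(f"Fractional gap: {fractional_gap:.1%}")
print(f"Tension: ~{tension:.1f} sigma")
```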

Modern measurement tensions from the distance ladder (red) with early signal data from the CMB and BAO (blue) shown for contrast. It is plausible that the early signal method is correct and there’s a fundamental flaw with the distance ladder; it’s plausible that there’s a small-scale error biasing the early signal method and the distance ladder is correct, or that both groups are right and some form of new physics (shown at top) is the culprit. But right now, we cannot be sure. (ADAM RIESS (PRIVATE COMMUNICATION))

This time, however, things are different. We can no longer expect that one group will be right and the other will be wrong. Nor can we expect that the answer will be somewhere in the middle, and that both groups are making some sort of error in their assumptions. The reason we can’t count on this is that there are too many independent lines of evidence. If we try to explain one measurement with an error, it will contradict another measurement that’s already been made.

The total amount of stuff that’s in the Universe is what determines how the Universe expands over time. Einstein’s General Relativity ties the energy content of the Universe, the expansion rate, and the overall curvature together. If the Universe expands too quickly, that implies that there’s less matter and more dark energy in it, and that will conflict with observations.

Before Planck, the best-fit to the data indicated a Hubble parameter of approximately 71 km/s/Mpc, but a value of approximately 69 or above would now be too great for both the dark matter density (x-axis) we’ve seen via other means and the scalar spectral index (right side of the y-axis) that we require for the large-scale structure of the Universe to make sense. (P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION (2015))

For example, we know that the total amount of matter in the Universe has to be around 30% of the critical density, as seen from the large-scale structure of the Universe, galaxy clustering, and many other sources. We also see that the scalar spectral index — a parameter that tells us how gravitation will form bound structures on small versus large scales — has to be slightly less than 1.

If the expansion rate is too high, you not only get a Universe with too little matter and too high a scalar spectral index to agree with the Universe we have, but also a Universe that’s too young: 12.5 billion years old instead of 13.8 billion years old. Since we live in a galaxy containing stars that have been identified as more than 13 billion years old, this would create an enormous conundrum: one that cannot be reconciled.
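
To see why a faster expansion forces a younger Universe, here is a rough Python sketch that integrates the Friedmann equation for a flat Universe containing only matter and dark energy. The density parameters are representative assumptions, not the exact best-fit values from either group, and radiation is ignored.

```python
# Rough illustration: the age of a flat Universe is t0 = integral of da / (a * H(a)),
# with H(a) = H0 * sqrt(Omega_m / a^3 + Omega_lambda). Holding the matter density
# fixed, a larger H0 gives a younger Universe. Density values are assumed.
from scipy.integrate import quad

KM_PER_MPC = 3.086e19     # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16    # seconds in one billion years

def age_in_gyr(h0_km_s_mpc, omega_m=0.32, omega_lambda=0.68):
    h0 = h0_km_s_mpc / KM_PER_MPC   # convert H0 from km/s/Mpc to 1/s
    integrand = lambda a: 1.0 / (a * h0 * (omega_m / a**3 + omega_lambda) ** 0.5)
    t0_seconds, _ = quad(integrand, 1e-8, 1.0)
    return t0_seconds / SEC_PER_GYR

print(age_in_gyr(67.0))  # ~13.8 billion years
print(age_in_gyr(73.0))  # ~12.7 billion years: younger than the oldest known stars
```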

Located around 4,140 light-years away in the galactic halo, SDSS J102915+172927 is an ancient star that contains just 1/20,000th the heavy elements the Sun possesses, and should be over 13 billion years old: one of the oldest in the Universe, and having possibly formed before even the Milky Way. The existence of stars like this informs us that the Universe cannot have properties that lead to an age younger than the stars within it. (ESO, DIGITIZED SKY SURVEY 2)

But perhaps no one is wrong. Perhaps the early relics point to a true set of facts about the Universe:

  • it is 13.8 billion years old,
  • it does have roughly a 70%/25%/5% ratio of dark energy to dark matter to normal matter,
  • it does appear to be consistent with an expansion rate that’s on the low end of 67 km/s/Mpc.

And perhaps the distance ladder also points to a true set of facts about the Universe, where it’s expanding at a larger rate today on cosmically nearby scales.

Although it sounds bizarre, both groups could be correct. The reconciliation could come from a third option that most people aren’t yet willing to consider. Instead of the distance ladder group being wrong or the early relics group being wrong, perhaps our assumptions about the laws of physics or the nature of the Universe are wrong. In other words, perhaps we’re not dealing with a controversy; perhaps what we’re seeing is a clue pointing to new physics.

A doubly-lensed quasar, like the one shown here, is caused by a gravitational lens. If the time-delay of the multiple images can be understood, it may be possible to reconstruct an expansion rate for the Universe at the distance of the quasar in question. The earliest results now show a total of four lensed quasar systems, providing an estimate for the expansion rate consistent with the distance ladder group. (NASA HUBBLE SPACE TELESCOPE, TOMMASO TREU/UCLA, AND BIRRER ET AL)

It is possible that the ways we measure the expansion rate of the Universe are actually revealing something novel about the nature of the Universe itself. Something about the Universe could be changing with time, which would be yet another explanation for why these two different classes of technique could yield different results for the Universe’s expansion history. Some options include:

  • our local region of the Universe has unusual properties compared to the average (which is already disfavored),
  • dark energy is changing in an unexpected fashion over time,
  • gravity behaves differently than we’ve anticipated on cosmic scales,
  • or there is a new type of field or force permeating the Universe.

The option of evolving dark energy is of particular interest and importance, as this is exactly what NASA’s future flagship mission for astrophysics, WFIRST, is being explicitly designed to measure.

The viewing area of Hubble (top left) as compared to the area that WFIRST will be able to view, at the same depth, in the same amount of time. The wide-field view of WFIRST will allow us to capture a greater number of distant supernovae than ever before, and will enable us to perform deep, wide surveys of galaxies on cosmic scales never probed before. It will bring a revolution in science, regardless of what it finds. (NASA / GODDARD / WFIRST)

Right now, we say that dark energy is consistent with a cosmological constant. What this means is that, as the Universe expands, dark energy’s density remains constant, rather than dropping the way matter’s density does. Dark energy could also strengthen over time, or it could change in behavior: pushing space inwards or outwards by different amounts.

Our best constraints on this today, in a pre-WFIRST world, show that dark energy is consistent with a cosmological constant to about the 10% level. With WFIRST, we’ll be able to measure any departures down to the 1% level: enough to test whether evolving dark energy holds the answer to the expanding Universe controversy. Until we have that answer, all we can do is continue to refine our best measurements, and look at the full suite of evidence for clues as to what the solution might be.
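
As a concrete picture of what a departure from a cosmological constant would look like, the toy calculation below uses the standard scaling of dark energy’s density with the scale factor a, namely density ∝ a^(-3(1+w)). Here w = -1 corresponds to a cosmological constant, and a value like w = -0.9 represents roughly the 10%-level departure that current constraints still allow; this is an illustration of the scaling, not WFIRST’s actual analysis pipeline.

```python
# Toy calculation: how dark energy's density would evolve for different values
# of the equation-of-state parameter w, using rho proportional to a^(-3(1+w)).
# w = -1 is a cosmological constant (density stays constant as space expands).

def dark_energy_density(a, w, rho_today=1.0):
    """Dark-energy density relative to today, at scale factor a (a = 1 today)."""
    return rho_today * a ** (-3.0 * (1.0 + w))

for w in (-1.0, -0.9, -1.1):
    # a = 0.5 corresponds to the Universe at half its present scale
    print(f"w = {w:+.1f}: density at a = 0.5 is {dark_energy_density(0.5, w):.2f}x today's")
```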

While matter (both normal and dark) and radiation become less dense as the Universe expands owing to its increasing volume, dark energy is a form of energy inherent to space itself. As new space gets created in the expanding Universe, the dark energy density remains constant. If dark energy changes over time, we could discover not only a possible solution to this conundrum concerning the expanding Universe, but a revolutionary new insight concerning the nature of existence. (E. SIEGEL / BEYOND THE GALAXY)

This is not some fringe idea, where a few contrarian scientists are overemphasizing a small difference in the data. If both groups are correct — and no one can find a flaw in what either one has done — it might be the first clue we have in taking our next great leap in understanding the Universe. Nobel Laureate Adam Riess, perhaps the most prominent figure presently researching the cosmic distance ladder, was kind enough to record a podcast with me, discussing exactly what all of this might mean for the future of cosmology.

It’s possible that, somewhere along the way, we have made a mistake. It’s possible that when we identify it, everything will fall into place just as it should, and there won’t be a controversy or a conundrum any longer. But it’s also possible that the mistake lies in our assumptions about the simplicity of the Universe, and that this discrepancy will pave the way to a deeper understanding of our fundamental cosmic truths.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Cosmology’s Biggest Conundrum Is A Clue, Not A Controversy was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The MMR (measles, mumps, and rubella) vaccine, along with the MMR-V (which adds varicella, better known as chickenpox), is among the most important vaccinations a young child can receive. Failing to achieve a sufficient vaccination rate among the general population can lead to disease outbreaks that were entirely preventable. (GETTY)
Nothing is 100% safe or 100% effective, but vaccine-hesitancy is even worse.

It’s one of the most important decisions you’ll ever make in your life: do I vaccinate my child, and do I do it according to the CDC’s recommended timetable? There’s a lot of information out there from groups that both encourage and discourage vaccination. While some of what’s out there is downright false, it truly is a complex issue.

On one hand, vaccines are truly a marvelous defense against a wide variety of infectious diseases. Afflictions that would sicken, injure, blind, paralyze, or even kill millions of children a year worldwide could be — and in some cases, have been — effectively eradicated in humans. On the other hand, no vaccine can be 100% safe or effective, and many parents have nightmare stories about what happened to their child almost immediately after having a vaccine administered. This is what every parent should know.

Our fears about the safety or dangers of vaccines should be balanced not only with information about the dangers of the disease, but the possibility of spreading the disease to individuals that cannot survive the symptoms. (DAVE HAYGARTH/FLICKR)

The world is a dangerous place, and it makes sense to want to do everything you can to protect your loved ones from any potential harm that may come to them. Similarly, we cannot simply make the decisions that are in our own self-interest, or our society will fall apart. It cannot solely be about you. When it comes to vaccinating your kids, these two opposing fears collide, and it can be difficult to know what the right thing to do is.

There are a few people who cannot be vaccinated for medical reasons: they may have allergies to vaccine ingredients; they may have compromised immune systems; they may be too young or ill. We know we cannot leave this population behind, either. In the battle against preventable illness and disease, we have to do our best to protect everyone.

Sgt. Sarah Ellis of the U.S. Air Force, an officer in charge of allergy immunizations, measures the swelling of a reaction on Wildmer Santiago’s arm. Allergy immunizations slowly build up the body’s tolerance to certain organisms, which helps prevent reactions. (KYLE GESE / US MILITARY)

Infectious diseases rarely go away altogether, as there’s no reliable biological way to drive every cell carrying such a disease to extinction. But vaccines offer the next best possibility: to drive the number of cases of that disease in humans to zero, effectively eradicating it.

In the absence of any type of immunity, a contagious, infectious illness will pass from person to person with a high probability. If the symptoms of the disease are life-threatening in any way, there will be a massive loss of life accompanying the spread of this disease. Even diseases with tame reputations — such as measles, whooping cough, or chickenpox — have historically resulted in anywhere from scores to thousands of deaths per year. Yes, even chickenpox used to kill around 100 people per year before the vaccine came out.

Chickenpox rashes typically begin on the face and trunk of the body and spread from there. Chickenpox is caused by varicella, which used to kill an average of 100 Americans per year before the vaccine was introduced. (GETTY)

But vaccines can change all of that. They are the number one tool in humanity’s arsenal in the war against preventable disease. If the immunity rate against a disease is greater than 95%, science has shown that a single infected person is unlikely to spread that disease to many others. Outbreaks can be suppressed or even eliminated, but only if 95% or more of the population is immune to it.

This is why it’s so vital to have the vaccination rate for every vaccine-preventable disease be as high as possible. At 97% or 99%, those who cannot be vaccinated are almost guaranteed to not be infected; every vaccine not only protects the vaccinated, but the unvaccinated too. But at rates of 91%, 85%, or 76% (as was the case in Clark County, WA), outbreaks will certainly occur.
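
Where does a threshold like 95% come from? A standard epidemiological rule of thumb says that sustained spread stops once the immune fraction of the population exceeds 1 - 1/R0, where R0 is the average number of people a single infected person would otherwise infect. The sketch below applies that rule to commonly cited, assumed R0 values; it illustrates the principle rather than reproducing a calculation from this article.

```python
# Back-of-the-envelope sketch of the herd-immunity threshold, 1 - 1/R0.
# The R0 values below are commonly cited approximate figures, assumed here
# purely for illustration; real thresholds vary by disease and setting.

def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to halt sustained spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15), ("whooping cough", 14), ("mumps", 7)]:
    print(f"{disease}: roughly {herd_immunity_threshold(r0):.0%} of people must be immune")
```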

This map shows a county-by-county breakdown of opt-out vaccination rates in the states that allow non-medical vaccine exemptions. Once the opt-out rate rises above about 5%, the likelihood of an outbreak explodes. (J. K. OLIVE, P. J. HOTEZ, A. DAMANIA, M. S. NOLAN (2018) PLOS MEDICINE)

The population who suffers most are typically those who cannot defend themselves. Individuals with compromised immune systems, vaccinated individuals whose protection has worn off (which happens at random, and can be partially countered by booster shots), and infants who are too young to be vaccinated are the highest-risk groups.

Whooping cough, a disease once on the brink of eradication, is infecting tens of thousands annually. The worst injuries include brain inflammation (leading to permanent damage) and fatalities, largely among infants who are too young to receive the vaccination. If you have a newborn in your house and you allow unvaccinated individuals to come near your baby, you are literally risking its life. Numerous children under the age of 1 die every year because they caught a vaccine-preventable disease from an unvaccinated sibling or a sibling’s friend.

Dr. Andrew Terranella (EIS, 2010) processes blood samples during a December, 2010, epi-aid investigation of whooping cough in central Ohio. Despite widespread vaccination, pertussis has persisted in vaccinated populations, as immunity wanes and the bacterium that causes it, Bordetella pertussis, mutates over time. Low vaccination rates have contributed to the rise in whooping cough outbreaks over this past decade. (NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH (NIOSH))

But there’s a dark truth that we must not ignore. For all the benefits of vaccines, they are not 100% effective, and they cannot be proven 100% safe. Even if we do our best for ourselves and for our society — and make those optimal decisions — there will still be tragedies.

Some individuals will vaccinate their child, and will observe something horrifying within minutes, hours, or days.

  • They might develop a rash at the injection site, possibly accompanied by crying, fever, nausea, and vomiting.
  • They might begin manifesting the symptoms of autism, including lessened eye contact, no demonstrated interest in toys, unresponsiveness to sounds, voices, or their own name, or even the loss of previously-acquired skills.
  • Or, most horrifically, they might get very ill, experience seizures and/or convulsions, and potentially even die.

Young children, quite unfortunately, frequently experience convulsions and high fevers, running the risk of death when they occur. Literature offering possible treatments before professional medical help arrives dates back more than a century. (WHEN TO SEND FOR THE DOCTOR (1913), F. E. LIPPERT AND A. HOLMES)

I have nothing but compassion for the parents who watch their children go through something like this. It has to be horrifying, perhaps just as horrifying as when my little brother — when he was under a year old — spontaneously ran a temperature of 106 °F. He went into convulsions, and his eyes rolled back in his head. After being rushed to the hospital, we got the good news: he would live, and appeared unharmed. We got lucky.

But this didn’t occur minutes, hours, or even days after getting a vaccine. It just happened one day out of the blue, without any seeming cause at all. If it had occurred shortly after the administration of a vaccine, however, what would our family have concluded? Would there be anything you could say to us that would convince us this had nothing to do with the vaccine?

Probably not.

Whenever you vaccinate an individual, there is a very small statistical risk that there will be an adverse reaction to it. Although most documented reactions show no statistically significant link to the vaccine itself, there are many parents who have vaccinated their children only to see them fall ill or experience a dramatic change in health, behavior, or personality in a matter of mere hours. (12019 / PIXABAY)

But that’s why we have science. What we can do, scientifically, is to examine a hypothesis the best way we know how: to investigate it and attempt to disprove it. We may not be able to look at an individual case and know whether it was caused by a vaccine or not, but we can look at the general population of vaccinated and unvaccinated individuals — or at the vaccine schedule versus when injuries, illnesses, or symptoms occur — and draw conclusions from that.

When we do, we certainly have no shortage of negative effects that occur coincident with vaccinations. Injection site inflammation accompanied by fever and vomiting, post-vaccination, is relatively common. This is considered one of the main side-effects of vaccination.

A Salvadoran nurse vaccinates a baby during a Task Force Northstar mission in El Salvador to provide medical care and other humanitarian and civic assistance. The mission involved U.S. military personnel working alongside their Brazilian, Canadian, Chilean, and Salvadoran counterparts. (KIM BROWNE / U.S. MILITARY)

Many vaccines are administered at around the same age when autism first manifests. However, the largest study ever of autism rates in vaccinated versus unvaccinated individuals (with respect to the MMR vaccine) showed that unvaccinated individuals were 7% more likely to develop autism than their vaccinated counterparts. This doesn’t mean that not vaccinating causes autism; these results are consistent with autism rates being unaffected by vaccinations.

Finally, some children die after receiving vaccinations. And some children die before receiving them, or independent of receiving them. The most robust analysis ever performed on those populations has shown that there are a few legitimate risks associated with vaccines. In particular,

Serious adverse reactions are uncommon and deaths caused by vaccines are very rare. Healthcare providers can take specific actions to help prevent adverse reactions, including proper screening for contraindications and precautions and observing a 15-minute waiting period after vaccinating to prevent fall-related injuries from syncope.
The HPV vaccine is very safe, and most people don’t have any problems or side effects. Studies have shown the vaccine caused HPV rates to decline 64 percent among teenaged girls ages 14 to 19, and 34 percent among women ages 20 to 24. (KRISTIN HIGH / U.S. MILITARY)

But, at the same time, if you are a parent whose child received a vaccination and then got very ill, even if the science doesn’t support the claim that the illness was vaccine-related, I extend my deepest sympathies to you. I’d also like you to consider the following.

When I was a teenager, I met a young man who became a big brother-like figure in my life for a time. If you spent time with him, you started to notice the scars on his arms and head: they corresponded to injuries requiring over 200 stitches a few years prior. He was involved in a car accident, and wasn’t wearing his seat belt. He went through the windshield, putting his arms up to shield his face. His face and arms were severely lacerated, which is what necessitated the stitches and left the scars. He considered himself lucky to be alive.

A very bad car accident, particularly at high speeds, can launch an unrestrained person inside the vehicle through the windshield. Although, in approximately 199 out of 200 cases, the accident victim will have a better outcome if they are wearing a safety belt, a small fraction of people will survive an accident precisely because they weren’t wearing one: a crash in which wearing it would have killed them. (US AIR FORCE)

Here’s the thing: he truly was lucky. The car accident was so bad that it compressed the dashboard into the seat where he had previously been seated. If he had been wearing his seat belt, his legs would have been crushed or severed, and he almost certainly would have died. Even though, in the overwhelming majority of cases, seat belts will save lives, he was the rare 0.5% (or so) of people who had a better outcome by not wearing his seat belt.

We had a lot of conversations about wearing a seat belt after that, because he would never wear a seat belt again, even though he knew all about the statistics. I have a lot of sympathy for him, and I realized I didn’t have a lot of ground to stand on to convince him that he needed to wear a seat belt. I have that same sympathy for parents whose kids do — or even appear to — have an adverse reaction to a vaccine.

Even though a small percentage of people would survive an otherwise fatal accident only because they didn’t wear a seat belt, far more lives are saved among people who do wear them. We should all be eager to apply this same logic to our vaccination decisions, particularly when we also consider the health and safety of others whom we might infect. (GETTY)

My friend didn’t start an anti-seat belt movement. He didn’t think there should be an opt-out of seat belt requirements, even for people like him, who would have died if they had worn them in a near-fatal accident. (Even though I, personally, thought it was unethical for the law to make him wear one after his experience.) Instead, he encouraged me and everyone else to wear our seat belts, because he recognized that, 199 times out of 200, it would help us live.

And that’s what we should want, not just for ourselves and our own children, but for everyone in society. We want to help them live. We want them to be illness and injury-free. And if we can protect them from infectious diseases, we want to do it. But it takes all of us.

Everyone who can get vaccinated should get vaccinated: fully and on the CDC’s recommended schedule. If you’re feeling hesitant about vaccines, make sure you receive proper screening for contraindications beforehand. Know the risks. And then, if you have a choice, make the right one. You’ll not only protect your own child, but everyone else in the world, to the best of modern science’s capabilities.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

This Is Why Every Parent Should Fully Vaccinate Their Children was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

During the Cambrian era in Earth’s history, some 550–600 million years ago, many examples of multicellular, sexually-reproducing, complex and differentiated life forms emerged for the first time. This period is known as the Cambrian explosion, and heralds an enormous leap in the complexity of organisms found on Earth. (GETTY)
We’re a long way from the beginnings of life on Earth. Here’s the key to how we got there.

The Universe was already two-thirds of its present age by the time the Earth formed, with life emerging on its surface shortly thereafter. But for billions of years, life remained in a relatively primitive state. It took nearly a full four billion years for the Cambrian explosion to arrive: the moment when macroscopic, multicellular, complex organisms — including animals, plants, and fungi — became the dominant lifeforms on Earth.

As surprising as it may seem, there were really only a handful of critical developments that were necessary in order to go from single-celled, simple life to the extraordinarily diverse sets of creatures we’d recognize today. We do not know if this path is one that’s easy or hard among planets where life arises. We do not know whether complex life is common or rare. But we do know that it happened on Earth. Here’s how.

This coastline consists of Precambrian quartzite rocks, many of which may have once contained evidence of the fossilized lifeforms that gave rise to modern plants, animals, fungi, and other multicellular, sexually-reproducing creatures. These rocks have undergone intensive folding over their long and ancient history, and do not display the rich evidence for complex life that later, Cambrian-era rocks do. (GETTY)

Once the first living organisms arose, our planet was filled with organisms harvesting energy and resources from the environment, metabolizing them to grow, adapt, reproduce, and respond to external stimuli. As the environment changed due to resource scarcity, competition, climate change and many other factors, certain traits increased the odds of survival, while other traits decreased them. Owing to the phenomenon of natural selection, the organisms most adaptable to change survived and thrived.

Relying on random mutations alone, and passing those traits onto offspring, is extremely limiting as far as evolution goes. If mutating your genetic material and passing it onto your offspring is the only mechanism you have for evolution, you might not ever achieve complexity.

Acidobacteria, like the example shown here, are likely some of the first photosynthetic organisms of all. They have no internal structure or membranes, loose, free-floating DNA, and are anoxygenic: they do not produce oxygen from photosynthesis. These are prokaryotic organisms that are very similar to the primitive life found on Earth some ~2.5–3 billion years ago. (US DEPARTMENT OF ENERGY / PUBLIC DOMAIN)

But many billions of years ago, life developed the ability to engage in horizontal gene transfer, where genetic material can move from one organism to another via mechanisms other than asexual reproduction. Transformation, transduction, and conjugation are all mechanisms for horizontal gene transfer, but they all have something in common: single-celled, primitive organisms that develop a genetic sequence that’s useful for a particular purpose can transfer that sequence into other organisms, granting them the abilities that they worked so hard to evolve for themselves.

This is the primary mechanism by which modern-day bacteria develop antibiotic resistance. If one primitive organism can develop a useful adaptation, other organisms can develop that same adaptation without having to evolve it from scratch.

The three mechanisms by which a bacterium can acquire genetic information horizontally, rather than vertically (through reproduction), are transformation, transduction, and conjugation. (NATURE, FURUYA AND LOWY (2006) / UNIVERSITY OF LEICESTER)

The second major evolutionary step involves the development of specialized components within a single organism. The most primitive creatures have freely-floating bits of genetic material enclosed, along with some protoplasm, inside a cell membrane, with nothing more specialized than that. These are the prokaryotic organisms of the world: the first forms of life thought to exist.

But more evolved creatures contain within them the ability to create miniature factories, capable of specialized functions. These mini-organs, known as organelles, herald the rise of the eukaryotes. Eukaryotes are larger than prokaryotes, have longer DNA sequences, but also have specialized components that perform their own unique functions, independent of the cell they inhabit.

Unlike their more primitive prokaryotic counterparts, eukaryotic cells have differentiated cell organelles, with their own specialized structure and function that allow them to perform many of the cell’s life processes relatively independently of the rest of the cell’s functioning. (CNX OPENSTAX)

These organelles include the cell nucleus, lysosomes, chloroplasts, Golgi bodies, the endoplasmic reticulum, and mitochondria. Mitochondria themselves are incredibly interesting, because they provide a window into life’s evolutionary past.

If you take an individual mitochondrion out of a cell, it can survive on its own. Mitochondria have their own DNA and can metabolize nutrients: they meet all of the definitions of life on their own. But they are also produced by practically all eukaryotic cells. Contained within these more complicated, more highly-evolved cells are the genetic sequences that enable them to create components of themselves that appear identical to earlier, more primitive organisms. Contained within the DNA of complex creatures is the ability to create their own versions of simpler creatures.

Scanning electron microscope image at the sub-cellular level. While DNA is an incredibly complex, long molecule, it is made of the same building blocks (atoms) as everything else. To the best of our knowledge, the DNA structure that life is based on predates the fossil record. The longer and more complex a DNA molecule is, the more potential structures, functions, and proteins it can encode. (PUBLIC DOMAIN IMAGE BY DR. ERSKINE PALMER, USCDCP)

In biology, the relationship between structure and function is arguably the most basic of all. If an organism develops the ability to perform a specific function, then it will have a genetic sequence that encodes the information for forming a structure that performs it. If you gain that genetic code in your own DNA, then you, too, can create a structure that performs the specific function in question.

As creatures grew in complexity, they accumulated large numbers of genes that encoded for specific structures that performed a variety of functions. When you form those novel structures yourself, you gain the abilities to perform those functions that couldn’t be performed without those structures. While simpler, single-celled organisms may reproduce faster, organisms capable of performing more functions are often more adaptable, and more resilient to change.

Mitochondria, which are some of the specialized organelles found inside eukaryotic cells, are themselves reminiscent of prokaryotic organisms. They even have their own DNA (shown as black dots), which clusters together at discrete focal points. With its many independent components, a eukaryotic cell can thrive under a variety of conditions that its simpler, prokaryotic counterparts cannot. But there are drawbacks to increased complexity, too. (FRANCISCO J IBORRA, HIROSHI KIMURA AND PETER R COOK (BIOMED CENTRAL LTD))

By the time the Huronian glaciation ended and Earth was once again a warm, wet world with continents and oceans, eukaryotic life was common. Prokaryotes still existed (and still do), but were no longer the most complex creatures on our world. For life’s complexity to explode, however, there were two more steps that needed to not only occur, but to occur in tandem: multicellularity and sexual reproduction.

Multicellularity, according to the biological record left behind on planet Earth, is something that evolved numerous independent times. Early on, single-celled organisms gained the ability to make colonies, with many stitching themselves together to form microbial mats. This type of cellular cooperation enables a group of organisms, working together, to achieve a greater level of success than any of them could individually.

Green algae, shown here, is an example of a true multicellular organism, where a single specimen is composed of multiple individual cells that all work together for the good of the organism as a whole. (FRANK FOX / MIKRO-FOTO.DE)

Multicellularity offers an even greater advantage: the ability to have “freeloader” cells, or cells that can reap the benefits of living in a colony without having to do any of the work. In the context of unicellular organisms, freeloader cells are inherently limited, as producing too many of them will destroy the colony. But in the context of multicellularity, not only can the production of freeloader cells be turned on or off, but those cells can develop specialized structures and functions that assist the organism as a whole. The big advantage that multicellularity confers is the possibility of differentiation: having multiple types of cells working together for the optimal benefit of the entire biological system.

Rather than having individual cells within a colony competing for the genetic edge, multicellularity enables an organism to harm or destroy various parts of itself to benefit the whole. According to mathematical biologist Eric Libby:

[A] cell living in a group can experience a fundamentally different environment than a cell living on its own. The environment can be so different that traits disastrous for a solitary organism, like increased rates of death, can become advantageous for cells in a group.
Shown are representatives of all major lineages of eukaryotic organisms, color coded for occurrence of multicellularity. Solid black circles indicate major lineages composed entirely of unicellular species. Other groups shown contain only multicellular species (solid red), some multicellular and some unicellular species (red and black circles), or some unicellular and some colonial species (yellow and black circles). Colonial species are defined as those that possess multiple cells of the same type. There is ample evidence that multicellularity evolved independently in all the lineages shown separately here. (2006 NATURE EDUCATION MODIFIED FROM KING ET AL. (2004))

There are multiple lineages of eukaryotic organisms, with multicellularity evolving from many independent origins. Plasmodial slime molds, land plants, red algae, brown algae, animals, and many other classifications of living creatures have all evolved multicellularity at different times throughout Earth’s history. The very first multicellular organism, in fact, may have arisen as early as 2 billion years ago, with some evidence supporting the idea that an early aquatic fungus came about even earlier.

But it wasn’t through multicellularity alone that modern animal life became possible. Eukaryotes require more time and resources to develop to maturity than prokaryotes do, and multicellular eukaryotes have an even greater timespan from generation to generation. Complexity faces an enormous barrier: the simpler organisms they’re competing with can change and adapt more quickly.

A fascinating class of organisms known as siphonophores is itself a collection of small animals working together to form a larger colonial organism. These lifeforms straddle the boundary between a multicellular organism and a colonial organism. (KEVIN RASKOFF, CAL STATE MONTEREY / CRISCO 1492 FROM WIKIMEDIA COMMONS)

Evolution, in many ways, is like an arms race. The different organisms that exist are continuously competing for limited resources: space, sunlight, nutrients and more. They also attempt to destroy their competitors through direct means, such as predation. A prokaryotic bacterium with a single critical mutation can have millions of generations of chances to take down a large, long-lived complex creature.

There’s a critical mechanism that modern plants and animals have for competing with their rapidly-reproducing single-celled counterparts: sexual reproduction. If a competitor has millions of generations to figure out how to destroy a larger, slower organism for every generation the latter has, the more rapidly-adapting organism will win. But sexual reproduction allows for offspring to be significantly different from the parent in a way that asexual reproduction cannot match.

Sexually-reproducing organisms only deliver 50% of their DNA apiece to their children, with many random elements determining which particular 50% gets passed on. This is why offspring only have 50% of their DNA in common with their parents and with their siblings, unlike asexually-reproducing lifeforms. (PETE SOUZA / PUBLIC DOMAIN)

To survive, an organism must correctly encode all of the proteins responsible for its functioning. A single mutation in the wrong spot can send that awry, which emphasizes how important it is to copy every nucleotide in your DNA correctly. But imperfections are inevitable, and even with the mechanisms organisms have developed for checking and error-correcting, somewhere between 1-in-10,000,000 and 1-in-10,000,000,000 of the copied base pairs will have an error.

For an asexually-reproducing organism, this is the only source of genetic variation from parent to child. But for sexually-reproducing organisms, 50% of each parent’s DNA will compose the child, with some ~0.1% of the total DNA varying from specimen to specimen. This randomization means that even a single-celled organism that’s well-adapted to outcompeting the parent will be poorly-adapted when faced with the challenges posed by the child.
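
To put those two sources of variation side by side, here is a quick back-of-the-envelope comparison. The genome size is an assumed, human-like round number, not a figure from this article; the per-base error rate and the ~0.1% individual-to-individual variation are drawn from the values quoted above.

```python
# Back-of-the-envelope comparison: copying errors per generation versus the
# variation introduced by sexual reproduction. The genome size is an assumption.

GENOME_BP = 3e9          # assumed genome size, in base pairs (human-like round number)
COPY_ERROR_RATE = 1e-9   # within the 1-in-10^7 to 1-in-10^10 per-base range quoted above
SEXUAL_VARIATION = 1e-3  # ~0.1% of the genome differs from individual to individual

mutations_per_copy = GENOME_BP * COPY_ERROR_RATE     # new errors introduced by one copying event
differing_base_pairs = GENOME_BP * SEXUAL_VARIATION  # base pairs differing between two individuals

print(f"Copying errors per generation: ~{mutations_per_copy:.0f}")
print(f"Base pairs differing between individuals: ~{differing_base_pairs:.0e}")
```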

In sexual reproduction, all organisms carry two copies of each chromosome, with each parent contributing 50% of their DNA (one copy of each chromosome) to the child. Which 50% you get is a random process, allowing for enormous genetic variation from sibling to sibling, each significantly different from either of the parents. (MAREK KULTYS / WIKIMEDIA COMMONS)

Sexual reproduction also means that organisms will have an opportunity to adapt to a changing environment in far fewer generations than their asexual counterparts. Mutations are only one mechanism for change from the prior generation to the next; the other is variability in which traits get passed down from parent to offspring.

If there is a wider variety among offspring, there is a greater chance of surviving when many members of a species will be selected against. The survivors can reproduce, passing on the traits that are preferential at that moment in time. This is why plants and animals can live decades, centuries, or millennia, and can still survive the continuous onslaught of organisms that reproduce hundreds of thousands of generations per year.

It is no doubt an oversimplification to state that horizontal gene transfer, the development of eukaryotes, multicellularity, and sexual reproduction are all it takes to go from primitive life to complex, differentiated life dominating a world. We know that this happened here on Earth, but we do not know what its likelihood was, or whether the billions of years it needed on Earth are typical or far more rapid than average.

What we do know is that life existed on Earth for nearly four billion years before the Cambrian explosion, which heralds the rise of complex animals. The story of early life on Earth is the story of most life on Earth, with only the last 550–600 million years showcasing the world as we’re familiar with it. After a 13.2 billion year cosmic journey, we were finally ready to enter the era of complex, differentiated, and possibly intelligent life.

The Burgess Shale fossil deposit, dating to the mid-Cambrian, is arguably the most famous and well-preserved fossil deposit on Earth dating back to such early times. At least 280 species of complex, differentiated plants and animals have been identified, signifying one of the most important epochs in Earth’s evolutionary history: the Cambrian explosion. This diorama shows a model-based reconstruction of what the living organisms of the time might have looked like in true color. (JAMES ST. JOHN / FLICKR)

Further reading on what the Universe was like when:

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.
