Atomic and molecular configurations come in a near-infinite number of possible combinations, but the specific combinations found in any material determine its properties. While diamonds are classically viewed as the hardest material found on Earth, they are neither the strongest material overall nor even the strongest naturally occurring material. There are, at present, six types of materials that are known to be stronger, although that number is expected to increase as time goes on. (MAX PIXEL)

If you thought that diamonds were the hardest things of all, this will have you thinking again.

Carbon is one of the most fascinating elements in all of nature, with chemical and physical properties unlike any other element. With just six protons in its nucleus, it’s the lightest abundant element capable of forming a slew of complex bonds. All known forms of life are carbon-based, as its atomic properties enable it to link up with up to four other atoms at a time. The possible geometries of those bonds also enable carbon to self-assemble, particularly under high pressures, into a stable crystal lattice. If the conditions are just right, carbon atoms can form a solid, ultra-hard structure known as a diamond.

Although diamonds are commonly known as the hardest material in the world, there are actually six materials that are harder. Diamond is still one of the hardest naturally occurring and abundant materials on Earth, but these six materials all have it beat.

The web of the Darwin’s bark spider is the largest orb-type web produced by any spider on Earth, and the silk of the Darwin’s bark spider is the strongest of any type of spider silk. The longest single strand is measured at 82 feet; a strand that circled the entire Earth would weigh a mere 1 pound. (CARLES LALUEZA-FOX, INGI AGNARSSON, MATJAŽ KUNTNER, TODD A. BLACKLEDGE (2010))

Honorable mentions: there are three terrestrial materials that aren’t quite as hard as diamond, but are still remarkable for their strength in a variety of ways. With the advent of nanotechnology — alongside the development of nanoscale understandings of modern materials — we now recognize that there are many different metrics by which to evaluate physically interesting and extreme materials.

On the biological side, spider silk is renowned as the toughest. With a higher strength-to-weight ratio than most conventional materials like aluminum or steel, it’s also remarkable for how thin and sticky it is. Of all the spiders in the world, Darwin’s bark spiders produce the toughest silk of all: ten times stronger than Kevlar. It’s so thin and light that approximately a pound (454 grams) of Darwin’s bark spider silk would compose a strand long enough to trace out the circumference of the entire planet.
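
That ‘one pound wraps the planet’ claim is easy to sanity-check with a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes Earth’s equatorial circumference of roughly 40,075 km and treats a pound as 454 grams.

```python
# Back-of-the-envelope check of the "one pound circles the Earth" claim
# for Darwin's bark spider silk. Figures are rough, illustrative values.
EARTH_CIRCUMFERENCE_M = 40_075_000   # Earth's equatorial circumference, in meters
STRAND_MASS_G = 454                  # one pound of silk, in grams

linear_density = STRAND_MASS_G / EARTH_CIRCUMFERENCE_M   # grams per meter of strand
print(f"~{linear_density * 1e6:.0f} micrograms of silk per meter of strand")
# -> ~11 micrograms per meter, which is part of why such a strand is so hard to even see
```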

Silicon carbide, shown here post-assembly, is normally found as small fragments of the naturally occurring mineral moissanite. The grains can be sintered together to form complex, beautiful structures such as the one shown here in this sample of material. It is nearly as hard as diamond, and has been produced synthetically, and known to occur naturally, since the late 1800s. (SCOTT HORVATH, USGS)

For a naturally occurring mineral, silicon carbide — found naturally in the form of moissanite — is only slightly less hard than diamond. (It’s still harder than any spider silk.) A compound of silicon and carbon, which occupy the same group of the periodic table as one another, silicon carbide grains have been mass produced since 1893. They can be bonded together through a high-pressure process known as sintering, at temperatures below the material’s melting point, to create extremely hard ceramic materials.

These materials are not only useful in a wide variety of applications that take advantage of hardness, such as car brakes and clutches, plates in bulletproof vests, and even battle armor suitable for tanks, but also have incredibly useful semiconductor properties for use in electronics.

Ordered pillar arrays, shown here in green, have been used by scientists as advanced porous media to separate out various materials. By embedding silica nanospheres, here, scientists can increase the surface area used to separate and filter out mixed materials. The nanospheres shown here are just one particular example; the self-assembling variety are almost on par with diamond for material strength. (OAK RIDGE NATIONAL LABORATORIES / FLICKR)

Tiny silica spheres, from 50 nanometers in diameter down to just 2 nanometers, were created for the first time some 20 years ago at the Department of Energy’s Sandia National Laboratories. What’s remarkable about these nanospheres is that they’re hollow, they self-assemble into spheres, and they can even nest inside one another, all while remaining the stiffest material known to humankind, only slightly less hard than diamonds.

Self-assembly is an incredibly powerful tool in nature, but biological materials are weak compared to synthetic ones. These self-assembling nanoparticles could be used to create custom materials with applications from better water purifiers to more efficient solar cells, from faster catalysts to next-generation electronics. The dream technology of these self-assembling nanospheres, though, is printable body armor, custom to the user’s specifications.

Diamonds may be marketed as forever, but they have temperature and pressure limits just like any other conventional material. While most terrestrial materials cannot scratch a diamond, there are six materials that, at least by many measures, are stronger and/or harder than these naturally occurring carbon lattices. (GETTY)

Diamonds, of course, are harder than all of these, and still clock in at #7 on the all-time list of hardest materials found or created on Earth. Despite the fact that they’ve been surpassed by both other natural (but rare) materials and synthetic, human-made ones, they do still hold one important record.

Diamonds remain the most scratch-resistant material known to humanity. Metals like titanium are far less scratch-resistant, and even extremely hard ceramics or tungsten carbide cannot compete with diamonds in terms of hardness or scratch-resistance. Other crystals that are known for their extreme hardness, such as rubies or sapphires, still fall short of diamonds.

But six materials have even the vaunted diamond beat in terms of hardness.

Much like carbon can be assembled into a variety of configurations, boron nitride can take on amorphous, hexagonal, cubic, or tetrahedral (wurtzite) configurations. The structure of boron nitride in its wurtzite configuration is stronger than diamond. Boron nitride can also be used to construct nanotubes, aerogels, and a wide variety of other fascinating materials and applications. (BENJAH-BMM27 / PUBLIC DOMAIN)

6.) Wurtzite boron nitride. Instead of carbon, you can make a crystal out of a number of other atoms or compounds, and one of them is boron nitride (BN), where the 5th and 7th elements on the periodic table come together to form a variety of possibilities. It can be amorphous (non-crystalline), hexagonal (similar to graphite), cubic (similar to diamond, but slightly weaker), or take on the wurtzite form.

The last of these forms is both extremely rare and extremely hard. Formed during volcanic eruptions, it’s only ever been discovered in minute quantities, which means that we’ve never tested its hardness properties experimentally. However, it forms a different kind of crystal lattice — a tetrahedral one instead of a face-centered cubic one — that is 18% harder than diamond, according to the most recent simulations.

Two diamonds from Popigai crater, a crater known to have been formed by a meteor strike. The object at left (marked a) is composed purely of diamond, while the object at right (marked b) is a mixture of diamond and small amounts of lonsdaleite. If lonsdaleite could be constructed without impurities of any type, it would be superior in terms of strength and hardness to pure diamond. (HIROAKI OHFUJI ET AL., NATURE (2015))

5.) Lonsdaleite. Imagine you have a meteor full of carbon, and therefore containing graphite, that hurtles through our atmosphere and collides with planet Earth. While you might envision a falling meteor as an incredibly hot body, it’s only the outer layers that become hot; the insides remain cool for most (or even, potentially, all) of their journey towards Earth.

Upon impact with Earth’s surface, however, the pressures inside become larger than in any other natural process on our planet’s surface, and cause the graphite to compress into a crystalline structure. It doesn’t possess the cubic lattice of a diamond, however, but a hexagonal lattice, which can actually achieve hardnesses that are 58% greater than what diamonds achieve. While real examples of lonsdaleite contain sufficient impurities to make them softer than diamonds, an impurity-free graphite meteorite striking the Earth would undoubtedly produce material harder than any terrestrial diamond.

This image shows a close-up of a rope made with LIROS Dyneema SK78 hollowbraid line. For certain classes of applications where one would use a fabric or steel rope, Dyneema is the strongest fiber-type material known to human civilization today. (JUSTSAIL / WIKIMEDIA COMMONS)

4.) Dyneema. From here on out, we leave the realm of naturally occurring substances behind. Dyneema, a thermoplastic polyethylene polymer, is unusual for having an extraordinarily high molecular weight. Most molecules that we know of are chains of atoms with a few thousand atomic mass units (protons and/or neutrons) in total. But UHMWPE (ultra-high-molecular-weight polyethylene) has extremely long chains, with a molecular mass in the millions of atomic mass units.
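
To get a feel for what ‘millions of atomic mass units’ means for a polyethylene chain, here is a rough, illustrative calculation; the assumed chain mass of 5 million amu is simply an example value within the range described above, not a measured figure.

```python
# Rough sense of scale for a single UHMWPE chain (illustrative numbers only).
ETHYLENE_REPEAT_UNIT_AMU = 28        # one -CH2-CH2- repeat unit (C2H4)
ASSUMED_CHAIN_MASS_AMU = 5_000_000   # an example UHMWPE molecular mass

repeat_units = ASSUMED_CHAIN_MASS_AMU / ETHYLENE_REPEAT_UNIT_AMU
backbone_carbons = 2 * repeat_units
print(f"~{repeat_units:,.0f} repeat units, ~{backbone_carbons:,.0f} backbone carbon atoms")
# -> roughly 180,000 repeat units and ~360,000 carbon atoms in one molecule,
#    versus the few-thousand-amu molecules described above for ordinary chemistry
```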

With very long chains for their polymers, the intermolecular interactions are substantially strengthened, creating a very tough material. It’s so tough, in fact, that it has the highest impact strength of any known thermoplastic. It has been called the strongest fiber in the world, and outperforms all mooring and tow ropes. Despite being lighter than water, it can stop bullets and has 15 times the strength of a comparable amount of steel.

Micrograph of deformed notch in palladium-based metallic glass shows extensive plastic shielding of an initially sharp crack. Inset is a magnified view of a shear offset (arrow) developed during plastic sliding before the crack opened. Palladium microalloys have the highest combined strength and toughness of any known material. (ROBERT RITCHIE AND MARIOS DEMETRIOU)

3.) Palladium microalloy glass. It’s important to recognize that there are two important properties that all physical materials have: strength, which is how much force a material can withstand before it deforms, and toughness, which is how much energy it takes to break or fracture it. Most ceramics are strong but not tough, shattering when squeezed in a vise or even when dropped from only a modest height. Elastic materials, like rubber, can hold a lot of energy but are easily deformable, and not strong at all.

Most glassy materials are brittle: strong but not particularly tough. Even reinforced glass, like Pyrex or Gorilla Glass, isn’t particularly tough on the scale of materials. But in 2011, researchers developed a new microalloy glass featuring five elements (phosphorus, silicon, germanium, silver and palladium), where the palladium provides a pathway for forming shear bands, allowing the glass to plastically deform rather than crack. It beats all types of steel, as well as everything lower on this list, in its combination of strength and toughness. It is also the hardest material not to include carbon.

Freestanding paper made of carbon nanotubes, a.k.a. buckypaper, will prevent the passage of particles 50 nanometers and larger. It has unique physical, chemical, electrical and mechanical properties. Although it can be folded or cut with scissors, it’s incredibly strong. With perfect purity, it’s estimated it could reach up to 500 times the strength of a comparable volume of steel. This image shows NanoLab’s buckypaper under a scanning electron microscope. (NANOLAB, INC.)

2.) Buckypaper. It has been well known since the late 20th century that there’s a form of carbon that’s even harder than diamond: carbon nanotubes. By binding carbon atoms together in a hexagonal arrangement, they can form a rigid, cylindrical structure that is more stable than any other structure known to humankind. Take an aggregate of carbon nanotubes and press them into a macroscopic sheet, and you can create a thin film of them: buckypaper.

Each individual nanotube is only between 2 and 4 nanometers across, but each one is incredibly strong and tough. It’s only 10% the weight of steel but has hundreds of times the strength. It’s fireproof, extremely thermally conductive, possesses tremendous electromagnetic shielding properties, and could lead to applications in materials science, electronics, the military, and even biology. But buckypaper cannot be made of 100% nanotubes, which is perhaps what keeps it out of the top spot on this list.

Graphene, in its ideal configuration, is a defect-free network of carbon atoms bound into a perfectly hexagonal arrangement. It can be viewed as an infinite array of aromatic molecules. (ALEXANDERALUS/CORE-MATERIALS OF FLICKR)

1.) Graphene. At last: a hexagonal carbon lattice that’s only a single atom thick. That’s what a sheet of graphene is, arguably the most revolutionary material to be developed and utilized in the 21st century. It is the basic structural element of carbon nanotubes themselves, and applications are growing continuously. Currently a multimillion dollar industry, graphene is expected to grow into a multibillion dollar industry in mere decades.

In proportion to its thickness, it is the strongest material known, is an extraordinary conductor of both heat and electricity, and is nearly 100% transparent to light. The 2010 Nobel Prize in Physics went to Andre Geim and Konstantin Novoselov for groundbreaking experiments involving graphene, and the commercial applications have only been growing. To date, graphene is the thinnest material known, and the mere six year gap between Geim and Novoselov’s work and their Nobel award is one of the shortest in the history of physics.

The K-4 crystal consists exclusively of carbon atoms arranged in a lattice, but with an unconventional bond angle compared to either graphite, diamond, or graphene. These inter-atomic properties can lead to drastically different physical, chemical, and material properties even with identical chemical formulas for a variety of structures. (WORKBIT / WIKIMEDIA COMMONS)

The quest to make materials harder, stronger, more scratch-resistant, lighter, tougher, etc., is probably never going to end. If humanity can push the frontiers of the materials available to us farther than ever before, the applications for what becomes feasible can only expand. Generations ago, the idea of microelectronics, transistors, or the capacity to manipulate individual atoms was surely exclusive to the realm of science-fiction. Today, they’re so common that we take all of them for granted.

As we hurtle full-force into the nanotech age, materials such as the ones described here become increasingly more important and ubiquitous to our quality of life. It’s a wonderful thing to live in a civilization where diamonds are no longer the hardest known material; the scientific advances we make benefit society as a whole. As the 21st century unfolds, we’ll all get to see what suddenly becomes possible with these new materials.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

There Are 6 ‘Strongest Materials’ On Earth That Are Harder Than Diamonds was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The largest group of newborn stars in our Local Group of galaxies, cluster R136, contains the most massive stars we’ve ever discovered: over 250 times the mass of our Sun for the largest. The brightest of the stars found here are more than 8,000,000 times as luminous as our Sun. And yet, there are still likely even more massive ones out there. (NASA, ESA, AND F. PARESCE, INAF-IASF, BOLOGNA, R. O’CONNELL, UNIVERSITY OF VIRGINIA, CHARLOTTESVILLE, AND THE WIDE FIELD CAMERA 3 SCIENCE OVERSIGHT COMMITTEE)

At the core of the largest star-forming region of the Local Group sits the biggest star we know of.

Mass is the single most important astronomical property in determining the lives of stars.

The (modern) Morgan–Keenan spectral classification system, with the temperature range of each star class shown above it, in kelvin. Our Sun is a G-class star, producing light with an effective temperature of around 5800 K and a brightness of 1 solar luminosity. Stars can be as low in mass as 8% the mass of our Sun, where they’ll burn with ~0.01% our Sun’s brightness and live for more than 1000 times as long, but they can also rise to hundreds of times our Sun’s mass, with millions of times our Sun’s luminosity. (WIKIMEDIA COMMONS USER LUCASVB, ADDITIONS BY E. SIEGEL)

Greater masses generally lead to higher temperatures, greater brightnesses, and shorter lifetimes.

The active star-forming region, NGC 2363, is located in a nearby galaxy just 10 million light-years away. The brightest star visible here is NGC 2363-V1, visible as the isolated, bright star in the dark void at left. Despite being 6,300,000 times as bright as our Sun, it’s only 20 times as massive, having likely brightened recently as the result of an outburst. (LAURENT DRISSEN, JEAN-RENE ROY AND CARMELLE ROBERT (DEPARTMENT DE PHYSIQUE AND OBSERVATOIRE DU MONT MEGANTIC, UNIVERSITE LAVAL) AND NASA)

Since massive stars burn through their fuel so quickly, the record holders are found in actively star-forming regions.

The ‘supernova impostor’ of the 19th century precipitated a gigantic eruption, spewing many Suns’ worth of material into the interstellar medium from Eta Carinae. High mass stars like this within metal-rich galaxies, like our own, eject large fractions of mass in a way that stars within smaller, lower-metallicity galaxies do not. Eta Carinae might be over 100 times the mass of our Sun and is found in the Carina Nebula, but it is not among the most massive stars in the Universe. (NATHAN SMITH (UNIVERSITY OF CALIFORNIA, BERKELEY), AND NASA)

Luminosity isn’t enough, as short-lived outbursts can cause exceptional, temporary brightening in otherwise typical massive stars.

The star cluster NGC 3603 is located a little over 20,000 light-years away in our own Milky Way galaxy. The most massive star inside it is NGC 3603-B, a Wolf-Rayet star located at the centre of the HD 97950 cluster, which is contained within the large, overall star-forming region. (NASA, ESA AND WOLFGANG BRANDNER (MPIA), BOYKE ROCHAU (MPIA) AND ANDREA STOLTE (UNIVERSITY OF COLOGNE))

Within our own Milky Way, massive star-forming regions, like NGC 3603, house many stars over 100 times our Sun’s mass.

The star at the center of the Heart Nebula (IC 1805) is known as HD 15558, which is a massive O-class star that is also a member of a binary system. With a directly-measured mass of 152 solar masses, it is the most massive star we know of whose value is determined directly, rather than through evolutionary inferences. (S58Y / FLICKR)

As a member of a binary system, HD 15558 A is the most massive star with a definitive value: 152 solar masses.

The Large Magellanic Cloud, the fourth largest galaxy in our local group, with the giant star-forming region of the Tarantula Nebula (30 Doradus) just to the right and below the main galaxy. It is the largest star-forming region contained within our Local Group. (NASA, FROM WIKIMEDIA COMMONS USER ALFA PYXISDIS)

However, all stellar mass records originate from the star forming region 30 Doradus in the Large Magellanic Cloud.

A large section of the Tarantula Nebula, the largest star-forming region in the Local Group, imaged by the Ciel Austral team. At top, you can see the presence of hydrogen, sulfur, and oxygen, which reveals the rich gas and plasma structure of the LMC, while the lower view shows an RGB color composite, revealing reflection and emission nebulae. (CIEL AUSTRAL: JEAN CLAUDE CANONNE, PHILIPPE BERNHARD, DIDIER CHAPLAIN, NICOLAS OUTTERS AND LAURENT BOURGON)

Known as the Tarantula Nebula, it has a mass of ~450,000 Suns and contains over 10,000 stars.

The star forming region 30 Doradus, in the Tarantula Nebula in one of the Milky Way’s satellite galaxies, contains the largest, highest-mass stars known to humanity. The largest collection of bright, blue stars shown here is the ultra-dense star cluster R136, which contains nearly 100 stars that are approximately 100 solar masses or greater. Many of them have brightnesses that exceed a million solar luminosities. (NASA, ESA, AND E. SABBI (ESA/STSCI); ACKNOWLEDGMENT: R. O’CONNELL (UNIVERSITY OF VIRGINIA) AND THE WIDE FIELD CAMERA 3 SCIENCE OVERSIGHT COMMITTEE)

The central star cluster, R136, contains 72 stars of the brightest, most massive classes.

The cluster RMC 136 (R136) in the Tarantula Nebula in the Large Magellanic Cloud, is home to the most massive stars known. R136a1, the greatest of them all, is over 250 times the mass of the Sun. While professional telescopes are ideal for teasing out high-resolution details such as these stars in the Tarantula Nebula, wide-field views are better with the types of long-exposure times only available to amateurs. (EUROPEAN SOUTHERN OBSERVATORY/P. CROWTHER/C.J. EVANS)

The record-holder is R136a1, some 260 times our Sun’s mass and 8,700,000 times as bright.

An ultraviolet image and a spectrographic pseudo-image of the hottest, bluest stars at the core of R136. In this small component of the Tarantula Nebula alone, nine stars over 100 solar masses and dozens over 50 are identified through these measurements. The most massive star of all in here, R136a1, exceeds 250 solar masses, and is a candidate, later in its life, for photodisintegration. (ESA/HUBBLE, NASA, K.A. BOSTROEM (STSCI/UC DAVIS))

Stars such as this cannot be individually resolved beyond our Local Group.

An illustration of the first stars turning on in the Universe. Without metals to cool down the stars, only the largest clumps within a large-mass cloud can become stars. Until enough time has passed for gravity to affect larger scales, only the small scales can form structure early on. Without heavy elements to facilitate cooling, stars are expected to routinely exceed the mass thresholds of the most massive ones known today. (NASA)

With NASA’s upcoming James Webb Space Telescope, we may discover Population III stars, which could reach thousands of solar masses.

The biggest ‘big idea’ that JWST has is to reveal to us the very first luminous objects in the Universe, including stars, supernovae, star clusters, galaxies, and luminous black holes. (KAREN TERAMURA, UHIFA / NASA)

Mostly Mute Monday tells an astronomical story in images, visuals, and no more than 200 words. Talk less; smile more.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Is This The Most Massive Star In The Universe? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

Standard candles (L) and standard rulers (R) are two different techniques astronomers use to measure the expansion of space at various times/distances in the past. Based on how quantities like luminosity or angular size change with distance, we can infer the expansion history of the Universe. Using the candle method is part of the distance ladder, yielding 73 km/s/Mpc. Using the ruler is part of the early signal method, yielding 67 km/s/Mpc. (NASA / JPL-CALTECH)

Two independent techniques give precise but incompatible answers. Here’s how to resolve it.

If you didn’t know anything about the Universe beyond our own galaxy, there are two different pathways you could take to figure out how it was changing. You could measure the light from well-understood objects at a wide variety of distances, and deduce how the fabric of our Universe changes as the light travels through space before arriving at our eyes. Alternatively, you could identify an ancient signal from the Universe’s earliest stages, and measure its properties to learn about how spacetime changes over time. These two methods are robust, precise, and in conflict with one another. Luc Bourhis wants to know what the resolution might be, asking:

As you pointed out in several of your columns, the cosmic [distance] ladder and the study of the CMBR give incompatible values for the Hubble constant. What are the best explanations cosmologists have come up with to reconcile them?

Let’s start by exploring the problem, and then seeing how we might resolve it.

First noted by Vesto Slipher back in 1917, some of the objects we observe show the spectral signatures of absorption or emission of particular atoms, ions, or molecules, but with a systematic shift towards either the red or blue end of the light spectrum. When combined with the distance measurements of Hubble, this data gave rise to the initial idea of the expanding Universe. (VESTO SLIPHER, (1917): PROC. AMER. PHIL. SOC., 56, 403)

The story of the expanding Universe goes back nearly 100 years, to when Edwin Hubble first discovered individual stars of a specific type — Cepheid variable stars — within the spiral nebulae seen throughout the night sky. All at once, this demonstrated that these nebulae were individual galaxies, allowed us to calculate the distance to them, and by adding one additional piece of evidence, revealed that the Universe was expanding.

That additional evidence was discovered a decade prior by Vesto Slipher, who noticed that the spectral lines of these same spiral nebulae were severely redshifted on average. Either they were all moving away from us, or the space between us and them was expanding, just as Einstein’s theory of spacetime predicted. As more and better data came in, the conclusion became overwhelming: the Universe was expanding.

The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each ‘step’ carries along its own uncertainties. While the inferred expansion rate could be biased towards higher or lower values if we lived in an underdense or overdense region, the amount required to explain this conundrum is ruled out observationally. There are enough independent methods used to construct the cosmic distance ladder that we can no longer reasonably fault one ‘rung’ on the ladder as the cause of our mismatch between different methods. (NASA, ESA, A. FEILD (STSCI), AND A. RIESS (STSCI/JHU))

Once we accepted that the Universe was expanding, it became apparent that the Universe was smaller, hotter, and denser in the past. Light, from wherever it’s emitted, must travel through the expanding Universe in order to arrive at our eyes. When we measure the light we receive from a well-understood object, determining a distance to the objects we observe, we can also measure how much that light has redshifted.

This distance-redshift relation allows us to construct the expansion history of the Universe, as well as to measure its present expansion rate. The distance ladder method was thus born. At present, there are perhaps a dozen different objects we understand well enough to use as distance indicators — or standard candles — to teach us how the Universe has expanded over its history. The different methods all agree, and yield a value of 73 km/s/Mpc, with an uncertainty of just 2–3%.
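
In the low-redshift limit, the final step of that logic reduces to Hubble’s law: a standard candle gives a distance, its spectrum gives a redshift, and the ratio gives an expansion rate. The snippet below is a minimal sketch of that step only; the example galaxy (redshift 0.01 at a distance of 41 Mpc) is hypothetical.

```python
# Minimal sketch of the final step of the distance ladder at low redshift:
# recession velocity v ~ c * z, and the Hubble constant is H0 = v / d.
C_KM_PER_S = 299_792.458   # speed of light, in km/s

def hubble_constant(redshift, distance_mpc):
    """Return H0 in km/s/Mpc; valid only in the low-redshift (v ~ c*z) limit."""
    return C_KM_PER_S * redshift / distance_mpc

# A hypothetical supernova host galaxy: redshift 0.01, distance 41 Mpc.
print(f"H0 ~ {hubble_constant(0.01, 41.0):.0f} km/s/Mpc")   # -> ~73
```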

The pattern of acoustic peaks observed in the CMB from the Planck satellite effectively rules out a Universe that doesn’t contain dark matter, and also tightly constrains many other cosmological parameters. We arrive at a Universe that’s 68% dark energy, 27% dark matter, and just 5% normal matter from this and other lines of evidence, with a best-fit expansion rate of 67 km/s/Mpc. (P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION (2015))

On the other hand, if we go all the way back to the earliest stages of the Big Bang, we know that the Universe contained not only normal matter and radiation, but a substantial amount of dark matter as well. While normal matter and radiation interact with one another through collisions and scattering interactions very frequently, the dark matter behaves differently, as its cross-section is effectively zero.

This leads to a fascinating consequence: the normal matter tries to gravitationally collapse, but the photons push it back out, whereas the dark matter has no ability to be pushed by that radiation pressure. The result is a series of peaks-and-valleys in the large-scale structure that arises on cosmic scales from these oscillations — known as baryon acoustic oscillations (BAO) — but the dark matter is smoothly distributed atop it.

The large-scale structure of the Universe changes over time, as tiny imperfections grow to form the first stars and galaxies, then merge together to form the large, modern galaxies we see today. Looking to great distances reveals a younger Universe, similar to how our local region was in the past. The temperature fluctuations in the CMB, as well as the clustering properties of galaxies throughout time, provide a unique method of measuring the Universe’s expansion history. (CHRIS BLAKE AND SAM MOORFIELD)

These fluctuations show up on a variety of angular scales in the cosmic microwave background (CMB), and also leave an imprint in the clustering of galaxies that occurs later on. These relic signals, originating from the earliest times, allow us to reconstruct how quickly the Universe is expanding, among other properties. From the CMB and BAO both, we get a very different value: 67 km/s/Mpc, with an uncertainty of only 1%.

Because there are many parameters we don’t know intrinsically about the Universe — such as the age of the Universe, the normal matter density, the dark matter density, or the dark energy density — we have to allow them all to vary together when constructing our best-fit models of the Universe. When we do, a number of possible pictures arise, but one thing remains unambiguously true: the distance ladder and early relic methods are mutually incompatible.

Modern measurement tensions from the distance ladder (red) with early signal data from the CMB and BAO (blue) shown for contrast. It is plausible that the early signal method is correct and there’s a fundamental flaw with the distance ladder; it’s plausible that there’s a small-scale error biasing the early signal method and the distance ladder is correct, or that both groups are right and some form of new physics (examples shown at top) is the culprit. But right now, we cannot be sure. (ADAM RIESS (PRIVATE COMMUNICATION))

The possibilities for why these discrepancies are occurring are threefold:

  1. The “early relics” group is mistaken. There’s a fundamental error in their approach to this problem, and it’s biasing their results towards unrealistically low values.
  2. The “distance ladder” group is mistaken. There’s some sort of systematic error in their approach, biasing their results towards incorrect, high values.
  3. Both groups are correct, and there is some sort of new physics at play responsible for the two groups obtaining different results.

There are numerous very good reasons indicating that the results of both groups ought to be believed. If that’s the case, there has to be some sort of new physics involved to explain what we’re seeing. Not everything can do it: living in a local cosmic void is disfavored, as is adding in a few percentage points of spatial curvature. Instead, here are the five best explanations cosmologists are considering right now.

Measuring back in time and distance (to the left of “today”) can inform how the Universe will evolve and accelerate/decelerate far into the future. We can learn that acceleration turned on about 7.8 billion years ago with the current data, but also learn that the models of the Universe without dark energy have either Hubble constants that are too low or ages that are too young to match with observations. If dark energy evolves with time, either strengthening or weakening, we will have to revise our present picture. (SAUL PERLMUTTER OF BERKELEY)

1.) Dark energy gets more powerfully negative over time. To the limits of our best observations, dark energy appears to be consistent with a cosmological constant: a form of energy inherent to space itself. As the Universe expands, more space gets created, and since the dark energy density remains constant, the total amount of dark energy contained within our Universe increases along with the Universe’s volume.

But this is not mandatory. Dark energy could either strengthen or weaken over time. If it’s truly a cosmological constant, there’s an absolute relationship between its energy density (ρ) and the negative pressure (p) it exerts on the Universe: p = -ρ. But there is some wiggle room, observationally: the pressure could be anywhere from -0.92ρ to about -1.18ρ. If the pressure gets more negative over time, this could yield a smaller value with the early relics method and a larger value with the distance ladder method. WFIRST should measure this relationship between ρ and p down to about the 1% level, which should constrain, rule out, or discover the truth of this possibility.
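
In the standard textbook parametrization (an assumption of conventional cosmology, not something specific to this article), that pressure-to-density relationship is captured by the equation-of-state parameter w, which also controls how the dark energy density evolves with the scale factor a:

$$ w \equiv \frac{p}{\rho}, \qquad \rho_{\rm DE}(a) \propto a^{-3(1+w)}. $$

A true cosmological constant has w = -1, so its density never changes as space expands; the observational range quoted above corresponds roughly to -1.18 < w < -0.92, and a w that drifts below -1 over time is exactly the ‘more powerfully negative’ behavior this scenario describes.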

The early Universe was full of matter and radiation, and was so hot and dense that it prevented all composite particles from stably forming for the first fraction-of-a-second. As the Universe cools, antimatter annihilates away and composite particles get a chance to form and survive. Neutrinos are generally expected to stop interacting by the time the Universe is ~1 second old, but if there are more interactions than we realize, this could have huge implications for the expansion rate of the Universe. (RHIC COLLABORATION, BROOKHAVEN)

2.) Keeping neutrinos strongly coupled to matter and radiation for longer than expected. Conventionally, neutrinos interact with the other forms of matter and radiation in the Universe only until the Universe cools to a temperature of around 10 billion K. At temperatures cooler than this, their interaction cross-section is too low to be important. This is expected to occur just a second after the Big Bang begins.
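
The ‘10 billion K at about one second’ statement follows from a standard radiation-era scaling, quoted here only as a rough rule of thumb (the exact prefactor depends on the particle content of the early Universe). During radiation domination, the temperature falls as the inverse square root of time,

$$ T(t) \sim 10^{10}\,\mathrm{K} \times \left(\frac{t}{1\,\mathrm{s}}\right)^{-1/2}, $$

since T scales as 1/a while the scale factor grows as a ∝ t^{1/2}.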

But if the neutrinos stay strongly coupled to the matter and radiation for longer — for thousands of years in the early Universe instead of just ~1 second — this could accommodate a Universe with a faster expansion rate than the early relics teams normally consider. This could arise if there’s an additional self-interaction between neutrinos beyond what we presently assume, which is compelling considering that the Standard Model alone cannot explain the full suite of neutrino observations. Further neutrino studies at relatively low and intermediate energies could probe this scenario.

An illustration of clustering patterns due to Baryon Acoustic Oscillations, where the likelihood of finding a galaxy at a certain distance from any other galaxy is governed by the relationship between dark matter and normal matter. As the Universe expands, this characteristic distance expands as well, allowing us to measure the Hubble constant, the dark matter density, and even the scalar spectral index. The results agree with the CMB data, and a Universe made up of 27% dark matter, as opposed to 5% normal matter. Altering the distance of the sound horizon could alter the expansion rate that this data implicates. (ZOSIA ROSTOMIAN)

3.) The size of the cosmic sound horizon is different than what the early relics team has concluded. When we talk about photons, normal matter, and dark matter, there is a characteristic distance scale set by their interactions, the size/age of the Universe, and the rate at which signals can travel through the early Universe. Those acoustic peaks and valleys we see in the CMB and in the BAO data, for example, are manifestations of that sound horizon.

But what if we’ve miscalculated or incorrectly determined the size of that horizon? If you calibrate the sound horizon with distance ladder methods, such as Type Ia supernovae, you obtain a sound horizon that’s significantly larger than the one you get if you calibrate the sound horizon traditionally: with CMB data. If the sound horizon actually evolves from the very early Universe to the present day, this could fully explain the discrepancy. Fortunately, next-generation CMB surveys, like the proposed SPT-3G, should be able to test whether such changes have occurred in our Universe’s past.

If there were no oscillations due to matter interacting with radiation in the Universe, there would be no scale-dependent wiggles seen in galaxy clustering. The wiggles themselves, shown with the non-wiggly part subtracted out (bottom), are dependent on the impact of the cosmic neutrinos theorized to be present by the Big Bang. Standard Big Bang cosmology corresponds to β=1. Note that if there is a dark matter/neutrino interaction present, the perceived expansion rate could be altered. (D. BAUMANN ET AL. (2019), NATURE PHYSICS)

4.) Dark matter and neutrinos could interact with one another. Dark matter, according to every indication we have, only interacts gravitationally: it doesn’t collide with, annihilate with, or experience forces exerted by any other forms of matter or radiation. But in truth, we only have limits on possible interactions; we haven’t ruled them out entirely.

What if dark matter and neutrinos interact and scatter off of one another? If the dark matter is very massive, an interaction between a very heavy particle (like a dark matter particle) and a very light one (like a neutrino) could cause the light particles to speed up, gaining kinetic energy. This would function as a type of energy injection in the Universe. Depending on when and how it occurs, this could cause a discrepancy between early and late measurements of the expansion rate, perhaps even enough to fully account for the differing, technique-dependent measurements.

An illustrated timeline of the Universe’s history. If the value of dark energy is small enough to admit the formation of the first stars, then a Universe containing the right ingredients for life is pretty much inevitable. However, if dark energy comes and goes in waves, with an early amount of dark energy decaying away prior to the emission of the CMB, it could resolve this expanding Universe conundrum. (EUROPEAN SOUTHERN OBSERVATORY (ESO))

5.) Some significant amount of dark energy existed not only at late (modern) times, but also early ones. If dark energy appears in the early Universe (at the level of a few percent) but then decays away prior to the CMB measurements, this could fully explain the tension between the two methods of measuring the expansion rate of the Universe. Again, future improved measurements of both the CMB and of the large-scale structure of the Universe could help provide indications if this scenario describes our Universe.

Of course, this isn’t an exhaustive list; one could always choose any number of classes of new physics, from inflationary add-ons to modifying Einstein’s theory of General Relativity, to potentially explain this controversy. But in the absence of compelling observational evidence for one particular scenario, we have to look at the ideas that could be feasibly tested in the near-term future.

The viewing area of Hubble (top left) as compared to the area that WFIRST will be able to view, at the same depth, in the same amount of time. The wide-field view of WFIRST will allow us to capture a greater number of distant supernovae than ever before, and will enable us to perform deep, wide surveys of galaxies on cosmic scales never probed before. It will bring a revolution in science, regardless of what it finds, and provide the best constraints on how dark energy evolves over cosmic time. If dark energy varies by more than 1% of the value it’s anticipated to have, WFIRST will find it. (NASA / GODDARD / WFIRST)

The immediate problem with most solutions you can concoct to this puzzle is that the data from each of the two main techniques — the distance ladder technique and the early relics technique — already rule out almost all of them. If the five scenarios for new physics you just read seem like an example of desperate theorizing, there’s a good reason for that: unless one of the two techniques has a hitherto-undiscovered fundamental flaw, some type of new physics must be at play.

Based on the improved observations that are coming in, as well as novel scientific instruments that are presently being designed and built, we can fully expect the tension in these two measurements to reach the “gold standard” 5-sigma significance level within a decade. We’ll all keep looking for errors and uncertainties, but it’s time to seriously consider the fantastic: maybe this really is an omen that there’s more to the Universe than we presently realize.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Ask Ethan: What Could Solve The Cosmic Controversy Over The Expanding Universe? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

An artist’s rendition of a potentially habitable exoplanet orbiting a distant star. But we might not have to find an Earth-like world to find life; very different planets around very different stars could surprise us in a number of ways. No matter what, more information is needed. (NASA AMES/JPL-CALTECH)

If we restrict ourselves to looking for extraterrestrial life on Earth-like worlds, we might miss it entirely.

When we think about life out there in the Universe, far beyond the limits of Earth, we can’t help but look to our own planet as a guide. Earth has a number of features that we think are extremely important — perhaps even essential — for enabling life to arise and thrive. For generations, humans have dreamed of life beyond Earth, striving to find another world similar to our own but with its own unique success story: our own Earth 2.0.

But just because life succeeded here on Earth doesn’t necessarily mean that life is likely to succeed on Earth-like worlds, only that it’s possible. Similarly, just because life hasn’t been found on non-Earth-like worlds doesn’t mean that it isn’t possible. In fact, it’s quite possible that the most common forms of life in the galaxy are very different from terrestrial life forms, and occur more frequently on worlds different than our own. The only way to know is to look, and that necessitates looking for observational signals that might cause us to rethink our place in the Universe.

Artist’s conception of the exoplanet Kepler-186f, which may exhibit Earth-like (or early, life-free Earth-like) properties. As imagination-sparking as illustrations like this are, they’re mere speculations, and the incoming data won’t provide any views akin to this at all. Kepler 186f, like many known Earth-like worlds, isn’t orbiting a Sun-like star, but that might not necessarily mean that life on this world is disfavored. (NASA AMES/SETI INSTITUTE/JPL-CALTECH)

We have the right mix of light and heavy elements to have a rocky planet with a thin-but-substantial atmosphere and the raw ingredients for life. We orbit a star at the right distance for liquid water on our surface, with our planet possessing both oceans and continents. Our Sun is long-enough lived (and low-enough in mass) that life can evolve to become complex, differentiated, and possibly intelligent, but high-enough in mass that flares aren’t so numerous that they’d strip our atmosphere away.

Our planet spins on its axis but isn’t tidally locked, so that we have days and nights throughout the year. We have a large moon to stabilize our axial tilt. We have a large world (Jupiter) outside of our frost line to shield the inner planets from catastrophic strikes. When we think about it in these terms, looking for a world just like Earth — a proverbial ‘Earth 2.0’ — seems like a no-brainer of a decision.

The exoplanet Kepler-452b (R), as compared with Earth (L), a possible candidate for Earth 2.0. Looking at worlds that are similar to Earth is a compelling place to start, but it might not be the most likely place to actually find life in the galaxy or the Universe at large. (NASA/AMES/JPL-CALTECH/T. PYLE)

There are lots of reasons to believe that looking for a world as Earth-like as possible, around a star as Sun-like as possible, might be the best place to look for life elsewhere in the Universe. We know that there are very likely billions of solar systems out there that have at least somewhat similar properties to the Earth and Sun, thanks to our tremendous advances in exoplanet studies over the past three decades.

Since life not only arose but became complex, differentiated, intelligent, and technologically advanced here on Earth, it makes sense to choose worlds that are similar to Earth in our quest to find an inhabited world out there in the galaxy. Surely, if it’s arisen here under the conditions we ourselves have, it must be possible for life to arise again, elsewhere, under similar conditions.

The small Kepler exoplanets known to exist in the habitable zone of their star. Whether the worlds classified as Super-Earth are Earth-like or Neptune-like is an open question, but it may not even be important for a world to orbit a Sun-like star or be in this so-called habitable zone in order for life to have the potential of arising. (NASA/AMES/JPL-CALTECH)

Practically no one in the exoplanet or astrobiology communities thinks that looking for worlds similar to a proverbial ‘Earth 2.0’ is a bad idea. But is it the smartest course of action to invest the overwhelming majority of our resources in solely looking for and investigating worlds that have these similarities to our own, life-rich planet? I had the opportunity to sit down and record a podcast with scientist Adrian Lenardic, who doesn’t agree with this position at all.

If science has taught us anything, it’s that we shouldn’t assume we know the answer before doing the key experiments or making the critical observations. Yes, we have to look where the evidence points, but we also have to look in places where we might think it’s unlikely for life to arise, thrive, or otherwise sustain itself. The Universe is full of surprises, and if we don’t give ourselves the opportunity to allow the Universe to surprise us, we’re going to draw biased — and therefore, fundamentally unscientific — conclusions.

Deep under the sea, around hydrothermal vents, where no sunlight reaches, life still thrives on Earth. How to create life from non-life is one of the great open questions in science today, but if life can exist down here, perhaps it exists undersea on Europa or Enceladus, too. It will be more and better data, most likely collected and analyzed by experts, that will eventually determine the scientific answer to this mystery. (NOAA/PMEL VENTS PROGRAM)

Our preconceptions about how life works have been wrong before, as restrictions we thought were necessary have turned out to be circumvented not only plentifully, but possibly easily and frequently.

For example, we once thought that life required sunlight. But the discovery of life around hydrothermal vents many miles beneath the ocean’s surface taught us that even in the absolute absence of sunlight, life can find a way.

We once thought that life couldn’t survive in an arsenic-rich environment, as arsenic is a known poison to biological systems. Yet recent discoveries have shown not only that life is possible in arsenic-rich locations, but that arsenic can even be used in biological processes.

And perhaps most surprisingly, we thought that complex life could never survive in the harsh environment of space. But the tardigrade proved us wrong, entering a state of suspended animation in the vacuum of space, and successfully rehydrating when returned to Earth.

A scanning electron microscope image of a Milnesium tardigradum (Tardigrade, or ‘water bear’) in its active state. Tardigrades have been exposed to the vacuum of space for prolonged periods of time, and have returned to normal biological operation after being returned to liquid water environments. (SCHOKRAIE E, WARNKEN U, HOTZ-WAGENBLATT A, GROHME MA, HENGHERR S, ET AL. (2012))

It has to make you wonder about what else might be out there. Could there be life in the subsurface oceans of Jupiter’s moon Europa, Saturn’s moon Enceladus, Neptune’s moon Triton, or even cold, distant Pluto? All of them orbit large, massive worlds (Pluto’s Charon counts), which exert tidal forces on the planet’s interior, providing a source of heat and energy, even in an environment where no sunlight can penetrate.

On rocky worlds without sufficient atmospheres to house liquid water, a subsurface ocean is still possible. Mars, for example, could have plentiful amounts of liquid groundwater beneath the surface, providing a possible environment for life to still exist. Even a thoroughly uninhabitable environment like Venus could have life, as the region above the cloud-tops, some 60 kilometers up, has Earth-like temperatures and air pressure.

NASA’s hypothetical HAVOC (High-Altitude Venus Operational Concept) mission could look for life in the cloudtops of our nearest planetary neighbor. Despite the unfriendly conditions on the surface of Venus, the area above the cloud-tops has a similar pH, temperature, and atmospheric pressure to the environment we find at Earth’s surface. (NASA Langley Research Center)

Sure, we might look at the most common class of star out there in the Universe — red dwarf (M-class) stars, which make up 75–80% of all stars — and come up with all sorts of reasons why life is unlikely to exist there. Here are just a few:

  • M-class stars will tidally lock any Earth-sized (rocky) planets wherever liquid water is capable of forming on very short (~1 million years or less) timescales.
  • M-class stars flare ubiquitously, and would easily strip away an Earth-like atmosphere on short timescales.
  • X-rays emitted by these stars are too great and numerous, and would sufficiently irradiate the planet to make life as we know it untenable.
  • The lack of higher-energy (ultraviolet and yellow/green/blue/violet) light would make photosynthesis impossible, preventing primitive life from coming into existence.

All inner planets in a red dwarf system will be tidally locked, with one side always facing the star and one always facing away, with a ring of Earth-like habitability between the night and day sides. But even though these worlds are so different from our own, we have to ask the biggest question of all: could one of them still potentially be habitable? (NASA/JPL-CALTECH)

If these are your reasons for disfavoring life around the most common class of star in the Universe, where approximately 6% of these stars are thought to contain Earth-sized planets in what we call the habitable zone (at the right distance for a world with Earth-like conditions to have liquid water on its surface), you’re going to have to reconsider your assumptions.
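
One way to see why those objections loom so large for red dwarfs is to work out just how close-in their habitable zones sit. The sketch below uses the simple flux-matching scaling (habitable-zone distance grows as the square root of the stellar luminosity) plus Kepler’s third law; the luminosity and mass used are assumed, typical red-dwarf values, not measurements of any particular star.

```python
# Why red-dwarf habitable zones are so close-in: matching the flux Earth gets
# requires d_HZ ~ 1 AU * sqrt(L_star / L_sun). Stellar values below are assumed.
from math import sqrt

def habitable_zone_au(luminosity_solar):
    """Distance (AU) at which a planet receives the same flux Earth does."""
    return sqrt(luminosity_solar)

def orbital_period_days(semi_major_axis_au, stellar_mass_solar):
    """Kepler's third law: P[years]^2 = a[AU]^3 / M[solar masses]."""
    return 365.25 * sqrt(semi_major_axis_au**3 / stellar_mass_solar)

L_STAR, M_STAR = 0.02, 0.3   # assumed red-dwarf luminosity and mass (solar units)
d_hz = habitable_zone_au(L_STAR)
print(f"HZ at ~{d_hz:.2f} AU, orbital period ~{orbital_period_days(d_hz, M_STAR):.0f} days")
# -> ~0.14 AU and ~35 days: close enough that tidal locking and flares matter
```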

Tidal locking might not necessarily be as bad as we thought, as magnetic fields and substantial atmospheres with high winds could still provide changes in energy inputs. A planet (like Venus) that continuously generated new atmospheric particles could potentially survive solar wind/flare stripping events. Organisms could dive to deeper depths during X-ray events, shielding themselves from the radiation. And photosynthesis, like all life processes on Earth, is only based on the use of 20 amino acids, but more than 60 additional ones are known to occur naturally throughout the Universe.

Scores of amino acids not used by life on Earth are found in the Murchison meteorite, which fell to Earth in Australia in the 20th century. The fact that 80+ unique types of amino acids exist in just a plain old space rock might indicate that the ingredients for life, or even life itself, might have formed differently elsewhere in the Universe, perhaps even on a planet that didn’t have a parent star at all. (WIKIMEDIA COMMONS USER BASILICOFRESCO)

While we have every reason to believe that life might be ubiquitous — or at least have a chance — on worlds that are very similar to Earth, it’s also very plausible that life may be more plentiful on worlds that aren’t like our own.

Perhaps exomoons orbiting large planets (with large tidal forces) are even more conducive to life originating than a world like Earth is.

Perhaps liquid water on the planet itself isn’t a requirement for life, as perhaps the right kind of cell wall or membrane can enable water to remain in a liquid state where it otherwise couldn’t.

Perhaps radioisotope decay, geothermal sources, or even chemical sources of energy could provide life with the external source it needs; perhaps rogue planets — without parent stars at all — might be home to alien life.

When a planet transits in front of its parent star, some of the light is not only blocked, but if an atmosphere is present, filters through it, creating absorption or emission lines that a sophisticated-enough observatory could detect. If there are organic molecules or large amounts of molecular oxygen, we might be able to find that, too. It’s important that we consider not only the signatures of life we know of, but of possible life that we don’t find here on Earth. (ESA / DAVID SING)

Perhaps even super-Earths, arguably more numerous than Earth-sized worlds, might be potentially habitable under the right circumstances. The wonderful thing about this idea is that it’s testable just as easily as an Earth-like world around a Sun-like star. To examine a planet for hints of life, we can approach this puzzle with many different lines of inquiry. We can:

  • wait for a planetary transit and try to perform spectroscopy on the absorbed light, probing the contents of an exo-atmosphere (the simple geometry behind this is sketched just after this list),
  • try to resolve the world itself with direct imaging, looking for seasonal variations and signs such as the periodic greening of the world,
  • or look for nuclear, neutrino, or techno-signatures that might indicate the presence of a planet being manipulated by its inhabitants, whether they are intelligent or not.
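
As referenced in the first item above, the transit approach rests on simple geometry: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. The sketch below uses standard radii for Earth and the Sun, plus an assumed red dwarf at 20% of the Sun’s radius, to show why small stars actually make Earth-sized worlds easier to study.

```python
# Transit geometry: the fractional dimming during a transit is roughly
# (planet radius / star radius) squared. The red-dwarf radius is illustrative.
R_EARTH_KM = 6_371
R_SUN_KM = 696_000

def transit_depth(planet_radius_km, star_radius_km):
    return (planet_radius_km / star_radius_km) ** 2

print(f"Earth crossing a Sun-like star:  {transit_depth(R_EARTH_KM, R_SUN_KM):.1e}")
print(f"Earth crossing a small red dwarf: {transit_depth(R_EARTH_KM, 0.2 * R_SUN_KM):.1e}")
# -> ~8e-5 versus ~2e-3: the same planet blocks ~25 times more of the light
#    of a star one-fifth the Sun's size
```
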
This artist’s impression displays TRAPPIST-1 and its planets reflected in a surface. The potential for water on each of the worlds is also represented by the frost, water pools, and steam surrounding the scene. However, it is unknown whether any of these worlds actually still possess atmospheres, or if they’ve been blown away by their parent star. One thing is certain, however: we won’t know whether they’re inhabited or not unless we examine their properties in depth for ourselves. (NASA/R. HURT/T. PYLE)

It may be the case that life is rare in the Universe, in which case it will require us to look at a lot of candidate planets — possibly with very high precision — in order to reveal a successful detection. But if we search exclusively for planets that have similar properties to Earth, and we restrict ourselves to looking at parent stars and solar systems that are similar to our own, we are doomed to get a biased representation of what’s out there.

You might think, in the search for extraterrestrial life, that more is more, and that the best way to find life beyond Earth is to look at greater numbers of candidate planets that might be the Earth 2.0 we’ve been dreaming of for so long. But non-Earth-like planets could be home to life that we’ve never considered, and we won’t know unless we look. More is more, but “different” is also more. We must be careful, as scientists, not to bias our findings before we’ve even truly started looking.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Is Alien Life Hiding Beyond Earth 2.0? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wolfgang Paul (right, with glasses) in characteristic form outside the Council Chamber at CERN during a Scientific Policy Committee meeting in 1977. He was chair of the committee at the time (1975–1978) and a delegate to Council. (CERN)The world of particle physics sure does have surprises, even for the most educated of physicists out there.

If you ever take a visit to the physical site of CERN, where the Large Hadron Collider is located, you’ll immediately notice something wonderful about the streets. They’re all named after influential, important figures in the history of physics. Titans such as Max Planck, Marie Curie, Niels Bohr, Louis de Broglie, Paul Dirac, Enrico Fermi and Albert Einstein have all been honored, along with many others.

One of the more interesting surprises you might find, if you look hard enough, is a street honoring the physicist Wolfgang Paul. You might immediately think, “oh, someone vandalized the street of Wolfgang Pauli,” the famous physicist whose exclusion principle describes the behavior of all the normal matter in our Universe. But no; Pauli has his own street, and Wolfgang Paul is entirely his own Nobel-winning physicist. Here’s the story you haven’t heard.

The 1989 Nobel Prize in Physics was jointly awarded to Norman Ramsey, Hans Dehmelt, and Wolfgang Paul for their work in the development of atomic precision spectroscopy. Wolfgang Paul’s development of the ion trap was instrumental in this, and the Paul trap, among many other of his achievements, is still in widespread use today. (NOBEL MEDIA)

Wolfgang Paul, not to bury the lede, was awarded the Nobel Prize in Physics back in 1989. Paul’s most important contribution to physics was the development of the ion trap, which enabled physicists to capture charged particles in a system isolated from an external environment. Like most of the modern Nobel Laureates in physics, the critical work that Paul did was completed decades before the Nobel was awarded: way back in 1953.

Ion traps have many uses, from mass spectrometry to quantum computers. Paul's design, specifically, enabled the 3D-capture of ions owing to the use of both static and oscillating electric fields. It is not the only type of ion trap in use today, as both Penning traps and Kingdon traps are also used. But even 66 years after it was first developed, the Paul trap remains in widespread use.

Mass spectrometers are useful in a slew of different circumstances, including particle physics, chemical and medical applications, and even in the study of antimatter or of cosmic particles in space. It was Wolfgang Paul’s work that made much of modern mass spectrometry and ion capture possible. (Uli Deck/picture alliance via Getty Images)

In his early career, Paul achieved his degrees by studying in Munich, Berlin, and then Kiel, working with Hans Geiger (of Geiger counter fame) and then Hans Kopfermann. During World War II, he researched isotope separation, which remains an important component in creating fissionable material for both reactors and nuclear weapons.

The way you separate different isotopes out is based on a simple principle: every element is defined by the number of protons in its atomic nucleus, but different isotopes can contain different numbers of neutrons. When you apply an electric or magnetic field to any atomic nucleus, the force it feels is based on its electric charge (the number of protons), but the acceleration it experiences is inversely proportional to its mass.

Atoms or ions with the same number of protons in the nucleus are all the same element, but if they possess different numbers of neutrons, they’ll have different masses from one another. These are examples of isotopes, and separating out different ions by mass alone is one of the key goals of mass spectrometry. (BRUCEBLAUS / WIKIMEDIA COMMONS)

With the same force acting on a different mass, you can achieve different accelerations for different isotopes, and — in principle — sort the different isotopes of the same element via that method. In practice, the methods and mechanisms used to sort isotopes are far more complex than that, and Paul, along with Kopfermann and many others, worked extensively on this at the University of Bonn in the post-World War II years.

One of the techniques Paul worked to develop is that of mass spectrometry, which enables you to separate out particles based on mass. While this may not work for neutral atoms, which aren't deflected by electric or magnetic fields, you can separate them easily if you kick even a single electron off of each one, transforming them into ions. With unique charge-to-mass ratios, you can use electromagnetism to your advantage.
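As a concrete (and deliberately simplified) illustration of that charge-to-mass principle, the sketch below applies the same assumed electric field to two singly charged uranium ions, chosen only because of the wartime isotope-separation work mentioned above; the field strength is arbitrary and the ions are treated as free particles.

```python
# Same force, different mass: why ions of different isotopes separate.
E_CHARGE = 1.602e-19     # C, elementary charge
AMU      = 1.661e-27     # kg, atomic mass unit
E_FIELD  = 1.0e4         # V/m, an arbitrary illustrative field strength

for name, mass_amu in (("U-235", 235.04), ("U-238", 238.05)):
    force = E_CHARGE * E_FIELD                 # identical for both singly charged ions
    accel = force / (mass_amu * AMU)           # differs because the masses differ
    print(f"{name}: acceleration = {accel:.3e} m/s^2")
```

The two accelerations differ by only about one percent, which hints at why practical isotope separation and mass spectrometry demand far more sophisticated machinery than a single deflecting field.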

Monopole terms (left) are always spherically symmetric, and arise in electrostatics from something like a net charge. If you have a positive and negative charge separated by a distance, you’ll have a zero monopole term but will have a net dipole electric field. Putting multiple dipoles in the proper configuration can lead to both zero monopole and dipole terms, but will leave a quadrupole field in its wake. Quadrupole electric and magnetic fields have an extraordinary number of applications in physics, chemistry and biology, including at the LHC (and in other laboratories) at CERN. (JOSHUA JORDAN, PH.D. THESIS (2017))

This was where Paul’s work, in the 1950s, really took off. We might be used to electric fields as emanating from a point, where the electric charge itself exists, but these are the simplest kind of electric fields: monopole fields. We can also have dipole fields, where you have a positive and negative charge (for an overall neutral system) that are separated by a small distance.

This results in a field analogous to the magnetic fields you’ve seen for a bar magnet: where you have two poles at opposite ends of the magnet. While you might not find it intuitive, you can also put a series of dipoles in a certain configuration to cancel out the effects of both the monopole and the dipole terms, but still obtain an electric field: a quadrupole electric field. This technique can be extended indefinitely, to octopoles, hexadecapoles, and so on.

Schematic drawing of a Paul trap (an ion cage) for the storage of charged particles by the use of an oscillating electric field (blue), generated by a quadrupole consisting of end caps (a) and a ring electrode (b). A particle, indicated in red (here positive), is stored between caps of the same polarity, inside a vacuum chamber, and is surrounded by a cloud of similarly charged particles, also in red. (ARIAN KRIESCH / WIKIMEDIA COMMONS)

You might think that, with a properly configured electric field, you could successfully trap a particle and pin it in place. Unfortunately, it’s been known for an extremely long time — since 1842, when Samuel Earnshaw proved it — that no configuration of static electric fields will be successful at this.

Fortunately, Paul figured out a method to trap the ions by using a combination of static electric fields and oscillating electric fields. In all three dimensions, Paul’s setup created electric fields that switched directions rapidly, effectively confining the particles to a very small volume and preventing their escape. In 1953, his laboratory developed the first three-dimensional ion trap, inventing a technique that’s still applied today.

The linear quadrupole ion trap at the University of Calgary, in Dr. Thompson’s laboratory, makes use of the same quadrupole electric field with high-frequency oscillatory electric fields that Paul’s original setup used. (DANFOSTE AND AKRIESCH OF WIKIMEDIA COMMONS)

More specifically, Paul realized that if you set up a static quadrupole electric field and then superimposed an oscillating electric field atop it, you could separate ions with the same charge but different masses. This was then further developed into a standardized method to separate ions by mass, now widely used in the process of mass spectrometry.

Further developments led to the Paul trap, which filters ions by mass and allows the desired ones to be kept, with the remainder discarded. Paul's laboratory was also responsible, along with his fellow Nobel Laureate Hans Dehmelt (independently), for the Penning trap, which is another type of widely-used ion trap.
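For readers who want a quantitative flavor of how a radio-frequency quadrupole sorts ions by mass: the motion is governed by the Mathieu equation, and whether an ion stays confined depends on two dimensionless parameters, a = 8eU/(mr₀²Ω²) and q = 4eV/(mr₀²Ω²), both of which scale as 1/mass. The sketch below is only illustrative; the voltages, trap radius, and drive frequency are invented for the example, and the stability test is a crude RF-only cut (q < 0.908) rather than the full Mathieu stability diagram.

```python
import math

# Illustrative Paul-type quadrupole mass filtering via Mathieu parameters.
E_CHARGE = 1.602e-19   # C
AMU      = 1.661e-27   # kg

def mathieu_parameters(mass_amu, charge_e, U_dc, V_rf, r0, omega):
    """Standard dimensionless Mathieu parameters for a linear quadrupole."""
    m = mass_amu * AMU
    e = charge_e * E_CHARGE
    a = 8 * e * U_dc / (m * r0**2 * omega**2)
    q = 4 * e * V_rf / (m * r0**2 * omega**2)
    return a, q

omega = 2 * math.pi * 1.0e6          # hypothetical 1 MHz RF drive
for mass in (18, 28, 44):            # e.g. singly charged H2O+, N2+, CO2+
    a, q = mathieu_parameters(mass, 1, U_dc=0.0, V_rf=50.0, r0=5e-3, omega=omega)
    stable = q < 0.908               # first stability region along a = 0
    print(f"m = {mass:>2} u:  q = {q:.2f}  ->  {'confined' if stable else 'ejected'}")
```

With the DC voltage set to zero, such a filter passes all ions above a mass cutoff; adding a DC component narrows the window toward a single mass, which is the basis of the quadrupole mass spectrometer.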

This schematic of a high-capacity ion trap takes advantage of an extension of Paul’s original work to store many ions in a trap simultaneously, and takes advantage of higher-order electric fields than a simple quadrupole alone. The octopole, for example, is clearly identified in this setup. (MIKE25 / WIKIMEDIA COMMONS)

If you were someone interested in performing spectroscopy on Earth, the ultimate dream would be to observe a single atom or ion. This dream came true only because of three advancements that needed to occur in tandem:

  1. individual atoms or ions needed to be trapped and kept stable in an isolated environment,
  2. these composite particles then needed to be cooled to a low temperature where they could be effectively studied,
  3. and then the sensitivity of the detection apparatus needed to be enhanced so that a single atom or ion could be observed.

The 1989 Nobel Prize in Physics was awarded when this dream was achieved, but the very first step of all — to trap individual atoms and ions — was first accomplished in Paul’s laboratory, using the techniques that he himself pioneered.

This ion trap, whose design is largely based on the work of Wolfgang Paul, is one of the early examples of an ion trap being used for a quantum computer. This 2005 photo is from a laboratory in Innsbruck, Austria, and shows the setup of one component of a now-outdated quantum computer. (MNOLF / WIKIMEDIA COMMONS)

Paul traps are still used today to study and trap ions of all different types, including at the antimatter factory at CERN. Paul himself, meanwhile, went on to make many more important contributions, not only to particle physics but also to its role in society. He was a professor of experimental physics at the University of Bonn for 41 years: from 1952 until his death in 1993.

In addition to his work on mass spectrometry, ion traps, and the Paul and Penning traps, he developed molecular beam lenses and worked on two early (circular electron) particle accelerators: the 500 MeV and 2,500 MeV synchrotrons, which were Europe's first. During the 1960s, he served as CERN's director of the division of nuclear physics, and in his later life, worked on containing and confining slow neutrons, leading to the first high-quality measurement of the half-life of a free (unbound) neutron.

A portion of the antimatter factory at CERN, where charged antimatter particles are brought together and can form either positive ions, neutral atoms, or negative ions, depending on the number of positrons that bind with an antiproton. Paul traps work just as well for antimatter as they do for regular matter. (E. SIEGEL)

Yet recognition almost escaped Paul entirely. Upon his retirement, when he became Professor Emeritus, the University took his office away and moved him to a janitor's closet in the basement. Despite all of his contributions to the University of Bonn (including singlehandedly getting 100% of the funding for the 500 MeV synchrotron and getting it built there) and to physics over the years, he never complained about it.

Yet when Stockholm called, everything changed. They moved him back out of the basement and into his former office, where he continued his work until the end of his days. Of course, posthumously, CERN chose him as one of the physicists to honor with a street all his own. It still exists today, and I assure you, it isn’t a typo.

Route Wolfgang Paul at CERN. No, it isn’t a typo, nor is it an act of vandalism; the sign has nothing at all to do with Wolfgang Pauli, who has his own street at CERN. (E. SIEGEL)

As for the connection between Wolfgang Paul and his much more famous contemporary, Wolfgang Pauli? They finally met in the 1950s in Bonn, when Pauli came to visit. Away from everyone else, Paul approached him, and quipped, in a joke that only a math or physics nerd would appreciate, “Finally! I meet my imaginary part!” May you never think of Wolfgang Paul as a mere typo ever again, and instead fully appreciate his tremendous contributions to our understanding of the matter that makes up this world.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Wolfgang Paul Was A Great Physicist, Not A Typo Of ‘Wolfgang Pauli’ was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

This is a Digitized Sky Survey image of the oldest star with a well-determined age in our galaxy. The ageing star, catalogued as HD 140283, lies over 190 light-years away. The NASA/ESA Hubble Space Telescope was used to narrow the measurement uncertainty on the star’s distance, and this helped to refine the calculation of a more precise age of 14.5 billion years (plus or minus 800 million years). This can be reconciled with a Universe that’s 13.8 billion years old (within the uncertainties), but not with one that’s just 12.5 billion years of age. (DIGITIZED SKY SURVEY (DSS), STSCI/AURA, PALOMAR/CALTECH, AND UKSTU/AAO)There really is a cosmic conundrum about how fast the Universe is expanding. Changing its age won’t help.

One of the most surprising and interesting discoveries of the 21st century is the fact that different methods of measuring the expansion rate of the Universe yield different, inconsistent answers. If you measure the expansion rate of the Universe by looking at the earliest signals — early density fluctuations in the Universe that were imprinted from the early stages of the Big Bang — you find that the Universe expands at one particular rate: 67 km/s/Mpc, with an uncertainty of about 1%.

On the other hand, if you measure the expansion rate using the cosmic distance ladder — by looking at astronomical objects and mapping their redshifts and distances — you get a different answer: 73 km/s/Mpc, with an uncertainty of about 2%. This really is a fascinating cosmic conundrum, but despite claims by one team to the contrary, you cannot fix it by making the Universe a billion years younger. Here’s why.

The expanding Universe, full of galaxies and the complex structure we observe today, arose from a smaller, hotter, denser, more uniform state. It took thousands of scientists working for hundreds of years for us to arrive at this picture, and yet the lack of a consensus on what the expansion rate actually is tells us that either something is dreadfully wrong, we have an unidentified error somewhere, or there’s a new scientific revolution just on the horizon. (C. FAUCHER-GIGUÈRE, A. LIDZ, AND L. HERNQUIST, SCIENCE 319, 5859 (47))

At first glance, you might think that the expansion rate of the Universe has everything to do with how old the Universe is. After all, if we go back to the moment of the hot Big Bang, and we know the Universe was expanding extremely rapidly from this hot, dense, state, we know it must have cooled and slowed as it expanded. The amount of time that has passed since the Big Bang, along with the ingredients (like radiation, normal matter, dark matter and dark energy) it’s made of, determine how fast the Universe should be expanding today.

If it expands 9% faster than we previously suspected, then perhaps the Universe is 9% younger than we’d anticipated. This is the naive (and incorrect) reasoning applied to the problem, but the Universe isn’t as simple as that.

Three different types of measurements, distant stars and galaxies, the large scale structure of the Universe, and the fluctuations in the CMB, enable us to reconstruct the expansion history of our Universe. The fact that different methods of measurement point to different expansion histories may point the way forward to a new discovery in physics, or a greater understanding of what makes up our Universe. (ESA/HUBBLE AND NASA, SLOAN DIGITAL SKY SURVEY, ESA AND THE PLANCK COLLABORATION)

The reason you cannot simply do this is that there are three independent pieces of evidence that all have to fit together in order to explain the Universe.

  1. You must consider the early relic data, from features (known as baryon acoustic oscillations, which represent interactions between normal matter and radiation) that appear in the large-scale structure of the Universe and the fluctuations in the cosmic microwave background.
  2. You must consider the distance ladder data, which uses the apparent brightnesses and measured redshifts of objects to reconstruct both the expansion rate and the change in the expansion rate over time throughout our cosmic history.
  3. And, finally, you must consider the stars and star clusters we know of in our galaxy and beyond, which can have the ages of their stars independently determined through astronomical properties alone.
Constraints on dark energy from three independent sources: supernovae, the CMB (cosmic microwave background) and BAO (which is a wiggly feature seen in the correlations of large-scale structure). Note that even without supernovae, we’d need dark energy for certain, and also that there are uncertainties and degeneracies between the amount of dark matter and dark energy that we’d need to accurately describe our Universe. (SUPERNOVA COSMOLOGY PROJECT, AMANULLAH, ET AL., AP.J. (2010))

If we look at the first two pieces of evidence — the early relic data and the distance ladder data — this is where the huge discrepancy arises: you can determine the expansion rate from both methods independently, and the two values disagree by about 9%.

But this is not the end of the story; not even close. You can see, from the graph above, that the distance ladder data (which includes the supernova data, in blue) and the early relic data (which is based on both baryon acoustic oscillations and cosmic microwave background data, in the other two colors) not only intersect and overlap, but that there are uncertainties in both the dark matter density (x-axis) and dark energy density (y-axis). If you have a Universe with more dark energy, it's going to appear older; if you have a Universe with more dark matter, it's going to appear younger.

Four different cosmologies lead to the same fluctuations in the CMB, but measuring a single parameter independently (like H_0) can break that degeneracy. Cosmologists working on the distance ladder hope to develop a similar pipeline-like scheme to see how their cosmologies are dependent on the data that is included or excluded. (MELCHIORRI, A. & GRIFFITHS, L.M., 2001, NEWAR, 45, 321)

This is the big issue when it comes to the early relic data and the distance ladder data: the data that we have can fit multiple possible solutions. A slow expansion rate can be consistent with a Universe with the fluctuations we see in the cosmic microwave background, for example (shown above), if you tweak the normal matter, dark matter, and dark energy densities, along with the curvature of the Universe.

In fact, if you look at the cosmic microwave background data alone, you can see that a larger expansion rate is very much possible, but that you need a Universe with less dark matter and more dark energy to account for it. What’s particularly interesting, in this scenario, is that even if you demand a higher expansion rate, the act of increasing the dark energy and decreasing the dark matter keeps the age of the Universe practically unchanged at 13.8 billion years.

Before Planck, the best-fit to the data indicated a Hubble parameter of approximately 71 km/s/Mpc, but a value of approximately 69 or above would now be too great for both the dark matter density (x-axis) we’ve seen via other means and the scalar spectral index (right side of the y-axis) that we require for the large-scale structure of the Universe to make sense. A higher value of the Hubble constant of 73 km/s/Mpc is still allowed, but only if the scalar spectral index is high, the dark matter density is low, and the dark energy density is high. (P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION (2015))

If we work out the math where the Universe has the following parameters:

  • an expansion rate of 67 km/s/Mpc,
  • a total (normal+dark) matter density of 32%,
  • and a dark energy density of 68%,

we get a Universe that’s been around for 13.81 billion years since the Big Bang. The scalar spectral index (ns), in this case, is approximately 0.962.

On the other hand, if we demand that the Universe have the following very different parameters:

  • an expansion rate of 73 km/s/Mpc,
  • a total (normal+dark) matter density of 24%,
  • and a dark energy density of 76%,

we get a Universe that’s been around for 13.72 billion years since the Big Bang. The scalar spectral index (ns), in this case, is approximately 0.995.
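If you'd like to verify those two ages yourself, here is a minimal sketch of the calculation. It assumes a spatially flat Universe containing only matter and dark energy (radiation is neglected, which barely affects the age), in which case the Friedmann equation has the closed-form solution t = (2 / (3 H₀ √Ω_Λ)) · arcsinh(√(Ω_Λ/Ω_m)).

```python
import math

def age_gyr(H0, omega_m, omega_lambda):
    """Age of a flat matter + dark-energy universe, radiation neglected,
    from the closed-form solution of the Friedmann equation."""
    hubble_time_gyr = 977.8 / H0   # 1/H0 in Gyr when H0 is in km/s/Mpc
    factor = (2.0 / (3.0 * math.sqrt(omega_lambda))) \
             * math.asinh(math.sqrt(omega_lambda / omega_m))
    return hubble_time_gyr * factor

print(f"67 km/s/Mpc, 32% matter, 68% dark energy: {age_gyr(67, 0.32, 0.68):.2f} Gyr")
print(f"73 km/s/Mpc, 24% matter, 76% dark energy: {age_gyr(73, 0.24, 0.76):.2f} Gyr")
```

Both parameter sets come out within about 0.1 billion years of one another, which is exactly the point: a faster expansion rate, compensated by less dark matter and more dark energy, leaves the age essentially unchanged.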

Correlations between certain aspects of the magnitude of temperature fluctuations (y-axis) as a function of decreasing angular scale (x-axis) show a Universe that is consistent with a scalar spectral index of 0.96 or 0.97, but not 0.99 or 1.00. (P.A.R. ADE ET AL. AND THE PLANCK COLLABORATION)

Sure, the data we have for the scalar spectral index disfavors this value, but that’s not the point. The point is this: making the Universe expand faster does not imply a younger Universe. Instead, it implies a Universe with a different ratio of dark matter and dark energy, but the age of the Universe remains largely unchanged.

This is very different from what one team has been asserting, and it’s extremely important for a reason we’ve already brought up: the Universe must be at least as old as the stars within it. Although there are certainly substantial error bars (i.e., uncertainties) on the ages of any individual star or star cluster, the full suite of evidence cannot be reconciled very easily with a Universe that’s younger than about 13.5 billion years.

Located around 4,140 light-years away in the galactic halo, SDSS J102915+172927 is an ancient star that contains just 1/20,000th the heavy elements the Sun possesses, and should be over 13 billion years old: one of the oldest in the Universe, and having possibly formed before even the Milky Way. The existence of stars like this informs us that the Universe cannot have properties that lead to an age younger than the stars within it. (ESO, DIGITIZED SKY SURVEY 2)

It takes at least 50-to-100 million years for the Universe to form the first stars of all, and those stars were made of hydrogen and helium alone: they no longer exist today. Instead, the oldest individual stars are found in the outskirts of halos of individual galaxies, and have extraordinarily tiny amounts of heavy elements. These stars are, at best, part of the second generation of stars to form, and their ages are inconsistent with a Universe that’s a billion years younger than the accepted, best-fit 13.8 billion year figure.

But we can go beyond individual stars and look at the ages of globular clusters: dense collections of stars that formed back in our Universe’s early stages. The stars inside, based on which ones have turned into red giants and which ones have yet to do so, give us a completely independent measurement of the Universe’s age.

The twinkling stars you see are evidence of variability, which is due to a unique period/brightness relationship. This is an image of a portion of the globular cluster Messier 3, and the properties of the stars inside it allow us to determine the overall cluster’s age. (JOEL D. HARTMAN)

The science of astronomy began with the studies of the objects in the night sky, and no object is more numerous or apparent to the naked eye than the stars. Through centuries of study, we’ve learned one of the most essential pieces of astronomical science: how stars live, burn through their fuel, and die.

In particular, we know that all stars, when they’re alive and burning through their main fuel (fusing hydrogen into helium), have a specific brightness and color, and remain at that specific brightness and color only for a certain amount of time: until their cores start to run out of fuel. At that point, the brighter, bluer and higher mass stars begin to “turn off” of the main sequence (the curved line on the color-magnitude diagram, below), evolving into giants and/or supergiants.

The life cycles of stars can be understood in the context of the color/magnitude diagram shown here. As the population of stars age, they ‘turn off’ the diagram, allowing us to date the age of the cluster in question. The oldest globular star clusters have an age of at least 13.2 billion years. (RICHARD POWELL UNDER C.C.-BY-S.A.-2.5 (L); R. J. HALL UNDER C.C.-BY-S.A.-1.0 (R))

By looking at where that turn-off-point is for a cluster of stars that all formed at the same time, we can figure out — if we know how stars work — how old those stars in the cluster are. When we look at the oldest globular clusters out there, the ones lowest in heavy elements and whose turn-offs come for the lowest-mass stars out there, many are older than 12 or even 13 billion years, with ages up to around 13.2 billion years.
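To see roughly how a turn-off mass maps onto an age, a crude power-law scaling for main-sequence lifetimes, t ≈ 10 Gyr × (M/M☉)^−2.5, is sketched below. Real cluster dating relies on detailed stellar-evolution models, so treat these numbers as order-of-magnitude illustrations only.

```python
def main_sequence_lifetime_gyr(mass_solar):
    """Crude power-law estimate of a star's hydrogen-burning lifetime."""
    return 10.0 * mass_solar ** -2.5

# A cluster whose turn-off sits at stars slightly below the Sun's mass
# must be older than the full main-sequence lifetime of such stars.
for turnoff_mass in (1.0, 0.9, 0.8):
    age = main_sequence_lifetime_gyr(turnoff_mass)
    print(f"turn-off at {turnoff_mass:.1f} solar masses -> cluster age ~ {age:.1f} Gyr")
```

A turn-off just below the Sun's mass already implies an age of around 13 billion years, in line with the oldest clusters quoted above.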

There are none that are older than the currently accepted age of the Universe, which seems to provide an important consistency check. The objects we see in the Universe would be tremendously hard to reconcile with a cosmic age of 12.5 billion years, which is what you'd get if you lowered our best-fit figure (of 13.8 billion years) by 9%. A younger Universe is, at best, a cosmic long-shot.

Modern measurement tensions from the distance ladder (red) with early signal data from the CMB and BAO (blue) shown for contrast. It is plausible that the early signal method is correct and there's a fundamental flaw with the distance ladder; it's plausible that there's a small-scale error biasing the early signal method and the distance ladder is correct, or that both groups are right and some form of new physics (shown at top) is the culprit. But right now, we cannot be sure. (ADAM RIESS (PRIVATE COMMUNICATION))

There may be some who contend we don’t know what the age of the Universe is, and that this conundrum over the expanding Universe could result in a Universe much younger than what we have today. But that would invalidate a large amount of robust data we already have and accept; a far more likely resolution is that the dark matter and dark energy densities are different than we previously suspected.

Something interesting is surely going on with the Universe to provide us with such a fantastic discrepancy. Why does the Universe seem to care which technique we use to measure the expansion rate? Is dark energy or some other cosmic property changing over time? Is there a new field or force? Does gravity behave differently on cosmic scales than expected? More and better data will help us find out, but a significantly younger Universe is unlikely to be the answer.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

No, The Universe Cannot Be A Billion Years Younger Than We Think was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

A solar flare from our Sun, which ejects matter out away from our parent star and into the Solar System, is dwarfed in terms of ‘mass loss’ by nuclear fusion, which has reduced the Sun’s mass by a total of 0.03% of its starting value: a loss equivalent to the mass of Saturn. E=mc², when you think about it, showcases how energetic this is, as the mass of Saturn multiplied by the speed of light (a large constant) squared leads to a tremendous amount of energy produced. Our Sun has about another 5–7 billion years of fusing hydrogen into helium, but there’s much more to come after that. (NASA’S SOLAR DYNAMICS OBSERVATORY / GSFC)An entire Universe of possibilities await stars like our own, even after they run out of fuel.
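The caption's claim is easy to sanity-check with round reference numbers (Saturn's mass and the Sun's present luminosity; none of these figures come from the article itself): convert one Saturn mass into energy via E = mc² and ask how long the Sun needs to radiate that much away.

```python
# Back-of-envelope check of "the Sun has radiated away a Saturn's worth of mass."
M_SATURN = 5.7e26        # kg, roughly 0.03% of the Sun's mass
C        = 2.998e8       # m/s
L_SUN    = 3.828e26      # W, the Sun's current luminosity
SECONDS_PER_GYR = 3.156e16

energy = M_SATURN * C ** 2                       # ~5e43 J
time_gyr = energy / L_SUN / SECONDS_PER_GYR      # ~4 Gyr at today's output

print(f"E = mc^2 for one Saturn mass: {energy:.1e} J")
print(f"Time to radiate that at the Sun's luminosity: ~{time_gyr:.1f} Gyr")
```

That works out to roughly the Sun's age so far, consistent with the idea that fusion has already carried away about a Saturn's worth of mass.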

One of the most profound rules in all the Universe is that nothing lasts forever. With gravitational, electromagnetic and nuclear forces all acting on matter, practically everything we observe to exist today will face changes in the future. Even the stars, the most enormous collections of mass that transform nuclear fuel in the cosmos, will someday all burn out, including our Sun.

But this does not mean that stellar death — when stars run out of nuclear fuel — is actually the end for a star like our Sun. Quite to the contrary, there are a number of fascinating things in store for all stars once they’ve died that first, most obvious death. Although it’s true that our Sun’s fuel is finite and we fully expect it to undergo a “typical” stellar death, this death is not the end. Not for our Sun, and not for any Sun-like stars. Here’s what comes next.

The (modern) Morgan–Keenan spectral classification system, with the temperature range of each star class shown above it, in kelvin. Our Sun is a G-class star, producing light with an effective temperature of around 5800 K, which humans are well-adapted to during the day. The most massive stars are brighter, hotter and bluer, but you only need about 8% the mass of the Sun to begin fusing hydrogen into helium at all, which is something that M-class red dwarfs can do just as well, so long as they achieve critical core temperatures above about 4 million K. (WIKIMEDIA COMMONS USER LUCASVB, ADDITIONS BY E. SIEGEL)

In order to be considered a true star, and not a failed star (like a brown dwarf) or some corpse (like a white dwarf or neutron star), you have to be capable of fusing hydrogen into helium. When a cloud of gas collapses to potentially form a new star, it has a lot of gravitational potential energy in its diffuse state, which gets converted into kinetic (thermal) energy when it collapses. This collapse heats up the matter, and if it gets hot and dense enough, nuclear fusion will begin.

After many generations of studying stars, including where they do and don’t form, we now know they have to reach an internal temperature of about 4 million K to begin fusing hydrogen into helium, and that requires at least ~8% the mass of our Sun, or about 70 times the mass of Jupiter. Being at least that massive is the minimum requirement for becoming a star at all.

This cutaway showcases the various regions of the surface and interior of the Sun, including the core, which is where nuclear fusion occurs. As time goes on, the helium-containing region in the core expands and the maximum temperature increases, causing the Sun’s energy output to increase. When our Sun runs out of hydrogen fuel in the core, it will contract and heat up to a sufficient degree that helium fusion can begin. (WIKIMEDIA COMMONS USER KELVINSONG)

Once that mass/temperature threshold is crossed, the star begins fusing hydrogen into helium, and will encounter one of three different fates. These fates are determined solely by the star's mass, which in turn determines the maximum temperature that will be reached in the core. All stars begin fusing hydrogen into helium, but what comes next is temperature-dependent. In particular:

  • If your star is too low in mass, it will fuse hydrogen into helium only, and will never get hot enough to fuse helium into carbon. A purely helium composition is the fate of all M-class (red dwarf) stars, below about 40% the Sun’s mass. This describes the majority of stars in the Universe (by number).
  • If your star is like the Sun, it will contract down to higher temperatures when the core runs out of hydrogen, beginning helium fusion (into carbon) when the star swells into a red giant. It will end composed of carbon and oxygen, with the lighter (outer) hydrogen and helium layers blown off. This occurs for all stars between about 40% and 800% the Sun’s mass.
  • If your star is more than 8 times the mass of the Sun, it will not only fuse hydrogen into helium and helium into carbon, but will initiate carbon fusion later on, leading to oxygen fusion, silicon fusion, and eventually, a spectacular death by supernova.
When the most massive stars die, their outer layers, enriched with heavy elements from nuclear fusion and neutron capture, are blown off into the interstellar medium, where they can help future generations of stars by providing them with the raw ingredients for rocky planets and, potentially, life. Our Sun would need to be about eight times as massive to have a shot at this fate, which is well out of the realm of reasonable possibility. (NASA, ESA, J. HESTER, A. LOLL (ASU))

These are the most conventional fates of stars, and by far the three most common. The stars that are massive enough to go supernova are rare: only about 0.1–0.2% of all stars are this massive, and they will leave behind either neutron star or black hole remnants.

The stars that are lowest in mass are the most common stars in the Universe, making up somewhere between 75–80% of all stars, and are also the longest-lived. With lifetimes that range from perhaps 150 billion to over 100 trillion years, not a single one has run out of fuel in our 13.8 billion year old Universe. When they do, they will form white dwarf stars made entirely out of helium.

But Sun-like stars, which comprise about a quarter of all stars, experience a fascinating death cycle when they run out of helium in their core. They transform into a planetary nebula/white dwarf duo in a spectacular, but slow, death process.

The planetary nebula NGC 6369's blue-green ring marks the location where energetic ultraviolet light has stripped electrons from oxygen atoms in the gas. Our Sun, being a single star that rotates relatively slowly, is very likely going to wind up looking akin to this nebula after perhaps another 7 billion years. (NASA AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

During the red giant phase, Mercury and Venus will certainly be engulfed by the Sun, while Earth may or may not, depending on certain processes that have yet to be fully worked out. The icy worlds beyond Neptune will likely melt and sublimate, and are unlikely to survive the death of our star.

Once the Sun’s outer layers are returned to the interstellar medium, all that remains will be a few charred corpses of worlds orbiting the white dwarf remnant of our Sun. The core, largely composed of carbon and oxygen, will total about 50% the mass of our present Sun, but will only be approximately the physical size of Earth.

When lower-mass, Sun-like stars run out of fuel, they blow off their outer layers in a planetary nebula, but the center contracts down to form a white dwarf, which takes a very long time to fade to darkness. The planetary nebula our Sun will generate should fade away completely, with only the white dwarf and our remnant planets left, after approximately 9.5 billion years. On occasion, objects will be tidally torn apart, adding dusty rings to what remains of our Solar System, but they will be transient. (MARK GARLICK / UNIVERSITY OF WARWICK)

This white dwarf star will remain hot for an extremely long time. Heat is an amount of energy that gets trapped inside any object, but can only be radiated away through its surface. Imagine taking half the energy in a star like our Sun, then compressing that energy down into an even smaller volume. What will happen?

It will heat up. If you take gas in a cylinder and compress it rapidly, it heats up: this is how a piston in your combustion engine works. The red giant stars that give rise to white dwarfs are actually much cooler than the dwarfs they produce. During the contraction phase, temperatures increase from as low as 3,000 K (for a red giant) to up to about 20,000 K (for a white dwarf). This type of heating is due to adiabatic compression, and explains why these dwarf stars are so hot.

When our Sun runs out of fuel, it will become a red giant, followed by a planetary nebula with a white dwarf at the center. The Cat’s Eye nebula is a visually spectacular example of this potential fate, with the intricate, layered, asymmetrical shape of this particular one suggesting a binary companion. At the center, a young white dwarf heats up as it contracts, reaching temperatures tens of thousands of Kelvin hotter than the red giant that spawned it. (NASA, ESA, HEIC, AND THE HUBBLE HERITAGE TEAM (STSCI/AURA); ACKNOWLEDGMENT: R. CORRADI (ISAAC NEWTON GROUP OF TELESCOPES, SPAIN) AND Z. TSVETANOV (NASA))

But now, it’s got to cool down, and it can only radiate away through its small, tiny, Earth-sized surface. If you were to form a white dwarf right now, at 20,000 K, and give it 13.8 billion years to cool down (the present age of the Universe), it would cool down by a whopping 40 K: to 19,960 K.

We’ve got a terribly long time to wait if we want our Sun to cool down to the point where it becomes invisible. However, once our Sun has run out of fuel, the Universe will happily provide ample amounts of time. Sure, all the galaxies in the Local Group will merge together; all the galaxies beyond will accelerate away due to dark energy; star formation will slow to a trickle and the lowest-mass red dwarfs will burn through their fuel. Still, our white dwarf will continue to cool.

An accurate size/color comparison of a white dwarf (L), Earth reflecting our Sun's light (middle), and a black dwarf (R). When white dwarfs finally radiate the last of their energy away, they will all eventually become black dwarfs. The degeneracy pressure between the electrons within the white/black dwarf, however, will always be great enough, so long as it doesn't accrue too much mass, to prevent it from collapsing further. This is the fate of our Sun after an estimated 10¹⁵ years. (BBC / GCSE (L) / SUNFLOWERCOSMOS (R))

At last, after somewhere between 100 trillion and 1 quadrillion years (10¹⁴ to 10¹⁵ years) have passed, the white dwarf that our Sun will become will fade out of the visible part of the spectrum and cool down to just a few degrees above absolute zero. Now known as a black dwarf, this ball of carbon and oxygen in space will simply zip through whatever becomes of our galaxy, along with over a trillion other stars and stellar corpses left over from our Local Group.

But that isn’t truly the end for our Sun, either. There are three possible fates that await it, depending on how lucky (or unlucky) we get.

When a large number of gravitational interactions between star systems occur, one star can receive a large enough kick to be ejected from whatever structure it's a part of. We observe runaway stars in the Milky Way even today; once they're gone, they'll never return. This is estimated to occur for our Sun at some point between 10¹⁷ and 10¹⁹ years from now, depending on the density of stellar corpses in what our Local Group becomes. (J. WALSH AND Z. LEVAY, ESA/NASA)

1.) Completely unlucky. About half of all stellar corpses in the galaxy — in most galaxies — originate as single-star systems, much like our own Sun. While multi-star systems are common, with approximately 50% of all known stars found in binary or trinary (or even richer) systems, our Sun is the only star in our own Solar System.

This is hugely important for the future, because it makes it extraordinarily unlikely that our Sun will ever merge with, swallow, or be swallowed by a companion. We'd be defying the odds if we merged with another star or stellar corpse out there. Assuming that we don't get lucky, all our Sun's corpse will see in the future is countless gravitational interactions with the other masses, which ought to culminate in what's left of our Solar System getting ejected from the galaxy after approximately 10¹⁷ to 10¹⁹ years.

Two different ways to make a Type Ia supernova: the accretion scenario (L) and the merger scenario (R). Without a binary companion, our Sun could never go supernova by accreting matter, but we could potentially merge with another white dwarf in the galaxy, which could lead us to revitalize in a Type Ia supernova explosion after all. (NASA / CXC / M. WEISS)

2.) Lucky enough to revitalize. You might think, for good reason, that once the white dwarf that our Sun becomes cools off, there’s no chance for it to ever shine again. But there are many ways for our Sun to get a new lease on life, and to emit its own powerful radiation once again. To do so, all it needs is a new source of matter. If, even in the distant future, our Sun:

  • merges with a red dwarf star or a brown dwarf,
  • accumulates hydrogen gas from a molecular cloud or gaseous planet,
  • or runs into another stellar corpse,

it can ignite nuclear fusion once again. The first scenario will result in at least many millions of years of hydrogen burning; the second will lead to a burst of fusion known as a nova; the last will lead to a runaway supernova explosion, destroying both stellar corpses. If we experience an event like this before we get ejected, our cosmic luck will be on display for everyone remaining in our galaxy to witness.

The nova of the star GK Persei, shown here in an X-ray (blue), radio (pink), and optical (yellow) composite, is a great example of what we can see using the best telescopes of our current generation. When a white dwarf accretes enough matter, nuclear fusion can spike on its surface, creating a temporary brilliant flare known as a nova. If our Sun's corpse collides with a gas cloud or a clump of hydrogen (such as a rogue gas giant planet), it could go nova even after becoming a black dwarf. (X-RAY: NASA/CXC/RIKEN/D.TAKEI ET AL; OPTICAL: NASA/STSCI; RADIO: NRAO/VLA)

3.) Super lucky, where we’ll get devoured by a black hole. In the outskirts of our galaxy, some 25,000 light-years from the supermassive black hole occupying our galactic center, only the small black holes formed from individual stars exist. They have the smallest cross-sectional area of any massive object in the Universe. As far as galactic targets go, these stellar-mass black holes are some of the hardest objects to hit.

But occasionally, they do get hit. Small black holes, when they encounter matter, accelerate and funnel it into an accretion flow, where some fraction of the matter gets devoured and added to the black hole’s mass, but most of it gets ejected in the form of jets and other debris. These active, low-mass black holes are known as microquasars when they flare up, and they’re very real phenomena.

Although it’s exceedingly unlikely to happen to us, someone’s got to win the cosmic lottery, and those who do will become black hole food for their final act.

When a star or stellar corpse passes too close to a black hole, the tidal forces from this concentrated mass are capable of completely destroying the object by tearing it apart. Although a small fraction of the matter will be devoured by the black hole, most of it will simply accelerate and be ejected back into space. (ILLUSTRATION: NASA/CXC/M.WEISS; X-RAY (TOP): NASA/CXC/MPE/S.KOMOSSA ET AL. (L); OPTICAL: ESO/MPE/S.KOMOSSA (R))

Almost every object in the Universe has a large set of possibilities as far as what’s going to happen to it in the far future, and it’s incredibly difficult to determine a single object’s fate given the chaotic environment of our corner of the cosmos. But by knowing the physics behind the objects we have, and understanding what the probabilities and timescales for each type of object is, we can better estimate what anyone’s fate should be.

For our Sun, we’re going to become a white dwarf after less than another 10 billion years, will fade to a black dwarf after ~10¹⁴-10¹⁵ years, and will get ejected from the galaxy after 10¹⁷-10¹⁹ years. At least, that’s the most probable path. But mergers, gas accumulation, collisions, or even getting devoured are all possibilities too, and they’ll happen to someone, even if it’s probably not us. Our future may not yet be written, but we’d be smart to bet on a bright one for trillions of years to come!

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

This Is What Will Happen To Our Sun After It Dies was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

This image of ULAS J1120+0641, a very distant quasar powered by a black hole with a mass two billion times that of the Sun, was created from images taken from surveys made by both the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. The quasar appears as a faint red dot close to the centre. This quasar was the most distant one known from 2011 until 2017, and is seen as it was just 770 million years after the Big Bang. Its black hole is so massive it poses a challenge to modern cosmological theories of black hole growth and formation.(ESO/UKIDSS/SDSS)It’s not impossible according to physics, but we truly don’t know how this object came to exist.

Out in the extremities of the distant Universe, the earliest quasars can be found.

HE0435–1223, located in the centre of this wide-field image, is among the five best lensed quasars discovered to date, where the lensing phenomenon magnifies the light from distant objects. This effect enables us to see quasars whose light was emitted when the Universe was less than 10% of its current age. The foreground galaxy creates four almost evenly distributed images of the distant quasar around it. (ESA/HUBBLE, NASA, SUYU ET AL.)

Supermassive black holes at the centers of young galaxies accelerate matter to tremendous speeds, causing them to emit jets of radiation.

While distant host galaxies for quasars and active galactic nuclei can often be imaged in visible/infrared light, the jets themselves and the surrounding emission are best viewed in both the X-ray and the radio, as illustrated here for the galaxy Hercules A. (NASA, ESA, S. BAUM AND C. O'DEA (RIT), R. PERLEY AND W. COTTON (NRAO/AUI/NSF), AND THE HUBBLE HERITAGE TEAM (STSCI/AURA))

What we observe enables us to reconstruct the mass of the central black hole, and explore the ultra-distant Universe.

The farther away we look, the closer in time we’re seeing towards the Big Bang. The current record-holder for quasars comes from a time when the Universe was just 690 million years old. (ROBIN DIENEL/CARNEGIE INSTITUTION FOR SCIENCE)

Recently, a new black hole, J1342+0928, was discovered to originate from 13.1 billion years ago: when the Universe was 690 million years old, just 5% of its current age.

As viewed with our most powerful telescopes, such as Hubble, advances in camera technology and imaging techniques have enabled us to better probe and understand the physics and properties of distant quasars, including their central black hole’s properties. (NASA AND J. BAHCALL (IAS) (L); NASA, A. MARTEL (JHU), H. FORD (JHU), M. CLAMPIN (STSCI), G. HARTIG (STSCI), G. ILLINGWORTH (UCO/LICK OBSERVATORY), THE ACS SCIENCE TEAM AND ESA (R))

It has a mass of 800 million Suns, an exceedingly high figure for such early times.

This artist’s rendering shows a galaxy being cleared of interstellar gas, the building blocks of new stars. Winds driven by a central black hole are responsible for this, and may be at the heart of what’s driving this active ultra-distant galaxy behind this newly discovered quasar. (ESA/ATG MEDIALAB)

Even if black holes formed from the very first stars, they’d have to accrete matter and grow at the maximum rate possible — the Eddington limit — to reach this size so rapidly.
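To see how demanding that requirement is, here is a back-of-envelope sketch of Eddington-limited growth. It assumes exponential growth with a Salpeter e-folding time of about 50 million years (appropriate for roughly 10% radiative efficiency); the seed masses are illustrative choices, not measured values.

```python
import math

T_SALPETER_MYR = 50.0   # assumed e-folding time for Eddington-limited accretion

def growth_time_myr(seed_mass, final_mass):
    """Myr of continuous Eddington-limited accretion to grow from seed to final mass."""
    return T_SALPETER_MYR * math.log(final_mass / seed_mass)

# J1342+0928: roughly 8e8 solar masses when the Universe was ~690 Myr old.
for seed in (1e2, 1e4, 1e5):     # stellar-remnant vs. direct-collapse-scale seeds
    t = growth_time_myr(seed, 8e8)
    print(f"seed of {seed:.0e} solar masses -> ~{t:.0f} Myr of nonstop growth")
```

A ~100 solar mass stellar-remnant seed would need nearly 800 million years of uninterrupted maximal accretion, more time than was available, which is why heavier seeds or mergers are attractive alternatives.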

The active galaxy IRAS F11119+3257 shows, when viewed up close, outflows that may be consistent with a major merger. Supermassive black holes may only be visible when they’re ‘turned on’ by an active feeding mechanism, explaining why we can see these ultra-distant black holes at all. (NASA’S GODDARD SPACE FLIGHT CENTER/SDSS/S. VEILLEUX)

Fortunately, other methods may also grow a supermassive black hole.

When new bursts of star formation occur, enormous quantities of massive stars are created.

The visible/near-IR photos from Hubble show a massive star, about 25 times the mass of the Sun, that has winked out of existence, with no supernova or other explanation. Direct collapse is the only reasonable candidate explanation, demonstrating that not all stars need to go supernova or experience a stellar cataclysm to form a black hole. (NASA/ESA/C. KOCHANEK (OSU))

These can either directly collapse or go supernova, creating large numbers of massive black holes which then merge and grow.

Simulations of various gas-rich processes, such as galaxy mergers, indicate that the formation of direct collapse black holes should be possible. A combination of direct collapse, supernovae, and merging stars and stellar remnants could produce a young black hole this massive. Complementarily, present LIGO results indicate that black holes merge every 5 minutes somewhere in the Universe. (L. MAYER ET AL. (2014), VIA ARXIV.ORG/ABS/1411.5683)

Only ~20 black holes this large should exist so early in the Universe.

An ultra-distant quasar showing plenty of evidence for a supermassive black hole at its center. How these black holes got so massive so quickly is a topic of contentious scientific debate, but may have an answer that fits within our standard theories. We are uncertain whether that’s true or not at this juncture. (X-RAY: NASA/CXC/UNIV OF MICHIGAN/R.C.REIS ET AL; OPTICAL: NASA/STSCI)

Is this problematic for cosmology? More data will eventually decide.

Mostly Mute Monday tells an astronomical story in images, visuals, and no more than 200 words. Talk less; smile more.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

How Did This Black Hole Get So Big So Fast? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

This large, fuzzy-looking galaxy is so diffuse that astronomers call it a “see-through” galaxy because they can clearly see distant galaxies behind it. The ghostly object, catalogued as NGC 1052-DF2, doesn’t have a noticeable central region, or even spiral arms and a disk, typical features of a spiral galaxy. But it doesn’t look like an elliptical galaxy, either, as its velocity dispersion is all wrong. Even its globular clusters are oddballs: they are twice as large as typical stellar groupings seen in other galaxies. All of these oddities pale in comparison to the weirdest aspect of this galaxy: NGC 1052-DF2 is very controversial because of its apparent lack of dark matter. This could solve an enormous cosmic puzzle. (NASA, ESA, AND P. VAN DOKKUM (YALE UNIVERSITY))Has the mystery really been solved? Doubtful. The real science goes much deeper.

For perhaps the last year or so, a small galaxy located not too far away has captivated the attention of astronomers. The galaxy NGC 1052-DF2, a satellite of the larger NGC 1052, appears to be the first galaxy ever discovered that shows no evidence of dark matter. Paradoxically, that has been reported as indisputable evidence that dark matter must exist! Now, a new team has come out with a result that claims this galaxy cannot be devoid of dark matter, and Yann Guidon wants to know what’s really going on, asking:

I read a study that said the mystery of a galaxy with no dark matter has been solved. But I thought that this anomalous galaxy was previously touted as evidence FOR dark matter? What’s really going on here, Ethan?

We have to be extremely careful here, and dissect the findings of the different teams with all the implications correctly synthesized. Let’s get started.

The full Dragonfly field, approximately 11 square degrees, centred on NGC 1052. The zoom-in shows the immediate surroundings of NGC 1052, with NGC 1052-DF2 highlighted in the inset. This is Extended Data Figure 1 from the publication announcing the discovery of DF2. (P. VAN DOKKUM ET AL., NATURE VOLUME 555, PAGES 629–632 (29 MARCH 2018))

Whenever you have a galaxy in the Universe and you want to know how much mass is inside, you have two ways of approaching the problem. The first way is to rely on astronomy to give you the answer.

Astronomically, there are a slew of observations we can make to teach us about the matter content of a galaxy. We can look in a myriad of wavelengths of light to determine the total amount of starlight that’s present, and infer the amount of mass that’s present in stars. We can similarly make additional observations of gas, dust, and the absorption and emission of radiation in order to infer the total amount of normal matter that’s present. We’ve done this for enough galaxies for long enough that simply measuring some basic properties can lead us to infer the total baryonic (made of protons, neutrons, and electrons) matter within a galaxy.

The extended rotation curve of M33, the Triangulum galaxy. These rotation curves of spiral galaxies ushered in the modern astrophysics concept of dark matter to the general field. The dashed curve would correspond to a galaxy without dark matter, which represents less than 1% of galaxies. While initial observations of the velocity dispersion, via globular clusters, indicated that NGC 1052-DF2 was one of them, newer observations throw that conclusion into doubt. (WIKIMEDIA COMMONS USER STEFANIA.DELUCA)

On the other hand, there are additional gravitational measurements we can make that will teach us about the total amount of mass that’s present within a galaxy, irrespective of the type of matter (normal, baryonic matter or dark matter) that we see. By measuring the motions of the stars inside, either through direct line-broadening at different radii or through the velocity dispersion of the entire galaxy, we can get a specific value for the total mass. In addition, we can look at the velocity dispersion of the globular clusters orbiting a galaxy to obtain a second, complementary, independent measurement of total mass.

In most galaxies, the two values for the measured/inferred matter content differ by about a factor of 5-to-6, indicating the presence of substantial amounts of dark matter. But some galaxies are special.
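A minimal sketch of how that comparison works in practice: the gravitational (dynamical) mass can be estimated with a virial-style relation, M_dyn ≈ k σ² R / G, and then set against the baryonic mass inferred from starlight and gas. Everything below is hypothetical, and the prefactor k, of order a few, depends on the assumed mass profile.

```python
G = 4.301e-6   # gravitational constant in kpc * (km/s)^2 per solar mass

def dynamical_mass(sigma_kms, radius_kpc, k=5.0):
    """Crude virial-style estimate: M ~ k * sigma^2 * R / G."""
    return k * sigma_kms ** 2 * radius_kpc / G

# Hypothetical dwarf galaxy: 30 km/s velocity dispersion, 2 kpc radius,
# with 3e8 solar masses inferred in stars and gas.
m_dyn = dynamical_mass(30.0, 2.0)
m_baryons = 3e8
print(f"dynamical mass ~ {m_dyn:.1e} solar masses")
print(f"implied dark-to-normal ratio ~ {(m_dyn - m_baryons) / m_baryons:.1f} to 1")
```

With these made-up numbers the ratio lands near the typical 5-to-6 quoted above; for a galaxy like DF2, the measured dispersion is so low that the dynamical mass comes out close to the baryonic mass alone.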

According to models and simulations, all galaxies should be embedded in dark matter halos, whose densities peak at the galactic centers. On long enough timescales, of perhaps a billion years, a single dark matter particle from the outskirts of the halo will complete one orbit. The effects of gas, feedback, star formation, supernovae, and radiation all complicate this environment, making it extremely difficult to extract universal dark matter predictions. (NASA, ESA, AND T. BROWN AND J. TUMLINSON (STSCI))

From a theoretical perspective, we know how galaxies should form. We know that the Universe ought to start out governed by General Relativity, our law of gravity. It should have approximately a 5-to-1 mix of dark matter to normal matter, and should begin almost perfectly uniform, with underdense and overdense regions appearing at about the 1-part-in-30,000 level. Give the Universe time, and let it evolve, and you’ll form structures where the overdense regions were on small, medium and large scales, with vast cosmic voids forming between them, in the initially underdense regions.

In large galaxies, comparable to the Milky Way’s size or larger, very little is going to be capable of changing that dark matter to normal matter ratio. The total amount of gravity is generally going to be too great for any type of matter to escape, unless it speeds rapidly through a gas-rich medium capable of stripping the normal matter away.

A Hubble (visible light) and Chandra (X-ray) composite of galaxy ESO 137–001 as it speeds through the intergalactic medium in a rich galaxy cluster, becoming stripped of stars and gas, while its dark matter remains intact. (NASA, ESA, CXC)

But for smaller galaxies, there are interesting processes that can occur that are vitally important to this ratio of normal matter (which determines the astronomical properties) to dark matter (which, combined with the normal matter, determines the gravitational properties).

When most small, low-mass galaxies form, the act of forming stars is an act of violence against all the other matter inside. Ultraviolet radiation, stellar cataclysms (like supernovae), and stellar winds all heat up the normal matter. If the heating is severe enough and the mass of the galaxy is low enough, enormous quantities of normal matter (in the form of gas and plasma) can get ejected from the galaxy. As a result, many low-mass galaxies will exhibit dark matter to normal matter ratios far in excess of 5-to-1, with some of the lowest-mass galaxies achieving ratios of hundreds-to-1.
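To see why heated gas can escape a small galaxy but not a large one, compare a rough escape speed with the sound speed of supernova-heated gas. This is a toy comparison with made-up masses, radii, and temperatures, not a simulation of any real galaxy.

```python
# Illustrative sketch: why heated gas escapes small galaxies but not big ones.
# Compares a rough escape speed, v_esc = sqrt(2GM/R), with the thermal sound
# speed of supernova-heated gas. All masses, radii, and temperatures below are
# assumed round numbers for demonstration only.

from math import sqrt

G     = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30      # kg
KPC   = 3.086e19      # m
K_B   = 1.381e-23     # J/K
M_P   = 1.673e-27     # kg (proton mass)

def escape_speed_km_s(mass_msun, radius_kpc):
    return sqrt(2 * G * mass_msun * M_SUN / (radius_kpc * KPC)) / 1.0e3

def sound_speed_km_s(T_kelvin, mu=0.6):
    # isothermal sound speed of ionized gas with mean molecular weight mu
    return sqrt(K_B * T_kelvin / (mu * M_P)) / 1.0e3

hot_gas = sound_speed_km_s(1.0e6)   # ~million-degree supernova-heated gas
for name, m, r in [("low-mass dwarf", 1.0e8, 1.0), ("Milky Way-like", 1.0e12, 30.0)]:
    print(f"{name:15s}: v_esc ~ {escape_speed_km_s(m, r):6.1f} km/s "
          f"vs hot-gas sound speed ~ {hot_gas:.0f} km/s")
```

With these toy numbers, the heated gas moves faster than the dwarf's escape speed but far slower than the large galaxy's, which is the essence of why only low-mass galaxies lose their normal matter this way.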

Only approximately 1,000 stars are present in the entirety of the dwarf galaxies Segue 1 and Segue 3, the former of which has a gravitational mass of some 600,000 Suns. The stars making up the dwarf satellite Segue 1 are circled here. If new research is correct, then dark matter will obey a different distribution depending on how star formation, over the galaxy's history, has heated it. Segue 1's dark matter-to-normal matter ratio of nearly 1000-to-1 is the greatest ratio ever seen in the dark matter-favoring direction. (MARLA GEHA AND KECK OBSERVATORIES)

But there’s another process that can arise, on rare occasion, to produce galaxies with either very small or even, in theory, no amounts of dark matter. When larger galaxies merge together, they can produce an extreme phenomenon known as a starburst: where the entire galaxy becomes an enormous star-forming region.

The merger process, coupled with this star-formation, can impart enormous tidal forces and velocities to some of the normal matter that’s present. In theory, this could be powerful enough to rip substantial quantities of normal matter out of the main, merging galaxies, forming smaller galaxies that will have far less dark matter than the typical 5-to-1 dark matter-to-normal matter ratio. In some extreme cases, this might even create galaxies made of normal matter alone. Around large, dark matter-dominated galaxies, there might be smaller ones that are entirely dark matter-free.

A decade ago, there were a small number of scientists who claimed that the observed lack of these dark matter-free galaxies was a clear falsification of the dark matter paradigm. The overwhelming majority of scientists countered with claims that these galaxies should be rare, faint, and that it was no surprise we hadn’t observed them yet. With more data, better observations, and superior instrumentation and techniques, small galaxies with either small amounts of dark matter, or even none at all, ought to emerge.

Last year, a team of Yale researchers announced the discovery of the galaxy NGC 1052-DF2 (DF2 for short), a satellite galaxy of the large galaxy NGC 1052, that appeared to have no dark matter at all. When the scientists looked at the globular clusters orbiting DF2, they found the velocity dispersion was extremely small: at least a factor of 3 below the predicted speeds of ±30 km/s, which would have corresponded to this typical 5-to-1 ratio.
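At its core, that measurement is just the scatter among a handful of line-of-sight velocities. Here is a minimal sketch using invented velocities; the real analyses rely on biweight estimators and careful modelling of the measurement errors, not a bare standard deviation.

```python
# Minimal sketch of a velocity-dispersion estimate from a handful of
# line-of-sight velocities (e.g. globular clusters around a galaxy).
# The velocities below are made-up illustrative numbers, NOT the DF2 data.

from statistics import mean, stdev

# hypothetical line-of-sight velocities relative to the galaxy, in km/s
velocities = [-9.0, 4.0, 7.0, -3.0, 1.0, -6.0, 8.0, -2.0, 3.0, -5.0]

v_mean = mean(velocities)
sigma  = stdev(velocities)   # sample standard deviation = velocity dispersion

print(f"mean offset: {v_mean:+.1f} km/s, velocity dispersion: {sigma:.1f} km/s")
```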

The KCWI spectrum of the galaxy DF2 (in black), as taken directly from the new paper at arXiv:1901.03711, with the earlier results from a competing team using MUSE superimposed in red. You can clearly see that the MUSE data is lower resolution, smeared out, and artificially inflated compared to the KCWI data. The result is an artificially large velocity dispersion inferred by the prior researchers. (SHANY DANIELI (PRIVATE COMMUNICATION))

About 8 months later, another team, using a different instrument (rather than the unique Dragonfly instrument used by the Yale team), argued that the stars, rather than the globular clusters, should be used to determine the galaxy’s mass. Using their new data, they found an equivalent velocity dispersion of ±17 km/s, about twice as great as the Yale team had measured.

Undaunted, the Yale team made an even more precise measurement of the stars in DF2 using the upgraded KCWI instrument, and went back and measured the motions of the globular clusters orbiting it once again. With a superior instrument, they got a result with much smaller error bars, and both techniques agreed. From the stellar velocity dispersion, they got a value of ±8.4 km/s, with the globulars giving ±7.8 km/s. For the first time, it looked like we truly had found a dark matter-free galaxy.

The predictions (vertical bars) for what the velocity dispersions ought to be if the galaxy contained a typical amount of dark matter (right) versus no dark matter at all (left). The Emsellem et al. result was taken with the insufficient MUSE instrument; the latest data from Danieli et al. was taken with the KCWI instrument, and provides the best evidence yet that this really is a galaxy with no dark matter at all. (DANIELI ET AL. (2019), ARXIV:1901.03711)

But perhaps something was flawed. When scientists are truly engaging in good science, they’ll try to take any hypothesis, novel result, or unexpected find and poke holes in it. They’ll try to knock it down, discredit it, or find a fatal flaw with the result whenever possible. Only the most robust, well-scrutinized results will stand up and become accepted; controversies are at their hottest when a new result threatens to decide the issue once and for all.

The latest attempt to knock the DF2 results down comes from a group at the Instituto de Astrofísica de Canarias (IAC) led by Ignacio Trujillo. Using a new measurement of DF2, his team claims that the galaxy is actually closer than previously thought: 42 million light-years instead of 64 million. This would mean it isn't a satellite of NGC 1052 after all, but rather a galaxy some 22 million light-years closer, in the cosmic foreground.

The ultra-diffuse galaxy [KKS2000]04 (NGC 1052-DF2), towards the constellation of Cetus, was considered to be a galaxy completely devoid of dark matter. The results of Trujillo et al. dispute that, claiming that the galaxy is much closer, and therefore has a different mass-to-luminosity ratio (and a different velocity dispersion) than was previously thought. This is extremely controversial. (TRUJILLO ET AL. (2019))

This could change the story dramatically. The distance to a galaxy determines the intrinsic brightness you infer, which in turn tells you how much mass must be present in the form of stars. If the galaxy is much closer than previously thought, it is intrinsically fainter and contains less stellar mass than estimated, and that stellar mass drops faster with decreasing distance than the dynamical mass inferred from the measured motions does. The same data would then point to a higher ratio of total mass to stellar mass, indicating the need for dark matter after all.
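A toy scaling shows why the distance matters so much: the stellar mass inferred from brightness scales with distance squared, while a dynamical mass built from a velocity dispersion and an angular size scales only linearly with distance, so their ratio scales inversely with distance. This ignores the more detailed stellar-population and size arguments in the actual papers; only the 64 and 42 million light-year figures below come from the text, and the starting ratio is a made-up normalization.

```python
# Toy scaling: how the dynamical-to-stellar mass ratio shifts with distance.
# M_star (from brightness) ~ D^2, M_dyn (from sigma^2 * R) ~ D, so the
# ratio M_dyn / M_star ~ 1/D. The starting ratio of 1.0 is an assumption.

def mass_ratio_rescaled(ratio_at_d0, d0_mly, d_new_mly):
    """Rescale M_dyn/M_star from an assumed distance d0 to d_new (both in Mly)."""
    return ratio_at_d0 * (d0_mly / d_new_mly)

ratio_at_64 = 1.0   # suppose M_dyn/M_star ~ 1 (no dark matter) at 64 Mly
print(f"At 42 Mly the same data imply M_dyn/M_star ~ "
      f"{mass_ratio_rescaled(ratio_at_64, 64.0, 42.0):.2f}")
```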

Case closed, right?

Not even close. First off, DF2 isn't the only galaxy that exhibits this effect anymore; there's another satellite of NGC 1052 (known as DF4) that exhibits the same dark matter-free nature, so both galaxies would have to have their distances mis-estimated. Second, even if they are at the closer distance preferred by the Trujillo et al. team, that still renders both DF2 and DF4 extremely dark matter-poor galaxies, which still necessitates a mechanism to separate normal matter from dark matter. And third, the Yale team had previously (in August) published a calibration-free distance measurement to the galaxy, based on surface-brightness fluctuations, that is inconsistent with Trujillo's results at the 3.5-sigma level.
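For reference, a tension quoted "in sigma" is just the difference between two measurements divided by their combined uncertainty. The central values below come from the text, but the error bars are placeholders for illustration, not the published ones.

```python
# How a tension 'in sigma' between two independent measurements is computed:
# the difference divided by the quadrature sum of the uncertainties.
# The 64 and 42 Mly central values come from the text; the uncertainties
# are made-up placeholders, not the published error bars.

from math import sqrt

def tension_sigma(value1, err1, value2, err2):
    return abs(value1 - value2) / sqrt(err1**2 + err2**2)

d_yale, err_yale         = 64.0, 4.0   # Mly (placeholder uncertainty)
d_trujillo, err_trujillo = 42.0, 5.0   # Mly (placeholder uncertainty)

print(f"Tension: {tension_sigma(d_yale, err_yale, d_trujillo, err_trujillo):.1f} sigma")
```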

The galaxy NGC 1052-DF2 was imaged in great detail by the KCWI spectrograph instrument at the W.M. Keck Observatory on Mauna Kea, enabling scientists to detect the motions of stars and globular clusters inside the galaxy to unprecedented precision. (DANIELI ET AL. (2019), ARXIV:1901.03711)

In other words, even if the distance estimates by Trujillo et al. are correct, which they probably aren't, these galaxies are extremely low in dark matter, with DF4 possibly still being entirely dark matter-free. Neither team has yet pinned down this galaxy's distance with the Hubble Space Telescope, but that will provide the most unambiguous distance estimate of all. Subsequent observations of DF4 with Hubble are slated for later in 2019, which should help resolve this ambiguity.

A shorter distance to these galaxies does not actually resolve the central issue: no matter how you massage it, they have much less dark matter than a naive, conventional dark matter-to-normal matter ratio would indicate. Only if dark matter is real, and experiences different physics than normal matter does in star-forming and collisional environments, can galaxies like DF2 or DF4 exist at all.

Many nearby galaxies, including all the galaxies of the local group (mostly clustered at the extreme left), display a relationship between their mass and velocity dispersion that indicates the presence of dark matter. NGC 1052-DF2 is the first known galaxy that appears to be made of normal matter alone, and was later joined by DF4 earlier in 2019. (DANIELI ET AL. (2019), ARXIV:1901.03711)

The one takeaway, if you learn nothing else, is this: this new result resolves nothing. Stay tuned, because more and better data is coming. These galaxies are likely extremely low in dark matter, and possibly entirely free of dark matter. If the Yale team’s initial results hold up, these galaxies must be fundamentally different in composition from all the other galaxies we’ve ever found.

If all galaxies follow the same underlying rules, only their compositions can differ. The discovery of a dark matter-free galaxy, if that result holds up, is an extremely strong piece of evidence for a dark matter-rich Universe. Keep your eyes open for more news on DF2 and DF4, because this story is far from over.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Ask Ethan: What’s The Real Story Behind This Dark Matter-Free Galaxy? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

This visualization shows a planetary system around a red dwarf star, a type of system commonly thought to have every one of its planets be uninhabitable. But this may not be the case at all, and we won't know unless we look. (JPL-Caltech/NASA) If we only look for life on worlds like our own, we might miss the most commonly inhabited planets of all.

With all the planets out there in the galaxy and Universe, it's only a matter of time and data until we find another one with life on it. (Probably.) But while most searches have focused on finding the next Earth, sometimes called Earth 2.0, that's very likely an overly restrictive way to look for life. Biosignatures, or more conservatively bio-hints, might be plentiful not only on worlds very different from our own, but in solar systems other than our own. Earth-like worlds, in fact, might not even be the most common places for life to arise in the Universe.

I’m happy to welcome scientist Adrian Lenardic onto the Starts With A Bang podcast, and explore what just might be out there if we look for life beyond our idea of Earth 2.0!

The Starts With A Bang podcast is made possible through the support of our patrons on Patreon, which you can join today!

Starts With A Bang Podcast #45: Beyond Earth 2.0 was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.
