Simulations of how the black hole at the center of the Milky Way may appear to the Event Horizon Telescope, depending on its orientation relative to us. These simulations assume the event horizon exists, that the equations governing relativity are valid, and that we’ve applied the right parameters to our system of interest. (Imaging an Event Horizon: submm-VLBI of a Super Massive Black Hole, S. Doeleman et al.)

They seem almost indistinguishable in some regards, but only one of them represents our physical Universe.

When it comes to describing the physical world, we can do it anecdotally, as we commonly do, or we can use science. That means gathering quantitative data, finding correlations between observables, formulating physical laws and theories, and writing down equations that allow us to predict the outcomes of various situations. The more advanced the physical situation we’re describing, the more abstract and complex the equations and the theoretical framework get. But in the act of formulating those theories, and writing the equations that describe what will happen under a variety of conditions, aren’t we leaping into the realm of mathematics, rather than physics? Where is that line? That’s the question of our Patreon supporter Rob Hansen, who asks:

Where does one draw the line between abstract mathematics and physics? Is Noether’s Theorem part of the scientific corpus of knowledge, or the mathematical? What about Maldacena’s conjecture?

Luckily, we don’t have to go to such complicated examples to find the difference.

At any point along its trajectory, knowing a particle’s position and velocity will allow you to arrive at a solution to when and where it will hit the ground. But mathematically, you get two solutions; you must apply physics to choose the correct one. (Wikimedia commons users MichaelMaggs and (edited by) Richard Bartz)

Imagine that you do something as simple as throwing a ball. At any instant in time, if you tell me where it is (its position) and how it’s moving (its velocity), I can predict for you exactly where and when it will hit the ground. Except, if you simply write down and solve the equations governed by Newton’s laws of motion, you won’t get a single, correct answer. Instead, you’ll get two answers: one that corresponds to the ball hitting the ground in the future, and one that corresponds to where the ball would have hit the ground in the past. The mathematics of the equations doesn’t tell you which answer, the positive or the negative one, is physically correct. It’s like asking what the square root of four is: your instinct is to say “two,” but it could just as easily be negative two. Math, on its own, doesn’t always single out a unique answer.
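
To make that concrete, here is a minimal sketch (my own illustration, not the article's) of the quadratic you actually solve for a thrown ball, with the physical criterion applied by hand: the landing happens in the future. The function name and numbers are purely illustrative.

```python
import math

def time_to_ground(y0, vy0, g=9.81):
    """Solve y0 + vy0*t - 0.5*g*t**2 = 0 for the landing time t.

    The quadratic formula hands back two roots; the math alone can't
    choose between them, so we apply the physics: the ball lands at
    the positive (future) time, not the negative (past) one.
    """
    disc = vy0**2 + 2 * g * y0      # discriminant of the quadratic
    roots = [(vy0 - math.sqrt(disc)) / g, (vy0 + math.sqrt(disc)) / g]
    return max(roots)               # keep the forward-in-time solution

# A ball thrown upward at 5 m/s from a height of 2 m:
print(time_to_ground(y0=2.0, vy0=5.0))   # ~1.33 s; the other root, ~-0.31 s, is unphysical
```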

Drop five chopsticks, and you’re likely to get a triangle. But, like in many math problems, you’re very likely to get more than one triangle. When there exists more than one possible mathematical solution, it’s physics that will show us the way. (Sian Zelbo / 1001 Math Problems)

In fact, there’s no universal rule at all that you can apply to tell you which answer is the one you’re looking for! That, right there, is the biggest distinction between math and physics: math tells you what the possible solutions are, but physics is what allows you to choose the solution that describes our Universe.

This is, of course, a very simplistic example, and one where we can apply a straightforward rule: pick the solution that’s forward in time and ahead in space. But that rule won’t apply in the context of every theory, like relativity and quantum mechanics. When the equations are less physically intuitive, it’s much more difficult to know which possible solution is the physically meaningful one.

The mathematics governing General Relativity is quite complicated, and General Relativity itself offers many possible solutions to its equations. But it’s only through specifying the conditions that describe our Universe, and comparing the theoretical predictions with our measurements and observations, that we can arrive at a physical theory. (T. Pyle/Caltech/MIT/LIGO Lab)

What, then, are you supposed to do when the mathematics gets more abstract? What do you do when you get to General Relativity, or Quantum Field Theory, or even more far afield into the speculative realms of cosmic inflation, extra dimensions, grand unified theories, or string theory? The mathematical structures that you build to describe these possibilities simply are what they are; on their own, they won’t offer you any physical insights. But if you can pull out either observable quantities, or connections to physically observable quantities, that’s when you start crossing over into something that you can test and observe.

The quantum fluctuations that occur during inflation do indeed get stretched across the Universe, but they also cause fluctuations in the total energy density, leaving us with some non-zero amount of spatial curvature left over in the Universe today. These field fluctuations cause density imperfections in the early Universe, which then lead to the temperature fluctuations we experience in the cosmic microwave background. (E. Siegel / Beyond the Galaxy)

In inflationary cosmology, for example, there are all sorts of complicated equations that govern what’s going on. It sounds a lot like mathematics, and in many of the discussions, it sounds very little like physics. But the key is to connect what these mathematical equations predict with physical observables. For example, based on the fact that you have quantum fluctuations in the fabric of space itself, but space is stretching and expanding at an exponential rate during inflation, you’ll expect there to be ripples and imperfections in the value of the quantum field causing inflation all across the Universe. When inflation ends, those fluctuations become density fluctuations, which we can then go and look for as temperature fluctuations in the Big Bang’s leftover glow. This prediction of the 1980s was verified by satellites like COBE, WMAP, and Planck many years later.

The quantum fluctuations that occur during inflation get stretched across the Universe, and when inflation ends, they become density fluctuations. This leads, over time, to the large-scale structure in the Universe today, as well as the fluctuations in temperature observed in the CMB. (E. Siegel, with images derived from ESA/Planck and the DoE/NASA/ NSF interagency task force on CMB research)

Noether’s theorem is an interesting example of a mathematical theorem that is powerful all on its own in mathematics, but has a very special application to physics. In general, the theorem tells you that if you have a system whose dynamics are determined by the integral of a Lagrangian, and that system has a continuous symmetry to it, there must be a conserved quantity associated with that symmetry. In physics, the integral of a Lagrangian function corresponds to what we physically call the “action,” and so for any system that can be modeled with a Lagrangian alone, each symmetry it contains hands you a conservation law. In physics, this allows us to derive things like the conservation of energy, the conservation of momentum, and the conservation of electric charge, among others.
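
As a worked illustration (standard textbook form, not spelled out in the article), here is the one-dimensional version of that statement: a Lagrangian that does not depend on a coordinate hands you a conserved momentum.

```latex
\[
  \frac{d}{dt}\frac{\partial L}{\partial \dot q} \;=\; \frac{\partial L}{\partial q}
  \qquad \text{(Euler-Lagrange equation for the action } S = \textstyle\int L\,dt \text{)}
\]
\[
  \text{If } L \text{ is unchanged by } q \to q + \epsilon \text{, then }
  \frac{\partial L}{\partial q} = 0
  \;\;\Longrightarrow\;\;
  p \equiv \frac{\partial L}{\partial \dot q} = \text{constant.}
\]
```

Time-translation symmetry plays the same game with the energy, and the phase symmetry of charged fields with electric charge.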

Different frames of reference, including different positions and motions, would see different laws of physics if the conservation of momentum were invalid. The fact that we have a symmetry under spatial translations (rather than under ‘boosts,’ or velocity transformations, which instead yield a conserved center-of-mass motion) tells us we have a conserved quantity: linear momentum. (Wikimedia Commons user Krea)

What’s interesting about this is that if we couldn’t describe the Universe with these mathematical equations that contained these symmetries, there would be no reason to expect that these quantities would be conserved. This puzzles a lot of people, then, when they learn that in General Relativity, there is no universal time-translation symmetry, which means there isn’t a conservation of energy law for the expanding Universe we inhabit! Individual interactions in quantum field theory do obey that symmetry, so they do conserve energy. But on the scale of the entire Universe? Energy isn’t even defined, meaning we don’t know whether it’s conserved or not.

A 2-D projection of a Calabi-Yau manifold, one popular method of compactifying the extra, unwanted dimensions of String Theory. The Maldacena conjecture says that anti-de Sitter space is mathematically dual to conformal field theories in one fewer dimension. (Wikimedia Commons user Lunch)

The Maldacena conjecture gets even more complicated. Also known as the AdS/CFT correspondence, it shows that there’s a mathematical duality — meaning the same equations govern both systems — between a Conformal Field Theory (like a force in quantum mechanics) and a string theory in Anti-de Sitter space, with one extra dimension. If two systems are governed by the same equations, that means their physics must be the same. So, in principle, we should be able to describe aspects of our four-dimensional (three space and one time) Universe equally well by going to five-dimensional Anti-de Sitter spacetime, and choosing the right parameters. It’s the closest example we’ve ever found to an application of the holographic principle as it applies to our Universe.

Now, string theory (or, more accurately, string theories) has its own constraints governing it, as do the forces in our Universe, so it isn’t clear that there’s a provable, one-to-one correspondence between our four-dimensional Universe, with gravity, electromagnetism, and the nuclear forces, and any version of string theory. It’s an interesting conjecture, and it has found some applications to the real world: in the study of quark-gluon plasmas. In that sense, it’s more than mathematics: it’s physics. But where it strays from physics into pure mathematics is not yet fully determined.

The Standard Model Lagrangian is a single equation encapsulating the particles and interactions of the Standard Model. It has five independent parts: the gluons (1), the weak bosons (2), how matter interacts with the weak force and the Higgs field (3), the ghost particles that subtract the Higgs-field redundancies (4), and the Faddeev-Popov ghosts, which affect the weak interaction redundancies (5). Neutrino masses are not included. Also, this is only what we know so far; it may not be the full Lagrangian describing 3 of the 4 fundamental forces. (Thomas Gutierrez, who insists there is one ‘sign error’ in this equation)

What all of this seems to be getting at is a more general question: why, and when, can we use mathematics to learn something about our physical Universe? We don’t know the answer to why, but we do know the answer to when: when it agrees with our experiments and observations. So long as the laws of physics remain the laws of physics, and do not whimsically turn on-and-off or vary in some ill-defined way, we know we can describe them mathematically, at least in principle. Mathematics, then, is the toolkit we use to describe the functioning of the Universe. It’s the raw materials: the nails, the boards, the hammers and saws. Physics is how you apply that mathematics. Physics is how you put it all together to make sense of your materials, and wind up with a house, for example, instead of a collection of parts that could, in principle, be used to build something entirely different.

It’s possible to write down a variety of equations, like Maxwell’s equations, that describe the Universe. We can write them down in a variety of ways, but only by comparing their predictions with physical observations can we draw any conclusion about their validity. It’s why the version of Maxwell’s equations with magnetic monopoles doesn’t correspond to reality, while the version without them does. (Ed Murdock)

If you describe the Universe precisely, and you can make quantitative predictions about it, you’re physics. If those predictions turn out to be accurate and reflective of reality, then you’re physics that’s correct and useful. If those predictions are demonstrably wrong, you’re physics that doesn’t describe our Universe: you’re a failed attempt at a physical theory. But if your equations have no connection at all to the physical Universe, and cannot be related to anything you can ever hope to someday observe or measure, you’re firmly in the realm of mathematics; the divorce from physics will then be final. Mathematics is the language we use to describe physics, but not everything mathematical is physically meaningful. The connection, and where it breaks down, can only be determined by looking at the Universe itself.

Send in your Ask Ethan questions to startswithabang at gmail dot com!

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Ask Ethan: Where Is The Line Between Mathematics And Physics? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The history of the Universe and the arrow of time, which always flows forward in the same direction and at the same rate for any observer. (NASA / GSFC)

The past is gone, the future not yet here, only the present is now. But why does it always flow the way it does for us?

Every moment that passes finds us traveling from the past to the present and into the future, with time always flowing in the same direction. At no point does it ever appear to either stand still or reverse; the “arrow of time” always points forwards for us. But if we look at the laws of physics — from Newton to Einstein, from Maxwell to Bohr, from Dirac to Feynman — they appear to be time-symmetric. In other words, the equations that govern reality don’t have a preference for which way time flows. The solutions that describe the behavior of any system obeying the laws of physics, as we understand them, are just as valid for time flowing into the past as they are for time flowing into the future. Yet we know from experience that time only flows one way: forwards. So where does the arrow of time come from?
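
Here is a minimal numerical sketch (again my own, with arbitrary numbers) of what "time-symmetric" means in practice: integrate a ball's equation of motion forward, flip the sign of the velocity, integrate again with the very same equations, and you land exactly back on the initial conditions.

```python
def simulate(x0, v0, a, dt, n_steps):
    """Velocity-Verlet integration of one particle under constant acceleration."""
    x, v = x0, v0
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2
        v = v + a * dt
    return x, v

g = -9.81           # gravitational acceleration, m/s^2
x0, v0 = 0.0, 12.0  # initial height and upward speed
dt, n = 0.01, 200   # time step and number of steps (2 seconds total)

x_end, v_end = simulate(x0, v0, g, dt, n)           # run forward in time...
x_back, v_back = simulate(x_end, -v_end, g, dt, n)  # ...flip the velocity and run again

print(x_back, -v_back)  # recovers (0.0, 12.0): the equations don't care which way time runs
```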

A ball in mid-bounce has its past and future trajectories determined by the laws of physics, but time will only flow into the future for us.(Wikimedia commons users MichaelMaggs and (edited by) Richard Bartz)

Many people believe there might be a connection between the arrow of time and a quantity called entropy. While most people normally equate “disorder” with entropy, that’s a pretty lazy description that also isn’t particularly accurate. Instead, think about entropy as a measure of how much thermal (heat) energy could possibly be turned into useful, mechanical work. If you have a lot of this energy capable of potentially doing work, you have a low-entropy system, whereas if you have very little, you have a high-entropy system. The second law of thermodynamics is a very important relation in physics, and it states that the entropy of a closed (self-contained) system can only increase or stay the same over time; it can never go down. In other words, over time, the entropy of the entire Universe must increase. It’s the only law of physics that appears to have a preferred direction for time.
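
A back-of-the-envelope sketch of that bookkeeping (a toy two-reservoir model with made-up numbers): when heat leaks from hot to cold, the total entropy goes up, and the quantity T_cold times the total entropy change is roughly the work you could have extracted but didn't.

```python
def entropy_change(Q, T_hot, T_cold):
    """Total entropy change (J/K) when heat Q (J) leaks from a hot reservoir to a
    cold one, each large enough that its temperature (K) stays fixed."""
    dS_hot = -Q / T_hot    # the hot side loses a little entropy...
    dS_cold = Q / T_cold   # ...the cold side gains more
    return dS_hot + dS_cold

Q, T_hot, T_cold = 1000.0, 350.0, 280.0
dS = entropy_change(Q, T_hot, T_cold)
print(dS)            # +0.71 J/K: positive, as the second law demands
print(T_cold * dS)   # ~200 J of potential work squandered by letting the heat just leak
```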

Still from a lecture on entropy by Clarissa Sorensen-Unruh. (C. Sorensen-Unruh / YouTube)

So, does that mean that we only experience time the way we do because of the second law of thermodynamics? That there’s a fundamentally deep connection between the arrow of time and entropy? Some physicists think so, and it’s certainly a possibility. In an interesting 2016 collaboration between the MinutePhysics YouTube channel and physicist Sean Carroll (author of The Big Picture and From Eternity To Here, and an entropy/time’s-arrow fan), they attempt to answer the question of why time doesn’t flow backwards. Unsurprisingly, they point the finger squarely at entropy.

It’s true that entropy does explain the arrow of time for a number of phenomena, including why coffee and milk mix but don’t unmix, why ice melts in a warm drink but a cool drink never spontaneously separates into ice plus a warmer beverage, and why a cooked scrambled egg never resolves back into an uncooked, separated albumen and yolk. In all of these cases, an initially lower-entropy state (with more available, capable-of-doing-work energy) has moved into a higher-entropy (and lower available energy) state as time has moved forwards. There are plenty of examples of this in nature, including that of a room filled with molecules: one side full of cold, slow-moving molecules and the other full of hot, fast-moving ones. Simply give it time, and the room will be fully mixed with intermediate-energy particles, representing a large increase in entropy and an irreversible reaction.

A system set up in the initial conditions on the left and allowed to evolve will become the system on the right spontaneously, gaining entropy in the process. (Wikimedia Commons users Htkym and Dhollm)

Except that it isn’t completely irreversible. You see, there’s a caveat that most people forget when it comes to the second law of thermodynamics and entropy increase: it only refers to the entropy of a closed system, or a system where no external energy or changes in entropy are added or taken away. A way to reverse this reaction was first thought up by the great physicist James Clerk Maxwell way back in the 1870s: simply have an external entity that opens and closes a door in a divider between the two sides of the room, allowing the “cold” molecules to collect on one side and the “hot” molecules on the other. This idea became known as Maxwell’s demon, and it enables you to decrease the entropy of the system after all!

A representation of Maxwell’s demon, which can sort particles according to their energy on either side of a box. (Wikimedia Commons user Htkym)

You can’t actually violate the second law of thermodynamics by doing this, of course. The catch is that the demon must spend a tremendous amount of energy to segregate the particles like this. The system, under the influence of the demon, is an open system; if you include the entropy of the demon itself in the total system of particles, you’ll find that the total entropy does, in fact, increase overall. But here’s the kicker: even if you lived in the box and failed to detect the existence of the demon — in other words, if all you did was live in a pocket of the Universe that saw its entropy decrease — time would still run forward for you. The thermodynamic arrow of time does not determine the direction in which we perceive time’s passage.
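
One standard way to make that bookkeeping explicit (the Szilard/Landauer argument, which the article doesn't spell out) is per bit of information: each bit the demon records about a molecule lets it shave at most k_B ln 2 off the gas's entropy, but erasing that bit from the demon's finite memory costs at least as much, so the closed total never decreases.

```latex
\[
  \Delta S_{\rm total} \;=\;
  \underbrace{\Delta S_{\rm gas}}_{\;\geq\; -\,k_B \ln 2 \ \text{per bit recorded}}
  \;+\;
  \underbrace{\Delta S_{\rm demon}}_{\;\geq\; +\,k_B \ln 2 \ \text{per bit erased}}
  \;\geq\; 0 .
\]
```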

No matter how we change the entropy of the Universe around us, time continues to pass for all observers at the rate of one second per second. (public domain)

So where does the arrow of time that correlates with our perception come from? We don’t know. What we do know, however, is that the thermodynamic arrow of time isn’t it. Our measurements of entropy in the Universe know of only one possible tremendous decrease in all of cosmic history: the end of cosmic inflation and its transition to the hot Big Bang. (And even that may have represented a very large increase in entropy, going from an inflationary state to a matter-and-radiation-filled state.) We know our Universe is headed to a cold, empty fate after all the stars burn out, after all the black holes decay, after dark energy drives the unbound galaxies apart from one another and gravitational interactions kick out the last remaining bound planetary and stellar remnants. This thermodynamic state of maximal entropy is known as the “heat death” of the Universe. Oddly enough, the state from which our Universe arose — the state of cosmic inflation — has exactly the same properties, only with a much larger expansion rate during the inflationary epoch than our current, dark energy-dominated epoch will lead to.

The quantum nature of inflation means that it ends in some “pockets” of the Universe and continues in others, but we do not yet understand either what the amount of entropy was during inflation or how it gave rise to the low-entropy state at the start of the hot Big Bang. (E. Siegel / Beyond the Galaxy)

How did inflation come to an end? How did the vacuum energy of the Universe, the energy inherent to empty space itself, get converted into a thermally hot bath of particles, antiparticles and radiation? And did the Universe go from an incredibly high-entropy state during cosmic inflation to a lower-entropy one during the hot Big Bang, or was the entropy during inflation even lower due to the eventual capacity of the Universe to do mechanical work? At this point, we have only theories to guide us; the experimental or observational signatures that would tell us the answers to these questions have not been uncovered.

From the end of inflation and the start of the hot Big Bang, entropy always increases up through the present day. (E. Siegel, with images derived from ESA/Planck and the DoE/NASA/ NSF interagency task force on CMB research)

We do understand the arrow of time from a thermodynamic perspective, and that’s an incredibly valuable and interesting piece of knowledge. But if you want to know why yesterday is in the immutable past, tomorrow will arrive in a day and the present is what you’re living right now, thermodynamics won’t give you the answer. Nobody, in fact, understands what will.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

We Still Don’t Understand Why Time Only Flows Forward was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

Artist’s depiction of the intercalated multilayer-graphene inductor (center blue spiral) which relies on kinetic inductance. Background images show its predecessors which rely on magnetic inductance, a vastly inferior and less efficient concept for microelectronics. (Peter Allen / UC Santa Barbara)

One of the three basic circuit elements just got a lot smaller for the very first time, in what promises to be a trillion-dollar breakthrough.

In the race for ever-improving technology, there are two related technical capabilities that drive our world forward: speed and size. These are related, as the smaller a device is, the less distance the electrical signal driving your device has to travel. As we’ve been able to cut silicon thinner, print circuit elements smaller, and develop increasingly miniaturized transistors, gains in computing speed and power and decreases in device size have gone hand-in-hand. But while these advances have come in leaps and bounds, one fundamental circuit element — the inductor — has had its design remain exactly the same. Found in everything from televisions to laptops to smartphones to wireless chargers, radios, and transformers, it’s one of the most indispensable electronic components in existence.

Since their 1831 invention by Michael Faraday, their design has remained basically unchanged. Until last month, that is, when a UC Santa Barbara team led by Kaustav Banerjee demonstrated a fundamentally new type of inductor. Without the limitations of the original inductor design, it should allow a new breakthrough in miniaturization and speed, potentially paving the way for a more connected world.

One of the earliest applications of Faraday’s law of induction was to note that a coil of wire, which would create a magnetic field inside, could magnetize a material, causing a change in its internal magnetic field. This changing field would then induce a current in the coil on the other side of the magnet, causing the needle (at right) to deflect. Modern inductors still rely on this same principle. (Wikimedia Commons user Eviatar Bach)

The classic way inductors work is one of the simplest designs possible: a simple coil of wire. When you pass a current through a loop or coil of wire, it creates a magnetic field through the center. But according to Faraday’s law of induction, that changing magnetic field then induces a current in the next loop, a current which opposes the one you’re trying to create. If you create a greater coil density, or (even better) put a core of magnetizable material inside the inductor, you can greatly increase the inductance of your device. This results in inductors that are very effective, but that also need to be physically quite large. Despite all the advances we’ve made, the fundamental limitation of this design style means there’s been a limit to how small an inductor can get.
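
The textbook ideal-solenoid formula makes that size problem explicit. This little sketch (illustrative numbers only) shows why you need many turns, a decent cross-section, and ideally a high-permeability core to reach useful inductance.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, in henries per meter

def solenoid_inductance(turns, area_m2, length_m, mu_r=1.0):
    """Magnetic inductance of an ideal solenoid: L = mu_r * mu_0 * N^2 * A / l.
    A magnetizable core (mu_r >> 1) boosts L without adding turns or area."""
    return mu_r * MU_0 * turns**2 * area_m2 / length_m

print(solenoid_inductance(100, 1e-4, 0.02))              # ~6.3e-5 H (63 uH), air core
print(solenoid_inductance(100, 1e-4, 0.02, mu_r=1000.0)) # ~0.063 H with a ferrite-like core
```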

Even with all the revolutions the 19th, 20th and 21st centuries have brought along in electronics, the conventional magnetic inductor, in concept, remains virtually unchanged from Faraday’s original designs. (Shutterstock)

The applications, however, are tremendous. Along with capacitors and resistors, inductors are one of the three passive elements that are the foundations of all electronics. Create an electric current of the right magnitude and frequency, and you’ll build an induction motor. Pass the magnetic core in-and-out through the coil, and you’ll generate electricity from a mechanical motion. Send both AC and DC currents down your circuit, and the inductor will block AC while allowing DC to pass through. They can separate signals of different frequencies, and when you use a capacitor along with an inductor, you can make a tuned circuit, of paramount importance in television and radio receivers.
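
For that last, tuned-circuit application, the inductor and capacitor together single out one resonant frequency (a standard relation, not quoted in the article):

```latex
\[
  f_0 \;=\; \frac{1}{2\pi\sqrt{LC}} ,
\]
```

so changing either the inductance or the capacitance is what retunes a receiver from one station to another.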

The photograph shows the large grains of a practical energy-storage material, calcium-copper-titanate (CCTO), which is one of the world’s most efficient and practical ‘supercapacitors.’ The density of the CCTO ceramic is 94 percent of the maximum theoretical density. Capacitors and resistors have been thoroughly miniaturized, but inductors lag behind. (R. K. Pandey/Texas State University)

But while resistors have been miniaturized with, for example, the development of the surface mount resistor, and capacitors have given way to supercapacitor materials that approach the theoretical limit, the basic design of inductors has remained the same throughout the centuries. Despite being invented way back in 1831, inductors have seen nothing about their basic design change in nearly 200 years. They function on the principle of magnetic inductance, where a current, a coil of wire, and a core of magnetizable material are used in tandem.

But there is another approach, in principle, that inductors can take: a phenomenon known as kinetic inductance, where instead of a changing magnetic field inducing an opposing current (as in magnetic inductance), it’s the inertia of the particles that carry the electric current themselves — such as electrons — that opposes a change in their motion.

As current flows uniformly through a conductor, it obeys Newton’s law of an object (the individual charges) remaining in uniform motion unless acted upon by an outside force. But even if they are acted on by an outside force, their inertia resists that change: the concept behind kinetic inductance. (Wikimedia Commons users lx0 / Menner)

If you envision an electric current as a series of charge carriers (like electrons) all moving steadily, in a row, and at a constant speed, you can imagine what it would take to change that current: an additional force of some type. Each of those particles would need a force to act on them, causing them to accelerate or decelerate. The same principle that underlies Newton’s most famous law of motion, F = ma, tells us that if we want to change the motions of these charged particles, we need to exert a force on them. In this equation, it’s their mass, the m in the equation, that resists that change in motion. That’s where kinetic inductance comes from. Functionally, it’s indistinguishable from magnetic inductance; it’s just that kinetic inductance has only ever been practically large under extreme conditions: either in superconductors or in extremely high-frequency circuits.
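
Writing the carriers' total kinetic energy as (1/2) L_k I^2 gives the standard expression for the kinetic inductance of a wire, L_k = m_e * l / (n e^2 A). The rough sketch below (illustrative dimensions, copper-like carrier density) shows why it is utterly negligible in ordinary metal traces; shrink the cross-section and, crucially, lower the carrier density, and it stops being negligible.

```python
M_E = 9.109e-31   # electron mass, kg
Q_E = 1.602e-19   # elementary charge, C

def kinetic_inductance(length_m, area_m2, carrier_density_m3):
    """Kinetic inductance of a straight conductor, L_k = m_e * l / (n * e^2 * A),
    from equating the drifting carriers' kinetic energy to (1/2) * L_k * I^2."""
    return M_E * length_m / (carrier_density_m3 * Q_E**2 * area_m2)

# A 1-mm-long trace with a 1 um^2 cross-section and copper's carrier density:
print(kinetic_inductance(1e-3, 1e-12, 8.5e28))   # ~4e-13 H, dwarfed by the roughly
                                                 # nanohenry-scale magnetic inductance
                                                 # of a comparable trace
```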

An on-chip metal inductor, center, still relies on the Faraday-inspired concept of magnetic inductance. There are limits to its efficiency and how well it can be miniaturized, and in the smallest electronics, these inductors can take up a full 50% of the total surface area available for electronic components. (H. Wang et al., Journal of Semiconductors, 38, 11 (2017))

In conventional metallic conductors, kinetic inductance is negligible, and so it’s never been applied in conventional circuits before. But if it could be applied, it would be a revolutionary advance for miniaturization, since unlike magnetic inductance, its value doesn’t depend on the inductor’s surface area. With that fundamental limitation removed, it could be possible to create a kinetic inductor that’s far smaller than any magnetic inductor we’ve ever made. And if we can engineer that advance, perhaps we can take the next great leap forward in miniaturization.

On-chip metal inductors revolutionized radio frequency electronics two decades ago, but there are inherent limitations to their scalability. With the breakthroughs inherent to replacing magnetic inductance with kinetic inductance, it may be possible to engineer another, greater revolution still. (Shutterstock)

That’s where the work of Banerjee’s Nanoelectronics Research Lab and their collaborators comes in. By exploiting the phenomenon of kinetic inductance, they were able to demonstrate, for the first time, the effectiveness of a fundamentally different kind of inductor that didn’t rely on Faraday’s magnetic inductance. Instead of using conventional metal inductors, they used graphene — carbon bonded together into an ultra-hard, highly-conductive configuration that also has a large kinetic inductance — to make the highest inductance-density material ever created. In a paper published last month in Nature Electronics, the group demonstrated that if you inserted bromine atoms between various layers of graphene, in a process known as intercalation, you could finally create a material where the kinetic inductance exceeded the theoretical limit of a traditional Faraday inductor.

The novel graphene design for the kinetic inductor (right) has finally surpassed traditional inductors in terms of inductance density, as the central panel (in blue and red, respectively) demonstrates. (J. Kang et al., Nature Electronics 1, 46–51 (2018))

The new inductor already achieves 50% greater inductance for its size, in a scalable way that should allow materials scientists to miniaturize this type of device even further. If you can make the intercalation process more efficient, which is exactly what the team is now working on, you should be able to increase the inductance density even further. According to Banerjee,

“We essentially engineered a new nanomaterial to bring forward the previously ‘hidden physics’ of kinetic inductance at room temperature and in a range of operating frequencies targeted for next-generation wireless communications.”

With connected devices and the Internet of Things poised to become a multi-trillion dollar enterprise by the mid-2020s, this new type of inductor could be exactly the kind of revolution the burgeoning industry has been hoping for. Next-generation communications, energy storage, and sensing technologies could be smaller, lighter, and faster than ever. And thanks to this great leap in nanomaterials, we might finally be able to go beyond the technology that Faraday brought to our world nearly 200 years ago.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

The Last Barrier To Ultra-Miniaturized Electronics Is Broken, Thanks To A New Type Of Inductor was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The extremely high-excitation nebula shown here is powered by an extremely rare binary star system: a Wolf-Rayet star orbiting an O-star. The stellar winds coming off of the central Wolf-Rayet member are between 10,000,000 and 1,000,000,000 times as powerful as our solar wind, and illuminated at a temperature of 120,000 degrees. (The green supernova remnant off-center is unrelated.) Systems like this are estimated, at most, to represent 0.00003% of the stars in the Universe. (ESO)

Most stars obey very similar rules, making them almost entirely predictable. But then, there are the weirdos. Catch this live-blog event to learn more.

When we look out at the Universe with our most powerful telescopes, we often think about distant galaxies at the astrophysical limits of what we can perceive. In each one, on average, are hundreds of billions of stars, each with their own one-of-a-kind history. But if we want to learn about what stars are out there, we have to look close by. Only in our own relatively nearby cosmic backyard, in the Milky Way and other galaxies no more than a few million light years away, can we resolve individual stars in detail. Thanks to tremendous surveys like Hipparcos, Pan-STARRS and the ongoing Gaia mission, we’ve been able to measure and categorize literally millions upon millions of stars. When we look at what we find, there are a few general things that most of them have in common. And then, beyond those, there are the outliers.

The (modern) Morgan–Keenan spectral classification system, with the temperature range of each star class shown above it, in kelvin. The overwhelming majority (75%) of stars today are M-class stars, with only 1-in-800 being massive enough for a supernova. Yet as hot as O-stars get, they’re not the hottest stars in the entire Universe; there are some special ones that are among the rarest stars of all. (Wikimedia Commons user LucasVB, additions by E. Siegel)

Typically, whenever you form stars, they arise from the collapse of a molecular cloud of gas. The cloud fragments, forming a wide variety of stars: large numbers of low-mass stars, smaller numbers of higher-mass stars, and if the gas cloud is large enough, still smaller but possibly significant numbers of high-mass stars. All of the stars will fuse hydrogen into helium, which is how they create the nuclear energy that powers them. Normally, we break stars like this up into seven different classes, with M-class being the smallest, lowest-mass, reddest and coolest, and O-class being the largest, most-massive, bluest and hottest stars.

The largest group of newborn stars in our Local Group of galaxies, cluster R136 contains the most massive stars we’ve ever discovered: over 250 times the mass of our Sun for the largest. Over the next 1–2 million years, there will likely be a large number of supernovae to come from this region of the sky. (NASA, ESA, and F. Paresce, INAF-IASF, Bologna, R. O’Connell, University of Virginia, Charlottesville, and the Wide Field Camera 3 Science Oversight Committee)

If this were all we had — these types of stars in isolation — then we think we know how they’d all evolve. Individual stars would grow as large as possible from the molecular clouds they formed from, cooling via radiation from their constituent elements, heating up from gravitational collapse, and growing until radiation pressure from internal processes like fusion created an upper limit. Then:

  • The lowest-mass M-class stars, up until about 40% of the Sun’s mass, would burn hydrogen into helium slowly, eventually dying by contracting into a helium white dwarf.
  • Mid-range K-class through B-class stars, from about 40% to 800% of the Sun’s mass, burn hydrogen into helium, then heat up to fuse helium into carbon, becoming red giants, and finally dying in planetary nebulae that leave behind carbon/oxygen white dwarfs.
  • And the highest-mass stars, including the heaviest B-class and the O-class stars, will go beyond helium fusion into stages like carbon burning, oxygen burning, and all the way to silicon burning, leading to a supernova with either a neutron star or black hole at their cores.

This, at least, is our typical picture of stellar evolution.
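
As a compact restatement of that picture (using the approximate mass cuts quoted in the list above; the real boundaries shift with composition, rotation, and binary companions), here is a toy classifier:

```python
def stellar_fate(mass_in_suns):
    """Rough end state of a single, isolated star, using the approximate
    mass boundaries quoted above (they are fuzzy in reality)."""
    if mass_in_suns < 0.4:
        return "slow hydrogen burning -> contraction into a helium white dwarf"
    elif mass_in_suns < 8.0:
        return "red giant -> planetary nebula + carbon/oxygen white dwarf"
    else:
        return "advanced burning stages -> supernova -> neutron star or black hole"

for m in (0.2, 1.0, 20.0):
    print(m, "solar masses:", stellar_fate(m))
```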

The visible/near-IR photos from Hubble show a massive star, about 25 times the mass of the Sun, that has winked out of existence, with no supernova or other visible sign. Direct collapse to a black hole is the only reasonable candidate explanation. (NASA/ESA/C. Kochanek (OSU))

But then there are the weirdos. There are the supermassive stars that collapse directly to black holes, with no supernovae. There are the stars that get so hot that they start spontaneously producing electron/positron pairs on the inside, leading to a special kind of supernova.

This diagram illustrates the pair production process that astronomers think triggered the hypernova event known as SN 2006gy. When high-enough-energy photons are produced, they will create electron/positron pairs, causing a pressure drop and a runaway reaction that destroys the star. (NASA/CXC/M. Weiss)

There are binary stars that steal mass off of one of the members, sometimes siphoning off all of the hydrogen from a giant star. There are stars that should have a collapsed object at the center of a still-living giant star, known as Thorne-Zytkow objects. There are stars, young and old, that exhibit extremely rare flaring behavior, like Herbig-Haro objects or Wolf-Rayet stars.

The violent stellar winds surrounding the Wolf-Rayet star WR124 have created an incredible nebula known as M1–67. These stars are so tumultuous that their ejecta spans many light years, with the globs of ejected gas weighing many times the Earth apiece. (Hubble Legacy Archive, NASA, ESA; Processing: Judy Schmidt)

And, yet unconfirmed, there are stars made completely from pristine clouds of gas, composed solely of hydrogen and helium: the first stars in the Universe. Stars from this era may reach as much as 1,000 solar masses, and will hopefully be revealed by the James Webb Space Telescope, which was built — in part — to decipher the Universe’s secrets from exactly this early stage.

Illustration of the distant galaxy CR7, which, in 2016, was discovered to house the best-ever candidate for a pristine population of stars formed from the material direct from the Big Bang. One of the discovered galaxies definitely houses stars; the other may not have formed any yet. (M. Kornmesser / ESO)

So what do we know so far? And what do we expect to find out about these weird and wild objects in the near future? That’s the subject of Emily Levesque’s public lecture, on The Weirdest Objects In The Universe, at the Perimeter Institute, on March 7th, at 7:00 PM ET/4:00 PM PT. You can, at any time, tune in right here to watch it:

And follow along, below, as I’ll be live-blogging it! Feel free to follow along and live-tweet any questions with the hashtag #piLIVE. You won’t want to miss it!

(Live blog begins at 3:50 PM. All times given in Pacific Time.)

3:50 PM: Welcome, everyone! I’ve been very excited about this talk, because I don’t know which rare/weird stars Emily will be talking about. This is perhaps the first time ever that I don’t know what the subject of a public lecture I’m live-blogging will be. It puts me in a unique situation, and I guess I’ll have to be ready for anything!

The ‘supernova impostor’ of the 19th century precipitated a gigantic eruption, spewing many Suns’ worth of material into the interstellar medium from Eta Carinae. High mass stars like this within metal-rich galaxies, like our own, eject large fractions of mass in a way that stars within smaller, lower-metallicity galaxies do not. (Nathan Smith (University of California, Berkeley), and NASA)

3:53 PM: For example, will we be talking about events that happen in ultramassive stars towards the ends of their lives? Will we touch on bizarre things that may be really uncommon, like supernova impostors (above)?

An artist’s conception of what the Universe might look like as it forms stars for the first time. While we don’t yet have a direct picture, the new indirect evidence from radio astronomy points to the existence of these stars turning on when the Universe was between 180 and 260 million years old. (NASA/JPL-Caltech/R. Hurt (SSC))

3:56 PM: Or will it focus more on the first stars in the Universe: the kind we’re struggling but hoping to detect, the ones made of pristine elements? There are so many things we don’t yet know about stars, including how, exactly, they form at a variety of stages.

The evolution of a solar-mass star on the H-R diagram from the pre-main-sequence phase to the end of fusion. Every star of every mass will follow a different curve. (Wikimedia Commons user Szczureq)

4:00 PM: Or, perhaps, are we going to be talking about the short-lived, and hence, rare-and-weird, stages in a star’s potential life? Or, just maybe, Emily will cover it all. No matter what, it’s time to get excited; it’s about to start!

4:03 PM: Emily is being introduced, and wow… the list of awards and fellowships she’s already won is enough to make anyone feel inadequate. Remember, we’re not the impostors; it’s the failed supernovae that are the impostors!

An optical composite/mosaic of the Crab Nebula as taken with the Hubble Space Telescope. The different colors correspond to different elements, and reveal the presence of hydrogen, oxygen, silicon and more, all segregated by mass. (NASA, ESA, J. Hester and A. Loll (Arizona State University))

4:05 PM: Well, this is reassuring… Emily is saying that we will actually be talking about “weird” objects that I’ve mostly seen or heard of before, like the Crab supernova remnant or, as we showed you above, Eta Carinae.

The color-magnitude diagram of notable stars. The brightest red supergiant, Betelgeuse, is shown at the upper right. (European Southern Observatory)

4:07 PM: See, there’s nothing to be afraid of, here. Emily is telling us how stars, in general, work, and it’s nice and simple and straightforward. You burn through your fuel when you’re on the main sequence, or this big streaky diagonal line. As you burn enough fuel and run out of hydrogen in your core, you evolve off of this line, towards the right (and up), and that’s when you enter the red giant or supergiant phase… and that’s where the fun begins.

The Sun, today, is very small compared to giants, but will grow to the size of Arcturus in its red giant phase. A monstrous supergiant like Antares will be forever beyond our Sun’s reach. (English Wikipedia author Sakurambo)

4:09 PM: It’s true: when you become a star like this, you become very different from how the Sun is now. But this doesn’t mean you’re “weird” in any real way… it means you’re obeying your normal phase of stellar evolution. And that’s only weird from a perspective that treats us as the norm. In reality, there’s a wide variety of what “normal” is. Perhaps we should learn that stellar lesson for ourselves, at the moments where we feel we’re not normal: there’s a wide variety of what normal looks like.

The Omega nebula, known also as Messier 17, is an intense and active region of star formation, viewed edge-on, which explains its dusty and beam-like appearance. (ESO / VST survey)

4:13 PM: What’s fun about stars and stellar evolution is that these very massive stars, the ones that become the red supergiants, are actually the shortest-lived of all stars. We find them even in star-forming regions, as they burn through the hydrogen fuel in their cores so fast, and when they expand, they cool so drastically that they can actually form stable molecules (like titanium dioxide) in their outer atmospheres.

O-stars, the hottest of all stars, actually have weaker absorption lines in many cases, because their surface temperatures are great enough that most of the atoms at the surface are at too great an energy to display the characteristic atomic transitions that result in absorption. (NOAO/AURA/NSF, modified by E. Siegel)

4:16 PM: What’s interesting is that these stellar atmospheres are so large and so cool that the molecules forming at the edges can preferentially absorb blue light, which shifts the fitted temperatures of these stars to values that are too low: in theory, stars too cool to exist. It’s an interesting study in how we can fool ourselves if we don’t account for all the physical effects, including, oddly, molecules on the surfaces of stars!

The anatomy of a very massive star throughout its life, culminating in a Type II Supernova when the core runs out of nuclear fuel. The final stage of fusion is silicon-burning, producing iron and iron-like elements in the core for only a brief while before a supernova ensues. (Nicole Rager Fuller/NSF)

4:20 PM: Okay, so how do you go through stellar evolution, and go supernova? To hold up your star against gravitational collapse, you have to fuse elements: the outward push of radiation fights gravity. When you run out of hydrogen to fuse, radiation starts to lose, and gravitational collapse happens. That means, though, that you heat up as you get compressed, and if you have enough mass, you can heat up fast enough to start fusing helium.

This goes on: you fuse helium into carbon, carbon into oxygen… all the way up until you make iron, nickel, and cobalt. And then, my friend, you die.

4:23 PM: This is fast: the different stages of burning last anywhere from days (for silicon) to thousands of years (for carbon/oxygen) to hundreds of thousands of years (for helium)… but supernovae occur in seconds.

Ejecta from the eruption of the star V838 Monocerotis. (NASA, ESA and H.E. Bond (STScI))

4:26 PM: But not everything is smooth like you think of. Emily’s now telling us about luminous blue variables, which throw off ejecta as they go through their late-stages in life. This is an interesting process that isn’t fully understood: why do some stars (usually the ones with more heavy elements) do this, while others don’t? This kind of open question is part of why astronomy and astrophysics, despite all we know, is nowhere close to coming to an end!

A neutron star is one of the densest collections of matter in the Universe, but there is an upper limit to their mass. Exceed it, and the neutron star will further collapse to form a black hole. (ESO/Luís Calçada)

4:30 PM: The tough thing about a public talk like this is when you do a survey of objects or phenomena, you can’t go too far in-depth into anything. Emily talked about neutron stars and specifically the ones that are pulsars, but then went right on to black holes. Why? Because if you want to cover it all, you can’t spend too much time talking about any one thing in particular. There are, as a result, going to be lots of questions that flash through your mind, and then are lost as you go on to the next topic.

An illustration of a very high energy process in the Universe: a gamma-ray burst. (NASA / D. Berry)

4:32 PM: But on the other hand, it’s also really cool, because you get to have a great survey of a whole slew of topics, like gamma-ray bursts… which we know now, thanks to LIGO/Virgo, are at least partially due to neutron star mergers!

4:35 PM: Here’s something that you don’t often get to appreciate in science: when you detect a rare or important event, here’s the process for how it works.

  1. You get a notification that something interesting and timely occurred.
  2. People get kicked off of their observation runs, and the big/important telescopes turn to point at what you’re seeking to detect.
  3. These follow-up observations, across a variety of wavelengths, give you a slew of data to look at.
  4. And it’s the data, not a pretty picture, that tells you the interesting physics/astrophysics/astronomy that’s going on.

And finally, you don’t just announce it: you post your results in a publication, and then the community synthesizes everything the astronomers have gathered to determine exactly what went on.

The galaxy NGC 4993, located 130 million light years away, had been imaged many times before. But just after the August 17, 2017 detection of gravitational waves, a new transient source of light was seen: the optical counterpart of a neutron star-neutron star merger. (P.K. Blanchard / E. Berger / Pan-STARRS / DECam)

4:38 PM: This is really a vital part of the process: being careful and making sure you’re seeing what you think you’re seeing. Science isn’t always about being first or fastest or the one who puts all the pieces together; it’s about learning as much as possible and getting it right in the end. It’s how we combined gravitational wave astronomy, gamma-ray astronomy, and then multiwavelength follow-ups across over 70 observatories.

Aerial view of the Virgo gravitational-wave detector, situated at Cascina, near Pisa (Italy). Virgo is a giant Michelson laser interferometer with arms that are 3 km long, and complements the twin 4 km LIGO detectors. (Nicola Baldocchi / Virgo Collaboration)

4:41 PM: I have to say, by the way, how exciting it is to see a pure astronomer like Emily, not an astrophysicist but an astronomer, talking about gravitational wave astronomy. That’s right, something that was once purely in the realm of physics, and then astrophysics, has made it to the point where astronomers talk about this as actual astronomy. This isn’t just physics anymore; astronomers no longer need telescopes to do astronomy!

4:43 PM: By the way, it’s important that Emily talks about these sensitive, transient events that are happening quickly, as time-domain astronomy. In other words, when time is of the essence, you absolutely have to look, because if you don’t jump at your chance to take that data, you’ll miss it!

A solar flare, visible at the right of the image, occurs when magnetic field lines split apart and reconnect, far more rapidly than prior theories have predicted. (NASA)

4:45 PM: Also, it’s important to recognize that sometimes there are false positives. For example, potassium-flare stars. Who sees stars flaring and emitting signatures of potassium? The answer is one telescope does, in France, and no others. It wasn’t due to potassium in the star, though, but potassium in the detector apparatus room, because people were striking matches.

4:48 PM: But… it turns out that there may be actual potassium-flare stars, since a non-smoker (haha) observed a similar signature. It is easy to fool yourself if a source you didn’t account for is causing an effect, but that doesn’t mean the effect you’re seeing isn’t actually real! For example, at the Parkes radio observatory, using the microwave at lunchtime, and opening the door, caused a brief flash of radio waves that made people think they were seeing a fast radio burst, but no, it was the microwave. Yet… fast radio bursts are real, and now we know more about them and have seen a bunch!

This artist’s impression shows the supergiant star Betelgeuse as it was revealed thanks to different state-of-the-art techniques on ESO’s Very Large Telescope (VLT), which allowed two independent teams of astronomers to obtain the sharpest ever views of the supergiant star Betelgeuse. They show that the star has a vast plume of gas almost as large as our Solar System and a gigantic bubble boiling on its surface. (ESO/L. Calçada)

4:51 PM: Here’s a fun thing to imagine: what happens if you have a binary star system, where both stars are large and will go supernova? Well, one will go first, and perhaps it will produce a neutron star. Now, what happens if they spiral in, and merge? The neutron star will sink to the core, and so you get a red supergiant (eventually) with a neutron star at its core. This is what a Thorne-Zytkow object is, and it makes very explicit predictions for what you’ll observe at the surface!

Here’s what a Thorne-Zytkow object should do; 1 out of 70 observed red supergiant stars showed the spectral signature you expect. (Screenshot from Emily Levesque’s Perimeter Institute lecture)

4:54 PM: How fun, that what’s going on is a combination of nuclear physics, thermal physics, and chemistry… and that when an atomic nucleus touches the surface of the neutron star, it only stays there for about 10 milliseconds, and will produce a chemical signature we don’t see anywhere else. And, lo and behold, you can find this odd, predictive chemical signature in a very small number of red supergiants, one out of 70, leading us to conclude that Thorne-Zytkow objects are real!

4:57 PM: I love the care that Emily is taking in calling this object a candidate, though. We have to make sure that there isn’t something else mimicking the effect we expect. Even when an observation fits your theory perfectly, you need confirmation from multiple objects and multiple lines of evidence. This is the way scientists work: we have..

The evolution of large-scale structure in the Universe, from an early, uniform state to the clustered Universe we know today. The type and abundance of dark matter would deliver a vastly different Universe if we altered what our Universe possesses. (Angulo et al. 2008, via Durham University)

There have been a lot of public advocates from the “no dark matter” camp, and they’ve been getting lots of popular attention. But the Universe still needs dark matter. Here’s why.

If you took a look at all the galaxies in the Universe, measured where all the matter you could detect was, and then mapped out how these galaxies were moving, you’d find yourself quite puzzled. Whereas in the Solar System, the planets orbit the Sun with decreasing speed the farther away from the center you go — just as the law of gravitation predicts — the stars around the galactic center do no such thing. Even though the mass is concentrated towards the central bulge and in a plane-like disk, the stars in the outer regions of a galaxy whip around it at the same speeds as they do in the inner regions, defying predictions. Obviously, something is missing. Two solutions spring to mind: either there’s some type of unseen mass out there making up the deficit, or we need to modify the laws of gravity, as we did when we leapt from Newton to Einstein. While both of these possibilities seem reasonable, the unseen mass explanation, known as dark matter, is far and away the superior option. Here’s why.
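
A quick illustrative calculation (made-up but representative numbers; not from the article) shows the mismatch: a centrally concentrated visible mass predicts orbital speeds that fall off Kepler-style with distance, while a halo whose enclosed mass keeps growing with radius keeps the curve roughly flat, which is what the outer stars actually do.

```python
import numpy as np

G = 4.30091e-6  # Newton's constant in kpc * (km/s)^2 / solar mass

def circular_speed(r_kpc, enclosed_mass_msun):
    """v = sqrt(G * M(<r) / r) for a roughly spherical enclosed mass."""
    return np.sqrt(G * enclosed_mass_msun / r_kpc)

r = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # radii in kiloparsecs
M_visible = 6e10                              # crude, point-like "bulge + disk" mass in suns

print(circular_speed(r, M_visible))           # falls off as 1/sqrt(r): the Keplerian prediction

# A dark halo whose enclosed mass grows roughly linearly with radius flattens the curve:
M_halo = 1e10 * r                             # illustrative normalization only
print(circular_speed(r, M_visible + M_halo))  # much flatter with radius, as observed
```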

Individual galaxies could, in principle, be explained by either dark matter or a modification to gravity, but they are not the best evidence we have for what the Universe is made of, or how it got to be the way it is today. (Stefania.deluca of Wikimedia Commons)

First off, the answer has nothing to do with individual galaxies. Galaxies are some of the messiest objects in the known Universe, and when you’re testing the very nature of the Universe itself, you want the cleanest environment possible. There’s an entire field of study devoted to this, known as physical cosmology. (Full disclosure: it’s my field.) When the Universe was first born, it was very close to uniform: almost exactly the same density everywhere. It’s estimated that the densest region the Universe began with was less than 0.01% denser than the least dense region at the start of the hot Big Bang. Gravitation works very simply and in a very straightforward fashion, even on a cosmic scale, when we’re dealing with small departures from the average density. This is known as the linear regime, and provides a great cosmic test of both gravitation and dark matter.

Large scale projection through the Illustris volume at z=0, centered on the most massive cluster, 15 Mpc/h deep. Shows dark matter density (left) transitioning to gas density (right). The large-scale structure of the Universe cannot be explained without dark matter. (Illustris Collaboration / Illustris Simulation)

On the other hand, when we’re dealing with large departures from the average, this places you into what’s called the non-linear regime, and these tests are far more difficult to draw conclusions from. Today, a galaxy like the Milky Way may be a million times denser than the average cosmic density, which places it firmly in the non-linear regime. But if we look at the Universe on either very large scales or at very early times, the gravitational effects are much more linear, making this your ideal laboratory. If you want to probe whether modifying gravity or adding the extra ingredient of dark matter is the way to go, you’ll want to look where the effects are clearest, and that’s where the gravitational effects are most easily predicted: in the linear regime.
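
Cosmologists make that distinction precise with the density contrast (standard notation, not spelled out in the article):

```latex
\[
  \delta(\vec{x}) \;\equiv\; \frac{\rho(\vec{x}) - \bar\rho}{\bar\rho},
  \qquad
  |\delta| \ll 1 \ \text{(linear regime: early times, large scales)},
  \qquad
  \delta \sim 10^{6} \ \text{(non-linear: a present-day galaxy like the Milky Way)}.
\]
```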

Here are the best ways to probe the Universe in that era, and what they tell you.

The fluctuations in the Cosmic Microwave Background were first measured accurately by COBE in the 1990s, then more accurately by WMAP in the 2000s and Planck (above) in the 2010s. This image encodes a huge amount of information about the early Universe, including its composition, age, and history. (ESA and the Planck Collaboration)

1.) The fluctuations in the Cosmic Microwave Background. This is our earliest true picture of the Universe, and the fluctuations in the energy density at a time just 380,000 years after the Big Bang. The blue regions correspond to overdensities, where matter clumps have begun their inevitable gravitational growth, heading down their path to form stars, galaxies, and galaxy clusters. The red regions are underdense regions, where matter is being lost to the denser regions surrounding it. By looking at these temperature fluctuations and how they correlate — which is to say, on a specific scale, what’s the magnitude of your average fluctuation away from the mean temperature — you can learn an awful lot about the composition of your Universe.

The relative heights and positions of these acoustic peaks, derived from the data in the Cosmic Microwave Background, are definitively consistent with a Universe made of 68% dark energy, 27% dark matter, and 5% normal matter. Deviations are tightly constrained. (Planck 2015 results. XX. Constraints on inflation — Planck Collaboration (Ade, P.A.R. et al.) arXiv:1502.02114)

In particular, the positions and heights (especially the relative heights) of the seven peaks identified above agree spectacularly with a particular fit: a Universe that’s 68% dark energy, 27% dark matter, and 5% normal matter. If you don’t include dark matter, the relative sizes of the odd-numbered peaks and the even-numbered peaks cannot be made to match up. The best that modified gravity claims can do is either to get you the first two peaks (but not the third or beyond), or to get you the right spectrum of peaks by also adding some dark matter, which defeats the whole purpose. There are no known modifications to Einstein’s gravity that can reproduce these predictions, even after-the-fact, without also adding dark matter.
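
If you'd like to see the effect for yourself, the publicly available camb Python package (a standard Boltzmann code used in many published CMB analyses) can generate the predicted spectra. The sketch below is only a rough stand-in for the argument: it compares a Planck-like ΛCDM model to one with the cold dark matter density dialed down to nearly nothing, which is not the same thing as a genuine modified-gravity calculation, and the parameter values are illustrative.

```python
# A rough sketch, assuming the public `camb` package is installed (pip install camb).
# It compares the CMB TT power spectrum for a Planck-like LCDM model with one
# where the cold dark matter density is dialed nearly to zero -- a crude
# "no dark matter" stand-in, not an actual modified-gravity calculation.
import camb

def tt_spectrum(omch2):
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=omch2)
    pars.InitPower.set_params(As=2e-9, ns=0.965)
    pars.set_for_lmax(2500, lens_potential_accuracy=0)
    results = camb.get_results(pars)
    return results.get_cmb_power_spectra(pars, CMB_unit='muK')['total'][:, 0]

tt_lcdm  = tt_spectrum(omch2=0.120)   # roughly 27% dark matter
tt_no_dm = tt_spectrum(omch2=0.005)   # almost no dark matter

# Compare the spectra near the first few acoustic peaks (multipoles l ~ 220, 540, 800):
for l in (220, 540, 800):
    print(f"l = {l}: LCDM = {tt_lcdm[l]:8.1f} muK^2, no-DM = {tt_no_dm[l]:8.1f} muK^2")
```

The relative heights of the peaks shift dramatically in the nearly-baryon-only case, which is the point: no amount of tinkering with normal matter alone reproduces the observed pattern.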

An illustration of clustering patterns due to Baryon Acoustic Oscillations, where the likelihood of finding a galaxy at a certain distance from any other galaxy is governed by the relationship between dark matter and normal matter. As the Universe expands, this characteristic distance expands as well, allowing us to measure the Hubble constant. (Zosia Rostomian)

2.) The large-scale structure in the Universe. If you have a galaxy, how likely are you to find another galaxy a certain distance away? And if you look at the Universe on a certain volumetric scale, what departures from the “average” numbers of galaxies do you expect to see there? These questions are at the heart of understanding large-scale structure, and their answers depend very strongly on both the laws of gravity and what’s in your Universe. In a Universe where 100% of your matter is normal matter, you’ll have big suppressions of structure formation on specific, large scales, while if your Universe is dominated by dark matter, you’ll get only small suppressions superimposed on a smooth background. You don’t need any simulations or nonlinear effects to probe this; this can all be calculated by hand.

The data points from our observed galaxies (red points) and the predictions from a cosmology with dark matter (black line) line up incredibly well. The blue lines, with and without modifications to gravity, cannot reproduce this observation without dark matter.(S. Dodelson, from http://arxiv.org/abs/1112.1320)

When we look out at the Universe on these largest scales, and compare with the predictions of these different scenarios, the results are incontrovertible. Those red points (with error bars, as shown) are the observations — the data — from our own Universe. The black line is the prediction of our standard ΛCDM cosmology, with normal matter, dark matter (in six times the amount of normal matter), dark energy, and general relativity as the law governing it. Note the small wiggles in it and how well — how amazingly well — the predictions match up to the data. The blue lines are the predictions of normal matter with no dark matter, in both standard (solid) and modified gravity (dotted) scenarios. And again, there are no modifications to gravity that are known that can reproduce these results, even after-the-fact, without also including dark matter.
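
The same camb package can produce the kind of matter power spectrum plotted above. Again, this is a hedged sketch with illustrative parameters, using a nearly-zero cold dark matter density as a crude stand-in for a no-dark-matter Universe rather than a real modified-gravity computation.

```python
# Hedged sketch using the public `camb` package: the z=0 matter power spectrum
# for a Planck-like LCDM cosmology versus one with almost no cold dark matter.
import camb

def matter_pk(omch2):
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=67.5, ombh2=0.022, omch2=omch2)
    pars.InitPower.set_params(As=2e-9, ns=0.965)
    pars.set_matter_power(redshifts=[0.0], kmax=2.0)
    results = camb.get_results(pars)
    kh, _, pk = results.get_matter_power_spectrum(minkh=1e-3, maxkh=1.0, npoints=200)
    return kh, pk[0]

kh, pk_lcdm  = matter_pk(0.120)
_,  pk_no_dm = matter_pk(0.005)

# The ratio shows the strong suppression and exaggerated wiggles of a (nearly)
# baryon-only Universe relative to one with dark matter.
for i in range(0, 200, 40):
    print(f"k = {kh[i]:.3f} h/Mpc: P_no_dm / P_LCDM = {pk_no_dm[i]/pk_lcdm[i]:.3f}")
```

In the nearly-baryon-only case, the small wiggles become huge oscillations and the small-scale power is strongly suppressed, exactly the mismatch with the red data points described above.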

The pathway that protons and neutrons take in the early Universe to form the lightest elements and isotopes: deuterium, helium-3, and helium-4. The nucleon-to-photon ratio determines how much of these elements we will wind up with in our Universe today. These measurements allow us to know the density of normal matter in the entire Universe very precisely. (E. Siegel / Beyond The Galaxy)

3.) The relative abundance of light elements formed in the early Universe. This isn’t specifically a dark matter-related question, nor is it extremely dependent on gravity. But due to the physics of the early Universe, where atomic nuclei are blasted apart under the high-energy conditions of that hot, extremely uniform epoch, we can predict exactly how much hydrogen, deuterium, helium-3, helium-4, and lithium-7 should be left over from the Big Bang in the primordial gas we see today. There’s only one parameter that all of these results depend on: the ratio of photons to baryons (protons and neutrons combined) in the Universe. We’ve measured the number of photons in the Universe thanks to both the WMAP and Planck satellites, and we’ve also measured the abundances of those elements.

The predicted abundances of helium-4, deuterium, helium-3 and lithium-7 as predicted by Big Bang Nucleosynthesis, with observations shown in the red circles. (NASA / WMAP Science Team)

Put together, these measurements tell us the total amount of normal matter in the Universe: it’s 4.9% of the critical density. In other words, we know the total amount of normal matter in the Universe. It’s a number that’s in spectacular agreement with both the cosmic microwave background data and the large-scale structure data, and yet, it’s only about 15% of the total amount of matter that has to be present. There is, again, no known modification of gravity that can give you those large-scale predictions and also give you this low abundance of normal matter.
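
That 4.9% figure is easy to translate into everyday terms. Here's a quick back-of-the-envelope calculation, assuming a Hubble constant of about 67.7 km/s/Mpc:

```python
# A quick back-of-the-envelope check (illustration only): what does "normal
# matter is 4.9% of the critical density" mean in everyday units?
import math

G   = 6.674e-11                  # m^3 kg^-1 s^-2
H0  = 67.7 * 1000 / 3.086e22     # Hubble constant in s^-1 (67.7 km/s/Mpc)
m_p = 1.673e-27                  # proton mass, kg

rho_crit   = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
rho_baryon = 0.049 * rho_crit                # 4.9% of critical

print(f"critical density : {rho_crit:.2e} kg/m^3")
print(f"baryon density   : {rho_baryon:.2e} kg/m^3")
print(f"                 = {rho_baryon / m_p:.2f} protons per cubic meter")
```

It works out to only about one proton's worth of normal matter for every four cubic meters of space, averaged over the whole Universe.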

Cluster MACS J0416.1–2403 in the optical, one of the Hubble Frontier Fields that reveals, through gravitational lensing, some of the deepest, faintest galaxies ever seen in the Universe. (NASA / STScI)

4.) The gravitational bending of starlight from large cluster masses in the Universe. When we look at the largest clumps of mass in the Universe, the ones that are closest to still being in the linear regime of structure formation, we notice that the light from background objects behind them is distorted. This is due to the gravitational bending of light predicted by relativity, known as gravitational lensing. When we use these observations to determine what the total amount of mass present in the Universe is, we get that same number we’ve gotten all along: about 30% of the Universe’s total energy must be present in all forms of matter, added together, to reproduce these results. With only 4.9% present in normal matter, this implies there must be some sort of dark matter present.
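
For a sense of how lensing turns a bending angle into a mass, here's a hedged sketch using the simplest possible model, a point-mass lens, with illustrative round numbers for the distances and the Einstein radius rather than measurements of any particular cluster:

```python
# Order-of-magnitude sketch: for a point-mass lens, the Einstein radius theta_E
# relates the observed bending to the mass enclosed. All input values below are
# illustrative round numbers, not measurements of any real cluster.
import math

G      = 6.674e-11      # m^3 kg^-1 s^-2
c      = 2.998e8        # m/s
M_SUN  = 1.989e30       # kg
GPC    = 3.086e25       # m
ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond

D_l, D_s, D_ls = 1.0 * GPC, 2.0 * GPC, 1.0 * GPC   # lens, source, lens-source distances
theta_E = 30 * ARCSEC                               # an illustrative Einstein radius

# Invert theta_E = sqrt(4 G M / c^2 * D_ls / (D_l * D_s)) for the enclosed mass M:
M = theta_E**2 * c**2 * D_l * D_s / (4 * G * D_ls)
print(f"Mass enclosed within the Einstein radius: {M / M_SUN:.1e} solar masses")
```

Even this crude estimate lands at a few times 10^14 solar masses, the kind of total that the visible stars and gas in a cluster cannot account for on their own.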

Gravitational lensing in galaxy cluster Abell S1063, showcasing the bending of starlight by the presence of matter and energy. (NASA, ESA, and J. Lotz (STScI))

When you look at the full suite of data, rather than just some small details of what goes on in the messy, complex, nonlinear regime, there’s no way to obtain the Universe we have today without adding in dark matter. People who use Occam’s Razor (incorrectly) to argue in favor of MOND, or MOdified Newtonian Dynamics, need to consider that modifying Newton’s law will not solve these problems for you. If you use Newton, you miss out on the successes of Einstein’s relativity, which are too numerous to list here. There’s the Shapiro time delay. There’s gravitational time dilation and gravitational redshift. There’s the framework of the Big Bang and the concept of the expanding Universe. There’s the Lense-Thirring effect. There are the direct detections of gravitational waves, with their measured speed equal to the speed of light. And there are the motions of galaxies within clusters and of the clustering of galaxies themselves on the largest scales.

On the largest scales, the way galaxies cluster together observationally (blue and purple) cannot be matched by simulations (red) unless dark matter is included. (Gerard Lemson & the Virgo Consortium, with data from SDSS, 2dFGRS and the Millennium Simulation)

And for all of these observations, there is no single modification of gravity that can reproduce these successes. There are a few vocal individuals in the public sphere who advocate for MOND (or other modified gravity incarnations) as a legitimate alternative to dark matter, but it simply isn’t one at this point. The cosmology community isn’t dogmatic at all about the need for dark matter; we “believe in” it because all of these observations demand it. Yet despite all the efforts going into modifying relativity, there are no known modifications that can explain even two of these four points, much less all four. But dark matter can, and does.

Just because dark matter appears to be a fudge factor to some, compared to the idea of modifying Einstein’s gravity, doesn’t give the latter any additional weight. As Umberto Eco wrote in Foucault’s Pendulum, “As the man said, for every complex problem there’s a simple solution, and it’s wrong.” If someone tries to sell you modified gravity, ask them about the cosmic microwave background. Ask them about large-scale structure. Ask them about Big Bang Nucleosynthesis and the full suite of other cosmological observations. Until they have a robust answer that’s as good as dark matter’s, don’t let yourself be satisfied.

Four colliding galaxy clusters, showing the separation between X-rays (pink) and gravitation (blue), indicative of dark matter. On large scales, cold dark matter is necessary, and no alternative or substitute will do.(X-ray: NASA/CXC/UVic./A.Mahdavi et al. Optical/Lensing: CFHT/UVic./A. Mahdavi et al. (top left); X-ray: NASA/CXC/UCDavis/W.Dawson et al.; Optical: NASA/ STScI/UCDavis/ W.Dawson et al. (top right); ESA/XMM-Newton/F. Gastaldello (INAF/ IASF, Milano, Italy)/CFHTLS (bottom left); X-ray: NASA, ESA, CXC, M. Bradac (University of California, Santa Barbara), and S. Allen (Stanford University) (bottom right))

Modified gravity cannot successfully predict the large-scale structure of the Universe the way that a Universe full of dark matter can. Period. And until it can, it’s not worth paying any mind to as a serious competitor. You cannot ignore physical cosmology in your attempts to decipher the cosmos, and the predictions of large-scale structure, the microwave background, the light elements, and the bending of starlight are some of the most basic and important predictions that come out of physical cosmology. MOND does have a big victory over dark matter: it explains the rotation curves of galaxies better than dark matter ever has, including all the way up to the present day. But it is not yet a physical theory, and it is not consistent with the full suite of observations we have at our disposal. Until that day comes, dark matter will deservedly be the leading theory of what makes up the mass in our Universe.

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

Only Dark Matter (And Not Modified Gravity) Can Explain The Universe was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

By changing our clocks on Sunday, we’ll allegedly save daylight and improve our lives in a number of ways. Yet the evidence shows a vastly different outcome. (Pixabay user annevais) For most of us, it’s something we just accept every year. But there’s arguably no good that comes from it.

Every year on the second Sunday in March, most places in the United States “spring” their clocks forward one hour for Daylight Saving Time. It’s a time-honored tradition that’s always been in effect for the overwhelming majority of people alive today.

But why do we have Daylight Saving Time? There are a few traditional justifications* for it, but upon closer scrutiny, they’re all myths.

NASA photographs of the Earth at night show us a measure of how much energy we use. While it’s true that we use more energy when it’s dark than when it’s light outside, the total number of hours of night/daylight remain unchanged by Daylight Saving Time… and so does overall energy use. (NASA’s Earth Observatory/NOAA/DOD)

1.) It saves fuel. When Ben Franklin visited France in 1784, he decried the Parisians’ wasting of daylight by sleeping in with their windows shuttered. “Saving candles” was the rationale for altering their schedules then; “saving fuel” is the rationale given today for altering our clocks. Yet the connection between energy use and clock changes was only studied in depth, with large-scale studies, beginning last decade. By 2008, even government studies showed no energy or fuel savings, with other independent groups showing it actually increases energy demand.

Italian farmer Loris Martini takes care of a veal in northwestern Italy. Animals need caretaking on a regular schedule, and changing the clocks doesn’t change the schedule on which they need that care. (Marco Bertorello/AFP/Getty Images)

2.) It helps farmers. It’s an urban legend that Daylight Saving practices benefit farmers; in fact, farmers have continuously opposed them since 1918, on the grounds that the time change disrupts their farming schedules and practices. Dairy cows, for example, need to be milked on a regular schedule, and don’t particularly care what the clock says. Farmers successfully orchestrated the repeal of Daylight Saving Time in 1919, but their interests were overridden in 1966 with the passage of the Uniform Time Act.

The most dangerous traffic accidents occur at high speeds, and often involve head-on collisions. Fatality rates increase the day after the time change due to Daylight Saving Time, with ‘drowsy driving’ cited as the likely culprit. (Derek Davis/Portland Press Herald via Getty Images)

3.) Daylight Saving Time improves safety. Does additional daylight reduce traffic accidents? You might think so, since driving in the daylight seems safer than driving at night. But the science says otherwise. The days after we “spring forward” and “fall back” both see an increase in fatal traffic accidents, which has been verified to be significant at about the 8% level. There is no corresponding decrease to balance it out. Meanwhile, workplace accidents and heart attacks are both more common in the week after the time change, too. Daylight Saving Time actually causes more deaths, rather than reducing them.

The clocks change by one hour in Sao Paulo, Brazil, on February 18th to signify the end of their (Southern Hemisphere) Daylight Saving Time. The initial estimate of the Ministry of Mines and Energy was to save R $ 147.5 million with Daylight Saving Time, which is extraordinarily unlikely to pan out as planned. (Cris Faga/NurPhoto via Getty Images)

4.) We enacted it for energy conservation. Don’t be fooled by the title of Nixon’s bill from 1973: the Emergency Daylight Saving Time Energy Conservation Act. It wasn’t advocated for or pushed forward by any group associated with energy at all. Rather, Daylight Saving was pushed through by the U.S. Chamber of Commerce, to benefit businesses. The golf, grill, and recreation industries all see increased business owing to the time change, which is why they helped push through an additional month of it in the mid-1980s. It’s not even all good news for businesses: the television and transportation industries both take hits from the schedule change.

The status of active legislation as of 2015 concerning Daylight Saving Time. Many states (in red) follow Daylight Saving Time and have made no move to abolish it, but many others (in all other colors) have significant movements that indicate the time change does not serve them well. (Ray Harwood / Time Zone Report)

5.) Daylight Saving Time is now standard. Only about half the world’s countries use it, and many states/regions don’t obey it. We just amended Daylight Saving Time last decade, to extend into November. Why? At the urging of the National Association of Convenience Stores, to increase candy sales with an extra “hour” of trick-or-treating on Halloween. After this change, Halloween became the #2 commercial holiday in the United States, behind only Christmas.

The legislation that led to Daylight Saving Time extending into November went into effect only 11 years ago, and has led to a windfall for the candy industry. Coincident with the change, Halloween has risen to the #2 holiday, in terms of net sales, in the United States. (Kris Connor/Getty Images for Reese’s)

Other than standardizing time zones, which seems independent of changing our clocks, the traditional justifications for keeping Daylight Saving Time appear to be unsupported by the full suite of scientific evidence. The one benefit, according to studies, is that you’re less likely to get robbed after the time change, with a 27% drop in the evening hour that now sees daylight the day after the clocks change. That may be the only scientifically-supported benefit of enacting Daylight Saving Time.

* — Full disclosure: Ethan Siegel used to be in favor of Daylight Saving Time, on the grounds that bars that closed at 2 AM on Sundays could stay open for an extra hour. The full suite of evidence has convinced him that the negatives outweigh that benefit.


The 5 Reasons To Keep Daylight Saving Time Have No Science To Back Them Up was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

Simulations of planet formation tend to give us planets forming in a disk-like configuration, similar to what we observe in our own Solar System. (Thomas Quinn et al., Pittsburgh Supercomputing Center) The possibilities were almost limitless, so why does everything line up?

Our Solar System is an orderly place, with the four inner planets, the asteroid belt, and the gas giant worlds all orbiting in the same plane around the Sun. Even as you go farther out, the Kuiper belt objects appear to line up with that same exact plane. Given that the Sun is spherical and that there are stars appearing with planets orbiting in every direction imaginable, it seems too much of a coincidence to be random chance that all these worlds line up. In fact, practically every Solar System we’ve observed outside of our own appears to have its worlds line up in the same plane, too, wherever we’ve been able to detect it. Here’s the science behind what’s going on, to the best of our knowledge.

The eight planets of the Solar System orbit the Sun in almost an identical plane, known as the Invariable Plane. This is typical of solar systems as we know them so far. (Joseph Boyle of Quora)

Today, we’ve mapped out the orbits of the planets to incredible precision, and what we find is that they go around the Sun — all of them — in the same two-dimensional plane, to within an accuracy of, at most, 7° difference.

In fact, if you take Mercury out of the equation, the innermost and most inclined planet, you’ll find that everything else is really well-aligned: the deviation from the Solar System’s invariable plane, or the average plane-of-orbit of the planets, is only about two degrees.

If you take Mercury out of the equation, the innermost, most-inclined planet, you find that all the worlds of the Solar System are perfectly aligned to within two degrees, a remarkable precision for nature to achieve. (Wikimedia commons author Lookang, based on the work of Todd K. Timberlake and Francisco Esquembre (L); screenshot from Wikipedia (R))

They’re also pretty closely lined up with the Sun’s rotation axis: just as the planets all spin as they orbit the Sun, the Sun itself spins. And as you might expect, the axis that the Sun rotates about is — again — within approximately 7° of all the planets’ orbits.
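
You can check that two-degree claim yourself with a few lines of Python. The inclinations below are approximate values relative to the invariable plane, quoted here as rough illustrative figures rather than pulled from a specific ephemeris:

```python
# Approximate orbital inclinations relative to the Solar System's invariable
# plane, in degrees (rough values for illustration only). The point: drop
# Mercury and everything sits within roughly two degrees.
inclination_deg = {
    "Mercury": 6.3, "Venus": 2.2, "Earth": 1.6, "Mars": 1.7,
    "Jupiter": 0.3, "Saturn": 0.9, "Uranus": 1.0, "Neptune": 0.7,
}

print("max deviation, all planets     :", max(inclination_deg.values()), "deg")
print("max deviation, without Mercury :",
      max(v for k, v in inclination_deg.items() if k != "Mercury"), "deg")
```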

And yet, this isn’t what you would have imagined unless something caused these planets to all be sandwiched down into the same plane. You would’ve expected the orbits to be oriented randomly, since gravity — the force that keeps the planets in these steady orbits — works the same in all three dimensions. You would’ve expected something more like a swarm than a nice, orderly set of nearly perfect circles. The thing is, if you go far enough away from our Sun — beyond the planets and asteroids, beyond the Halley-like comets and even beyond the Kuiper Belt — that’s exactly what you find.

While the Oort cloud is hypothesized to exist in an enormous, sphere-like swarm, the Kuiper belt itself is still mostly plane-like, aligning with the invariable plane that the planets orbit in. (NASA and William Crochot)

So what is it, exactly, that caused our planets to wind up in a single disk? In a single plane orbiting our Sun, rather than as a swarm? To understand this, let’s travel back in time to when our Sun was first forming: from a molecular cloud of gas, the very thing that gives rise to all new stars in the Universe.

A large molecular cloud, many of which are clearly visible in the Milky Way and other local group galaxies, will often fragment, contract, and give birth to new, massive stars as time goes on. (Yuri Beletsky (Las Campanas Observatory, Carnegie Institution for Science) (L); J. Alves, M. Lombardi and C. J. Lada, A&A, 462 1 (2007) L17-L21 (R))

When a molecular cloud grows to be massive enough, gravitationally bound and cool enough to contract-and-collapse under its own gravity, like the Pipe Nebula (above, left), it will form dense enough regions where new star clusters will be born (circles, above right).

You’ll notice, immediately, that this nebula — and any nebula like it — is not a perfect sphere, but rather takes on an irregular, elongated shape. Gravitation is unforgiving of imperfections, and because of the fact that gravity is an accelerative force that quadruples every time you halve the distance to a massive object, it takes even small differences in an initial shape and magnifies them tremendously in short order.

This visible-light image composite of the Orion Nebula was created by the Hubble Space Telescope team back in 2004–2006. (NASA, ESA, M. Robberto (Space Telescope Science Institute/ESA) and the Hubble Space Telescope Orion Treasury Project Team)

The result is that you get a star-forming nebula that’s incredibly asymmetric in shape, where the stars form in the regions where the gas gets densest. The thing is, when we look inside, at the individual stars that are in there, they’re pretty much perfect spheres, just like our Sun is.

Inside the Orion Nebula, in visible light (L) and infrared light (R), a star-forming nebula houses a massive star cluster inside, evidence that these nebulae are giving birth to new solar systems in an active fashion. (NASA; K.L. Luhman (Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass.); and G. Schneider, E. Young, G. Rieke, A. Cotera, H. Chen, M. Rieke, R. Thompson (Steward Observatory, University of Arizona, Tucson, Ariz.); NASA, C.R. O’Dell and S.K. Wong (Rice University))

But just as the nebula itself became very asymmetric, the individual stars that formed inside came from imperfect, overdense, asymmetric clumps inside that nebula. They’re going to collapse in one (of the three) dimensions first, and since matter — stuff like you and me, atoms, made of nuclei and electrons — sticks together and interacts when you smack it into other matter, you’re going to wind up with an elongated disk, in general, of matter. Yes, gravitation will pull most of that matter in towards the center, which is where the star(s) will form, but around it you’ll get what’s known as a protoplanetary disk. Thanks to the Hubble Space Telescope, we’ve seen these disks directly!

These protoplanetary disks in the Orion Nebula, some ~1300 light years away, will someday grow up to be solar systems not very different from our own. (Mark McCughrean (Max-Planck–Inst. Astron.); C. Robert O’Dell (Rice Univ.); NASA)

That’s your first hint that you’re going to wind up with something that’s more aligned in a plane than a randomly swarming sphere. To go to the next step, we have to turn to simulations, since we haven’t been around long enough to watch this process unfold — it takes about a million years — in any young solar system. But here’s the story that the simulations tell us.

According to simulations, asymmetric clumps of matter contract all the way down in one dimension first, where they then start to spin. That “plane” is where the planets form, and many intermediate stages have been directly observed by observatories like Hubble. (STScl OPO — C Burrows and J. Krist (STScl), K. Stabelfeldt (JPL) and NASA)

The protoplanetary disk, after going “splat” in one dimension, will continue to contract down as more and more matter gets attracted to the center. But while much of the material gets funnelled inside, a substantial amount of it will wind up in a stable, spinning orbit in this disk.


There’s a physical quantity that has to be conserved: angular momentum, which tells us how much the entire system — gas, dust, star and all — is intrinsically spinning. Because of how angular momentum works overall, and how it’s shared pretty evenly between the different particles inside, this means that everything in the disk needs to move, roughly, in the same (clockwise or counterclockwise) direction overall. Over time, that disk reaches a stable size and thickness, and then small gravitational instabilities begin to grow into planets.
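
A quick way to see why a disk forms, and roughly how big a one, is to estimate the centrifugal radius at which rotation halts the infall perpendicular to the spin axis. This is a minimal sketch; the specific angular momentum assumed for the infalling gas is an order-of-magnitude illustrative value for a slowly rotating cloud core, not a measurement:

```python
# Minimal, illustrative estimate: collapse along the spin axis is unopposed,
# but perpendicular to it, a parcel of gas can only fall in until rotation
# supports it, at the centrifugal radius r_c ~ j^2 / (G M), where j is the
# parcel's specific angular momentum (the assumed value below is illustrative).
G     = 6.674e-11    # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU    = 1.496e11     # m

M_star = 1.0 * M_SUN       # mass already gathered at the center
j      = 1e17              # specific angular momentum of infalling gas, m^2/s (assumed)

r_c = j**2 / (G * M_star)
print(f"Centrifugal radius ~ {r_c / AU:.0f} AU")   # roughly the scale of a protoplanetary disk
```

Collapse along the spin axis faces no such barrier, which is why you end up with a thin, spinning disk a few hundred AU across instead of a swarm.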

Sure, there are small, subtle differences between different parts of the disk (and gravitational effects occurring between interacting planets), as well as slight differences in initial conditions. The star that forms at the center isn’t a single point, but rather an extended object somewhere in the ballpark of a million kilometers in diameter. And when you put all of this together, it will lead to everything not winding up in a perfectly singular plane, but it’s going to be extremely close. In fact, we’ve only recently — as in just three years ago — discovered the very first planetary system beyond our own that we’ve caught in the process of forming new planets in a single plane.

The star HL Tauri, as imaged in the optical (in the upper-left), is brand new and contains a protoplanetary disk around it. (ESA / NASA)

The young star in the upper left of the image above, on the outskirts of a nebular region — HL Tauri, about 450 light years away — is surrounded by a protoplanetary disk. The star itself is only about one million years old. ALMA, a long-baseline array that measures light of quite long (millimeter) wavelengths, more than a thousand times longer than what our eyes can see, returned the following image.

The protoplanetary disk around the young star, HL Tauri, as photographed by ALMA. The gaps in the disk indicate the presence of new planets. (ALMA (ESO/NAOJ/NRAO))

It’s clearly a disk, with everything in the same plane, and yet there are dark “gaps” in there. Those gaps each correspond to a young planet that’s attracted all the matter within its vicinity! We don’t know which of these will merge together, which ones will get kicked out, and which ones will migrate inwards and get swallowed by their parent star, but we are witnessing a pivotal step in the development of a young solar system. While we’d observed young planets before, we’ve never seen this particular stage. From the early ones to the intermediate ones to the later stages of more-complete solar systems, they’re all spectacular, and all consistent with the same story.

Direct imaging of four planets orbiting the star HR 8799 129 light years away from Earth, a feat accomplished through the work of Jason Wang and Christian Marois. (J. Wang (UC Berkeley) & C. Marois (Herzberg Astrophysics), NExSS (NASA), Keck Obs.)

So why are all the planets in the same plane? Because they form from an asymmetric cloud of gas, which collapses in the shortest direction first; the matter goes “splat” and sticks together; it contracts inwards but winds up spinning around the center, with planets forming from imperfections in that young disk of matter; they all wind up orbiting in the same plane, separated only by a few degrees — at most — from one another.

It’s a case where observations and simulations, based on theoretical calculations, agree remarkably with one another. It’s a remarkable story, and one that — thanks to not only simulations but now observations of the Universe itself — illustrates in incredible detail how rich and fascinating it is that all the planets orbit in the same plane no matter where in the Universe you go!


Why Do All The Planets Orbit In The Same Plane? was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

An artist’s conception of what the Universe might look like as it forms stars for the first time. While we don’t yet have a direct picture, the new indirect evidence from radio astronomy points to the existence of these stars turning on when the Universe was between 180 and 260 million years old. (NASA/JPL-Caltech/R. Hurt (SSC)) The indirect find was completely unexpected, and, if it holds up, could give the James Webb Space Telescope its first tantalizing target.

In the quest to understand our Universe, and the story of where we come from on a cosmic scale, two of the most important questions are what the Universe is made of and how the first stars formed. These are related questions, since you can only form stars if you have enough matter to gravitationally collapse, and even at that, the matter needs to be dense enough and cool enough for this process to work. The earliest stars we’ve ever detected directly come from the Hubble Space Telescope’s imaging of the ultra-distant galaxy GN-z11, whose light comes to us from when the Universe was just 400 million years old: 3% of its current age. Today, after two years of careful analysis, a study from Judd D. Bowman and collaborators was published in Nature, announcing an indirect detection of starlight from when the Universe was only 180 million years old, where the details support the existence and presence of dark matter.

Schematic diagram of the Universe’s history, highlighting reionization. Before stars or galaxies formed, the Universe was full of light-blocking, neutral atoms. While most of the Universe doesn’t become reionized until 550 million years afterwards, a few fortunate regions are mostly reionized at much earlier times. (S. G. Djorgovski et al., Caltech Digital Media Center)

Seeing back to the first stars is a complicated task, since a whole slew of factors are working against you. For one, the Universe is expanding, meaning that even the most energetic ultraviolet light emitted by stars has its wavelength stretched as the fabric of space stretches. As that light journeys to Earth, it gets shifted into the visible, near-infrared, and eventually into the mid-infrared before it arrives at our eyes, rendering it invisible to most telescopes. For another, the Universe is filled with neutral atoms at the earliest times, which means it absorbs (and is opaque to) starlight. It’s only through continued exposure to energetic, ionizing photons that the Universe becomes transparent. This combination of effects already means Hubble can never see the first stars.

The first stars in the Universe will be surrounded by neutral atoms of (mostly) hydrogen gas, which absorbs the starlight. The hydrogen makes the Universe opaque to visible, ultraviolet, and a large fraction of infrared light, but radio-light can transmit unimpeded. (Nicole Rager Fuller / National Science Foundation)

If we want to see this light directly, we have no choice but to look at very long wavelengths with an ultra-sensitive space telescope: exactly what James Webb is designed to be! But with James Webb still on the ground, undergoing its final series of tests and being readied for launch, it will be at least another 18 months before it can search for these early stars and galaxies. Due to a clever effect, however, the neutral atoms that ultraviolet, optical, and infrared telescopes struggle to see through actually provide a signal we can detect: a very particular emission line in the radio portion of the spectrum, at a wavelength of 21 centimeters. The physics of how this works is spectacular.

A young, star-forming region found within our own Milky Way. Note how the material around the stars gets ionized, and over time becomes transparent to all forms of light. Until that happens, however, the surrounding gas absorbs the radiation, emitting light of its own of a variety of wavelengths. (NASA, ESA, and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration; Acknowledgment: R. O’Connell (University of Virginia) and the WFC3 Scientific Oversight Committee)

When you form stars, they impart energy to all the atoms, molecules, ions, and other particles that surround them. In the earliest stages of the Universe, 92% of the atoms that exist (by number) are hydrogen atoms: a single proton with a single electron orbiting it. The starlight that first gets emitted will ionize a portion of the atoms, but will also cause a generic absorption effect, where the electrons within the atoms get kicked up to a higher-energy state. As electrons reattach to protons and/or fall down into the ground state, which they do spontaneously, there’s a 50/50 chance that they’ll wind up with their spins either aligned or anti-aligned with the spin of the central proton. If they’re anti-aligned, they’ll stay there forever. But if they’re aligned, they’ll eventually flip, emitting a very specific quantum of energy with a wavelength of 21 centimeters.

The 21-centimeter hydrogen line comes about when a hydrogen atom containing a proton/electron combination with aligned spins (top) flips to have anti-aligned spins (bottom), emitting one particular photon of a very characteristic wavelength. (Tiltec of Wikimedia Commons)

This photon emission feature should travel through the Universe unperturbed, arriving at our eyes after being redshifted and stretched to even longer wavelengths. For the first time, an all-sky average of the radio emissions has been taken to unprecedented sensitivity, and this ultra-distant signature has remarkably shown up! The data collected shows that this neutral hydrogen gas emits this 21-cm line over a very specific duration: from a redshift of 15 to 20, or an age of the Universe between 180 and 260 million years. For the first time, we have actual data that indicates when the earliest stars formed in great-enough abundance to begin affecting the neutral gas in the Universe.
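
To connect those redshifts to the numbers in the headline, here's a short sketch assuming the astropy package and its built-in Planck 2015 cosmology; the exact ages depend slightly on which cosmological parameters you adopt:

```python
# Hedged sketch, assuming the astropy package: the rest-frame 21-cm line
# (1420.4 MHz) observed from redshifts z ~ 15-20 arrives at tens of MHz, and
# those redshifts correspond to a cosmic age of very roughly 180-270 Myr.
from astropy.cosmology import Planck15

NU_REST_MHZ = 1420.4   # rest-frame frequency of the hydrogen spin-flip line

for z in (15, 17, 20):
    nu_obs = NU_REST_MHZ / (1 + z)          # observed (redshifted) frequency
    age    = Planck15.age(z).to('Myr')      # age of the Universe at that redshift
    print(f"z = {z}: observed at {nu_obs:5.1f} MHz, Universe age ~ {age:.0f}")
```

Those few tens of MHz are exactly the sort of low radio frequencies a ground-based, all-sky experiment can average over, and the corresponding ages roughly bracket the 180-to-260-million-year window quoted above.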

The enormous ‘dip’ that you see in the graph here, a direct result of the latest study from Bowman et al. (2018), shows the unmistakable signal of 21-cm emission from when the Universe was between 180 and 260 million years in age. This corresponds, we believe, to the turn-on of the first wave of stars and galaxies in the Universe. (J.D. Bowman et al., Nature, 555, L67 (2018))

The data also indicates a temperature for the gas, which turns out to appear much cooler than our standard models predict. This could be explained by a number of pathways, including:

  • radiation from stars and stellar remnants,
  • a hotter-than-expected cosmic background of radiation,
  • or an additional cooling due to interactions between normal matter and dark matter.

The first possibility is well-understood and is unlikely to account for this effect, while the second has been measured to incredible precision and is easily ruled out. But the third explanation could be the long-sought-after clue as to the particle properties that dark matter possesses.

The formation of cosmic structure, on both large scales and small scales, is highly dependent on how dark matter and normal matter interact. With the observed, cool temperatures of the neutral gas that emits the 21-cm line, this may be a clue that dark matter and normal matter are interacting to cool the gas in a novel, unexpected way. (Illustris Collaboration / Illustris Simulation)

But, as with all things, it’s important to exercise caution. Cooling is expected to proceed differently within a cloud of gas when it’s solely comprised of hydrogen versus when it contains heavy elements, but all the clouds we’ve observed previously contain these heavy elements; they’ve formed previous generations of stars. Furthermore, we have extremely cold places within our galaxy, such as the Boomerang Nebula, which is at a mere ~1 K, colder than even the deepest voids in intergalactic space. Given that the first stars were likely very different than the ones we have today, it’s reasonable to think that we may not understand how the radiation from stars and stellar remnants in the early Universe works as well as we think we do.

An artist’s impression of the environment in the early Universe after the first few trillion stars have formed, lived and died. The existence and life cycle of stars is the primary process that enriches the Universe beyond merely hydrogen and helium, while the radiation emitted by the first stars makes it transparent to visible light. (NASA/ESA/ESO/Wolfram Freudling et al. (STECF))

Still, this is a tremendous advance and our first window into the stars that existed in the Universe beyond the limits of Hubble. It’s an incredibly suggestive and hopeful find for dark matter hunters, indicating that there may be a measurable interaction between dark matter and normal matter after all. And it gives the James Webb Space Telescope something to look for: populations of early stars and galaxies turning on in a specific redshift window.

With the detection of this 21-cm signal originating from when the Universe was between 180 and 260 million years old, we’ve now pushed back the timeline of the first stars and galaxies to well beyond the reach of our direct detection limits. Still, this find helps us better understand how the Universe came to be the way it is today. (Nicole Rager Fuller / National Science Foundation)

While astronomers are normally cautious, this find has sparked a flurry of speculation. Avi Loeb, quoted in the Associated Press, said, “If confirmed, this discovery deserves two Nobel Prizes,” for discovering the first evidence of these ultra-distant stars and for the connection to dark matter. As Katie Mack wrote in Scientific American:

It is the earliest indication of any kind of structure in the Universe, and a direct window into the processes that led all that unassuming hydrogen gas to condense, under gravity, into stars, and galaxies, and, eventually, life.

And most importantly, this is a glimpse into what it’s like to push back the frontiers of science. The first evidence for anything new is almost always indirect, weak, and difficult to interpret. But these unexplained signals have the power to explain what we don’t yet fully understand: how the Universe came to be the way it is today. For the first time, the Universe has given us an observational clue of where and when and what to look for. It’s up to us to take the next step.


Earliest Evidence For Stars Smashes Hubble’s Record And Points To Dark Matter was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

The enormous, 25-meter Giant Magellan Telescope (GMT) will not only usher in a new era in ground-based astronomy, but will take the first cutting-edge images of the Universe where stars are seen exactly as they actually are: without diffraction spikes. (Giant Magellan Telescope — GMTO Corporation) One of astronomy’s most iconic sights is an artifact of faulty optics. Here’s how a new, great design will overcome it.

When you look out at the greatest images of the Universe, there are a few sights that light up our memories and fire our imaginations. We can see the planets in our own Solar System to incredible detail, galaxies lying millions or even billions of light years away, nebulae where new stars are being birthed, and stellar remnants that give an eerie, fatalistic look into our cosmic past and our own Solar System’s future. But the most common sight of all are stars, lying everywhere and in any direction we care to look, both in our own Milky Way and beyond. From ground-based telescopes to Hubble, stars almost always come with spikes on them: an image artifact due to how telescopes are constructed. As we prepare for the next generation of telescopes, however, one of them — the 25-meter Giant Magellan Telescope — stands out: it’s the only one that won’t have those artificial spikes.

Hickson compact group 31, as imaged by Hubble, is a spectacular “constellation”, but almost as prominent are the few stars from our own galaxy visible, noted by the diffraction spikes. In only one case, that of the GMT, will those spikes be absent. (NASA, ESA, S. Gallagher (The University of Western Ontario), and J. English (University of Manitoba))

There are a lot of ways to make a telescope; in principle, all you need to do is collect-and-focus light from the Universe onto a single plane. Early telescopes were built on the concept of a refractor, where the incoming light passes through a large lens, focusing it down to a single point, where it can then be projected onto an eye, a photographic plate, or (in more modern fashion) a digital imaging system. But refractors are limited, fundamentally, by how large you can physically build a lens to the necessary quality. These telescopes barely top 1 meter in diameter, at maximum. Since the quality of what you can see is determined by the diameter of your aperture, both in terms of resolution and light-gathering power, refractors fell out of fashion over 100 years ago.

Reflecting telescopes surpassed refractors long ago, as the size you can build a mirror greatly surpasses the size to which you can build a similar-quality lens. (The Observatories of the Carnegie Institution for Science Collection at the Huntington Library, San Marino, Calif.)

But a different design — the reflecting telescope — can be far more powerful. With a highly reflective surface, a properly shaped mirror can focus incoming light onto a single point, and mirrors can be created, cast and polished to much greater sizes than lenses can. The largest single-mirror reflectors can be up to a whopping 8-meters in diameter, while segmented mirror designs can go even larger. At present, the segmented Gran Telescopio Canarias, with a 10.4 meter diameter, is the largest in the world, but two (and potentially three) telescopes will break that record in the coming decade: the 25-meter Giant Magellan Telescope (GMT) and the 39-meter Extremely Large Telescope (ELT).

A comparison of the mirror sizes of various existing and proposed telescopes. When GMT comes online, it will be the world’s largest, and will be the first 25 meter+ class optical telescope in history, later to be surpassed by the ELT. But all of these telescopes have mirrors, and each of the ones shown in color (foreground) are reflecting telescopes. (Wikimedia Commons user Cmglee)

Both of these are reflecting telescopes with many segments, poised to image the Universe like never before. The ELT is larger, is made of more segments, is more expensive, and should be completed a few years after GMT, while the GMT is smaller, made of fewer (but larger) segments, is less expensive, and should reach all of its major milestones first. These include:

  • excavations that began in February of 2018,
  • concrete pouring in 2019,
  • a completed enclosure against weather by 2021,
  • the delivery of the telescope by 2022,
  • the installation of the first primary mirrors by early 2023,
  • first light by the end of 2023,
  • first science in 2024,
  • and a scheduled completion date by the end of 2025.

That’s soon! But even with that ambitious schedule, there’s one huge optical advantage that GMT has, not only over the ELT, but over all reflectors: it won’t have diffraction spikes on its stars.

The star powering the Bubble Nebula, estimated at approximately 40 times the mass of the Sun. Note how the diffraction spikes, due to the telescope itself, interfere with nearby detailed observations of fainter structures. (NASA, ESA, Hubble Heritage Team)

These spikes that you’re used to seeing, from observatories like Hubble, come about not from the primary mirror itself, but from the fact that there needs to be another set of reflections to focus the light onto its final destination. To do that, you need some way to place-and-support a secondary mirror that refocuses the light. There’s simply no way to avoid having supports to hold that secondary mirror, and those supports will get in the way of the light. The number and the arrangement of the supports for the secondary mirror determine the number of spikes — four for Hubble, six for James Webb — you’ll see on all of your images.
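
The effect is easy to reproduce numerically. To a good approximation, a telescope's diffraction pattern is the squared magnitude of the Fourier transform of its aperture, so adding thin struts across a circular opening immediately produces spikes. The grid size, aperture size, and strut width below are arbitrary choices made purely for illustration:

```python
# Hedged numerical sketch: the far-field diffraction pattern is approximately
# |FFT(aperture)|^2. Thin "spider" struts across a circular aperture are what
# produce the familiar spikes. All sizes here are arbitrary illustration choices.
import numpy as np

N = 1024
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y)

aperture = (r < 300).astype(float)          # open circular primary
aperture[r < 60] = 0.0                      # central obstruction (secondary mirror)

spider = aperture.copy()
spider[np.abs(x) < 2] = 0.0                 # vertical strut
spider[np.abs(y) < 2] = 0.0                 # horizontal strut

def psf(ap):
    """Far-field diffraction pattern: |FFT(aperture)|^2, peak-normalized."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(ap)))**2
    return p / p.max()

psf_clean, psf_spiked = psf(aperture), psf(spider)

# Compare the light well away from the core, along a strut's spike direction:
row = N // 2
print("clean aperture, off-core flux :", psf_clean[row, row + 100:row + 110].mean())
print("with struts,    off-core flux :", psf_spiked[row, row + 100:row + 110].mean())
```

The struts throw far more light into the spike directions than a clean, unobstructed aperture would put there.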

Comparison of diffraction spikes for various strut arrangements of a reflecting telescope. The inner circle represents the secondary mirror, while the outer circle represents the primary, with the “spike” pattern shown underneath. (Wikimedia Commons / Cmglee)

All ground-based reflectors have these diffraction spikes, and so will the ELT. The gaps between the 798 mirrors, despite making up just 1% of the surface area, contribute to the magnitude of the spikes. Whenever you image something faint that unluckily happens to be near something close and bright — like a star — you have these diffraction spikes to contend with. Even by using shear imaging, which takes two almost-identical images that are only slightly mis-positioned and subtracts them, you can’t get rid of those spikes entirely.

The Extremely Large Telescope (ELT), with a main mirror 39 metres in diameter, will be the world’s biggest eye on the sky when it becomes operational early in the next decade. This is a detailed preliminary design, showcasing the anatomy of the entire observatory. (ESO)

But with seven enormous, 8-meter diameter mirrors arranged with one central core and six symmetrically-positioned circles surrounding it, the GMT is brilliantly designed to eliminate these diffraction spikes. These six outer mirrors are arranged to leave six very small, narrow gaps that extend from the edge of the collecting area all the way into the central mirror. There are multiple “spider arms” that hold the secondary mirror in place, but each arm is precisely positioned to run exactly in between those mirror gaps. Because the arms don’t block any of the light that’s used by the outer mirrors, there are no spikes at all.

The 25-meter Giant Magellan Telescope is currently under construction, and will be the greatest new ground-based observatory on Earth. The spider arms, seen holding the secondary mirror in place, are specially designed so that their line-of-sight falls directly between the narrow gaps in the GMT mirrors. (Giant Magellan Telescope / GMTO Corporation)

Instead, owing to this unique design — including the gaps between the different mirrors and the spider arms crossing the central primary mirror — there’s a new set of artifacts: a set of circular beads that appear along ring-like paths (known as Airy rings) surrounding each star. These beads will appear as empty spots in the image, and are inevitable based on this design whenever you look. However, these beads are low-amplitude and are only instantaneous; as the sky and the telescope rotate over the course of a night, these beads will be filled in as a long-exposure image is accumulated. After about 15 minutes, a duration that practically every image should attain, those beads will be completely filled in.

The core of the globular cluster Omega Centauri is one of the most crowded regions of old stars. GMT will be able to resolve more of them than ever before, all without diffraction spikes.(NASA/ESA and The Hubble Heritage Team (STScI/AURA))

The net result is that we’ll have our first world-class telescope that will be able to see stars exactly as they are: with no diffraction spikes around them! There is a slight trade-off in the design to achieve this goal, the biggest of which is that you lose a little bit of light-gathering power. Whereas the end-to-end diameter of the GMT, as designed, is 25.4 meters, you “only” have a collecting area that corresponds to a 22.5 meter diameter. The slight loss of resolution and light-gathering power, however, is more than made up for when you consider what this telescope can do that sets it apart from all others.
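
A quick bit of arithmetic shows where that effective diameter comes from, assuming seven circular segments of roughly 8.4 meters each and ignoring edge effects and the off-axis segments' exact shapes:

```python
# Quick arithmetic check of the collecting-area trade-off, assuming seven
# circular segments of ~8.4 m each (an approximation that ignores edge losses
# and the off-axis segments' slight asphericity).
import math

n_segments, segment_diameter = 7, 8.4            # meters
area = n_segments * math.pi * (segment_diameter / 2) ** 2
equivalent_diameter = 2 * math.sqrt(area / math.pi)

print(f"total collecting area     : {area:.0f} m^2")
print(f"equivalent filled aperture: {equivalent_diameter:.1f} m across")
```

That lands right around the quoted figure: a little over 22 meters of equivalent collecting area, despite the 25.4-meter end-to-end footprint.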

A selection of some of the most distant galaxies in the observable Universe, from the Hubble Ultra Deep Field. GMT will be capable of imaging all of these galaxies with ten times the resolution of Hubble. (NASA, ESA, and N. Pirzkal (European Space Agency/STScI))

It will achieve resolutions of between 6–10 milli-arc-seconds, depending on what wavelength you look at: 10 times as good as what Hubble can see, at speeds 100 times as fast. Distant galaxies will be imaged out to distances of ten billion light years, where we can measure their rotation curves, look for signatures of mergers, measure galactic outflows, look for star formation regions and ionization signatures. We can directly image Earth-like exoplanets, including Proxima b, out to somewhere between 15–30 light years distant. Jupiter-like planets will be visible out to more like 300 light years. We’ll also measure the intergalactic medium and the elemental abundances of matter everywhere we look. We’ll find the earliest supermassive black holes.
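
Those resolution numbers follow directly from the diffraction limit, roughly 1.22 times the wavelength divided by the aperture diameter. Here's a hedged estimate at an illustrative near-infrared wavelength of 1 micron:

```python
# Hedged estimate of the resolution claim: diffraction limit ~ 1.22 * lambda / D,
# evaluated at an illustrative wavelength of 1 micron for a ~25 m aperture
# versus Hubble's 2.4 m mirror.
import math

RAD_TO_MAS = 180 / math.pi * 3600 * 1000   # radians -> milliarcseconds

def diffraction_limit_mas(diameter_m, wavelength_m=1e-6):
    return 1.22 * wavelength_m / diameter_m * RAD_TO_MAS

print(f"GMT    (~25 m): {diffraction_limit_mas(25.0):.0f} mas")
print(f"Hubble (2.4 m): {diffraction_limit_mas(2.4):.0f} mas")
```

At shorter optical wavelengths the same formula gives the ~6 milliarcsecond end of the quoted range, and the factor-of-ten advantage over Hubble at any given wavelength comes straight from the ratio of the mirror diameters.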

The more distant a quasar or supermassive black hole is, the more powerful a telescope (and camera) you need to find it. GMT will have the advantage of being able to do spectroscopy on these ultra-distant objects that it finds. (NASA and J. Bahcall (IAS) (L); NASA, A. Martel (JHU), H. Ford (JHU), M. Clampin (STScI), G. Hartig (STScI), G. Illingworth (UCO/Lick Observatory), the ACS Science Team and ESA (R))

And we’ll make direct, spectroscopic measurements of individual stars in crowded clusters and environments, probe the substructure of nearby galaxies, and observe close-in binary, trinary and multi-star systems. This even includes stars in the galactic center, located some 25,000 light years away. All, of course, without diffraction spikes.

This image illustrates the improvement in resolution in the central 0.5” of the Galaxy from seeing-limited to Keck + Adaptive Optics to future Extremely Large Telescopes like GMT with adaptive optics. Only with GMT will the stars appear without diffraction spikes. (A. Ghez / UCLA Galactic Center Group — W.M. Keck Observatory Laser Team)

Compared to what we can presently see with the world’s greatest observatories, the next generation of ground-based telescopes will open up a slew of new frontiers that will peel back the veil of mystery that enshrouds the unseen Universe. In addition to planets, stars, gas, plasma, black holes, galaxies, and nebulae, we’ll be looking for objects and phenomena that we’ve never seen before. Until we look, we have no way of knowing exactly what wonders the Universe has waiting for us. Owing to the clever and innovative design of the Giant Magellan Telescope, however, the objects we’ve missed due to diffraction spikes of bright, nearby stars will suddenly be revealed. There’s a whole new Universe to be observed, and this one, unique telescope will reveal what no one else can see.


World’s Largest Telescope To Finally See Stars Without Artificial Spikes was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.

Bubble chamber tracks from Fermilab, revealing the charge, mass, energy and momentum of the particles created. If a newly created particle is not stable to arbitrary lifetimes, it will have an inherent uncertainty to its mass. (FNAL/NSF/DOE) In the quantum world of the unstable, even identical particles don’t have identical masses.

In the microscopic world of the quantum particle, there are certain rules that are wholly unfamiliar to us on a macroscopic scale. If you measure a particle’s position and ask “where are you,” then the more accurately you learn the answer, the less well you can fundamentally know its motion, or its momentum. Other properties, however, like electric charge, remain perfectly well-known at all times, regardless of what else you measure. For purely stable particles, whether elementary or composite (including electrons and protons), mass is one of those perfectly-known properties. If you know the mass of one electron under one set of conditions, you know it for all electrons everywhere in the Universe. But this isn’t the case for all the particles we know of. The shorter-lived an unstable particle is, the more uncertain its mass is. This isn’t just a hypothesized effect, but rather one that’s been experimentally observed and verified for decades.

The quantum nature of the Universe tells us that certain quantities have an inherent uncertainty built into them, and that pairs of quantities have their uncertainties related to one another.(NASA/CXC/M.Weiss)

From a theoretical standpoint, quantum uncertainty ought to play a role wherever two physical properties that are related in a certain way exist. That particular relationship is one that we call “non-commutative,” and it’s weird to think about. If I measure your position (where you are), for example, and then I measure your momentum (a measure of your motion), you’d expect that I’d get the same results as if I measured first your momentum and then your position. In classical physics, all variables commute: it doesn’t matter whether you measure position and then momentum, or momentum and then position. You get the same answers either way. But in quantum physics, there’s an inherent uncertainty that arises, and measuring position and then momentum is fundamentally different from measuring momentum and then position.

A visualization of QCD illustrates how particle/antiparticle pairs pop out of the quantum vacuum for very small amounts of time as a consequence of Heisenberg uncertainty. If you have a large uncertainty in energy (ΔE), the lifetime (Δt) of the particle(s) created must be very short. (Derek B. Leinweber)

It’s as though I told you that “3 + 4” was somehow fundamentally different from “4 + 3”. In the quantum Universe, this is a fundamental and unavoidable property known as Heisenberg uncertainty, and it tells you that for quantities like position (Δx) and momentum (Δp), there’s an inherent uncertainty shared between them, and hence a limit to how well each can be known. This isn’t restricted to position and momentum, either. There are plenty of physical quantities out there (often for esoteric reasons in quantum physics) that obey that same uncertainty relation. It happens for every pair of conjugate variables we have, just as it does for position and momentum. These pairs include:

  • Energy (ΔE) and time (Δt),
  • Electric potential, or voltage (Δφ) and free electric charge (Δq),
  • Angular momentum (ΔL) and orientation, or angular position (Δθ),

along with many others. In each case, the uncertainty relation tells you that the product of the two uncertainties has to be greater than or equal to some finite value: ℏ/2, as written out explicitly below.
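For concreteness, those relations take the same standard form in every case (a conventional presentation, not taken from the article; the angular-momentum/angle version should be read schematically, since bounds for periodic variables require extra care):

$$\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \;\ge\; \frac{\hbar}{2}, \qquad \Delta \phi\,\Delta q \;\ge\; \frac{\hbar}{2}, \qquad \Delta L\,\Delta \theta \;\ge\; \frac{\hbar}{2}$$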

An illustration of the inherent uncertainty relation between position and momentum at the quantum level. (E. Siegel / Wikimedia Commons user Maschen)

While position and momentum are the usual examples we talk about, in this case, it’s the energy-and-time relation that leads to the bizarre and confusing behavior. If a particle is completely stable, its lifetime is effectively infinite, so the time uncertainty (Δt) can be enormous and the corresponding energy uncertainty can be vanishingly small. But if a particle is unstable, then there’s an uncertainty in how long it survives that’s roughly equal to its mean lifetime: that’s Δt. That means there’s an inherent uncertainty to its energy as well; our uncertainty formula tells us that the product of the energy uncertainty (ΔE) and the time uncertainty (Δt) must be greater than or equal to ℏ/2.

And the shorter your particle’s lifetime is, the larger your energy uncertainty needs to be.
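To see the scale of the effect, here’s a quick back-of-the-envelope sketch in Python (my own illustration; the constant and function names are invented for this example). It simply turns the ΔE·Δt ≥ ℏ/2 bound into a minimum energy uncertainty for a given lifetime.

```python
# Energy-time uncertainty: DeltaE * Deltat >= hbar/2, rearranged to give the
# smallest allowed energy (and hence mass) uncertainty for a short-lived particle.
HBAR_GEV_S = 6.582e-25          # reduced Planck constant in GeV * seconds

def min_energy_uncertainty(lifetime_s):
    """Smallest DeltaE (in GeV) allowed for a particle living ~lifetime_s."""
    return HBAR_GEV_S / (2.0 * lifetime_s)

# a lifetime of ~10^-24 s, the scale quoted in the text for heavy unstable particles
print(min_energy_uncertainty(1e-24))    # ~0.3 GeV
# a ten-times-shorter-lived particle must be ten times "fuzzier" in energy
print(min_energy_uncertainty(1e-25))    # ~3 GeV
```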

The first robust, 5-sigma detection of the Higgs boson was announced a few years ago by both the CMS and ATLAS collaborations. The Higgs boson doesn’t show up as a single ‘spike’ in the data, however, but rather as a spread-out bump, due to the inherent uncertainty in its mass. (The CMS Collaboration, “Observation of the diphoton decay of the Higgs boson and measurement of its properties”, (2014))

But an uncertainty in energy, for a particle, means there must be an uncertainty inherent in its mass, too, since E = mc². If it has a bigger energy uncertainty, it has a bigger mass uncertainty, and the shorter-lived a particle is, the bigger its mass uncertainty has to be. A lot of people noticed, when the Higgs boson was first detected, that it showed up as a “bump” in the data (above). If the Higgs boson instead always had the same exact, single mass, we’d reconstruct it as an infinitely narrow “spike,” where the only uncertainty came from our own measurements.
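Chaining those two relations together gives the key inequality (a schematic bound written down here for illustration; the detailed line shape is described by a Breit-Wigner distribution rather than a sharp cutoff):

$$\Delta m \;=\; \frac{\Delta E}{c^{2}} \;\gtrsim\; \frac{\hbar}{2\,c^{2}\,\Delta t}$$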

The inherent width, i.e. half the width of the peak in the above image at half of its maximum height, is measured to be 2.5 GeV here: an inherent uncertainty of about +/- 3% of the total mass. (ATLAS Collaboration (Schieck, J. for the collaboration) JINST 7 (2012) C01012)

Now, it’s true that there are measurement/detector uncertainties, and these do play a role. But many particles — like the Higgs boson, the Z boson, the W+ and W- bosons, and the top quark — are incredibly short-lived, with lifetimes on the order of 10^-24 seconds! (Or in the case of the top quark, even less than that.) Every time you create a Higgs particle, it could be (in terms of energy) 124.5 GeV, 125.0 GeV, 125.5 GeV, or 126.0 GeV, or anywhere in between. When you create a Z boson, it could range anywhere from about 88 GeV to 94 GeV. And, most remarkably, when you create a top quark, it could have a rest mass of anywhere from about 165 GeV all the way up to over 180 GeV: the largest range of any known elementary particle.
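Going the other way, you can turn a particle’s width into a rough lifetime via τ ≈ ℏ/Γ. The sketch below is my own, using approximate width values from the literature rather than numbers from this article, and it ignores the detector smearing that broadens the measured peaks further.

```python
# Rough conversion from intrinsic decay width Gamma to mean lifetime, tau ~ hbar / Gamma.
HBAR_GEV_S = 6.582e-25          # reduced Planck constant in GeV * seconds

approx_widths_gev = {           # approximate intrinsic widths; detector effects are extra
    "Z boson":   2.5,
    "W boson":   2.1,
    "top quark": 1.4,
}

for name, gamma in approx_widths_gev.items():
    tau = HBAR_GEV_S / gamma
    print(f"{name}: Gamma ~ {gamma} GeV  ->  lifetime ~ {tau:.1e} s")
# all come out at a few times 10^-25 s, consistent with the ~10^-24 s scale quoted above
```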

The reconstructed mass distributions of the top quarks in the CDF detector at Fermilab, prior to the turn-on of the LHC, showed a large uncertainty in the top quark’s mass. While most of this was due to detector uncertainties, there is an inherent uncertainty to the mass itself that shows up as part of this broad peak. (S. Shiraishi, J. Adelman, E. Brubaker, Y.K. Kim for the CDF collaboration)

This means that, literally, when you create one of these particles and measure how much energy it has, it’s fundamentally and inherently different from the next particle of exactly the same type you’ll create. This is a non-intuitive property of quantum particles that only comes up when they’re unstable. Any electron that you create is indistinguishable from any other electron in the Universe, but each top quark that exists will have its own unique set of decay particles and energies coming from it, with an uncertainty inherent to all of their properties, including their total mass/energy.

The masses of the fundamental particles can be quantified, including the neutrinos, but only the particles that are truly stable can ever have an exact mass assigned to them. Otherwise, it’s only an ‘average’ mass that can be stated with any certainty. (Hitoshi Murayama of http://hitoshi.berkeley.edu/)

It’s one of the most remarkable and counterintuitive results of the quantum Universe: every unstable particle that you make has an inherent uncertainty to the most seemingly fundamental property of all, its mass. You can know what the average mass of a particle of any particular type is, and you can measure its width, which is directly related to its mean lifetime through the Heisenberg uncertainty principle. But every time you create one new particle, there’s no way to know what its actual mass will be; all you can do is calculate the probabilities of it having a variety of masses. To know for sure, you have to measure what comes out of its decay and reconstruct what actually existed. Quantum uncertainty, first seen for position and momentum, can now be convincingly stated to extend all the way to the rest energy of a fundamental particle. In a quantum Universe, even mass itself isn’t set in stone.
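As a toy illustration of “calculating the probabilities of a variety of masses” (my own sketch, with roughly Z-boson-like numbers chosen for the example), you can sample rest energies from a Breit-Wigner line shape, whose width is set by the particle’s lifetime through Γ ≈ ℏ/τ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_masses(mean_mass_gev, width_gev, n):
    """Draw n masses from a non-relativistic Breit-Wigner line shape."""
    # a Breit-Wigner is a Cauchy distribution with half-width Gamma/2
    return mean_mass_gev + (width_gev / 2.0) * rng.standard_cauchy(n)

# a Z-like particle: mean mass ~91 GeV, width ~2.5 GeV (approximate values)
masses = sample_masses(91.2, 2.5, 100_000)
inside = np.mean(np.abs(masses - 91.2) < 1.25)
print(f"fraction within +/- Gamma/2 of the mean: {inside:.2f}")   # ~0.50 by construction
```

Each call to sample_masses mimics creating a fresh batch of particles: the average is well-defined, but any individual particle’s mass can only be stated probabilistically.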

Starts With A Bang is now on Forbes, and republished on Medium thanks to our Patreon supporters. Ethan has authored two books, Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

In A Quantum Universe, Even Mass Is Uncertain was originally published in Starts With A Bang! on Medium, where people are continuing the conversation by highlighting and responding to this story.
