Cartoon of the structure of dark matter objects. From Buckley and Peter

This is an explainer for my recent paper with David Hogg and Adrian Price-Whelan. This is a very different kind of paper for me, as evidenced by the fact that it is coming out on arXiv’s astro-ph (astrophysics) list and not even cross-listed to hep-ph (high energy phenomenology). In the end, the goal of the research that produced this paper is to learn about dark matter, but this paper by itself barely mentions the subject. There is a connection though.

One of my research interests of late (as described in detail in my paper with Annika Peter) is trying to learn about the particle physics of dark matter by studying gravitationally bound systems of dark matter in the Universe, and seeing if their properties are consistent with a “vanilla” type of dark matter without significant non-gravitational interactions, or if there is some evidence of new dark matter physics within those objects.

The dark matter systems I think have the most promise are the smallest dark matter structures: collections of dark matter much smaller than the Milky Way. We know of some of these objects (dwarf galaxies), but what about even smaller ones?

The reason we don’t know much about such objects is that, as you decrease the amount of dark matter in a galaxy, you also decrease the amount of normal matter. Eventually, there isn’t enough gas around to make stars, or at least, not that many stars. Galaxies an order of magnitude less massive than the smallest known dwarf galaxies might only have ${\cal O}(10-100)$ stars; smaller ones might have none.

That makes the smallest star-containing galaxies very dim, and hard to find if they’re in “the field” (far from the Milky Way). But if they are close enough to the Earth for us to see, then the gravitational tides of the Galaxy will start pulling the dwarf galaxy apart, so what we would see is less a gravitationally-bound object, and more a shredded “stream” of stars.

Indeed, we suspect that our Milky Way was built from such ripped-up dwarf galaxies; finding the evidence for the “tidal debris” of the galaxies consumed by the Milky Way could tell us about the structure of dark matter over cosmological history. These streams are not theoretical objects: we know of many streams of stars that come from either dark-matter dominated dwarf galaxies, or tidally-disrupted balls of stars called globular clusters (which are not believed to contain significant amounts of dark matter). And we’re finding even more, thanks to the Gaia mission (about which more later). I’d like to know what these objects can tell us about the physics of dark matter. One thing I’d like to know is which streams came from dwarf galaxies, and what was the mass of the dwarf galaxy before it was tidally disrupted.

But again, such objects are torn apart, no longer gravitationally bound. And that presents a big problem for measuring the mass of a stream. All our methods of directly measuring the mass of astronomical objects rely on those objects being self-gravitating: the stars (or galaxies, or planets or whatever) need to be orbiting in their own static (or nearly static) gravitational potential for a long time. Streams are orbiting in the gravity of the Milky Way, and are changing over time. None of the typical techniques will work to measure their mass. The best you can do is compare the properties of the stars in a stream with known objects, and make an estimate that way. I’d like something more direct.

The usual techniques to measure mass use conserved quantities of the system, things like energy or angular momentum. The problem with streams is that they are exchanging energy and momentum with the larger Galaxy, so these are not useful conserved quantities. So I need something else.

There is something that is conserved during the merger of a dwarf galaxy and the Milky Way (indeed, conserved in many dynamical systems): the phase-space density. Phase space is the set of coordinates that describe where a point (or a star) is and how it's moving. In our Universe, there are six numbers you need: the position (3 numbers) and the velocity (another 3 numbers). If you specify these 6 numbers, you've given the phase-space location. If you have a collection of points (like the stars in a cluster), you have a bunch of phase-space coordinates. You can then describe the phase-space density: the number of points per length cubed per speed cubed. There will be positions and velocities where there are lots of stars; these regions have high phase-space density. And there will be places where there are few stars; these have low phase-space density.
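
To make the idea concrete, here is a minimal sketch (in Python, with made-up toy data) of a k-nearest-neighbor phase-space density estimate. This is not the method actually used in the paper, and it glosses over the question of how to weight positions against velocities; it just illustrates what "number of stars per unit phase-space volume" means in practice.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def phase_space_density(points, k=16):
    """Crude k-nearest-neighbor density estimate in d dimensions.

    points: (N, d) array of phase-space coordinates, e.g. d = 6 with
    positions and velocities rescaled to comparable units.
    Returns the estimated number of points per unit phase-space volume
    at each point.
    """
    d = points.shape[1]
    tree = cKDTree(points)
    # distance to the k-th nearest neighbor (k+1 because each point's
    # nearest "neighbor" is itself)
    r_k = tree.query(points, k=k + 1)[0][:, -1]
    # volume of the unit ball in d dimensions
    v_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (v_d * r_k ** d)

# toy example: a 6D Gaussian "cluster" of 2000 stars
rng = np.random.default_rng(0)
stars = rng.normal(size=(2000, 6))
rho = phase_space_density(stars)
print(rho.max() / rho.min())  # central stars are far denser than the outskirts
```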

Now here’s the cool bit: under most dynamic evolution, phase-space density is conserved. This is called Liouville’s Theorem. That is, if there is a dense region of phase space (i.e., a region with lots of stars), then after being shredded by the Galaxy, there will still be a region with lots of stars. Another region of phase-space (another set of locations and velocities) perhaps, but still: there’s something conserved.

Simulation of a stream forming from a Globular Cluster in the Milky Way (galaxy center marked with Gold star).

In particular, a cluster being disrupted by the Galaxy will become extended in position-space (it’s turning into a stream, after all). But all the stars will be moving in the same direction (around the Galaxy on nearly the same orbit), so they become much more compact in velocity-space. Result: conservation of phase-space density.

So this suggests a way to measure the mass of a cluster of stars, even if it has been ripped apart by the Galaxy. Measure the 6D phase-space location of each star in the cluster or stream. Determine the density in phase space. Then compare that to the predicted phase-space density distribution of stars in a (simulated) cluster before tidal disruption, and figure out what mass (and other structural parameters, like the radius) of the cluster best matches the observed densities. Simple: a mass measurement for systems that aren't in equilibrium.

Of course, there are a few obstacles. The first is that you have to measure the phase-space locations of a bunch of stars. Which means you need to know the position and velocities of each star. That’s hard. I don’t know if you know this, but stars are kind of far away. That makes it hard to judge the distance to a star. Further, stars are moving at hundreds of km/s (relative to the Earth), but due to their immense distance, it isn’t easy to see that motion. So how do you figure out their velocities?

The solution is the Gaia satellite. Gaia is a European Space Agency mission whose goal is measuring the positions and velocities of the nearest billion stars. It does this by repeatedly measuring the angular location of the stars on the sky, and looking for the drift of the star over time. That gives the “proper motion” (motion across the plane of the sky), and also the parallax (the change in the apparent position due to the different location of the Gaia satellite as it orbits around the Sun every year). That allows you to get the 3D position and 3D velocities for stars. To get the required accuracy, the Gaia satellite is capable of measuring a star’s position on the sky to within a few microarcseconds, which is the equivalent of measuring the diameter of a human hair in San Francisco while standing in New York City.
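
For a sense of how the raw Gaia observables turn into phase-space coordinates, here is the standard back-of-envelope conversion from parallax and proper motion to distance and tangential velocity. The numbers below are invented for illustration, not real Gaia measurements, and naively inverting a noisy parallax is known to be biased; real analyses are more careful.

```python
# Rough conversion from Gaia-style observables to distance and
# tangential velocity.  Example numbers are invented for illustration.
parallax_mas = 0.5   # parallax in milliarcseconds
pm_mas_yr = 10.0     # total proper motion in mas/yr

distance_pc = 1000.0 / parallax_mas        # d [pc] = 1 / parallax [arcsec]
distance_kpc = distance_pc / 1000.0

# v_tan [km/s] = 4.74 * (proper motion [mas/yr]) * (distance [kpc])
v_tan_km_s = 4.74 * pm_mas_yr * distance_kpc

print(distance_pc, v_tan_km_s)  # 2000 pc and ~95 km/s
# The third velocity component (the radial velocity) has to come from
# spectroscopy, which Gaia only provides for its brighter stars.
```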

In reality, I’m oversimplifying: Gaia will get full 6D phase space information with decent errors only for the closest or brightest stars. In this first paper, I’ll work around that issue. Long term, I think it should be possible to combine data sets with other star surveys or use only the brightest tracer stars to measure phase-space positions of stars.

The next problem is measuring the phase-space density. This is non-trivial, and there isn't really a great way to do it. I'm using a code called EnBid, which is a very cool piece of code designed for estimating phase-space densities in $N$-body simulations of galaxies. It takes a list of 6D phase-space points and subdivides them recursively, giving a density estimate. It requires a bit of baby-sitting to find the right splitting parameters that give the right densities, but it works better than anything else I've found so far. But then there are two more serious problems, and trying to overcome them is what this whole paper is about. Basically, to extract good estimates of the densities, I need to deal with the fact that

  1. Gaia measurements aren’t perfect for streams of stars, and
  2. the best way to measure density requires knowing the orbit. But knowing the orbit requires the potential of the Galaxy, which we don’t know.

The first problem is the problem of measurement errors. Gaia, like all experiments, is not perfect: you can never measure anything perfectly, and so Gaia reports both the best estimates of position and velocity and an estimate of how uncertain those measurements are. Usually the error in velocity dominates, since velocity is measured by measuring positions over and over, so the errors accumulate.

M4 Globular Cluster, NASA/ESA Hubble

What this does is make the cluster of stars more diffuse. Mismeasurement takes things that are clumped close together in position and velocity, and smears them out. I've been describing errors as the equivalent of taking a box of stars and shaking it. If the errors were real changes in the stars' phase-space coordinates, the cluster would be "hotter" than it was before the errors were applied. That is, it injects entropy into the system.

On average, this makes the stars look like they have lower phase-space density than they really do. You can understand this via the analogy with entropy: it is statistically unlikely for the errors to move a star in towards the center of the phase-space overdensity that corresponds to a cluster; it is much more likely that a random error will move the star away from everyone else, because there are just more directions that move the star "out" rather than "in."
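
Here is a toy demonstration of that entropy injection (pure Python with invented numbers, not the analysis from the paper): smear a simulated 6D cluster with Gaussian "measurement errors" and watch the typical nearest-neighbor density proxy drop.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# toy 6D "cluster": tight in both position and velocity (arbitrary units)
stars = rng.normal(scale=1.0, size=(2000, 6))

def knn_density_proxy(points, k=16):
    # distance to the k-th nearest neighbor; larger distance = lower density
    d_k = cKDTree(points).query(points, k=k + 1)[0][:, -1]
    return 1.0 / d_k ** points.shape[1]

rho_true = knn_density_proxy(stars)

# "shake the box": add Gaussian errors of 30% of the cluster's intrinsic
# spread (an arbitrary illustrative choice)
smeared = stars + rng.normal(scale=0.3, size=stars.shape)
rho_smeared = knn_density_proxy(smeared)

# the ratio is below one: errors bias the inferred densities low
print(np.median(rho_smeared) / np.median(rho_true))
```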

This entropy injection and the resulting reduction of phase-space density is a huge problem if I want to use the distribution of phase-space density to estimate the mass. You can see this in simulation. In the paper I work with a specific collection of stars: the M4 cluster, the closest globular cluster to Earth. It's a ball of stars with a mass of around 100,000 $M_\odot$. It isn't a tidally disrupted object, which means I can worry only about the measurement error issue, without the additional complication that I'll address next. M4 is 2000 pc away (about 6,500 lightyears). This means that Gaia doesn't have great measurements of the distance and radial velocity (the speed towards or away from the Earth) for individual stars, so I have to work in 4D phase space. This is a minor complication for spherically symmetric objects, but I mention it for completeness.

Anyway, if I simulate an M4-like globular cluster without any errors, the first slide of the slideshow shows what the distribution of stars with respect to radius and speed would look like. You can see a sharp edge: for each radius, there's a maximum speed a star can be moving before it would be gravitationally unbound and escape the cluster. So if you wanted to measure the mass of the cluster directly, you'd use that maximum velocity as a function of radius to get the best possible measurement. But for me, that would be cheating, since I want a method that doesn't rely on the system being gravitationally bound (as I want to apply it to tidally disrupted systems eventually).
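
That edge is just the escape speed as a function of radius. The paper fits a King profile, but a Plummer sphere gives the simplest closed-form illustration; the mass below is M4-like, while the scale radius is an arbitrary choice for the sketch.

```python
import numpy as np

G = 4.301e-3   # Newton's constant in pc (km/s)^2 / Msun
M = 1.0e5      # cluster mass in solar masses (roughly M4-like)
a = 1.0        # Plummer scale radius in pc (illustrative, not a fit)

def v_esc(r_pc):
    """Escape speed of a Plummer sphere: v_esc = sqrt(2 G M / sqrt(r^2 + a^2))."""
    return np.sqrt(2.0 * G * M / np.sqrt(r_pc ** 2 + a ** 2))

for r in [0.5, 1.0, 2.0, 5.0]:
    # stars moving faster than this at radius r are unbound
    print(f"r = {r} pc: v_esc = {v_esc(r):.1f} km/s")
```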


Simulated stars in an M4-like globular cluster. Color and size of each star represent the relative phase-space density (larger points are lower density).


Simulated stars in an M4-like globular cluster including measurement errors and foreground stars. Color and size of each star represent the relative phase-space density (larger points are lower density).


Simulated stars in an M4-like globular cluster including measurement errors and foreground stars, after a cut on stars with large errors is applied. Color and size of each star represent the relative phase-space density (larger points are lower density).

In the next slide, I show what happens if you add in realistic Gaia measurement errors and also include stars that aren't gravitationally bound to the cluster, but just happen to be passing through (or in front of or behind) the cluster and so appear to be part of the system. You can see the "heating" of the cluster now: stars at a given radius appear to be moving much faster than they should be. Critically, for my purposes, these errors also greatly reduce the average phase-space density of the stars, destroying my measurement technique.

So the first thing to try is to just take the well-measured stars: those stars that Gaia believes have small errors (this is calculated by the Gaia mission by considering how long they looked at each star). The third slide shows the simulated cluster after this cut on the error is applied. You can see that I've mostly restored the sharp edge in the $r$-$v$ diagram, but there are still stars with far too low densities that are moving way too fast to be bound to the cluster. These turn out to be mostly the foreground stars. I can throw those out by placing a second cut on phase-space density, which implies there might be a cool way to determine cluster membership using phase-space density. Something to come back to in the future, perhaps.


Best-fit mass and King profile radius for simulated M4 globular cluster, after cutting stars with large velocity errors. Correct answer is shown with the gold star, best-fit is the black star.


Best-fit mass and King profile radius for simulated M4 globular cluster, after cutting stars with large velocity errors and correcting for entropy-injection. Correct answer is shown with the gold star, best-fit is the black star.


I just got back from a two-week workshop on dark matter hosted by the Aspen Center for Physics. Aspen Physics is a really special place to think: the workshops are limited to only a few planned meetings per week. You're supposed to just talk and work and think. So I took this trip as an opportunity to take a bit of a vacation from my existing projects, and try to think about interesting things that I wasn't working on, but maybe should be.

Is this dark matter? I legitimately have no idea. (Daylan et al)

The thing that caught my attention was the saga of the Goodenough-Hooperon. Ten years ago, Lisa Goodenough and Dan Hooper noticed that there were slightly more gamma rays coming from the Galactic Center (as measured by the Fermi Gamma-Ray Space Telescope) than one might expect from known astrophysical sources. Interestingly, these gamma rays had a spatial distribution around the center of the Galaxy that fell off with distance approximately like the square of a Navarro-Frenk-White (NFW) density profile. This NFW profile is what we generically expect dark matter to have in a galaxy, and if dark matter were annihilating with itself into gamma rays, the distribution of resulting photons should go like the square of the density (since two particles need to find each other to annihilate).

After many follow-up studies, it is now pretty clear that there is in fact an excess of gamma rays coming from the Galactic Center. This excess is consistent with the distribution one would expect from dark matter annihilation, and can be well fit by very simple dark matter models where dark matter has a mass of roughly 50 GeV and annihilates into some pair of Standard Model particles ($b$-$\bar{b}$ quarks are the canonical choice, but not the only one), which then shower into short-lived particles that decay into, among other things, gamma rays (as per the usual naming scheme for particles, this dark matter candidate is often dubbed the Hooperon or the Gooperon).

As Dan Hooper said at a talk once, there is $40\sigma$ evidence for the Galactic Center excess. To which I gather Tim Tait responded, "Dan, you don't know we are having this conversation at $40\sigma$." (This is absolutely true: there's always the chance someone slipped you some hallucinogens, and your brain has decided to invent a rather boring conversation out of whole cloth.)

The saga of the Hooperon hit a bit of a snag in 2015, when a paper came out demonstrating that the signal from the Galactic Center was consistent with coming from a number of point sources just below the Fermi detection threshold, rather than a smooth distribution as one would expect from dark matter annihilation. This suggested that the gamma rays were coming from some non-dark-matter astrophysical sources, the leading candidate being gamma rays generated by millisecond pulsars.

Millisecond Pulsars are so boring.

Artist's impression of a millisecond pulsar and binary companion (Credit: NASA)

This of course is tremendously disappointing. I want to find something cool, like dark matter, not something boring, like the corpses of dead stars that spun themselves up to tremendous speeds by consuming another star and which are now using magnetic fields powerful enough to kill you from across a star system to accelerate subatomic particles to unimaginable energies. Pulsars, man. I hate ‘em.

Now, this result — that the Galactic Center excess was point-sourcy — didn't necessarily kill the dark matter interpretation. Indeed, we have no reason to suspect that there should be enough millisecond pulsars in the Galactic Center to fit the excess, or that the distribution of such pulsars (if they exist) should be approximately NFW-squared. But the sociology of the field moved on, for a number of reasons, some good, some bad. In the end, the real problem is that the simplest models of dark matter that fit the excess don't predict any other signals that are easily measured. When building models of dark matter, it is trivial to make your dark matter as dark as you like — to avoid existing experimental bounds. But in the case of the Galactic Center excess, I don't even need to appeal to what I refer to as "stupid theorist tricks" to evade other limits: build the dumbest model of dark matter that fits the excess, and you should neither have expected to find a signal in direct detection, nor be terribly surprised that the LHC hasn't seen it yet. So, without much else to say, the field moved on.

But recently, Tracy Slatyer, one of the authors of the original paper indicating a preference for point sources, has put out a new paper (along with a postdoc, Rebecca Leane) suggesting that the evidence for point sources might be due to issues with the modeling of other sources of gamma rays in the Galactic Center. Rebecca was at Aspen Physics, along with other authors of the original paper, and there were a lot of really interesting discussions about what we actually know from the Fermi data. There are a lot of differing views, set forth by people whose scientific opinions I hold in extremely high regard. My current view is that the Fermi data cannot distinguish between the point-source and non-point-source options. I might be wrong, but I'm not sure there will be a resolution that I feel very confident in.

This makes me really concerned. My (somewhat self-assigned) job is to find dark matter. I've spent a lot of time on it, and so have many others. The entire conference I was just at was about finding dark matter. Now here's a signal that might be dark matter. And I don't know what to do with it. Moreover, the signal would be from a completely standard WIMP-type dark matter candidate — no stupid-clever theorist tricks. If we can't "find" this kind of dark matter, despite what might be an actual signal of it, what does that mean about this whole effort?

So I decided to spend most of my Aspen stay thinking about ways to prove or disprove the dark matter interpretation of the Galactic Center excess. Obviously, this isn't the first time I (or others) have thought about it, so I didn't necessarily expect to succeed. I came up with a number of ideas, some of which might work (though don't get your hopes up), and some that would definitely work, if we had experimental facilities that are optimistically ten years away (if not further).

Two of them are terrible ideas that won't work, and those are the ones I'll tell you about now. I'll tell you because the proofs that they won't work are examples of the "Fermi problems" so beloved by theorists, and because sometimes seeing why ideas won't work can be useful. And who knows, maybe I'm missing something.

The Moon: What a Useless Rock

pokes Moon with Stick Come ON Moon, do something.

Apollo 11, Image Credit: NASA (Also Going-to-the-moon-credit: NASA)

Gamma rays similar to the Galactic Center excess have actually been seen in a number of places. I found it in the Large Magellanic Cloud, and it’s been seen in the core and outskirts of the Andromeda Galaxy. But the problem in every case is that we can’t say for sure that there aren’t non-dark matter sources in those locations too. Millisecond pulsars can lurk everywhere.

Those jerks.

So I started thinking about which line of sight in the sky I could be pretty sure didn't have a millisecond pulsar hiding along it. One came to mind: I'm reasonably sure there isn't a pulsar between me and the Moon. There are certainly pulsars far beyond the Moon, but since the Moon is opaque to gamma rays, any contribution such sources make to the gamma-ray signal from the Moon would be completely negligible. So the idea is to look at the gamma-ray signal from the direction of the Moon, and see if there's any evidence of the dark matter between us and the Moon annihilating. This seems vaguely plausible: a rough estimate of the number of annihilations (for a Galactic Center-like dark matter candidate) in the volume of space occupied by the cone that starts at your eye and ends on the Moon's surface is 100,000 annihilations per second. That's not bad!

Unfortunately, reality ensues quickly. The energy from those annihilations goes out in all directions, only a very few of which end on the surface of the Fermi telescope. We classify the potential strength of a dark matter signal as seen by Fermi using a number called the $J$-factor. The bigger this number, the easier it is to see the dark matter signal. The $J$-factor for the Galactic Center is $\sim 10^{24}~{\rm GeV^2/cm^5}$. A good-sized dwarf galaxy has a $J$-factor of $10^{19}$ in these units.

The dark matter between the Earth and the Moon? It has a $J$-factor of $10^5~{\rm GeV^2/cm^5}$, so any signal from the Moon's direction is roughly $10^{19}$ times weaker than the one from the Galactic Center. This is because the Moon is not very far away, so there's not much dark matter between us and it.
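
If you want to see where that tiny number comes from, here is my own back-of-envelope version of the estimate (with the usual caveat that $J$-factor conventions differ by factors of the solid angle):

```python
import numpy as np

rho_local = 0.3                        # local dark matter density, GeV/cm^3
d_moon = 3.84e10                       # Earth-Moon distance, cm
theta_moon = 0.26 * np.pi / 180.0      # angular radius of the Moon, radians

# J = integral of rho^2 along the line of sight, integrated over the
# Moon's solid angle; rho is essentially constant over such a short path
J_per_sr = rho_local ** 2 * d_moon     # GeV^2 / cm^5 / sr
omega_moon = np.pi * theta_moon ** 2   # solid angle of the Moon, steradians

print(J_per_sr * omega_moon)           # ~2e5 GeV^2/cm^5, i.e. ~10^5
```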

As a result, despite my best efforts, the Moon continues to not pull its weight around here, particle physically speaking.

Space: Too Damn Empty

Shown: not at all what an interstellar gas cloud looks like.

Image Credit: Paramount Pictures

My next idea was at least not as dumb as the previous one. The problem with the previous idea was that all those dark matter annihilations weren't visible because the energy flies off into space, not into our detector. So let's find some system that somehow responds to that annihilation energy. But how?

In space there are these clouds of cold gas. We can see how cold they are, and (assuming they are in thermal equilibrium), that means we can determine how much energy they are radiating per second. If dark matter was somehow dumping energy into such a cloud, it would heat up, and thus reveal the presence of dark matter. (Well, really, we wouldn’t be able to tell if the dark matter was doing the heating, or some unseen star, but at least we could set a bound.)

If you estimate how much energy dark matter annihilation will be producing per second per volume, you should take the number density squared, multiply it by the energy each annihilation produces (roughly twice the mass), and then by the “velocity-averaged cross section” (which gives you an estimate of how likely an annihilation is). For Galactic Center-excess compatible dark matter near the Earth, assuming a mass of 50 GeV, this is:

\[ \left(\frac{0.3~{\rm GeV/cm^3}}{50~{\rm GeV}}\right)^2 \times 100~{\rm GeV} \times 3\times 10^{-26}~{\rm cm^3/s} \sim 10^{-28}~{\rm GeV/cm^3/s} \approx 10^{-31}~{\rm erg/cm^3/s} \]
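
Just to check the arithmetic (the inputs are the standard local density, the 50 GeV mass assumed above, and the canonical thermal cross section):

```python
rho = 0.3            # local dark matter density, GeV/cm^3
m_dm = 50.0          # dark matter mass, GeV
e_per_ann = 100.0    # energy per annihilation, roughly 2 * m_dm, GeV
sigma_v = 3e-26      # velocity-averaged cross section, cm^3/s
gev_to_erg = 1.602e-3

power = (rho / m_dm) ** 2 * e_per_ann * sigma_v   # GeV / cm^3 / s
print(power, power * gev_to_erg)  # ~1e-28 GeV/cm^3/s, ~2e-31 erg/cm^3/s
```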

Put that gas cloud in the Galactic Center, and the increased density of dark matter gains you 2-3 orders of magnitude. Now, this energy deposition is tiny, but it is also about the same size as the energy loss rate of cold gas clouds in the Galaxy. So there's a potential limit to be extracted here!

Unfortunately, the energy comes out in the form of hard radiation: high-energy gamma rays, very fast electrons, and the like. Interstellar gas clouds are very diffuse, and they don't interact very much with such energetic particles. So the dark matter annihilation products won't heat the gas, but rather speed through it effortlessly. Some tiny amount would be measured here on Earth by telescopes like Fermi, but the whole point is that I don't know the source of such gamma rays. So I'm stymied by the fact that space, in case you hadn't heard, is very empty. Damn.


This is an explainer for my recent paper with Gopolang (Gopi) Mohlabeng and Chris Murphy on what recent Gaia-based determinations of the dark matter velocity distribution imply for dark matter direct detection. There are a bunch of moving parts in this paper, as we're trying to tie together some new directions from astrophysics with a long-standing problem in particle dark matter.

Let's start with the latter. If you've been reading my work in the past, you know that we have pretty good evidence from cosmology that dark matter exists, but we have no idea what dark matter is as a particle. There are many good ideas for a particle theory of dark matter; so many, in fact, that I think at this point we are not going to resolve the issue without some new experimental evidence.

One of the longest-running theoretical ideas for dark matter is the "weakly interacting massive particle," or WIMP, where dark matter carries a weak nuclear charge. The nice thing about this idea is that it naturally produces the right amount of dark matter through "thermal freeze-out." Basically, in the early Universe, when things were extremely hot and dense, dark matter was pair-produced through collisions of high-energy "normal" matter, and could pair-annihilate back.

As the Universe expanded and cooled, the rate of production decreased, and the rate of annihilation decreased as well. The weaker the interaction between dark matter and other particles, the more dark matter would remain today (since the annihilation would stop earlier, leaving more dark matter behind). It turns out that the weak nuclear force has approximately the right interaction strength to produce the observed amount of dark matter. But more generally, we can posit a new interaction between dark matter and normal matter and get the right result if the interaction is of the right strength. I call this type of dark matter "thermal dark matter," since it was originally produced in the thermal bath of hot particles in the early Universe.
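
The usual back-of-envelope version of this statement (glossing over logarithmic factors and the details of the freeze-out temperature) is that the relic abundance scales inversely with the annihilation cross section, \[ \Omega_\chi h^2 \approx 0.1 \left( \frac{3\times 10^{-26}~{\rm cm^3/s}}{\langle \sigma v \rangle} \right), \] so a cross section of roughly weak-interaction size lands near the observed dark matter abundance.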

The point is that such well-motivated dark matter requires an interaction between dark matter and normal matter, and that interaction isn't infinitesimally small. The weak nuclear force is weak, sure, but we can measure it these days.

So how do you look for thermal dark matter?

One way is direct detection. If dark matter is a WIMP or thermal, then there is a lot of it moving through the Earth all the time. Rarely, one particle of dark matter will bash into a nucleus, and impart a tiny amount of energy. If you can build a detector with very low rates of radioactivity, and shield it from cosmic rays and anything else that could imitate a dark matter nuclear collision, then you can search for this kind of dark matter.

There are many such experiments, buried deep underground (to reduce cosmic rays). I got to visit one in Sudbury, Ontario: 2 km beneath the surface in a nickel mine. Almost all of them have seen nothing, and thus set very strong limits on the rate of dark matter-nucleus interactions.

There is one exception: the DAMA/Libra experiment at Gran Sasso (a highway tunnel under a mountain in central Italy, where the low-background lab is built off a side tunnel halfway through). DAMA is unique in how it approaches the search for dark matter. Most direct detection experiments look for dark matter by having zero (or next-to-zero) background, and looking for a few scattering events over that background.

DAMA instead doesn’t seek zero background. Instead it looks for a modulation of the number of events they see over a year. Why do this?

A dark matter direct detection experiment is looking for a dark matter particle smashing into a nucleus and imparting some energy in the recoil. Think of it like two bowling balls hitting each other. The energy that the recoiling atom gets is set by the relative masses of the dark matter and nucleus and the momentum of the dark matter — that is, how fast the dark matter is moving through the experiment. No experiment can measure a recoiling nucleus with zero momentum, so there is always some experimental threshold of the minimum recoil energy that can be seen. Therefore, there is a minimum dark matter speed that could, even in principle, be measured by any experiment. This speed will vary from experiment to experiment, depending on the atomic mass of the target material and the experimental threshold.
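
For elastic scattering, the kinematics behind that statement is the standard relation \[ v_{\rm min} = \sqrt{\frac{m_N E_R}{2 \mu_{\chi N}^2}}, \qquad \mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N}, \] where $E_R$ is the smallest recoil energy the experiment can see, $m_N$ is the mass of the target nucleus, and $m_\chi$ is the dark matter mass.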

So how fast is dark matter moving in the detector? Well, we don’t really know. We can make a simple assumption: this far from the Milky Way’s center, particles on orbits will be moving with an average speed of ~220 km/s or so, with some moving faster and some slower. This is the “Standard Halo Model” of dark matter. To that, we have to add the motion of the Sun through the cloud of dark matter — the cloud of dark matter isn’t rotating on average, but the Sun is moving at ~240 km/s through the rest frame of the Galaxy. So dark matter is moving through a direct detection experiment at incredible speeds: hundreds of km/s.

On top of which, the Earth orbits the Sun at a speed of 30 km/s. But here's where things get interesting. Half the year, the Earth's orbit causes us to move "with" the motion of the Sun around the Galaxy. The other half of the year, we are moving "against" that motion. Around June, more of the dark matter is moving faster relative to an experiment built on Earth than in December. So, if you measure the rate of scattering, you should see more events in June than in December, because more of the dark matter is moving fast enough to produce recoils above your detector's energy threshold.
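
A crude sketch of that modulation, with my own toy numbers: the ~0.5 projection factor stands in for the tilt of the Earth's orbit relative to the Sun's motion through the Galaxy, and this is not the full velocity-distribution calculation an experiment would actually do.

```python
import numpy as np

v_sun = 240.0     # Sun's speed through the halo rest frame, km/s
v_earth = 30.0    # Earth's orbital speed, km/s
projection = 0.5  # rough fraction of the orbital velocity along the Sun's motion

day = np.arange(365)
t_peak = 153      # roughly June 2, when the Earth moves "with" the Sun
v_lab = v_sun + projection * v_earth * np.cos(2 * np.pi * (day - t_peak) / 365.25)

# ~255 km/s near the start of June versus ~225 km/s near the start of December
print(v_lab.max(), v_lab.min())
```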

So this is what DAMA/Libra has been looking for: a yearly modulation in the number of events, peaking around the date predicted by the models of the dark matter motion in the Galaxy. And they see such a modulation: now at nearly $13\sigma$ significance.

Great! Discovery of dark matter!

Limits on dark matter elastic spin-independent scattering from XENON1T, CDMSlite, and Cosine-100 assuming the Standard Halo Model, contrasted with the best-fit regions from DAMA/Libra. Fits to the recoil spectrum are shown in orange, and to the daily modulation in yellow.

Unfortunately, no other experiment sees the same signal. Even though the other experiments don’t look for modulation, you can, if you know the dark matter velocity distribution, predict how many events other experiments should see if DAMA/Libra is measuring real dark matter modulation. Experiments such as Xenon-1T (that’s Xenon-1-ton: a ton of liquid xenon instrumented to detect dark matter scattering) rule out the DAMA/Libra signal region by four or five orders of magnitude.

Now, you can appeal to a bunch of possible explanations to save the DAMA signal: you can try to make dark matter that scatters with the sodium-iodide crystals of DAMA and not with xenon (or the many other target materials used by other experiments). Given the strength of the negative results, that doesn’t seem to work anymore.

Or you can say that the Standard Halo Model is wrong, and the real velocity distribution of dark matter is such that other experiments get suppressed rates while DAMA gets an enhanced rate. I’ve written papers on this idea. Again, given the strength of the Xenon-1T limits, the plausible variations on the dark matter velocity distribution don’t seem like they could rescue DAMA.

So for the last few years, I’d been saying that, in order to get the DAMA signal from changing the dark matter velocity distribution, you’d need to do something very extreme. You’d need the Earth to be passing through a very fast-moving stream of dark matter — then if the dark matter mass was just right, you could get a greatly increased modulation rate in DAMA as the Earth went from going “with” the wind of dark matter to “against” the wind while not really changing the yearly average rate.

Further, you'd need this stream to be oriented pretty much exactly opposite to the direction of the Sun's motion through the Milky Way. Because DAMA isn't just seeing a modulating rate: it sees a modulation that peaks when the Standard Halo Model predicts it should peak. A random stream of dark matter coming from a random direction in the sky would shift the modulation peak day.

So DAMA could only be saved by a very special stream of dark matter, and how likely was that?

Modulation of amount of dark matter capable of causing a dark matter scattering event for the Standard Halo Model (black), the Gaia model for the halo (red), and the S1 stream (blue).

Then Gaia came along.

Gaia is a space telescope run by the European Space Agency that is measuring the position and motion of 1.4 billion stars in the Milky Way. Now, Gaia can’t see dark matter, because dark matter is… dark (or, more precisely, it is invisible).

But Gaia can identify kinematic structures in the stars. Combined with measurements of stellar ages, one can use this data set to find groups of stars that are statistically unlikely to be there just by chance. In particular, Myeong et al. found a number of stars consistent with being "streams" or "tidal debris": the relics of dwarf galaxies that were long ago absorbed by the Milky Way. Such ancient streams of stars are expected to have dark matter with them, though getting an idea of the amount of dark matter is difficult (and something I'm very interested in).

One of these streams, called S1, is moving at extremely high velocity, and it is counter-rotating relative to the Sun. Coincidentally, it points more or less exactly in the direction needed so that the peak day of a modulation experiment would be the same as if there were no stream.

Limits and best-fit regions assuming 100% of the local dark matter is in the S1 stream, for spin-independent elastic scattering.

Basically, if you wanted to build a stream of dark matter that could boost the signal that DAMA/Libra sees while not changing other experiments’ sensitivities by much, you couldn’t do better than S1. Since I’d been going around saying “look, the only way to save DAMA is to have a stream that’s fast and counterrotating and how likely is that?” I sort of felt obligated to go and do the very time-consuming task of checking to see if S1 (or similar streams) can get a DAMA signal without being ruled out by any other experiment.

So Gopi, Chris, and I recast DAMA/Libra's results along with the leading experiments that don't see dark matter: Xenon-1T, CDMSlite (a germanium-based detector), PICO-60 (a fluorine-based detector that is very sensitive to dark matter that couples to nuclear spin), and COSINE-100 (an experiment built out of the same target material as DAMA). In addition to the stream data, we used the best models of the "bulk" dark matter velocity distribution, also derived from Gaia data.

While we couldn’t check every type of scattering of dark matter against nuclei, we checked a number of possibilities: spin-independent and spin-dependent, elastic (i.e., bowling ball hitting bowling ball type scattering), and inelastic (where some of the recoil energy is absorbed internally during the scattering).

Overall, we found that the stream can do exactly what we'd expect. It has a peak day nearly the same as that predicted by the Standard Halo Model. It allows dark matter of lower mass than we'd normally expect to give a viable DAMA/Libra signal, and since low mass is where Xenon-1T has the least sensitivity, this moves things in the right direction toward evading the null results.

Energy Spectrum of DAMA/Libra observed modulation signal (black), along with S1-only best-fit (Red), and S1+halo best fit (blue).

Additionally, overlaying the stream on the background halo of dark matter allows one to fit the DAMA results much better. DAMA, in addition to measuring how the scattering rate modulates over the year, also measures the spectrum of that modulation: how many events are modulating in each energy bin. One problem with the Standard Halo Model predictions for DAMA is that the lowest energy bins don't seem to show the decrease in modulation that the models predict. A stream added to the halo changes the spectrum enough to greatly improve the statistical fit.

That said, for almost every parameter point we considered, DAMA continues to be ruled out by the other experiments. There are a few exceptions, though. For one, if dark matter scatters elastically and spin-independently, and the S1 stream makes up over 80% of the local density, then DAMA can be well fit and no other experiment should have seen it yet. However, this amount of stream density is, to put a technical term on it, completely bonkers. My naive expectation is that S1 could be 10-20% of the local density. 30% would be "wow, but ok." 80-90% seems implausible. But we don't know for sure, and I'd like to find some way to check.

Best-fit regions to the DAMA/Libra recoil spectrum as a function of mass and stream density fraction for spin-independent elastic scattering (yellow, green, and orange are the 1, 2, and 3 sigma fits). The grey shaded region shows parameter points ruled out by the null results of other experiments. Note the very small allowed region at 3 sigma near 30 GeV and >80% stream fraction.

The other option is that the modulation rate seen by DAMA is right, but the recoil energy spectrum is way off. Then we can fit the signal with a stream scattering elastically and spin-dependently without being ruled out. However, at this point you'd be arguing that the experiment you want to be right is in fact sort of wrong (its spectrum would be incorrect), and wrong in just the right way. Such arguments tend to come across as a bit like grasping at straws, but I mention it for completeness.

So, this was a lot of work to prove that DAMA is, for the most part, still ruled out. I think it was useful though, for a couple of reasons. First: I convinced myself that this stream, which seemed perfect to "help" DAMA, is just not enough to easily explain the positive signal while not being in tension with everyone else's negative results. DAMA is a difficult signal to explain, and it seemed necessary to me to do the work to figure out if it could be understood through this new stream. It can't, outside of some unlikely caveats, but if I didn't check, I wouldn't know. DAMA is still unexplained, but it once again seems, to me at any rate, that it isn't something that can be easily fit by dark matter. Experiments like COSINE and similar sodium-iodide targets in the southern hemisphere (like SABRE) will probably tell us more about what's really going on, but as of right now, I don't see a clear way for this signal to be dark matter.

Second, I found out how much a stream like S1 can change direct detection results. The answer is "a surprising amount." Even with reasonable stream densities, you can get pretty significant changes to the recoil spectrum, and some pretty wild departures from the expectations of the Standard Halo Model. We know these streams exist, and we will learn more and more about them as our ability to model the dark matter halo improves. Direct detection assumes something about the motion of dark matter, so as our knowledge of this motion improves, so will our ability to extract signals and limits.


Asymmetry Observables and the Origin of $R_{D^{(*)}}$ Anomalies

This is an explainer of my recent paper, written with my colleague here at Rutgers, David Shih, and David's graduate student, Pouya Asadi. It is in some sense a follow-up to our previous collaboration, which for various reasons I wasn't able to write up when it came out earlier this year.

This paper concerns itself with the $R_D$ and $R_{D^*}$ anomalies, so I better start off explaining what those are. The Standard Model has three “generations” of matter particles which are identical except for their masses. The lightest quarks are the up and down, then the charm and strange, and finally the heaviest pair, the top and bottom. The electron has the muon and the tau as progressively heavier partners. The heavier particles can decay into a lighter version only through the interaction with a $W$ boson — these are the only “flavor changing” processes in the Standard Model.

So, if you look at the $b$-quark, it can decay into a $c$-quark by radiating off a $W$ boson. Now, the $b$ quark only weighs around 5 GeV, so it can't make a "real" $W$. Instead it creates a "virtual" particle, which quickly decays. Some of the time, the $W$ can turn into an electron/muon/tau and a paired neutrino, so \[ b \to c (W^- \to \ell \nu) \] where $\ell$ is one of the charged leptons. In the Standard Model, each of the three possible leptons $\ell = e,\mu,\tau$ should occur at nearly the same rate (with only small differences due to the fact that the tau is heavier than the mu, which is heavier than the electron). Therefore, we could measure the ratio \[ \frac{b \to c (W^- \to \tau \nu)}{\sum_{\ell = e,\mu,\tau} b \to c (W^- \to \ell \nu)} \] and this should be $\sim 1/3$.

Now, we can't measure "bare" quarks, so we have to look at the decays of mesons — a quark-antiquark bound state. In this case, we care about the $B$ meson (which contains a $b$ quark), decaying into either the $D$ or $D^*$ mesons (both of which contain a charm quark). Looking at mesons means we have all sorts of additional calculational and experimental issues to deal with (which is why we look at this ratio: dividing different rates cancels out many of the experimental effects). So we can define \[ R_D = \frac{B \to D \tau \nu}{\sum_{\ell = e,\mu,\tau} B \to D \ell \nu} \] and \[ R_{D^*} = \frac{B \to D^* \tau \nu}{\sum_{\ell = e,\mu,\tau} B \to D^* \ell \nu} \] In the Standard Model, including all effects, we can predict \[ R_D = 0.299 \pm 0.003, \quad R_{D^*} = 0.258 \pm 0.005. \] When we measure these ratios, we find \[ R_D = 0.407 \pm 0.046, \quad R_{D^*} = 0.304 \pm 0.015. \] All told, these two measurements combined are a $3.8\sigma$ deviation from the Standard Model — one of the largest known discrepancies. It's not $5\sigma$ yet, so it isn't a discovery, but it is very interesting.
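
As a sanity check on the quoted significance (ignoring the correlation between the two ratios, which the real combination includes), the individual pulls add up to roughly the right size:

```python
import numpy as np

# Standard Model predictions and measurements quoted above
rd_sm, rd_sm_err = 0.299, 0.003
rds_sm, rds_sm_err = 0.258, 0.005
rd_exp, rd_exp_err = 0.407, 0.046
rds_exp, rds_exp_err = 0.304, 0.015

pull_rd = (rd_exp - rd_sm) / np.hypot(rd_exp_err, rd_sm_err)
pull_rds = (rds_exp - rds_sm) / np.hypot(rds_exp_err, rds_sm_err)

# ~2.3 sigma, ~2.9 sigma, and ~3.7 sigma combined in quadrature,
# close to the 3.8 sigma from the proper correlated combination
print(pull_rd, pull_rds, np.hypot(pull_rd, pull_rds))
```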

Now, these anomalies have been known for a while, and there are many theoretical ideas on how to explain them. The basic idea is usually to introduce a new particle that couples to $b-c$ quarks and $\tau-\nu$ leptons. This new particle can then act as a new source of flavor-violation, and mediate a decay into taus slightly more than to the other charged leptons.

In our previous paper, we added a new such mediator: a new particle like the $W$ boson, that coupled to “right-handed” quarks and a new “right-handed” neutrino alongside the tau lepton. This avoided several of the stringent constraints on new “left-handed” $W$-like bosons (left and right here really refer to the way that the Standard Model particles are spinning. In the Standard Model, the $W$ boson only interacts with quarks and leptons whose spins are oriented opposite to their direction of motion). But there are other options: charged scalars, new bosons, and a set of particles called “leptoquarks” which can decay into both a single quark and a single lepton (something no particle in the Standard Model can).

In this paper, we consider how to tell all these different ideas apart. We start off by considering the range of possible $R_D$ and $R_{D^*}$ values that each model could cause. For a given new mediator particle with a specific mass and a specific interaction with the Standard Model particles, you can change the two ratios by specific amounts. The details of how those interactions work tell you how you can change the two in combination. In this figure, we show where each model can populate the space of $R_D$ and $R_{D^*}$. Some can only live along certain lines, while other models can cover regions of parameter space. We consider both models with left-handed and right-handed neutrinos.

In this figure, we also show the current experimental measurements (the grey lines, centered at the average value with error ellipses around it), and the projected accuracy of a future experiment called Belle II. Belle II will generate about 55 million $B$ mesons by 2025, and measure these ratios with much higher precision (the red and magenta ellipses). We don’t know what values for the ratios it will measure (obviously, otherwise why do the experiment?), so we pick two possibilities. The first is that the Belle II results are just like today’s average values, but with much smaller error bars. If that’s the case, the anomalies will grow to something like $10\sigma$, which will be amazing evidence for new physics. We also consider a case where the Belle II measurements are lower than the current values, but still at $5\sigma$ (this is completely possible, and would be only a $2\sigma$ downward shift from the present values). Still, that’s a discovery. Other outcomes are possible, but the $5\sigma$ case is a “worst-case” scenario for our purposes (other than a less than $5\sigma$ anomaly, but in that case, our work will be less important).

Using just this, you can see that, once the measurements from Belle II are made, we might be able to narrow down the possible models of new physics (especially in the $10\sigma$ case). But we still won’t know for sure. So in our paper, we consider what else we can look at that would distinguish the various possibilities. Our benchmark goal is to see if we can distinguish left- and right-handed neutrinos.

We therefore consider "asymmetry observables": we look at how each model affects the direction of the tau lepton relative to the motion of other particles in the decay, or how the spin of the tau will tend to be oriented. The asymmetry ${\cal A}_{FB}$, for example, measures how often the tau tends to move in the same direction as the $D$ (or $D^*$) versus how often it moves in the opposite direction. The polarization asymmetries ${\cal P}$ tell you how often you will measure the spin of the tau with or against one particular direction in the decay. There are three directions you can choose: we denote them as $\perp$, $\tau$, and $T$. The "$T$" direction will turn out to be especially interesting, but it turns out no one has a fully viable plan to measure it (though it is, at least, theoretically possible).
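
Schematically (in my own shorthand, not necessarily the exact conventions of the paper), these observables are counting asymmetries: \[ {\cal A}_{FB} = \frac{N(\cos\theta > 0) - N(\cos\theta < 0)}{N(\cos\theta > 0) + N(\cos\theta < 0)}, \qquad {\cal P}_i = \frac{N(s_i = +\tfrac{1}{2}) - N(s_i = -\tfrac{1}{2})}{N(s_i = +\tfrac{1}{2}) + N(s_i = -\tfrac{1}{2})}, \] where $\theta$ is the angle between the tau and a reference direction in the decay, and $s_i$ is the tau spin projection along the axis $i \in \{\perp, \tau, T\}$.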

For the $10\sigma$ and $5\sigma$ benchmark scenarios at Belle II, we can then consider the range of possible measurements we can expect for each of these observables, for the different models of new physics. We end up with plots like this (figure 6). Here, we're showing the correlation of different observable measurements for different models: right-handed neutrino models in red, left-handed in green. The point here is that, for this $10\sigma$ scenario, the green and red blobs are distinct: if you can measure all of the observables (not including, in this case, the ${\cal P}_T$ measurement), you can determine whether the new physics involves left- or right-handed neutrinos.

You can do even better: for most possible measurements, you can even tell which model specifically would be responsible for the new physics — assuming again you can make these measurements. In a few possible outcomes, you won’t be able to tell apart certain leptoquark models from other leptoquarks (both using the same handedness of neutrinos). In this case, the ${\cal P}_T$ measurement can break the degeneracy in most, but not all cases. If you can make the measurement.

In the $5\sigma$ case, you can see from these blob plots that the situation will be harder. In many cases, the asymmetry observables will tell the left- and right-handed scenarios apart, but not in all. Again, the ${\cal P}_T$ measurement can come to the rescue: here completely breaking the degeneracy, assuming that the errors in the measurement can be made sufficiently small. So overall, a pretty positive result, though much depends on finding a way to measure one asymmetry observable that may have a big role to play.

I haven't played much with flavor models; these projects with David and Pouya are some of my first forays into the field. Anomalies in these finicky flavor measurements sometimes get less attention from the community than the results from the LHC. However, there are anomalies in the data, anomalies that might be statistically significant and may turn into new physics. In that case, we'll all have to become experts in the language of flavor physics.

