SciBlogs is a blog network, not for profit, prolific, opinionated and informative. It is New Zealand-based, but globally focussed. It is the place for science discussion and opinion with a Kiwi spin. Its mission is to improve the level of discussion of science-related issues in New Zealand.
The scar on the lunar surface produced when the Israeli space probe ‘Beresheet’ slammed into the Moon on April 11 has just been spotted in imagery from a NASA satellite orbiting the Moon.
Three nations have so far landed spacecraft on the Moon: the USA, the Soviet Union/Russia, and China. A fourth nation, Israel, has attempted to join this club, but its probe (named Beresheet) made a hard rather than a soft landing six weeks ago. Now detailed images of the lunar surface obtained using NASA’s Lunar Reconnaissance Orbiter (LRO) have been scoured and the crash site identified.
Beresheet (Hebrew for ‘In the beginning’, the first words in the biblical Book of Genesis) was built and operated by Israel Aerospace Industries on behalf of SpaceIL, a non-profit organisation founded in 2011 with the specific aim of landing a probe on the Moon. The plan was for this small (150 kg — about the size of a washing machine) probe to land gently on the Moon and return scientific data, but a gyroscope failure during the final approach stage led to it crashing into the lunar surface at around one kilometre per second.
Path taken by the Beresheet probe from Earth to Moon. It all worked well, until the very last minutes.
At such a speed a small crater would be expected to be excavated, and so imagery obtained by LRO was examined to see if one could be found. Several new craters formed by the impacts of tiny asteroids/large meteoroids have been detected by the LRO camera systems since that satellite was inserted into lunar orbit in 2009. Three deliberate impacts by US satellites (GRAIL-A, GRAIL-B and LADEE) have also had their crash zones identified, and the LRO imagery is of such high resolution that the paths traversed by the Apollo astronauts can also be spotted; for example, see the pictures here of the Apollo 17 landing site. (New images from LRO can be found here.)
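Why expect a crater? A back-of-envelope calculation, using the figures given earlier in this post (a ~150 kg probe arriving at ~1 km/s; the TNT-equivalence constant is a standard value, not from the article), shows the impact released energy comparable to a small explosive charge:

```python
# Back-of-envelope impact energy for a Beresheet-like crash.
# Mass and speed are the article's approximate figures.

TNT_J_PER_KG = 4.184e6  # joules released per kilogram of TNT (standard value)

def impact_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy of an impactor: E = (1/2) m v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

energy = impact_energy_joules(150.0, 1000.0)  # 150 kg at 1 km/s
print(f"{energy:.3g} J (~{energy / TNT_J_PER_KG:.0f} kg of TNT)")
# → 7.5e+07 J (~18 kg of TNT)
```

Tens of kilograms of TNT-equivalent, delivered at a shallow angle, is ample to punch through the regolith and throw out the ejecta halo seen in the LRO images.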
Finding a new scar on the Moon’s surface is not easy when you are not sure precisely where it is, and LRO also takes some weeks to image the whole of the Moon. As it happens, an image obtained on April 22, eleven days after Beresheet crashed, covers that probe’s end-point, and its identification has recently been announced. Here are before (from 30 months ago) and after (post-April 11) images of the area:
Lunar Reconnaissance Orbiter Camera images of the impact region of Beresheet, before and after its hypervelocity arrival. (Courtesy NASA/Arizona State University.)
The new dark ‘hole’ where the high-speed spacecraft punched through the regolith to expose covered material is clear, along with a lighter ‘halo’ of ejected material (and also doubtless parts of the satellite and fuel/oxidiser). A streak apparent at an azimuthal angle of about 200 degrees in the ‘After’ image may be due to the impact occurring at a very low angle (less than 10 degrees away from the horizontal).
The way it should have been: artist’s impression of the Beresheet probe safely landed on the lunar surface.
If you are David Cameron, you will have by now learnt that size nouns used in SIZE NOUN + OF + NOUN constructions can get one in a whole bunch of hot water (well… maybe not the exact terminology, but the idea behind it at least)! In 2016, he had the misfortune of tweeting the phrase a bunch of migrants in relation to the migrant situation at the refugee camp in Calais. If you want to see what’s wrong with a bunch of migrants, see the explanation in this blog post by Robbie Love, a corpus linguist from CASS (Centre for Corpus Approaches to Social Sciences at Lancaster University, UK).
What is special about BUNCH + OF + NOUN?
Actually, nothing at all. It turns out that nouns used to express quantities (lots, heaps, piles, masses, sets, bits, swathes, smatterings, scraps), termed size nouns by Lieselotte Brems in her book on the subject (yes, size nouns are that compelling: a whole book is required), expand their functions to include new meanings over time.
Bunch (of) is no exception, and as reported in a new article just this month, linguists found statistical correlations between bunch of and the nouns it frequently occurs with (collocates). So in theory, we could have bunches of just about anything: bunch of books, bunch of flowers, bunch of beds, bunch of A+ assignments, bunch of surprises, bunch of fragrant cupcakes, bunch of amazing linguists, bunch of open source data, bunch of inspiring politicians, bunch of rockstar mathematicians, and so on. The grammar of English certainly allows it! But we simply don’t!
What we do have is a bunch of X where X has shifted over the years in its meaning to gravitate towards a particular type of X. The 1910s to the 1960s were busy times for the size noun BUNCH. In that time, it went from being an arrangement or bundle (bunch of flowers, bunch of grapes), to denoting a vague group rather than a specific constellation (bunch of cattle, bunch of kids), to further reconceptualisation as a large quantity (a bunch of idiots, a bunch of noise, bunch of questions). By the time the 2000s rolled around, all three uses of BUNCH were thriving, but not equally well.
“BUNCH + OF + PEOPLE” has negative overtones
Bin Shao and colleagues have sifted large historical collections of American English to uncover that while BUNCH appears with a great variety of nouns, the shift nowadays is towards using it with nouns denoting people more than any other type of noun: bunch of immigrants, bunch of idiots, bunch of nutters. What is more, the expressions it is found in tend to have negative overtones (especially since the 1980s, and especially in fiction). And because BUNCH (of) attracts “negative” people-nouns, by analogy, other nouns that BUNCH (of) is used with also get tarred with the same negative brush. Oops!
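The frequency evidence behind claims like this can be illustrated in a few lines. The mini "corpus" below is invented for the example (real corpus work uses millions of words and proper association statistics such as log-likelihood, not raw counts), but the basic move is the same: find every noun that follows bunch of and tally them.

```python
# Toy collocate count for "bunch of" + NOUN.
# The corpus here is invented for illustration, not real data.
from collections import Counter
import re

corpus = """
a bunch of idiots shouted outside while a bunch of flowers
sat on the table ; she bought a bunch of grapes and answered
a bunch of questions from a bunch of idiots
""".lower()

# Grab the word immediately following "bunch of".
collocates = Counter(re.findall(r"bunch of (\w+)", corpus))

print(collocates.most_common(2))
# → [('idiots', 2), ('flowers', 1)]
```

Scale this up to a century of American English and you can watch the top collocates drift from flowers and grapes towards people-nouns, which is essentially the pattern Shao and colleagues report.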
Well this is HEAPS GOOD!
As mentioned, changes in meaning and function of the sort identified for BUNCH are not atypical of size nouns, and they are not exclusive to American English either. Last year, I wrote about incoming changes in how New Zealanders use the size noun HEAPS. While HEAPS + OF + NOUN is a common phrase across British, American and Australian English, here in New Zealand we witness a rather extravagant reshuffling of grammatical function for HEAPS.
For us Kiwis, HEAPS can nowadays occur without X altogether, or the preposition OF for that matter: “I paid heaps for that car”. Well, maybe HEAPS stands for “heaps of money”, you say; “of money” is left out as it is understood from context. And in something like “I learnt heaps in that class today”, HEAPS stands for “heaps of stuff” or “heaps of new things”, slightly more vague, but plausible. But what about this one: “She loves him heaps”? (She loves him heaps of what?)
From a mere noun denoting size, the unassuming HEAPS is born again, this time as an adverb (it connects with the verb: “what did he learn?” heaps, “what did he pay?” heaps, “how much does she love him?” heaps). And through this transformation (a process called grammaticalization), we also get useful Kiwi idioms such as “to give someone heaps” and “to get heaps”. In fact, the data I analysed even contained this gem: “That was heaps stressful”. Here, HEAPS is used to say something about the intensity of the (adjective) stressful. So from a simple shape or constellation (a noun), to a quantifier (expressing hyperbolic size) to an intensifier (an adverb) – what a journey!
Analyses of everyday phenomena like the changes in use of size nouns speak to a larger question, namely, they uncover the hidden alleys down which our minds tend to wander. Even though in principle we could say anything we want, because language provides us with the lego-like building blocks to do so, we simply don’t. We do not like to take the road untaken. Our linguistic minds prefer the road that is well-trodden, often for no reason other than the fact that others have gone that same way before. This is how BUNCH + OF + NOUN has come to have negative associations, and, I suspect, many other lexical associations which similarly fly under the radar of our consciousness.
In 2014 a study was published that challenged an oft-cited criticism that journalists are to blame for hyped-up health stories.
Sensational headlines, breathless reporting, caveats buried so far down the story that most readers never find them. We hear these complaints all the time about the media.
But this study, published in The BMJ, turned the claims on their head. Cardiff University researchers found that exaggerated claims in news stories were strongly linked with the same exaggerations in institute press releases.
Particularly in the modern media landscape, with fewer journalists filing more stories in a race to keep up with the 24/7 news cycle and the insatiable appetites of online news sources, there’s a certain amount of good faith that a press release from a respected institute – say, a university – is robust.
Of course, it would be preferable if specialised science journalists had the time to delve into each study they reported on, reading the paper with a careful eye for exaggeration and consulting independent experts. But the reality of modern media is that it’s more likely a general reporter will be covering a story with limited time or background to report on it thoroughly.
The BMJ study focused on three types of health-related claims: advice to readers to change behaviour, causal statements drawn from correlations, and human inferences from animal research.
Proportions of news with exaggerated advice, causal statements from correlational research, or inference to humans from non-human studies were higher when the associated press releases contained such exaggeration. Sumner et al. (2014).
Over a third of the press releases studied contained at least one of the three above claims. When this happened, the resulting news stories were more likely to contain exaggerated claims compared to the journal article: 56 times more likely when it came to inferring human relevance from animal studies.
“Although it is common to blame media outlets and their journalists for news perceived as exaggerated, sensationalised, or alarmist, our principal findings were that most of the inflation detected in our study did not occur de novo in the media but was already present in the text of the press releases produced by academics and their establishments.
“The blame—if it can be meaningfully apportioned—lies mainly with the increasing culture of university competition and self promotion, interacting with the increasing pressures on journalists to do more with less time.”
An interesting aspect of this 2014 study was that the researchers found no (statistically significant) link between the exaggeration in a press release and media uptake of the story, which might be assumed to be the driving force for hyping up a story. But of course, this was all retrospective analysis, so it was hard to say for sure, which led the researchers to a clear question: what happens if you control for other factors?
Press releases as research subjects
Which leads us to the follow-up research published this week in BMC Medicine. The same research team conducted a randomised trial, aiming to find out whether inserting caveats in press releases and moderating causal claims changed the resulting media coverage, either by improving the stories or diminishing the news value.
Working with nine UK press offices, biomedical and health-related press releases were sent to the research team and randomly assigned to one, both or neither of two interventions. In the first, suggestions were made to bring the headline and release’s claims in line with the type of evidence in the study: for instance, using words like ‘might’ and ‘may’ where data were correlational. The other intervention was to insert an explicit caveat about causality, e.g. this was an observational study, which does not allow us to conclude that drinking wine caused the increased cancer risk.
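The one/both/neither design described above is a 2×2 factorial. As a rough sketch of how such assignment works (the intervention names, IDs and seed below are illustrative, not from the study), each release independently gets a coin toss for each intervention:

```python
# Sketch of 2x2 factorial random assignment: each press release
# receives neither, one, or both interventions. All names here
# are hypothetical, invented for illustration.
import random

INTERVENTIONS = ("align_claims_with_evidence", "insert_causality_caveat")

def assign(release_ids, seed=0):
    """Randomly assign each release to one of the four trial arms."""
    rng = random.Random(seed)  # seeded for a reproducible allocation
    assignments = {}
    for rid in release_ids:
        # An independent coin toss per intervention yields the four
        # arms: neither / first only / second only / both.
        assignments[rid] = tuple(
            name for name in INTERVENTIONS if rng.random() < 0.5
        )
    return assignments

arms = assign(["PR-001", "PR-002", "PR-003", "PR-004"])
for rid, arm in arms.items():
    print(rid, arm or ("no_change",))
```

Independent coin tosses give every combination equal probability, which is what lets the researchers separate the effect of each intervention from the other.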
The press offices were free to accept or reject the changes, then issued their releases to the media as usual. Perhaps unsurprisingly, given the group’s previous findings, news headlines and stories were more likely to use appropriate language around causality when the press releases’ headlines and text did so too. When press releases contained caveats, about 20 per cent of the news stories followed suit, compared to almost none when the caveats were missing from the release – this point is worth highlighting:
“Explicit causality statements have almost never been seen in news previously and almost never occurred in our large sample unless the press release contained it. Most of these statements were caveats and were not within quotes, making it more remarkable that they carried through to news (it is likely that carry-through for quotes would be higher).”
In the prior study, the research team also searched for caveats or justifications; for instance, that the study couldn’t say for certain, or that even when other factors were controlled there was still a clear finding. But they were unable to draw many conclusions because such caveats were rare in the press releases studied.
To go from next to no caveats, to caveats in a fifth of news stories, just by using them in a press release is a remarkable finding. To me it suggests that journalists aren’t unwilling to include these points, but that they are taking their lead from the institute’s press release.
Once again, the more cautious language didn’t appear to have an impact on news uptake, which indicates that it’s feasible and reasonable for press offices to include such caution in their work:
“Clinicians, scientists and press officers can take encouragement that deft caution and clear caveats are unlikely to harm news interest and can penetrate through to news and even to news headlines.”
It’s easy to blame the media for over-hyped headlines when it comes to science and health news, so these studies should give us cause to reassess those assumptions. It’s heartening to have evidence that shows how easy it is to improve media coverage and for researchers and institutes to play a greater role in ensuring their research is promoted responsibly. Other initiatives, including the UK Science Media Centre’s press labelling system, should be encouraged and adopted to continue the work improving coverage of science and health in the media.
The “Christchurch Call” summit has made specific progress, with tech companies and world leaders signing an agreement to eliminate terrorist and violent extremist content online. The question now is how we collectively follow up on its promise.
The summit in Paris began with the statement that the white supremacist terrorist attack in Christchurch two months ago was “unprecedented”. But one of the benefits of this conversation happening in such a prominent fashion is that it draws attention to the fact that this was not the first time social media platforms have been implicated in terrorism.
It was merely the first time that a terrorist attack in a western country was broadcast via the internet. Facebook played a significant role in the genocide of Rohingya Muslims in Myanmar, as covered in the Frontline documentary “The Facebook Dilemma”. And this study demonstrated a link between Facebook use and violence against refugees in Germany.
Better-than-expected outcome
I hope attention now turns to the fact that social media platforms profit from both an indifference to harassment and from harassment itself. It falls within the realms of corporate responsibility to deal with these problems, but they have done nothing to remedy their contributions to harassment campaigns in the past.
Online communities whose primary purpose is to terrorise the people they target have existed for many years, and social media companies have ignored them. Anita Sarkeesian was targeted by a harassment campaign in 2012 after drawing attention to the problems of how women are represented in videogames. She chronicled the amount of abuse she received on Twitter in just one week during 2015 (content warning, this includes threats of murder and rape). Twitter did nothing.
When the summit began, I hoped that pressure from governments and the threat of regulation would prompt some movement from social media companies, but I wasn’t optimistic. I expected that social media companies would claim that technological solutions based on algorithms would magically fix everything without human oversight, despite the fact that they can be and are gamed by bad actors.
I also thought the discussion might turn to removing anonymity from social media services or the internet, despite the evidence that many people involved in online abuse are comfortable doing so under their own names. Mainly, I thought that there would be some general, positive-sounding statements from tech companies about how seriously they were taking the summit, without many concrete details to their plans.
… tech companies have pledged to review their business models and take action to stop users being funnelled into extremist online rabbit holes that could lead to radicalisation. That includes sharing the effects of their commercially sensitive algorithms to develop effective ways to redirect users away from dark, single narratives.
Algorithms for profit
The underlying business model of social media platforms has been part of the problem with abuse and harassment on their services. A great deal of evidence suggests that algorithms designed in pursuit of profit are also fuelling radicalisation towards white supremacy. Rebecca Lewis highlights that YouTube’s business model is fundamental to the ways the platform pushes people towards more extreme content.
I never expected the discussions to get so specific that tech companies would explicitly put their business models on the table. That is promising, but the issue will be what happens next. Super Fund chief executive Matt Whineray has said that an international investor group of 55 funds, worth US$3.3 trillion, will put their financial muscle to the task of following up these initiatives and ensuring accountability. My question is how solutions and progress are going to be defined.
Social media companies have committed to greater public transparency about their setting of community standards, particularly around how people uploading terrorist content will be handled. But this commitment in the Christchurch Call agreement doesn’t carry through to discussions of algorithms and business models.
Are social media companies going to make their recommendation algorithms open source and allow scrutiny of their behaviour? That seems very unlikely, given how fundamental they are to their individual business models. They are likely to be seen as vital corporate property. Without that kind of openness it’s not clear how the investor group will judge whether any progress towards accountability is being made.
While the Christchurch Call has made concrete progress, it is important to make sure that we collectively keep up the pressure. We need to make sure this rare opportunity for important systemic changes doesn’t fall by the wayside. That means pursuing transparent accountability through whatever means we can, and not losing sight of fundamental problems like the underlying business model of social media companies.
The annual beanfeast for the US satellite industry — featuring major participation from European nations and companies in particular — is the SATELLITE congress held at the Washington Convention Center, a few blocks from the White House. It was an amazing event to attend, compared to the sort of low-key conferences we have in New Zealand.
Now I’m back in NZ and almost recovered from the jetlag, a few pieces of information about the SATELLITE 2019 convention that I attended last week in Washington DC.
Starting at the beginning, the keynote talk on the opening day was by the Vice-President, Mike Pence. I decided I could miss that, as he would not be saying anything not known already, and the security-check lines were long. Yes, the US will be proceeding with the development of a Space Force as a branch of the military — but that’s obvious, and a natural progression, no matter how much others might dislike the concept. The militaries of the world already depend on space assets for many of their capabilities (as do you: consider the GPS receivers in your smartphone and car), including communications, reconnaissance and surveillance, early warning of missile launches, navigation of their platforms and indeed targeting of their munitions.
More interesting to me was the sheer scale and the innovation of the hardware and services on display. Recently I noted an Australian space-related website that claimed that ‘space’ is now a half-trillion dollar industry, globally. I think that’s an underestimate. If one thinks back to the GPS system alone, and acknowledges that the satellite constellation is operated by the US Department of Defense with an unknown budget, the next question might be whether the expenditure on, and the value added by, the systems we use in our automobiles, smartphones and dedicated navigation electronics should be included in the ‘space’ turnover worldwide. If you add that in, along with other modern activities linked to space, such as TV broadcasting via satellite, then the figure grows to more than a trillion dollars per annum, and big (US) dollars too.
A split panorama of the exhibition hall at the SATELLITE 2019 convention… It seems that blue is the favourite colour for aerospace companies.
Those unfamiliar with the satellite industry might expect NASA to be a major participant in this congress, especially since NASA HQ is barely a mile away from the convention centre. Well, here’s the NASA stand: To be honest I didn’t see much happening there, but that’s to be anticipated, because this meeting is all about business.
So who was there? The answer is virtually all the substantial aerospace companies from both the US (Lockheed Martin, Northrop Grumman, Hughes, etc.) and Europe (Airbus, the Ariane Group, Thales Alenia Space, and so on). Similarly various countries had pavilions promoting their own national companies, such as the UK, France, and Sweden.
The UK pavilion at the SATELLITE 2019 convention.
From the southern hemisphere the only stand I noticed was Argentina. Surprising that Australia was not represented, given the recent establishment of the Australian Space Agency and large federal government support there for a wide range of space activities.
And New Zealand? Rocket Lab had an impressive presence, and I enjoyed talking to people on their stand, though I cannot say that I noticed many kiwi accents.
The Rocket Lab stand, with an indicator boasting of 28 satellites delivered to orbit (so far).
As you might imagine, the big players had big stands. A good example is Airbus (disclosure: Xerra Earth Observation Institute has a teaming arrangement with Airbus with regard to the provision of satellite optical and radar imagery over NZ), as shown below.
The Airbus pavilion at the SATELLITE 2019 convention in Washington DC, next to their fellow Europeans, Ariane.
Something that stuck out, for me, was the proliferation of companies offering small satellite receiving stations, many of them being dishes two to five metres across and capable of being transported on the back of a ute (a pick-up truck, in American parlance) and also able to track a satellite in low-Earth orbit as it quickly crosses the sky.
One display that caught my eye was that of the Cubic Corporation, which markets satellite antennas from the GATR Technologies company. These antennas are inflatable: they look like large bouncy balls several metres in diameter. The concept is that the radio waves pass through the plastic front of the ball and are reflected from a metallic curved surface within, being focused onto the horn of the radio receiver, which is mounted on the front part of the ball, the side directed towards the satellite in geostationary orbit. Such an antenna has obvious military applications, being easily shifted and capable of being set up within 30 minutes.
Inflatable satellite antenna from Cubic Corporation/GATR Technologies.
The utility of such inflatable set-ups has not been missed by the New Zealand Army, which in October 2017 contracted to spend over US$5 million on such equipment from Cubic. More recently, in August last year, the US Army agreed to buy over $500 million worth of these devices.
Clearly a convention such as SATELLITE 2019 is organised so as to facilitate business, but it’s not all hard work and technical discussions. To illustrate this, I finish up with a couple of photographs showing the lighter side of things.
When I was growing up in Tulsa, Oklahoma, long before we spent our evenings drawn to the soft glow of electronic devices, I would sit down in my grandparents’ backyard on summer nights and watch the air twinkle with fireflies.
They allowed me to catch a few in a jar, so I could study their tiny anatomy amid brief bursts of light from within. But I was always made to set the insects free to continue their light show — or, frankly, to become food for frogs, spiders, and other creatures of the night.
Fast-forward 20 years and I was teaching science to preschoolers, indulging my nostalgia with a lesson on the Lampyridae family of beetles, commonly known as fireflies or lightning bugs.
My budding scientists learned to pronounce the word “bioluminescence” — the chemical reaction inside a firefly’s abdomen that produces light — and I had them fashion a model firefly using a soda bottle for a body, with pipe-cleaner antenna and construction-paper wings. We lit it up from the inside with a glow stick. The kids especially loved getting the “mark of a firefly” — a dab of glow-in-the-dark paint on the forehead meant to simulate the bugs’ light-emitting enzyme — and lining up in the dim park bathroom to see the radiant dots in the mirror.
What I didn’t realize until later is that fireflies were no more real to these kids than dragons or unicorns. Most, their parents told me, had never seen one. It’s now another 20 years later, and where fireflies were once abundant where I live in Central Texas, I went years without seeing them. Last year’s spring rains supposedly boosted populations, but I still only caught a few flashes here and there — a couple of lonely bugs signaling, perhaps in vain, for a mate amidst the darkness.
I had this all in mind when I recently read about a review of studies published in the journal Biological Conservation charting a catastrophic decline of insect populations worldwide. I was primed to take it at face value, and apparently, other journalists were, too, with sensational headlines ricocheting around the globe. Some called it “insectageddon.” Others wrote of a looming “insect apocalypse.” The Guardian, one of the first news outlets to cover the story, declared that “plummeting insect numbers threaten ‘collapse of nature.’”
Meanwhile, entomologists and ecologists around the world took to Twitter, blog posts, and editorials to point out serious methodological flaws in the research, and to refute the study’s doomsday findings. Among these was Atte Komonen, a senior lecturer in the department of biological and environmental science at the University of Jyväskylä in Finland. In a response published in the journal Rethinking Ecology, Komonen and colleagues worried that the unsubstantiated claims pinballing across the globe could diminish public faith in science, and even undermine efforts to address the real stressors that many of the planet’s insects face.
“The problem is real, insects are declining in many regions,” Komonen told me. But, he added, insects are not going to vanish globally in 40 years. “It’s dramatic, over-exaggerated, and it reduces the credibility of … conservation science — or any other science for that matter.”
Indeed, according to Manu Saunders, an ecologist at the University of New England in Australia, the flawed review and poorly considered media hype gave the false impression that we have a handle on the state of the world’s insect populations when, in fact, we really don’t. “Widespread, consistent insect declines are a real concern,” Saunders noted in a critical analysis published in the May/June issue of American Scientist. “Yet there is little published evidence that worldwide decline of all insects is happening.”
Journalists ignore these nuances at the peril of everyone, she and other experts told me. That’s because when the real picture eventually emerges — a picture inevitably filled with boring things like caveats, counter-evidence, and a good deal of lingering uncertainty — the public’s understanding of science, along with their faith in its practitioners, will have once again been undercut.
The first wave of Insectageddon stories hit in late 2017 after publication of a study suggesting a 70 per cent reduction in flying insect biomass — the total volume of such insects — over 27 years at nature reserves in Germany. The next round came a year later in response to a study that discovered a precipitous drop in insects in the Luquillo rainforest in Puerto Rico between 1976 and 2012, accompanied by reductions in the populations of the lizards, frogs, and birds that feed on them.
That was followed by the Biological Conservation review published earlier this year, in which two Australian researchers — Francisco Sánchez-Bayo, a research associate in the school of life and environmental sciences at the University of Sydney, and Kris Wyckhuys, a professor of biology at the University of Queensland — analyzed data drawn from the Germany and Puerto Rico studies, along with 71 other studies of insect decline.
“As each study came out, the surrounding hype grew,” Saunders wrote, “filling broadcast and online platforms for popular-science news with a heady mix of hyperbole, anecdote, and speculation.”
With its global scope and unusually dramatic language, the Sánchez-Bayo and Wyckhuys review was a natural catalyst for sensationalist headlines. Based on their analysis, the authors characterized the state of insect biodiversity in the world as “dreadful.” “Almost half of the species are rapidly declining,” they wrote, “and a third are threatened with extinction.” The main driver of the decline, according to the researchers, is loss of habitat to intensive agriculture and urbanization. Other contributors include pollution from sources such as pesticides, fertilizers, and industrial chemicals; biological threats from pathogens and invasive species; and climate change.
They concluded that unless humanity changes its ways, “insects as a whole will go down the path of extinction in a few decades.”
Passing peer review
Lead author Sánchez-Bayo said he and his co-author were concerned that scientists who reviewed the study prior to publication would ask them to tone it down. But they didn’t. “That means to us that they agreed with us,” he said. To him, the impending collapse of insect life and the ecosystems they support warrants all the drama he can muster, so that both researchers and the public sit up and take notice. “[We need to] make them realize that it is a problem and we’ve got to handle it.”
And yet, experts I talked to expressed surprise that the study passed peer review. Many of the earlier studies examined in the Sánchez-Bayo meta-analysis were “localized and skewed toward particular taxa,” Saunders wrote in her critique. Several critics also noted that in reviewing the scientific literature, the authors deliberately sought out papers on insect declines, quite possibly overlooking research showing stable or increasing populations. (Sánchez-Bayo said that he and his colleague included other research as well, but the criteria for selection weren’t clear.)
“The problem to me is that they are mixing really miscellaneous studies,” said Komonen. You could use that information to do a qualitative overview, he said, “but if you want to do this exact prediction of the extinction rate and what are the reasons behind [it], it’s just — you can’t do that. It’s impossible.”
The other key issue, Saunders suggested, is that the researchers based global predictions on limited data from just a few regions, predominantly Europe and parts of the U.S. She also pointed out that the review covered about 2,900 species — a tiny fraction of the estimated five million species of insects on Earth. “The most studied groups are bees, beetles, and butterflies,” she said. “For the vast majority of the rest of the species of insects in the world, there’s just no data and no one’s studied them.”
Biological Conservation later published a letter critical of the study, as well as the authors’ rebuttal to that criticism. In an email, the journal’s editor in chief Vincent Devictor credited the study with initiating a “very useful debate”. But, he wrote, “the merits of the study were unfortunately overshadowed by the critics (most of them justified)”.
Chris Thomas, a highly regarded expert on biodiversity loss and species decline at the University of York in the U.K. and one of the authors of a critique of the Sánchez-Bayo and Wyckhuys meta-analysis, was unequivocal in his assessment: “It is a dreadful piece of science,” he told me. “It’s really bad.”
The journal itself was also negligent, he added, for having published the paper in the first place.
But Thomas also laid blame on the press. “I’m pretty cross with journalists,” he said. “I mean, not in an angry sense. But I mean in a dissatisfied sense with journalists who either didn’t inquire more, or did inquire but went with this more exciting-sounding story anyway. And I just don’t think it’s in people’s — in our long-term interests for the rational interaction of humanity with the planet — to behave in that way.” (He did not name names.)
Looking beyond the hype
Of course, some journalists reported the story with more nuance — though some did so sooner than others. Within a week or so of the latest Insectageddon flare-up, the science writer Ed Yong published a well-balanced account in the Atlantic. After reading that, Brian Resnick, a science reporter with Vox, did additional research on the study’s shaky methodology and updated his previously published story, noting the changes at the top of the article. “Corrections and changing things can feel scary,” he said. “But I always feel like as a journalist you can’t pretend you don’t know something.”
But these examples were exceptions in the science press, not the rule — and that’s part of the problem, Saunders said. The story the media often misses is far more complex — and in some ways more dire — than the sensationalist fodder they frequently prefer to peddle. While the studies behind the Insectageddon story don’t provide evidence that all six-legged life on Earth is doomed, Saunders said, they do provide a window on how humans can impact biodiversity more generally.
“That humans are changing the Earth in damaging, often dangerous, ways is undeniable. Forest clearing, pesticide overuse, agricultural intensification, and fossil-fuel production have severe effects on ecosystems, including the smallest of animals,” she wrote in American Scientist. But if we hope to reverse that damage, she told me later, we need to be talking much more about the cavernous gaps in our knowledge about how all insects are responding.
That means journalists need to look beyond the hype and easy narratives to convey the messiness and uncertainty of scientific inquiry. Saunders, who trained as a journalist before returning to school to become a scientist, said she understands that sensationalism grabs people’s attention. But she also noted that by misrepresenting the science, we are gradually eroding the public’s ability to trust scientists at all.
And that has implications not just for bumblebees and lightning bugs (and yes, fireflies are on the decline, researchers think most likely due to light pollution and habitat loss), but for society’s ability to comprehend and rationally respond to a growing battery of civic debates with deep science at their core — from vaccines and GMOs to climate change and artificial intelligence.
“Science can’t be represented as absolute truth and simplified little sound bites,” Saunders told me. “That’s not what science is, and it could never be that.”
Teresa Carr is an award-winning, Texas-based journalist with a background in both science and writing, which makes her curious about how the world works. She is a former Consumer Reports editor and writer, and a 2018 Knight Science Journalism Fellow at MIT. In 2019, she began penning the Matters of Fact column for Undark.
I’ve learned quite a bit about spiders over the years. (And I have never been able to understand the “burn it with fire!” attitude some folks take towards these 8-legged creatures.) For example, it turns out that some spiders actively hunt fish, while others are vegetarian!
Crab spiders are cute little creatures. The family they belong to has around 2100 species worldwide, with 11 or so found here in New Zealand. Unlike most spiders they’re not active hunters and don’t use webs to catch prey; instead, they wait in ambush for dinner to drop by. In the first study, a research team from Singapore investigated two crab spiders that hang out on pitcher plants, to see how their hunting activities might impact on the plant. (In nutrient-poor environments, pitcher plants rely on catching insect prey to obtain the nitrogen that they need for growth.)
The researchers noted that crab spiders live in the pitchers (which are highly modified leaves) of several Nepenthes species. They decided to carry out lab experiments to investigate whether two particular species of spider were a) stealing prey that the plants had already caught (in which case, the spiders would be kleptoparasites), or b) catching – and potentially dropping into the pitcher – prey that the plants might not normally catch. Either way, the plants might still benefit if enough of the spiders’ leftover meals made it into the pitcher to be digested there, providing at least some of their nitrogen to Nepenthes.
The team found that the pitchers were able to catch flies even when no spiders were present, but that having a resident spider increased the overall rate of capture. Both spiders ambushed their prey around the mouth of their host pitchers, but Thomisus nepenthiphilus was a better fly-catcher than Misumenops nepenthicola. I was amused to read that T. nepenthiphilus grabbed its prey, while M. nepenthicola tended to push flies into the pitcher fluids! Presumably it fished them out again afterwards.
Once the spiders had eaten – remember that they feed by sucking liquids from the bodies of their prey – they dropped their leftovers into the pitchers. As you might expect, the corpses were somewhat depleted in nitrogen, but there was still a measurable amount left for the plants; just not as much as if they’d caught the flies directly. The researchers suggested that the apparent loss in total nitrogen availability “can be offset by the increased crab spider-assisted capture rate of flies when environmental prey availability is low.”
That was in the lab. In the wild, crab spiders also take larger prey – bigger flies, moths, wasps, and cockroaches – all of which are occasionally trapped by pitcher plants. Presumably there’d be a greater benefit to the plants when the leftovers of these larger meals are discarded by the spiders. This was investigated further in the second study, which looked at the impact of the crab spider Thomisus nepenthiphilus on the nutrient budget of the pitcher plant Nepenthes gracilis. They found that pitchers where the spider was present contained higher numbers of many prey species, and that the spiders’ feeding reduced the available nutrients in the bodies that they fed upon. However, the size of the prey animals was an important part of the equation.
Because of this, the researchers concluded that “resource conversion mutualisms” are more likely to be a thing where high-quality resources are available, due to the nutrient ‘tax’ levied on the pitcher plants by the spiders as they feed.
You might wonder what the spiders are gaining here, apart from a decent meal. I suspect that one of the benefits for them is protection: because they’re operating in the enclosed pitcher, or under its ‘lid’, they’re less vulnerable to predation than if they were lying in wait out in the open. They might also gain from a larger potential prey population, if insects are attracted to the pitchers by scent or colour cues.
Visit any major museum in Aotearoa New Zealand and you will see a giant moa skeleton on display. The first thing you notice, apart from its enormous size, is the complete lack of wing bones. The answer to how the tūpuna of moa arrived on our shores and subsequently lost their wings has been one of New Zealand’s greatest evolutionary mysteries.
Pin the wings on the moa….what wings? The South Island giant moa skeleton in the entrance of Canterbury Museum that so fascinated my five-year-old palaeontologist.
Moa, and our other national bird, the kiwi, are members of an ancient super group of birds called palaeognaths (derived from the Greek for ‘old jaws’, referring to the primitive-looking roof of their mouth), very different to their evolutionary rivals, the neognaths (new jaw) that include all other birds alive today.
The moa and kiwi, along with the ostrich from Africa, rhea from South America, emu and cassowary from Australia and New Guinea, and the extinct Madagascan elephant bird (which inspired the legend of the roc in Sinbad) are members of an exclusive posse within palaeognaths, called the ratites.
Ratites share some key features. They are large (yes, even kiwi are large), flightless, and lack a keel on the breast bone for supporting flight muscles. Next time you carve your Sunday roast chicken, check out the large keel that supports the breast meat. If you don’t fly, over eons of evolution the keel is eventually lost. While the majority of ratites still have remnant wings, moa have lost all trace of them.
There’s one other player in the mix here: the South American tinamou – small, chicken-sized birds that can fly. Tinamou are palaeognaths and were traditionally thought of as close cousins of ratites, but more on that later.
Like many kiwis my age I grew up thinking ratites were the poster children for Gondwanan vicariance – those animals whose ancestors were passively transported around the Southern Hemisphere as the supercontinent Gondwana broke up. Under this theory kiwi and moa were each other’s closest relatives and became isolated in Aotearoa as it started to split from eastern Gondwana around 80 million years ago.
It wasn’t until I was at university taking George Gibbs’ fantastic ‘New Zealand flora and fauna’ paper that I realised this theory had some serious holes. A decade earlier a seminal ancient DNA study had shown that kiwi and moa were not each other’s closest cousins – rather kiwi were most closely related to the emu and cassowary, and moa were their own unique lineage within ratites.
The little trouble maker: This elegant crested tinamou, one of 47 such species, caused considerable angst for scientists trying to reconstruct the whakapapa of ratites.
As more ancient DNA and morphological (i.e. shape of bones) data came in for moa, our understanding of how ratites evolved became more convoluted, clouded by prior assumptions about the break-up of Gondwana causing the diversification of ratites, and the flying tinamous being their close cousins. Theories abounded, were debated and then discarded as quickly as Australia changes prime ministers.
The implications were game changing. Rather than ratites having a single origin with one loss of flight, and being passively transported across the Southern Hemisphere when Gondwana broke up, their tūpuna could fly and subsequently lost flight independently in the lineages that led to ostrich, rhea, moa, elephant bird, kiwi and emu/cassowary. Hello poster children for dispersal.
The mother of ratites: From humble beginnings Lithornis came to rule Aotearoa as a true giant of the bird world.
It just so happens that scientists have discovered fossils of this flying ancestor, called Lithornis. This mother of ratites had a widespread distribution prior to the extinction of the dinosaurs 65 million years ago. With the demise of these terrible lizards, job vacancies opened up in the ecosystem for large, ground-dwelling giants. Independently, on the different Gondwanan continental fragments, Lithornis filled this vacancy and lost flight. Convergent evolution (think of the similar body plans of sharks, dolphins and extinct ichthyosaurs) resulted in the ‘ratites’ all looking superficially similar. If you are a large, ground-dwelling bird, there are only so many ways you can look, given functional constraints. It’s partly this convergence in the shape of ‘ratite’ bones that was muddying the waters in tracing the whakapapa of these magnificent birds.
The ancestors of moa diverged from those of the South American tinamou around 58 million years ago, having flown to Aotearoa. By 16-19 million years ago, at the Miocene Wonderland of St Bathans, moa were already large and presumably flightless (and maybe even wingless…). In contrast, the ancestors of the kiwi didn’t split from those of the elephant bird until around 50 million years ago, and were potentially still flying around Wonderland tens of millions of years later. The vacancy for a large, ground-dwelling bird had already been filled by moa, so kiwi stayed relatively small so to speak (a similar pattern occurred with tinamous compared to rhea), and became nocturnal – traits that no doubt helped the kiwi avoid extinction when Polynesians, and later Europeans, arrived in New Zealand.
This all leads to one big evolutionary mystery. How did these flying ancestors lose flight? We probably know why they lost flight. Flying is energetically expensive. If you don’t have to fly, why bother. In New Zealand, the absence of mammalian predators is certainly a good reason. But we don’t know the how.
The recent publication of the partial moa genome, one of the holy grails of ancient DNA and a game changer, only added to the mystery. All the genes coding for wing development and flight were entirely functional. Put those genes in another bird, and it would still develop a perfectly functional pair of wings. We are not dealing with a simple case of gene loss or loss of function in the same suite of genes (i.e. convergent evolution). So what gives?
New international research, with the full support of Ngāi Tahu and Te Āti Awa, involving fellow kiwis Paul Gardner and Nicole Wheeler, and the late Alan Baker, may just have the answer.
By comparing the genomes of most ‘ratites’, including moa, with that of the tinamou, the researchers identified the culprit: mutations in the same suite of non-coding regulatory elements in the genomes. These are genetic regions that do not code for proteins, instead controlling the levels at which proteins are produced. Not only that, these regulatory elements are closely associated with the developmental pathways that enable flight, like wing development.
Think of it this way. Genes are the blueprints for building planes in a robotics factory. Regulatory elements are the computer code that controls the robots and tells them what to do. If the code functions as it should, aerodynamic planes capable of flight will be produced. But if that code is corrupted or a mistake (i.e. a mutation) is introduced, flightless planes will be built. In multiple independently owned and operated factories, mistakes start occurring in the same bits of code, and pretty soon, all planes are flightless. In the case of moa, the mistake-ridden code stopped producing wings altogether.
How to make a flightless bird: Step one – corrupt the software. Artwork by Lily Lu.
Sackton and colleagues have gone a long way to answering how these poster birds for evolution became flightless, and why the moa lost its wings. More ‘ratite’ genomes, especially additional moa and the extinct Madagascan elephant bird (one day…fingers crossed) would go a long way to finally resolving this debate.
In a blow to the de-extinction hopes of Trevor Mallard, having the genome of a moa doesn’t mean you can successfully bring it back from the dead, even if you are the Night King – there’s a whole world of regulation that needs to be recreated as well. No wonder the Army of the Dead can’t walk properly. Bringing back the moa, that’s a tough call.
I’m standing in front of the giant moa skeleton that’s looking down at us in Canterbury Museum with my five year old. He’s fascinated by the skeleton of one of our avian icons and excitedly proclaims it has no wings. I recount this stunning tale to my budding palaeontologist, and wonder what new secrets, hidden in the mists of time, I’ll be able to tell him next time we’re in front of this majestic bird.
Elephants can be an important futures symbol. The “elephant in the room” – also, unnecessarily pigmented, called a “black elephant” by some futurists – is the well known large and obvious issue that people refuse to address.
Then there is the parable of the “blind men and the elephant”, which illustrates the misconceptions that result if we only focus on one part of a situation. A useful reminder when considering how narrow or broad you want to be when doing environmental scans or crafting scenarios.
The “elephant and the rider” analogy can be used to help decision-making. It portrays the elephant as our emotional side and the rider as our rational side. Both need to work together to bring about change, a critical factor to remember for any strategy.
I’ve found a new one to add to the pachyderm parade. I’m calling it “the elephants outside the room.”
Elephants outside the room
The artist Uli Westphal collated images of the elephant, sight unseen, in post-Roman times to depict an evolution of form in the absence of real knowledge. A game of cryptozoological whispers. Some commonalities emerge – long trunk, biggish ears, often tusks, and their use in warfare. But there are divergences – size, form, and resemblances to other animals (mythical and real).
Selection of elephant images from the middle ages (900-1500 AD). Source: Uli Westphal
This is illustrative of thinking about the future in three ways. Firstly, we have no real knowledge of the future. That elephant isn’t in the room. We have signs and suppositions of what it could look like, or what we want it to look like, and attempt to infer it from more familiar situations (like the blind men and the elephant). Some inferences are better than others. Sometimes it will be obvious, but not always.
Secondly, different people (or groups) will use the same base characteristics or information (long proboscis, tusks, large size; ageing population, automation, systemic economic trends) and create variations on a theme. They may have common features, but the emphasis and form will differ.
Lastly, we can try too hard to get it “right” – to have an accurate description of the future. Westphal’s compendium for me is interesting not because of how fantastically wrong most of the images are, but because of the variety.
A futures thinking approach is about exploring possibility spaces. Having a diversity of imagined futures, including ones that challenge current perceptions and expectations, is essential. More imaginative, divergent futures lets thinking take a short walk on the wild side, opening up possibilities and helping explore second, third, and higher order effects of change.
But too often I read futures reports, predictions and scenarios that are largely the same, simplistic and bland. Ponderous. They don’t surprise, or delight, or upset. Which connects back to the elephant and rider analogy that emphasises the importance of emotions as well as rationality.
In some cases we will want a non-fantastical future that we can work towards creating. But we shouldn’t try to get to these too quickly or linearly. We can’t describe the future precisely, so we shouldn’t overly constrain the possible. If the future is a metaphorical elephant it may be more astonishing than we think it is.
Featured image: detail from a folio by Guillaume le Clerc. Sourced from Wikimedia.
It’s now scientifically possible to predict potential asteroid impacts years in advance. But knowing that such a calamitous event is going to occur, due to the clockwork of the heavens, presents its own problems. Can we divert it, and if so, how? Similarly, if the impact is inevitable, can we model what is going to happen far ahead of time, and so plan better for this rude intrusion into global affairs?
In my preceding blog post I described the Planetary Defense [sic] Conference (PDC) that I was attending at the University of Maryland: a biennial meeting about the hazard posed to humankind by asteroids and comets, which we know strike the Earth from time to time with calamitous consequences. Just ask the dinosaurs.
Smaller objects than the 10 km leviathan that saw off the ‘terrible lizards’ and heralded the rise of the mammals (and eventually us) slam into our planet much more often. As part of the PDC an exercise is run in which, day by day over the Monday-to-Friday meeting, a fictitious scenario is introduced and then updated on the basis of what we could do in terms of investigating the threatening object, including dealing with it in any viable way. Attendees include not only astronomers and space scientists like myself, but also a range of experts who would be involved, such as space lawyers (yes, they exist), representatives of international bodies such as the United Nations (most especially staff from the Office for Outer Space Affairs, which is based in Vienna), and disaster responders and planners from institutions such as the U.S. Federal Emergency Management Agency (FEMA). There were about 300 there in all, with just two nations in the southern hemisphere (Uruguay and New Zealand) being represented.
This exercise occupies about one-third of our time across the week (the PDC finished last Friday), with the larger part being involved in the presentation of talks and papers (including many posters) on specific aspects of the science, technology and other matters connected with the NEO (near-Earth object) impact hazard, and what it means for the future of humanity. As the t-shirts say, “Asteroids are nature’s way of asking… How’s that space programme coming along?” Sooner or later we will need to intercede, if we are not to go the way of the dinosaurs – or the many other species that have been slammed into extinction by past asteroid and comet impacts.
This year’s exercise – repeat, exercise – involved a relatively small asteroid being identified earlier in 2019, and on a course to run into our planet at hypervelocity in another three of its orbits around the Sun, in 2027. Its orbital parameters were craftily chosen by Paul Chodas of NASA’s Jet Propulsion Laboratory (JPL) to pose many problems for us to chew over, not the least of which was the fact that the circumstances (a small asteroid moving mainly far from Earth and observable even with very large telescopes during only a few intervals before the impact might occur) made it difficult to say anything definitive until very late in the piece.
To say it again, this asteroid is fictitious, and this was all merely an exercise; but the scenario complies with reality in that just such a thing could occur, and the science and technology involved are all ‘correct’.
We start on PDC Day 1 (29 April 2019) with the announcement to the public that an asteroid discovered by a team in Hawaii on 26 March has been found to have an estimated one in 100 chance of hitting Earth precisely eight years later, on 29 April 2027. Our best guess at its size, based on its brightness and an assumed albedo (reflectivity) range, is 100 to 300 metres. This is large enough to cause regional- to continental-scale devastation: the energy released on impacting Earth would be 100 to 2,000 megatonnes (cf. the Hiroshima atomic bomb released about 13 kilotonnes; the largest hydrogen bomb ever tested had a yield of about 60 megatonnes).
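That 100-to-2,000 megatonne band is essentially just kinetic energy, ½mv², with the mass set by the assumed diameter and density. As a rough illustrative check (a sketch only: the spherical shape is an assumption, and the ~3 g/cm³ stony density and ~69,000 km/h encounter speed are figures that appear later in the exercise):

```python
import math

def impact_energy_mt(diameter_m, density_kg_m3=3000.0, speed_m_s=69_000 / 3.6):
    """Kinetic energy of a spherical impactor, in megatonnes of TNT.

    Density and speed default to the stony (3 g/cm^3) and ~69,000 km/h
    figures quoted later in the exercise; treat both as assumptions.
    """
    radius = diameter_m / 2.0
    volume = (4.0 / 3.0) * math.pi * radius**3   # sphere volume, m^3
    mass = density_kg_m3 * volume                # kg
    energy_j = 0.5 * mass * speed_m_s**2         # joules
    return energy_j / 4.184e15                   # 1 Mt TNT = 4.184e15 J

# Mass scales with diameter cubed, so a 300 m body carries 27 times the
# energy of a 100 m one -- which is why the early size uncertainty alone
# spreads the estimate across more than an order of magnitude.
for d in (100, 300):
    print(f"{d} m -> ~{impact_energy_mt(d):,.0f} Mt")
```

With these assumed inputs the sketch lands close to (slightly below) the quoted band; nudging the density, speed, or albedo-derived size easily covers the rest of it.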
With the reasonably-precise orbit available, astronomers realise that this asteroid will only be observable during a handful of intervals over the next eight years, and would need very large telescopes to do so: eight-metre apertures or more. Such behemoths are located in Chile, Hawaii, and the Canary Islands. The Hubble Space Telescope could also be used; but it will not remain in orbit for many more years. It is hoped that the new James Webb Space Telescope will be launched within a few years, and so could be pressed into service at some stage to track the asteroid, which is very faint and receding from Earth.
Most often when an asteroid is discovered and found to have a finite likelihood of colliding with our planet within the next few decades, additional positional information gained by observers enables us to say for sure that it will miss. That is, an initial value of one per cent (as here) as the collision probability for some known close approach to Earth means that 99 times out of a hundred it will miss, and the accumulation of data over weeks, months or years enables us to exclude it as a near-term risk. In my previous post I wrote about asteroid (99942) Apophis, which will pass very close by Earth in 2029 – but we do know that it will miss us.
In the case of this fictitious asteroid, however, as more observations are obtained the derived collision probability goes up. By 29 July 2019 such observations result in an upgrading of the formal collision probability estimate to one in ten: a 10 per cent chance that it’s going to strike our home, and cause havoc.
A note on how the probabilities are derived. Despite there being perhaps hundreds or thousands of positional measurements of an asteroid spread in time, there is still a set of finite uncertainties in its orbital parameters and this results in an ‘error ellipse’ that may be drawn around the location of the Earth at the time that the asteroid is due to come by us. It may seem paradoxical that we can say very accurately when this approach will occur, and yet we are unable to say for sure whether it will hit or miss. But that’s the way it is.
In the diagram below the largest ellipse is the initial one, and the point is that the collision cross-section of the Earth (its geometrical cross-section enhanced due to gravitational focussing being able to ‘suck in’ a passing asteroid which would otherwise miss) is about one per cent of the area of that initial, large ellipse. With more tracking data the ellipse reduces in size (‘later, more accurate prediction’) and now the green dot representing Earth is supposedly about ten per cent of that reduced ellipse’s area. With yet more data the ellipse collapses even further (‘Still more accurate prediction’) and the green circle of Earth is no longer within it: we now know that the asteroid will miss the Earth, just as Apophis will in ten years’ time.
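This hit-or-miss bookkeeping maps directly onto a Monte Carlo estimate: sample the asteroid’s arrival point in the target plane from the positional uncertainty, and count the fraction of samples that fall within Earth’s collision cross-section. A toy sketch – the ellipse sizes and the 8,500 km ‘capture’ radius here are made-up illustrative numbers, not values from the exercise:

```python
import math
import random

# Earth's geometric radius enlarged by gravitational focusing (illustrative value)
EARTH_CAPTURE_RADIUS_KM = 8_500.0

def impact_probability(center_km, sigma_km, n=200_000, seed=1):
    """Fraction of Monte Carlo arrival points inside Earth's capture disc.

    center_km: (x, y) offset of the error ellipse's centre from Earth
               in the target plane.
    sigma_km:  (sx, sy) Gaussian uncertainties along the ellipse axes.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(center_km[0], sigma_km[0])
        y = rng.gauss(center_km[1], sigma_km[1])
        if math.hypot(x, y) < EARTH_CAPTURE_RADIUS_KM:
            hits += 1
    return hits / n

# As tracking data accumulate the ellipse shrinks; if Earth stays inside
# it, the derived probability climbs -- exactly what happens in the scenario.
early = impact_probability(center_km=(0, 0), sigma_km=(85_000, 42_500))  # ~1%
later = impact_probability(center_km=(0, 0), sigma_km=(27_000, 13_500))  # ~10%
```

The third stage in the diagram corresponds to the ellipse shrinking until the green dot falls outside it, at which point the same calculation returns zero.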
At the end of July 2019, then, just four months post-discovery, we know that there is a worrying one-in-ten chance of major cosmic fireworks in 2027. From the anxiety perspective, astronomers also know that more data collection enabling a definite yes or no – hit or miss – will not be feasible until very late in 2020. It would really be helpful if some radar data were possible (a rule of thumb in the field is that one radar detection can be worth several years of optical telescope tracking, in terms of improving our asteroidal orbit determinations), but currently the only functional planetary radars are located in the northern hemisphere, at Arecibo in Puerto Rico and at Goldstone in California, and neither can access the problematic asteroid.
The reality of the situation with regard to asteroid paths which we know are going to come close by the Earth and possibly hit us is that our uncertainties reduce until there is a narrow line which we know will contain the asteroid as it reaches the Earth’s orbit… the problem is that as yet we don’t know where along that line the asteroid will be. In the PDC exercise the red line in this diagram shows the part closest to Earth. As we get better knowledge of the asteroid’s orbit, either the remaining viable part of the line is off the planet (i.e. it will miss), or else the line shortens and eventually is entirely on the Earth (i.e. it’s going to hit).
A side-view illustrating the same thing, for the instant on 29th April 2027 when the fictitious asteroid will either strike the Earth, or else pass safely by. The red line is composed of discrete dots because it is drawn up using a Monte Carlo simulation with the statistical nature of the uncertainties being incorporated.
The ‘risk corridor’ – the line along which the impact might occur – initially stretches from near Hawaii, across the contiguous U.S. and the Atlantic, and then various nations in west Africa.
Whilst the risk corridor is narrow, the blast that will occur will spread the damage transversely, so that a large number of people are in severe danger.
For the identified risk corridor as at 29th July 2019 (when the impact probability was still 10 per cent) it is feasible to estimate how many people would be affected by an impact at any particular point along that corridor.
The U.S. is by far the most-prepared nation with regard to the possibility of an asteroid or comet impact anytime soon.
An aside. I did not know in advance that the date chosen for the fictitious asteroid impact was to be 29th April in 2027, but I did know that the first day of the PDC on 29th April coincided with a peculiarity in Scandinavia. Once-upon-a-time I worked for the European Space Agency, and was living in Sweden. In that country each day has a name associated with it, such that people with that appellation get to celebrate what might be thought of as being a second birthday. And April 29th is Tycho (or Tyko) day. Now, one of the most prominent (and youthful) impact craters on the Moon is called Tycho, for Tycho Brahe, the Danish astronomer who in the late sixteenth century made a vast number of accurate measurements of the positions of stars and planets. After Tycho moved to Prague his catalogue of observational data was used by Johannes Kepler in deriving his eponymous laws of planetary motion.
Tycho’s observatory was on the island of Ven (or Hven), in the strait between Denmark and Sweden. The southernmost province of modern Sweden (Skåne or Scania) was then, four centuries ago, part of Denmark. Anyhow, there is another thing about how particular dates are regarded in Scandinavia… Tycho Brahe days (there are about three dozen of them spread over the year, and, yes, 29th April is one of them) are considered to be unlucky.
So, the PDC starting on 29th April with a chosen date for a fictitious asteroid impact being that day eight years hence would seem to be felicitous. Or not, depending on your attitude to superstitions, and catastrophic asteroid arrivals.
It takes time – years, generally – to design, build and prepare a satellite for launch, especially for a deep space mission. With a 10 per cent chance of an impending asteroid impact, do you build and launch (in a rapid fashion) reconnaissance space probes to be sent to visit the threatening object and collect information? We need to know our enemy, and soon. Such matters were debated and decided on days 2 and 3 of the PDC. By Presidential decree the U.S. space agency NASA is given two billion dollars for an emergency spacecraft programme, and US$500 million for ground-based tracking and other observations. Various satellites already in Earth orbit are used to study the asteroid; for example thermal infra-red data can narrow down our estimate of its size.
As the end of 2020 approaches, it is hoped that, with more positional measurements and a longer time-base, the shrinking uncertainty in the asteroid’s orbit will prove that it will miss in 2027. The opposite occurs. It is now (at the end of 2020) certain that the asteroid will hit Earth, at a location near Denver on 29 April 2027. (Now it’s getting personal: for some years I lived in Colorado, working on NASA’s Pioneer Venus programme.)
Stepping forward to 30 December 2021, the first reconnaissance spacecraft flies by the asteroid. Images show it to be about 260 metres long and 140 metres wide, apparently a contact binary (we already know of several two-lobed asteroids like that). Its colours indicate it to be an S-type (stony; but it’s a bit more complicated than that). The impact energy estimate is now between about 150 and 600 megatonnes, uncertain because we don’t know the asteroid’s mass. It could be a solid rock, in which case its density might be around 3 grams per cubic centimetre, or it could be a ‘rubble pile’: an agglomeration of boulders held together by self-gravity, with many voids within it. Which it is will make a huge difference to what happens as it plunges into our atmosphere at about 69,000 kph.
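The arithmetic behind that energy range is simple enough to sketch. The speed and the two density cases come from the scenario above; treating the asteroid as a sphere of roughly 200 metres effective diameter is my own simplifying assumption, not a scenario figure:

```python
# Rough impact-energy estimate for the PDC exercise asteroid.
# The ~200 m spherical-equivalent diameter is an assumption made here
# for illustration; speed and densities are from the scenario text.
import math

MEGATONNE_J = 4.184e15           # joules per megatonne of TNT
SPEED_MS = 69_000 / 3.6          # ~69,000 kph -> m/s (about 19.2 km/s)

def impact_energy_mt(diameter_m: float, density_kg_m3: float) -> float:
    """Kinetic energy (in megatonnes of TNT) of a spherical asteroid."""
    radius = diameter_m / 2
    volume = (4 / 3) * math.pi * radius**3
    mass = volume * density_kg_m3
    return 0.5 * mass * SPEED_MS**2 / MEGATONNE_J

# Solid rock (~3 g/cc) versus rubble pile (~1.5 g/cc):
for label, density in [("solid rock", 3000), ("rubble pile", 1500)]:
    print(f"{label}: {impact_energy_mt(200, density):.0f} Mt")
```

With those assumptions the solid-rock case comes out near the top of the quoted 150–600 megatonne range and the rubble-pile case around half that; the lower end of the range reflects the size uncertainty as well.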
As more observations of the asteroid accumulate, by the end of 2021 it is known that the impact will be near Denver, Colorado.
Given definitive knowledge of the impact location, it is feasible to map the most-severely affected areas. The above pertains only to ground impact damage; the intensity of the radiation from the blast would be expected to start forest wildfires over a far larger region in the Rockies, for example.
There are just five days in the PDC, so we need to step ahead at pace. Various international agreements prohibit the deployment of nuclear weapons in space. Likely most of the people who understand the asteroid problem would prefer, in a real-life situation similar to this exercise, that stand-off nuclear explosions would be used to deflect or disrupt a threatening asteroid, but in this scenario we exclude such tactics. This means that the only means at our disposal is a kinetic impactor: slam a high-mass satellite as fast as you can into the asteroid, and try to knock it off course.
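To get a feel for why a kinetic impactor can work at all, here is a back-of-envelope sketch. The asteroid mass is taken to match the solid-rock estimate earlier in the scenario; the impactor mass, closing speed and momentum-enhancement factor (beta) are my own illustrative assumptions, and the factor-of-three growth of along-track displacement is a standard rule of thumb for a near-circular heliocentric orbit, not a scenario number:

```python
# Back-of-envelope kinetic-impactor deflection (hypothetical numbers).
EARTH_RADIUS_KM = 6371

def deflection_km(ast_mass_kg: float, imp_mass_kg: float,
                  closing_speed_ms: float, beta: float,
                  lead_time_s: float) -> float:
    """Approximate along-track miss distance (km) from one kinetic impact.

    delta-v = beta * m * u / M, where beta > 1 accounts for ejecta
    thrown off the crater; the along-track displacement then grows
    roughly as 3 * delta-v * t for a near-circular orbit.
    """
    dv = beta * imp_mass_kg * closing_speed_ms / ast_mass_kg
    return 3 * dv * lead_time_s / 1000

# ~1.3e10 kg asteroid, a 10-tonne impactor at 10 km/s, beta = 2,
# ~2.7 years between the 2024 impacts and the April 2027 encounter:
lead_time = 2.7 * 365.25 * 86400
miss = deflection_km(1.3e10, 10_000, 10_000, 2, lead_time)
print(f"miss distance ~ {miss:.0f} km ({miss / EARTH_RADIUS_KM:.1f} Earth radii)")
```

Under these assumptions a single impactor shifts the arrival point by only a few thousand kilometres, less than one Earth radius; which is a hint as to why, in the exercise, a whole fleet of interceptors is launched rather than just one.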
The preference is for a straightforward deflection: the asteroid remains intact but is diverted by enough to achieve a miss of our planet. It is known, though, that a weak asteroid might be disrupted. If it were indeed a rubble pile then shattering it into myriad lumps would be a useful outcome, in that although most or all would still hit the Earth they would be spread out, and smaller rocks explode higher up in the atmosphere. As it is, modelling of the entry physics of a 200-metre solid stony asteroid at the speed in question indicates that the asteroid would release most of its energy in an airburst at an altitude of about 1,500 metres (5,000 feet) above Denver. This is not a good thing. Such an airburst would cause more widespread damage than a projectile reaching the ground intact and then exploding, excavating a crater about 3 km across.
In the event (or, at least, in this fictional exercise) the asteroid is hit by three intercept spacecraft at the end of August 2024, and the first days of September. Six such spacecraft were launched by NASA, the European Space Agency, and other nations participating in a global effort to push the asteroid off-course, but half of them fail to make it.
On 3rd September 2024 it is announced that the three interceptors which hit the asteroid have successfully deflected the main mass, but split off the smaller lobe which is still on a collision course with Earth. The impact location is not yet defined. This fragment is estimated to be 50-80 metres in size. The last time something of this size hit Earth was in 1908, when the Tunguska event occurred in Siberia, laying waste to thousands of square kilometres of the taiga. No scientific expedition reached the region until 1927, so much is still unknown (or unknowable), but estimates of the Tunguska explosion’s energy range between 3 and 15 megatonnes. Apparently it was either a small asteroid around 50 metres in size, or perhaps a slightly-larger fragment of a comet.
Three kinetic impactors ‘attack’ the asteroid near the end of August 2024. These succeed in deflecting the main portion of the asteroid such that it will miss the Earth in April 2027, but a 50-80 metre fragment is knocked off and continues on a path intercepting our planet on 29th April 2027.
Following the kinetic interceptors hitting the asteroid in 2024, a fragment is calved off which, it is found, will hit somewhere along the red risk corridor shown above.
The number of people directly affected by the asteroid fragment varies depending on precisely where it arrives, and this is..