
Prof Tony Blakely, Dr Andrea Teng, Prof Nick Wilson

Policy-makers need to know how much of ethnic inequalities in health are due to socioeconomic position and tobacco smoking, but quantifying this is surprisingly difficult. In this Blog, and accompanying video, we summarize new research using NZ’s linked census-mortality data, blended with innovative new ‘counterfactual’ methods to determine causal relationships that can shed light on policy-relevant questions. 

A half or more of Māori:European/Other inequalities in mortality are due to four socioeconomic factors (education, labour force status, income and deprivation), and this percentage is stable over time for males but increasing for females. Eradicating tobacco would not only reduce mortality rates for all sociodemographic groups, but also cut absolute inequalities in mortality between Māori and European/Other by a quarter. It is hard to think of another intervention that would reduce inequalities by as much.

“How much of ethnic inequalities in health are due to socioeconomic position?” And “How much are these inequalities due to tobacco smoking?” are important questions for policy-makers.

The problem is these questions are surprisingly hard to answer. Why? Because of what would generally be termed “correlation is not causation” and “it is challenging to decipher causal pathways from data we observe only once, i.e. history as it unfolded”.

However, these questions are so important – and related questions about what causes inequalities more generally, and even more generally questions of “what mediates the causal association of exposure X with outcome Y?” – that a massive methodological push has been made internationally to generate better and better methods to answer these questions.

In this Blog we summarize our open access research just published in the journal Epidemiology [1]. There is also an accompanying video of Tony Blakely presenting the findings in a seminar.

What did we do?

We used linked NZ census-mortality data, covering three decades.  This is actually ground-breaking – we are not aware of any other study that has looked at mediation of ethnic inequalities on similar data over three decades for an entire country.

We examined Māori compared to European/Other mortality inequalities (differences in mortality rates) – the data were too sparse for Pacific and Asian peoples. For socioeconomic mediators, we used labour force status, education, deprivation (NZDep) and household income, as measured at each of the 1981, 1996 and 2006 censuses at the start of the 3-to-5-year mortality follow-up. For smoking, we used the simple current, ex- and never-smoker categorization at these same censuses.

We then used counterfactual methods – a.k.a. potential outcomes. (For those of you curious about the methods, there is a brief introduction at the end of this blog – as per this journal article.)
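For readers who want to see the logic in miniature, here is a hedged sketch – our own toy simulation, not the study's code – of how a 'percentage mediated' falls out of potential outcomes when the exposure shifts the mediator's distribution and both affect mortality:

```python
# A minimal sketch (ours, not the paper's code) of 'natural effects'
# mediation with potential outcomes. Everything here is simulated and
# the parameter values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def mean_risk(x, m_from):
    """Mean mortality risk with exposure set to x, and the mediator drawn
    from the distribution it would have under exposure level m_from."""
    mediator = rng.binomial(1, 0.3 + 0.3 * m_from, n)  # assumed model
    return np.mean(0.02 + 0.02 * x + 0.03 * mediator)  # assumed risks

total    = mean_risk(1, 1) - mean_risk(0, 0)  # total effect
direct   = mean_risk(1, 0) - mean_risk(0, 0)  # natural direct effect
indirect = total - direct                     # part running through SEP
print(f"percent mediated ≈ {100 * indirect / total:.0f}%")  # ≈ 31% here
```

The 'natural effects' trick is in the second argument: exposure is held at one level while the mediator is drawn from the distribution it would have had under the other level.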

What did we find?

We looked at three research questions.

Question 1: How much of ethnic inequalities in mortality are mediated by socioeconomic position?

Interestingly, for males it was stable at about 46% in each of the 1981-84, 1996-99 and 2006-11 cohorts. But for females, it steadily increased from 30% (95% confidence interval: 18% to 43%) in 1981-84 to 42% (36% to 48%) in the 2006-11 cohort. (Table below.)

Table: Percentage mediation (95% confidence intervals) of Māori:European/Other mortality inequalities

|  | 1981-84 | 1996-99 | 2006-11 |
| --- | --- | --- | --- |
| Males |  |  |  |
| % Mediated (SEP) | 46.4 (31.4, 62.3) | 46.6 (39.6, 53.6) | 45.6 (40.3, 50.7) |
| % Mediated (SEP + Tob) | 47.1 (31.6, 63.2) | 47.8 (40.7, 54.6) | 47.6 (42.4, 52.8) |
| Mediation change (SEP to SEP + Tob) | 0.8 (-2.4, 4.0) | 1.2 (0.4, 2.8) | 2.0 (1.2, 2.8) |
| Females |  |  |  |
| % Mediated (SEP) | 30.4 (18.1, 42.7) | 37.1 (29.4, 43.8) | 41.9 (36.0, 48.0) |
| % Mediated (SEP + Tob) | 33.5 (20.6, 46.3) | 40.8 (33.3, 47.9) | 49.7 (43.4, 55.7) |
| Mediation change (SEP to SEP + Tob) | 3.0 (1.7, 7.4) | 3.7 (1.5, 5.8) | 7.7 (5.5, 10.0) |

SEP = socioeconomic position, including education, labour force status, household income and small area deprivation (NZDep).  Tob = tobacco smoking

Interpretation: The unequal distribution of socioeconomic position between Māori and European/Other, due to many reasons from the legacy of colonisation to current day discrimination in the workforce, explains a stable percentage of male inequalities in mortality over time.  Further, this ‘percentage mediation’ will be greater than 46%  – perhaps two-thirds or so – as even though our linked census-mortality studies are high quality and powerful, we still only measure income (approximately) in the year before census night (not over the lifecourse), an approximate measure of education (not quality), and so on.  Better measurement of individuals’ socioeconomic factors (e.g. with lifelong cohort studies) would almost certainly see more mortality inequalities explained.  Conversely, it also seems unlikely that better and better measurement of socioeconomic factors will explain all of the ethnic inequalities; there will be pathways from ethnicity to health that do not go through socioeconomic position (e.g. direct discrimination, tobacco consumption related to ethnicity – see next research question).

The increasing role of socioeconomic position as an explanation for excess mortality in Māori females makes theoretical sense, given the increasing workforce participation of females over time and the increased salience of a woman's own socioeconomic position (compared to, say, a male partner's socioeconomic position).

Question 2: What is the incremental increase in mediation when including smoking over and above socioeconomic position, and does this change over time?

As hinted above, we expect smoking to be a major contributor to ethnic inequalities in mortality – both because smoking is determined by socioeconomic position (and hence this contribution will be captured under research question 1 above) and because smoking is patterned by ethnicity for reasons other than socioeconomic position (e.g. Māori females have smoked at high rates ever since first contact with European explorers, whalers, sealers and then settlers). This research question addresses the latter component. We found that including smoking in addition to socioeconomic factors only modestly increased the percentage mediated for males, but increased it more substantially for females – for example, by 8% (95% CI: 6% to 10%) in 2006-11.

Interpretation: Tobacco smoking is on pathways from ethnicity to mortality inequalities that do not include socioeconomic position for females – which makes sense given what we know about particularly high tobacco smoking rates for Māori females.

Question 3: If, counter-to-fact, NZ had been tobacco free, how much less would current ethnic inequalities in mortality be?

This is a different question and analysis to the above two so-called 'natural' effects analyses, which aim to understand the way the world is. In this research question we model everyone as never smokers. The counterfactual is of Captain Cook (and all subsequent settlers) having never brought tobacco to Aotearoa NZ. In this question and analysis we do not manipulate people's socioeconomic position. Rather, we leave that untouched and counterfactually flip everyone to the mortality risk we would expect had they been never smokers (a so-called 'potential outcome'). While counterfactual, this is arguably a more relevant policy question, as it starts to speak to what would happen with dramatic reductions in tobacco smoking – consistent with the 2025 tobacco-free goal for NZ.
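To make the mechanics concrete, here is a minimal sketch – ours, with made-up prevalences and relative risks, not the study's numbers – of how flipping everyone to never-smoker risk changes the absolute mortality gap:

```python
import numpy as np

# Assumed illustrative inputs only: never-smoker annual mortality risk,
# relative risks by smoking status, and smoking prevalence by group.
base = {"maori": 0.006, "euro": 0.003}          # never-smoker risk
rr = np.array([1.0, 1.3, 2.0])                  # never, ex, current
prev = {"maori": np.array([0.40, 0.20, 0.40]),  # never/ex/current shares
        "euro":  np.array([0.55, 0.25, 0.20])}

def mean_risk(group, tobacco_free=False):
    # Under the tobacco-free counterfactual everyone gets never-smoker
    # risk; socioeconomic position (folded into `base`) is left untouched.
    r = np.ones(3) if tobacco_free else rr
    return base[group] * np.sum(prev[group] * r)

gap_observed = mean_risk("maori") - mean_risk("euro")
gap_tobacco_free = mean_risk("maori", True) - mean_risk("euro", True)
print(f"absolute gap shrinks by {100 * (1 - gap_tobacco_free / gap_observed):.0f}%")
```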

Here, everyone – Māori and European/Other, males and females – enjoys profound reductions in mortality risk. The figure below is for 2006-11. Here is the good news – the absolute gap in mortality risk between Māori and European/Other reduces by 15% for males and 24% for females. It is hard to think of another intervention that would be so inequality-reducing. (But there is some not-so-good news too. On a relative risk scale, the gaps hardly change at all, as Māori and European/Other mortality risks reduce by about the same percentage. Ideally, we want both absolute and relative inequalities to reduce, but that seldom happens – see another study we have published on this.)

Figure: Annual mortality risks for 25-74 year olds in 2006-11, by sex, as observed (darker blue and red) and counterfactually had there never been tobacco in NZ (lighter blue and red) 

If you want to know more about the methods….

… see the study we published in the journal Epidemiology [1].  We have made it open access, so it is easy to get.

On the methods side, it is at the leading edge of potential-outcomes approaches – but there could be yet further improvements. A commentary in the same issue of Epidemiology by John Jackson speaks to the use of 'randomized intervention analogues'. These take a bit to get your head around, but they are flexible and clever, emulating randomized trials of shifting mediator distributions. These methods are being developed rapidly. Should policy-makers and researchers care? Probably. We believe these methods get us closer and closer to answering the questions that you want answered – as outlined at the beginning of this Blog. Watch this space.

For a YouTube clip of Tony Blakely presenting these results and methods, assisted by Kermit and Miss Piggy impersonations, view here.

This blog illustrates just one example of what can be done with NZ’s great data. There is ample and exciting potential to apply new causal inference methods to answer policy relevant questions – an issue we will take up in a forthcoming blog and accompanying video.

References

  1. Blakely T, Disney G, Valeri L, Atkinson J, Teng A, Wilson N, Gurrin L: Socio-economic and tobacco mediation of ethnic inequalities in mortality over time: Repeated census-mortality cohort studies, 1981 to 2011. Epidemiology 2018, 29:506-516.
  2. Blakely T, Cobiac LJ, Cleghorn CL, Pearson AL, van der Deen FS, Kvizhinadze G, Nghiem N, McLeod M, Wilson N: Health, health inequality, and cost impacts of annual increases in tobacco tax: Multistate life table modeling in New Zealand. PLoS Med 2015, 12(7):e1001856. [Correction at: http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002211].
  3. Cleghorn CL, Blakely T, Kvizhinadze G, van der Deen FS, Nghiem N, Cobiac LJ, Wilson N: Impact of increasing tobacco taxes on working-age adults: short-term health gain, health equity and cost savings. Tob Control 2017.
  4. Nghiem N, Cleghorn CL, Leung W, Nair N, van der Deen FS, Blakely T, Wilson N: A national quitline service and its promotion in the mass media: modelling the health gain, health equity and cost-utility. Tob Control 2017, (E-publication 24 July).
  5. van der Deen FS, Wilson N, Cleghorn CL, Kvizhinadze G, Cobiac LJ, Nghiem N, Blakely T: Impact of five tobacco endgame strategies on future smoking prevalence, population health and health system costs: two modelling studies to inform the tobacco endgame. Tob Control 2017, (E-publication 24 June).

The post How much of Māori:European mortality inequalities are due to socioeconomic position and tobacco? appeared first on Sciblogs.


Peter Nelson is remembered as a founder of modern, 21st-century pest management in New Zealand. In 1993, reflecting on our nation's failure to eradicate Australian brushtail possums, rabbits and deer, he wrote:

“The aim of Integrated Pest Management (IPM) is to reduce pest damage to tolerable levels by using a variety of techniques, and ultimately also to reduce pest populations to tolerable levels. This is very different from the traditional idea of eradication as practiced by many of the old Pest Destruction Boards. Their goal proved to be impossible, especially with rabbits; hence, the change of direction in the management of rabbits in the early 1970s from eradication to control” [1].

Integrated Pest Management

IPM is deeply informed by the ecological and social sciences and has become routine in horticulture and agriculture internationally because it works. It is an ecologically, economically and socially pragmatic and efficient response to pest animal damage that delivers results. IPM strategies are applicable to small mammal pests too, like rodents and brushtail possums (examples hyperlinked for each).

Pest management at the national scale is a complex task because pests and pest damage (e.g., predation, browsing, infection) and the values being protected (e.g., biodiversity, crops, stock) vary across the landscape and over time. Thus, success at pest management requires different strategies in different places, at different scales, in different communities of people, and at different times. National-scale eradication attempts ignore this socio-economic and socio-ecological complexity and that is why they fail. IPM, instead, adapts itself to social, spatial and temporal variation to develop a strategy for success that can be dynamic as circumstances change.

Regional councils and ‘good neighbour’ cooperation

The complexity of pest management challenges is why Peter Nelson was also enthusiastic about regional councils taking over responsibility for them in 1989 from the old Pest Destruction Boards. He wrote:

“They [Regional Councils] can deliver effective, efficient, and sustainable management of all pests that affect or are likely to affect agricultural production, for as long as they accept responsibility. All they need now is to develop an integrated pest management policy accepted and operated by all Regional Councils” [1].

Later in his article, Peter wrote further about coordination:

“The “good neighbour” policy agreed between the old Forest Service and the then Pest Boards, whereby each agreed to protect the other’s work from reinfestation, worked well and should be revived in the form of reciprocal arrangements between Regional Councils and the Department of Conservation (DOC)“.

Central government eradication wars failed

But for small mammal pest control at the highest levels of pest management policy-making in New Zealand, we have forgotten Peter’s advice to build and apply the science and practice of integrated pest management and the importance he saw of empowering different regional priorities and cooperation.

Perhaps this is because we have also forgotten the failure of central governments’ 20th-century pest eradication attempts that Peter wrote about. The possum, rabbit and deer wars soaked up enormous resources each year: e.g., $7.4 million for the 1976/77 financial year [2] (which is equivalent to ~$71.5 million in 2018), but failed.

Despite that experience, instead of building a national integrated pest management plan, we have as a nation adopted a flawed 20th-century eradication policy and called it Predator Free 2050 [3].

Scientist’s concern

As a scientist, my greatest concern about the Predator Free 2050 policy is that it has ignored decades of ecological and pest management science and experience, like Peter Nelson's, and returned us to the flawed and failed eradication policies of the past – although much better policy and practice are possible. By adopting an integrated pest management framework, as Peter envisaged, we could achieve our biodiversity goals for New Zealand [4] without imposing the extreme environmental, economic and social costs of eradication, and the risks of failure.

It is encouraging that Peter’s vision for regional councils generating nationally coordinated programmes for pest management at large scale (e.g., with DOC) is being achieved. That could assist very much in developing IPM plans. But it is discouraging that the last government’s policy has complicated central and regional government collaboration by imposing a business structure (i.e., Predator Free 2050 Ltd.) that is displacing integrated pest management to pursue, once again, an eradication policy like the old Pest Destruction Boards.

It seems, in pest management, we are destined to repeat the mistakes of history.

Citations

1. Nelson, P. (1993). “Options for future pest management: what will work for New Zealand.” New Zealand Journal of Zoology 20(4): 373-378.

2. Nelson, P. (1978). Agricultural Pests Destruction Movement in New Zealand. 8th Vertebrate Pest Conference, University of Nebraska – Lincoln.

3. Bell M. (2016). Accelerating Predator Free New Zealand. Cabinet paper, Wellington, New Zealand, p. 110.

4. Norton D.A., Young L.M., Byrom A.E. et al. (2016) How do we restore New Zealand’s biological heritage by 2050? Ecological Management & Restoration 17, 170-179.

The post Remembering Peter Nelson’s advice – pest management for the 21st century appeared first on Sciblogs.


I was idly skimming the Herald’s website when I came across an article with the headline “Is plant medicine really that effective?” Since the article appears to be in the nature of an advertorial, the answer is, it depends on who you ask.

Unlike man-made chemical drugs that have been developed as novel medicines from the 19th century onwards, plant medicines have been used in human healthcare for millennia.

This is what’s known as an appeal to antiquity – because something’s been in use for ages, it must work. It’s repeated later in the article, with the claim that

[t]raditional plant medicines have a rich history of being effectively used for over 2500 years

A rich history of being used is not the same as a history of being used “effectively”. In Hippocrates’ time, for example, ‘plant medicine’ & basic surgery were about all physicians had to work with. That doesn’t mean that they necessarily achieved a high cure rate. The implication that plants are somehow better than their modern pharmaceutical counterparts is an example of another logical fallacy, the appeal to nature. (Tim Minchin was spot-on when he said “You know what they call alternative medicine that’s been proved to work? – Medicine.”)

They share a long co-evolution with humans and are the foundation of their modern chemistry-based counterparts.

There are certainly many examples of coevolution involving plants and animals. However, much of this coevolution has taken the form of an arms race: as mutations that make plants less attractive to eat (e.g. spiny, less palatable, or downright poisonous) spread through a species, this can act as a selective agent on herbivores: animals with gene combinations that allow them to process the poisons are more likely to survive and spread those genes around, and so that species evolves in turn. Coevolution does not mean, as previous articles by Clair imply (see here, for example), that plants are thus well suited by coevolution to our own needs in terms of acting as medications. The defensive alkaloids produced by many plants can certainly have a physiological impact, but as part of the plant's anti-herbivore armoury. We can make use of some of those chemicals, sure, but natural selection didn't design them for medical (or recreational) use in any directed way. (Deliberate selection by humans is another matter.)

But yes, many modern pharmaceutical drugs are derived from plant extracts, and pharmacognosy is an important field of research in the search for new drugs and investigation of how traditional treatments might work. The difference is that modern pharmacology means we can control things like dose, concentration and purity, which isn't really possible if you're using the entire plant prepared fresh each time. The chemotherapy drug taxol (isolated from the Pacific yew tree) is a good example, but there are many others, including: digitalis (foxgloves), salicin/salicylic acid (meadowsweet and willow bark), vinblastine (derived from the Madagascar periwinkle), and quinine (cinchona bark). For some drugs (e.g. vinblastine) yields from the actual plants are low, and the cost of obtaining the drug is high, so modern production methods make the drugs available to far more people than could ever avail themselves of the natural source.

… research confirms their beneficial effects for rebalancing hormones, aiding sleep, dealing with stress, in depression or strengthening the immune system.

“Rebalancing hormones” seems to be one of those ‘catch-all’ phrases – which hormones are we talking about, & why do they need “rebalancing”? How did they get out of whack in the first place? Similarly, “strengthening the immune system”: it’s a meaningless term and ignores the fact that in the great majority of people the immune system works just fine. Other than the use of vaccines, “strengthening” or “boosting” may not be such a good idea… And in some instances evidence for other uses is conflicting.

Plant medicine can provide you with essential building blocks for organ health that cannot be found through diet alone, and have a cumulative effect on the body to help build or restore your physiology to the optimal levels.

Sorry, what? Which ‘building blocks’ would those be? All the building blocks of life – amino acids, di- & monosaccharide sugars, fatty acids, nucleotides, vitamins & minerals – are provided in an average diet. So what are these things that diet supposedly doesn’t deliver?

In fact, Western biomedicine is historically rooted in plant medicine, given that it was the main form of medicine until the establishment of the new economic order after the industrial revolution.

And why did medical practices change at that point? Perhaps, because it became much easier to identify the actual active ingredients, and produce standardised doses of known concentrations and purity? Perhaps because the ability to do this meant that some drugs, at least, could become more widely available? Certainly the use of lab-made ingredients would help to protect species such as the Madagascar periwinkle, or plants such as goldenseal & ginseng, which in the US anyway have become endangered in parts of their range due to overharvesting for ‘traditional’ uses.

Since the mid-1980s there has been an explosion of research into complementary and alternative medicines (CAMs), driven by consumer demand for natural medicines. There have been over 40,000 studies conducted over the past three decades. This means that in addition to traditional empirical evidence, we have increasing evidence based on newer methodologies [A] such as randomised controlled trials. They overwhelmingly confirm traditional medical applications of plants.

Some citations would be nice. This would enable us to answer questions such as: of these 40,000 studies, how many were randomised controlled trials (RCTs) [B]? How many of those were properly blinded? Were they in vitro studies, carried out in a petri dish or test tube, or in vivo, using animal models? Were they studies based on whole plants, or on extracts thereof? And – what were their results?

Traditionally, plant medicine incorporates the whole plant and its extracts, and with this it brings a full spectrum of active constituents that work synergistically on different parts of the body’s physiological functions.

There are certainly examples where different plant constituents can act in a synergistic manner. One such example, looking at antibacterial activity in extracts of the plant goldenseal, is discussed here. It identified the actual compounds, their structure, and their likely modes of action [C]. (What's not to like?) Notably, while the goldenseal article was written in 2011, evidence that the same action occurs in vivo is (as far as I could tell from a quick pubmed search) still lacking. It's also worth pointing out – should this evidence eventuate – that a synthetic preparation of the 3 compounds would be a much more reliable source than a tisane or a poultice of the whole plant.

Plant medicines will only work if they have been expertly compounded – from harvesting the plant at the right time at their peak potency, to careful processing them to preserve their active constituents and then to the correct formulation. This means to reap the many benefits of plant medicine, you must ensure you are getting them form [sic] a trusted company or registered Medical Herbalist.

And thence, my comment on advertorials.

[A] The idea of RCTs isn't actually a modern invention. Perhaps the first such trial (albeit an imperfect one) was run by James Lind, back in 1747, in seeking a treatment for scurvy. He subsequently followed this up with a systematic review of the subject.

[B] There are some good explanations & examples in terms of trial design at this link.

[C] Thus the statement in a 2015 op-ed by the same writer, that "we simply do not have the technology yet to understand exactly how they work", is incorrect. (Nor, from that same article, is it accurate to say that there are no side-effects if you use a whole plant remedy: see here, here, & here, for example.)

The post Appeal to antiquity? Appeal to nature? Bingo! appeared first on Sciblogs.


Dave Frame, Victoria University of Wellington; Adrian Henry Macey, Victoria University of Wellington, and Myles Allen, University of Oxford

New research provides a way out of a longstanding quandary in climate policy: how best to account for the warming effects of greenhouse gases that have different atmospheric lifetimes.

Carbon dioxide is a long-lived greenhouse gas, whereas methane is comparatively short-lived. Long-lived “stock pollutants” remain in the atmosphere for centuries, increasing in concentration as long as their emissions continue and causing more and more warming. Short-lived “flow pollutants” disappear much more rapidly. As long as their emissions remain constant, their concentration and warming effect remain roughly constant as well.

Our research demonstrates a better way to reflect how different greenhouse gases affect global temperatures over time.

Cost of pollution

The difference between stock and flow pollutants is shown in the figure below. Flow pollutant emissions, for example of methane, do not persist. Emissions in period one, and the same emissions in period two, lead to a constant (or roughly constant) amount of the pollutant in the atmosphere (or river, lake, or sea).

With stock pollutants, such as carbon dioxide, concentrations of the pollutant accumulate as emissions continue.

Flow and stock pollutants over time. In the first period, one unit of each pollutant is emitted, leading to one unit of concentration. After each period, the flow pollutant decays, while the stock pollutant remains in the environment. Provided by author, CC BY
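A toy simulation makes the contrast vivid. This is our own sketch, with an assumed methane-like 12-year lifetime standing in for the flow pollutant; the exact numbers don't matter, only the shapes:

```python
# Toy illustration (our numbers, not the paper's): one unit of emissions
# every year. The flow pollutant decays with an assumed 12-year lifetime
# and plateaus; the stock pollutant never decays and keeps accumulating.
LIFETIME = 12.0            # assumed e-folding lifetime, years
flow = stock = 0.0
for year in range(1, 101):
    flow = flow * (1.0 - 1.0 / LIFETIME) + 1.0   # decay, then new emission
    stock += 1.0                                 # pure accumulation
    if year in (1, 10, 50, 100):
        print(f"year {year:3d}: flow ≈ {flow:5.1f}  stock = {stock:5.1f}")
# flow levels off near LIFETIME units; stock grows without bound,
# which is why constant CO2 emissions mean ever-increasing warming.
```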

The economic theory of pollution suggests different approaches to greenhouse gases with long or short lifetimes in the atmosphere. The social cost (the cost society ought to pay) of flow pollution is constant over time, because the next unit of pollution is just replacing the last, recently decayed unit. This justifies a constant price on flow pollutants.

In the case of stock pollutants, the social cost increases with constant emissions as concentrations of the pollutant rise, and as damages rise, too. This justifies a rising price on stock pollutants.

A brief history of greenhouse gas “equivalence”

In climate policy, we routinely encounter the idea of “CO₂-equivalence” between different sorts of gases, and many people treat it as accepted and unproblematic. Yet researchers have debated for decades about the adequacy of this approach. To summarise a long train of scientific papers and opinion pieces, there is no perfect or universal way to compare the effects of greenhouse gases with very different lifetimes.

This point was made in the first major climate report produced by the Intergovernmental Panel on Climate Change (IPCC) way back in 1990. Those early discussions were loaded with caveats: global warming potentials (GWP), which underpin the traditional practice of CO₂-equivalence, were introduced as “a simple approach … to illustrate the difficulties inherent in the concept”.

The problem with developing a concept is that people might use it. Worse, they might use it and ignore all the caveats that attended its development. This is, more or less, what happened with GWPs as used to create CO₂-equivalence.

The science caveats were there, and suggestions for alternatives or improvements have continued to appear in the literature. But policymakers needed something (or thought they did), and the international climate negotiations community grasped the first option that became available, although this has not been without challenges from some countries.

Better ways to compare stocks and flows

An explanation of the scientific issues, and how we address them, is contained in this article by Michelle Cain. The approach in our new paper shows that modifying the use of GWP to better account for the differences between short- and long-lived gases can better link emissions to warming.
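As a hedged illustration of the kind of modification involved – our sketch, following the rate-based 'GWP*' idea in the literature rather than reproducing the paper's exact formulation – conventional GWP100 scores every tonne of methane alike, whereas a rate-based metric scores changes in the methane emission rate, so a steady source registers roughly no new warming:

```python
# Hedged sketch of the contrast (our illustration; see the paper and the
# GWP* literature for the precise formulation). All parameter choices
# below are assumptions for illustration.
GWP100, H, DT = 28.0, 100.0, 20.0   # assumed GWP, horizon, rate window

def co2e_conventional(e_ch4):
    return GWP100 * e_ch4                        # t CO2-e per t CH4/yr

def co2e_rate_based(e_now, e_20yr_ago):
    return GWP100 * H * (e_now - e_20yr_ago) / DT

print(co2e_conventional(1.0))      # 28.0 even for a steady source
print(co2e_rate_based(1.0, 1.0))   # 0.0 – constant emissions, ~no new warming
print(co2e_rate_based(1.2, 1.0))   # 28.0 – rising emissions penalised
```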

Under current policies, stock and flow pollutants are treated as being equivalent and therefore interchangeable. This is a mistake, because if people make trade-offs between emissions reductions such that they allow stock pollutants to grow while reducing flow pollutants, they will ultimately leave a warmer world behind in the long term. Instead, we should develop policies that address methane and other flow pollutants in line with their effects.

Then the true impact of an emission on warming can be easily assessed. For countries with high methane emissions, for example from agriculture, this can make a huge difference to how their emissions are judged.

For a lot of countries, this issue is of secondary importance. But for some countries, particularly poor ones, it matters a lot. Countries with a relatively high share of methane in their emissions portfolios tend to be either middle-income countries with large agriculture sectors and high levels of renewables in their electricity mix (such as much of Latin America), or less developed countries where agricultural emissions dominate because their energy sector is small.

This is why we think the new research has some promise. We think we have a better way to conceive of multi-gas climate targets. This chimes with new possibilities in climate policy, because under the Paris Agreement countries are free to innovate in how they approach climate policy.

Improving the environmental integrity of climate policy

This could take several forms. For some countries, it may be that the new approach provides a better way of comparing different gases within a single-basket approach to greenhouse gases, as in an emissions trading scheme or taxation system. For others, it could be used to set separate but coherent emissions targets for long- and short-lived gases within a two-basket approach to climate policy. Either way, the new approach means countries can signal the centrality of carbon dioxide reductions in their policy mix, while limiting the warming effect of shorter-lived gases.

The new way of using global warming potentials demonstrably outperforms the traditional method in a range of emission scenarios, providing a much more accurate indication of how stock and flow pollutants affect global temperatures. This is especially so under climate mitigation scenarios.

Well designed policies would assist sectoral fairness within countries, too. Policies that reflect the different roles of stock and flow pollutants would give farmers and rice growers a more reasonable way to control their emissions and reduce their impact on the environment, while still acknowledging the primacy of carbon dioxide emissions in the climate change problem.

An ideal approach would be a policy that aimed for zero emissions of stock pollutants such as carbon dioxide and low but stable (or gently declining) emissions of flow pollutants such as methane. Achieving both goals would mean that a farm, or potentially a country, can do a better, clearer job of stopping its contribution to warming.

Dave Frame, Professor of Climate Change, Victoria University of Wellington; Adrian Henry Macey, Senior Associate, Institute for Governance and Policy Studies, and Adjunct Professor, New Zealand Climate Change Research Institute, Victoria University of Wellington; and Myles Allen, Professor of Geosystem Science, Leader of ECI Climate Research Programme, University of Oxford

This article was originally published on The Conversation. Read the original article.

The post Why methane should be treated differently compared to long-lived greenhouse gases appeared first on Sciblogs.


One thing I like about returning to NZ from places like China is being around nature again. We live in one of the few parts of the world, and certainly the developed world, where wildlife is sometimes not just at our doorsteps but inside our homes as well.

Urban Forests

Some of this is down to our urban forests, which are often underappreciated. They're fragmentary and disturbed places, and forestry – both the conservation and the production side – has typically been directed at large forests well outside the urban fringe, in areas big enough to do serious conservation, like saving endangered birds. Urban forestry is a fringe activity by comparison, and it's not easy challenging the perception that urban forests are just messy parks. Yet urban forests provide a lot of services. Not only do they buffer areas from the worst of heavy rainfall and absorb pollution, they also sustain some of our smallest native wildlife.

Many people will be aware of the tui, and some will be fortunate enough to have kereru about. Despite the very disturbed human environments of our cities, a number of native bird species do live here.

Our Secret Wildlife

Then there’s the secret wildlife. The hidden wildlife. The wildlife that likes to stay in the dark of our forests, that comes out at night. There is a wide range of arthropods and the like, that are hanging on (or even thriving) in our little patches of forests and our gardens. They’re easily overlooked. Sometimes they come to people’s attention when they wonder into houses. The male Cambridgea foliata (below) is one common household invader in Auckland. But rather than being a threat, he’s put looking for mates before he dies.

Male Cambridgea foliata

Giant Centipedes

Another, rarer creature is the giant centipede Cormocephalus rubriceps. We found it in the weekend, huddled in the bottom sill of the front door. It had squeezed up through the drain hole in the aluminium. This creature can grow up to 16cm long (some report 25cm). I have rarely seen these outside of bush settings. The problem is they are large – too large. Hiding from rats is a challenge for them.

Cormocephalus rubriceps – it's fast, has a venomous bite, and can grow up to 25cm long

After general cries of delight from the members of the household*, I evicted it during the night. I can add that giant centipedes make a very cool clickety-clickety sound as they run across cobblestones.

Which brings me back to the threats to our native arthropods. While globally we are undergoing a biodiversity crisis – in Europe people are reporting the collapse of insect populations – we still have some remarkable native animals on our doorsteps (or, as I've just shown, right in the door).

Rat Control

The threats to our largest arthropods include rats, which eat them. Invasive wasps are also a major predator of our native insects. These include the aggressive vespid wasps (like the German wasp) and paper wasps. Aggressive rat and wasp control is an important part of sustaining our native arthropods. That's partly why I have rat bait stations both inside and outside the house, and why populations of vulnerable native creatures can be sustained in our cities. Keeping rat populations down in our urban forests, and where those forests connect to our houses, is a conservation action we can all take. Maintaining our urban forests is crucial.

And please keep killing those rats and wasps folks.

– – –

* I'm not actually joking. Much of the fear about our creepy-crawlies is taught; it's not grounded in any good evidence. Appreciating what we have is just a matter of tapping into our natural curiosity at a young age.

The post Our secret urban wildlife: Giant centipedes appeared first on Sciblogs.

Eric Crampton

Man, I shouldn’t have gotten my hopes up.

The Science Media Centre pointed me to reporting on some new work looking at what's happened consequent to National's Sale & Supply of Alcohol Act 2012.

It always felt like a spot where some really good work could be done. Different locales implemented different district licensing plans at different times, so you could run a panel study looking at how different measures worked in different places.

But that isn’t what this is. And what it is… well, let’s go through it.

So the Science Media Centre points to this Newsroom piece by Farah Hancock. It doesn’t start well.

Alcohol industry appeals have “muted” potential benefits of legislation aimed at reducing the estimated $14.5 million a day cost of alcohol harm, a new study finds.

Massey University research shows the only effect of the Sale and Supply of Alcohol Act 2012 (SSAA) is a reduction in alcohol availability after 4am in cities.

The alcohol industry appealed 30 of 32 alcohol policies proposed by councils to reduce harmful drinking.

Lead researcher Stephen Randerson said alcohol related harm was estimated to cost New Zealand $5.3 billion per year.

“That was in 2005, so it’s probably going to have risen since then. The things that feed in to this are the cost of emergency services including police work where alcohol is involved in at least one in three call-outs, the emergency department and the health cost of chronic disease caused by alcohol.”

Recall that the $5.3 billion figure cited was from BERL’s decade-old work and was their headline figure for the cost of alcohol and joint alcohol-and-other-drug use. The alcohol-alone figure was $4.8 billion. And that figure was very very wrong. Matt, Brad and I explained the problems in it in this NZMJ piece. A few of those problems:

  • Counting as a social cost every dime spent on alcohol by anyone consuming more than about 2 pints of beer per day, including all alcohol excise paid by that cohort;
  • Double-counting productivity costs of lost wages and VSL measures of statistical lives lost that encompass productivity costs;
  • Counting as social cost every cost incurred by a heavier drinker, but not netting from those costs any benefits experienced by those drinkers.

A bad stat is hard to kill. And folks citing those kinds of stats who should know better, well, it tells me to be careful when reading what they’ve done.

And that brings us to the piece out in Friday’s NZ Medical Journal by Randerson, Casswell and Huckle that’s the basis for all this [Why oh Why can’t news outlets just link to the study?!]

The piece uses survey methods developed in Casswell’s International Alcohol Control study to structure interviews with 36 informants (main sources, according to the article, being police, liquor licensing inspectors, and public health officials) in early 2014 and early 2016, with 26 of the informants interviewed in both rounds of interviews. Plus a few police who do alcohol breathalyser checkpoints.

Those informants scored a pile of things, like their view of regulatory compliance and enforcement, alcohol availability, trading hours, compliance with hours, difficulty of obtaining new licences, purchase age enforcement, and several other indicators.

So I guess I’ll have to keep hoping somebody credible does the panel data work I’ve been hoping to see – this is just a survey of what police, public health, and liquor licensing inspectors think about things.

Ok, so where’s the evidence on big bad industry thwarting local communities? Here’s the relevant section – there’s also a bit in the conclusion I’ll quote later.

Local alcohol policies

Only five LAPs were in force by the end of 2015, although 32 of 67 territorial authorities had produced a draft or provisional policy by this time (pers. comm. Jackson, 2016). Appeals were the most commonly reported impediment to developing an LAP. Some local authorities halted or deferred developing a LAP until appeals in other districts had been decided. Other difficulties cited were finding a compromise between the commercial goals of businesses and alcohol-related harm in the community; opposition from business interests, including the hospitality industry and supermarket chains; and time and cost.

Now if you’ve been paying attention to the LAPs, you’d know that there’s some truth here. When Nelson-Tasman went for their LAP, they decided to hold back until Wellington’s had been decided. But it wasn’t industry that was appealing Wellington’s LAP, it was the police and medical people who didn’t like that Wellington wanted a 5am closing time and were trying to litigate them down to 3am.

Disclosure – I was hired by the Hospitality Association to provide evidence on the international literature on bar closing times. Didn’t wind up presenting in Nelson because all that was deferred to the Wellington decision, but I did have a lot of fun presenting in Wellington in 2014. Bottom line: it’s a stretch to expect any major changes in harms with a couple hours’ difference in closing times, but it’s likely a good idea to have bottle shops close before the bars do.

Anyway, the police and medical lobbyists were going to have a tough time getting anything more restrictive than the national default hours in Wellington because, well, Wellington could always just choose not to have an LAP and stick with the default. But at least I got to have a bit of fun.

But all that Casswell's team has in this survey is respondents (police officers, licensing officials, and public health people) complaining that businesses selling alcohol will often object to the licensing plans that police and public health people want.

LAPs have significant potential to restrict trading hours, outlet density and location, but too few were in force in 2015 to affect the alcohol environment nationally. Appeals against LAPs deterred and delayed their implementation. Although medical officers of health and police mounted several appeals, appeals from off-licence alcohol suppliers were more widespread, and most commonly resulted in the relaxation or removal of restrictions from LAPs.29 In light of the substantial commercial conflict of interest which alcohol suppliers have with the SSAA’s aim of minimising harm from the excessive consumption of alcohol,30 steps to protect the LAP development process from their influence appear desirable. This could facilitate policies which are more likely to reduce alcohol-related harm, and reduce development time and cost.

So the problem they’ve identified with the Act is that tribunals and courts sometimes wind up siding with industry objections to local plans. Like, if industry were just raising objections that wasted time and were never upheld, then you could make an abuse of process case and hope that the courts might start awarding costs or something.

But the big SHORE-team complaint here is that sometimes LAPs are made more liberal after industry appeals. If the Police objected to some bit of criminal law procedure that made it harder for them to get convictions for people the courts wind up finding innocent, or that made it easier for people to successfully appeal convictions, I'd hope we'd want a stronger basis for changes than just that observation from some surveyed police officers.

The post Alcohol harms and the NZ reforms appeared first on Sciblogs.


Monica Grady, The Open University

It was to a great fanfare of publicity that researchers announced they had found evidence for past life on Mars in 1996. What they claimed they had discovered was a fossilised micro-organism in a Martian meteorite, which they argued was evidence that there has once been life on the Red Planet. Sadly, most scientists dismissed this claim in the decade that followed – finding other explanations for the rock’s formation.

While we know that Mars was habitable in the past, the case demonstrates just how hard it will be to ever prove the existence of past life on its surface. But now new results from NASA’s Curiosity rover, including the discovery of ancient organic material, have revived the hope of doing just that. Understandably, the authors of the two papers, published in the journal Science, are very careful not to make the claim that they have discovered life on Mars.

While the 1996 discovery has never been verified, it hasn’t ever been conclusively disproved either. What the study has done, though, is to propel the search for life on Mars higher up the list of international space exploration priorities – giving space agencies ammunition to argue for a coordinated programme of missions to explore the Red Planet.

Mineral veins on Mars seen by Curiosity. NASA/JPL-Caltech/MSSS

Curiosity is the latest rover to trundle across the gritty sands of Mars. It has been tacking across the floor of Gale Crater on Mars for five years, returning stunning images of Martian landscapes, with vistas opening up to show rocky outcrops seamed with mineral veins. Close up, the veins have the appearance and chemistry of material that has been produced by reaction of water with the rocks, at a time when water was stable at the surface for extended periods of time. Such reactions could create enough energy to feed microbial life.

Ancient rocks

One of the papers reports the discovery of low levels of organic carbon in mudstones from Gale Crater. This might not sound like much carbon – but finding it at all is a big deal, since organic material could be traces of decayed living matter.

The sediments, analysed by the SAM instrument on Curiosity, come from just below the surface, where they have been shielded from most of the UV radiation that would break down organic molecules exposed on the surface. The organic material discovered on Mars is rich in sulphur, which would have also helped to preserve it.

However, the environment in which the mudstones were deposited – a 3.5-billion-year-old lake bed – would have been altered in other ways as the sediments settled and compressed to become rock. Over the intervening years, fluid flowing through it would have initiated chemical reactions that could have destroyed the organic matter – the material discovered may in fact be fragments from bigger molecules. In rocks on Earth, such reactions – which cause living matter, mainly from plants and microbes, to degrade – produce an insoluble material known as kerogen.

Excitingly, the material discovered on Mars is similar to terrestrial kerogen. But that doesn’t necessarily mean it is biological in origin – it is also similar to an insoluble material in tiny meteorites that rain down on the surface of Mars.

At this point, we simply don’t know whether the origin is biological or geological. But it is the preservation of the material that is important – if there is this much organic matter preserved close to the surface, then there should be even better protected material at greater depths. What is needed to find more clues is a mission to Mars with a deep drill. Luckily there is one: ESA’s ExoMars rover, scheduled for launch in two years’ time.

Mysterious methane

The second paper investigates a problem that has been disturbing Mars scientists for several years: the abundance of methane in Mars’ atmosphere. Earth-based telescopes, spacecraft orbiting Mars and now Curiosity, have measured episodic sudden increases in the background methane content.

While this might be taken as a signature of biological activity – the main producers of methane on Earth are termites and bovine gut bacteria – non-biological mechanisms, such as weathering of Martian rocks or release from ancient ice, are possible too.

Gale crater on Mars. NASA/JPL-Caltech/ASU/UA

The new results represent the longest systematic record of atmospheric methane, with measurements taken regularly over five years. What the authors have found is a systematic variation in methane concentration with season, with the highest concentrations occurring at the Gale Crater towards the end of the northern summer. This is the period when the southern icecap – which freezes carbon dioxide out of the atmosphere, but not methane – is at its biggest, so enhanced methane is not unexpected. However, the abundances of methane measured are greater than models predict should occur, meaning we still don’t know exactly how they are produced.

The team also found several spikes where methane abundance suddenly jumped to be higher than average during the year. The authors conclude that this must be related to surface temperature. They therefore suggest that methane could be trapped at depth, gradually seeping to the surface. Here it is retained by the soil until the temperature increases sufficiently to release the gas.

However the paper states that, despite this, there “remain unknown atmospheric or surface processes occurring in present-day Mars”. While the authors do not specify biology as one of those unknown processes, it remains an intriguing possibility. This, to me, is a cue for further measurements – and fortunately, we may know soon. ESA’s Trace Gas Orbiter is now in place at Mars, and has just started recording data.

So, what can we conclude after reading these two papers? That even with the superb instrument array carried by Curiosity, and detailed modelling and interpretation of the results, we are still left looking for evidence of life on Mars. Is it a romantic yearning to discover that we do have companions within the solar system (even if they are likely to be very small and uncommunicative)? Or is it that our theories of how life arose on Earth cry out to be verified by a “second genesis”?

Whatever the reason, there is still much to be discovered on Mars. Luckily, a series of missions planned well into the next decade will help us make those discoveries. These include the return of Martian samples to Earth, where we can carry out even more detailed analyses than Curiosity.

Monica Grady, Professor of Planetary and Space Sciences, The Open University

This article was originally published on The Conversation. Read the original article.

The post Rover detects ancient organic material on Mars – and it could be trace of past life appeared first on Sciblogs.


Andrew Lorrey, National Institute of Water and Atmospheric Research; Andrew Mackintosh, Victoria University of Wellington; and Brian Anderson, Victoria University of Wellington

Every March, glacier “watchers” take to the skies to photograph snow and ice clinging to high peaks along the length of New Zealand’s Southern Alps.

This flight needs to happen on cloud-free and windless days at the end of summer before new snow paints the glaciers white, obscuring their surface features.

Video: Glaciers Don't Lie (YouTube). Each year, at the end of summer, scientists monitor glaciers along New Zealand's Southern Alps.

Summer of records

The summer of 2017-18 was New Zealand’s warmest on record and the Tasman Sea experienced a marine heat wave, with temperatures up to six degrees above normal for several weeks.

The loss of seasonal snow cover and older ice during this extreme summer brings the issue of human-induced climate change into tight focus. The annual flights have been taking place for four decades and the data on end-of-summer snowlines provide crucial evidence.

The disappearance of snow and ice for some of New Zealand’s glaciers is clear and irreversible, at least within our lifetimes. Many glaciers we survey now will simply vanish in the coming decades.

The Franz Josef glacier advanced during the 1980s and 1990s but is now retreating. Andrew Lorrey/NIWA, CC BY-SA

Glaciers are a beautiful part of New Zealand's landscape, and important to tourism, but they may not be as prominent in the future. This stored component of the freshwater resource contributes to rivers that are used for recreation and irrigation of farmland.

Meltwater flowing from glaciers around Aoraki/Mt Cook into the Mackenzie Basin feeds important national hydroelectricity power schemes. Seasonal meltwater from glaciers can partially mitigate the impacts of summer drought. This buffering capacity may become more crucial if the eastern side of New Zealand’s mountains become drier in a changing climate.

Pioneering glacier monitoring

When Trevor Chinn began studying New Zealand’s 3,000 or so glaciers in the 1960s, he realised monitoring all of them was impossible. He searched for cost-effective ways to learn as much as he could. This resulted in comprehensive glacier mapping and new snow and ice observations when similar work was dying out elsewhere. Mapping of all of the world’s glaciers – nearly 198,000 in total – was only completed in 2012, yet Trevor had already mapped New Zealand’s ice 30 years earlier.

Octogenarian Trevor Chinn still participates in the snowline flights every year to support younger scientists. Dave Allen/NIWA, CC BY-SA

In addition, he wanted to understand how snow and ice changed from year to year. Trevor decided to do annual glacier photographic flights, looking for the end-of-summer snowlines – a feature about half way between the terminus and the top of a glacier where hard, blue, crevassed glacier ice usually gives way to the previous winter’s snow. The altitude of this transition is an indicator of the annual health of a glacier.

It was a visionary approach that provided a powerful and unique archive of climate variability and change in a remote South Pacific region, far removed from well-known European and North American glaciers. But what was hidden at the time was that New Zealand glaciers were about to undergo significant changes.

Trevor Chinn took part in this summer’s flight and said:

This year is the worst we’ve ever seen. There was so much melt over the summer that more than half the glaciers have lost all the snow they had gained last winter, plus some from the winter before, and there’s rocks sticking out everywhere. The melt-back is phenomenal.

New insights from old observations

The Southern Alps end-of-summer snowline photo archive, produced by the National Institute of Water and Atmospheric Research, is a remarkable long-term record. Our colleagues Lauren Vargo and Huw Horgan are leading the effort to harness this resource with photogrammetry to deliver precise (metre-scale) three-dimensional models of glacier changes since 1978, building directly on Trevor Chinn’s work.

Glaciers respond to natural variability and human-induced changes, and we suspect the latter has become more dominant for our region. During the 1980s and 1990s, while glaciers were largely retreating in other parts of the world, many in New Zealand were advancing. Our recent research shows this anomaly was caused by several concentrated cooler-than-average periods, with Southern Alps air temperature linked to Tasman Sea temperatures directly upwind.

The situation changed after the early 2000s, and we asked whether more frequent high snowlines and an acceleration of ice loss would follow. Since 2010, multiple high snowline years have been observed. In 2011, the iconic Fox Glacier (Te Moeka o Tuawe) and Franz Josef Glacier (Kā Roimata o Hine Hukatere) started a dramatic retreat – losing all of the ground that they regained in the 1990s and more.

Video: Fox Glacier's spectacular retreat (Vimeo). In a series of ice collapses, New Zealand's Fox Glacier retreated by around 300 m between January 2014 and January 2015.

Looking ahead by examining the past

How New Zealand’s glaciers will respond to human-induced climate change is an important question, but the answer is complicated. A recent study suggests human-induced climate warming since about 1990 has been the largest factor driving global glacier decline. For New Zealand, which is significantly influenced by regional variability of the surrounding oceans and atmosphere, the picture is less clear.

Assessing how human-induced influences and natural variability affect New Zealand glaciers requires climate models, snowline observations and other datasets. Our research team, with support from international colleagues, is doing just that to see how Southern Alps ice will respond to a range of future scenarios.

Continuing the snowline photograph work will allow us to better identify climate change tipping points and warning signs for our water resources – and therefore better prepare New Zealand for an uncertain future.

Andrew Lorrey, Principal Scientist & Programme Leader of Climate Observations and Processes, National Institute of Water and Atmospheric Research; Andrew Mackintosh, Professor & Director of Antarctic Research Centre, expert on glaciers and ice sheets, Victoria University of Wellington; and Brian Anderson, Senior Research Fellow, Victoria University of Wellington

This article was originally published on The Conversation. Read the original article. Featured image: Small aircraft carry scientists high above the Southern Alps to survey glacier changes. Hamish McCormick/NIWA, CC BY-SA.

The post A bird’s eye view of New Zealand’s changing glaciers appeared first on Sciblogs.


I don’t wander over to the dark side[1] very often – NZ’s climate cranks are an unedifying bunch at the best of times – but I was somewhat surprised to find myself featuring in their current top story, “National Geographic ignores the need for evidence”. Not that they know it’s me – the level of scholarship on display is even worse than that in their sole contribution to the scientific literature.

The author, blog owner Richard Treadgold, allows himself a little rant about a paragraph he says comes from a recent National Geographic newsletter, illustrated with a picture he claims comes from NIWA. The words he rails against seemed strangely familiar to me, and also rather dated. Then the penny dropped: it was a paragraph I had written a decade ago, in a feature for New Zealand Geographic magazine. You can read the whole thing here, and you’ll note that the header photograph is the one Treadgold claims comes from the National Institute of Water and Atmospheric Research (NIWA).

Let me sum this up. Treadgold’s little homily on the need for evidence is not just complete twaddle, it’s shoddy scholarship at its worst: citing the wrong magazine, an article from the wrong decade, and blaming NIWA for something they didn’t do. Sadly, it’s par for the course.

I shall leave the last word to one of the little band of scientifically literate commenters who bravely point out the errors inherent in almost everything Treadgold publishes. Underneath a press release from the climate cranks, complaining that the Royal Society of NZ had failed to provide evidence of the reality of climate change[2], Simon wrote:

Your complaint appears to be that the Royal Society provided you with lots of information which you couldn’t understand. That is not the fault of the Royal Society.

Twas ever thus, Simon.

  1. Richard Treadgold’s inaptly titled “Climate Conversation” blog.
  2. What was that about bees and bonnets?

The post Postcards from La La Land #132: time warps and twaddle appeared first on Sciblogs.


Artūrs Logins, University of Southern California – Dornsife College of Letters, Arts and Sciences

You may have noticed a curious recent announcement: an international research team plans to use state-of-the-art DNA testing to establish once and for all whether the Loch Ness monster exists.

Regardless of the results, it’s unlikely the test will change the mind of anyone who firmly believes in Nessie’s existence. As a philosopher working on the notion of evidence and knowledge, I still consider the scientists’ efforts to be valuable. Moreover, this episode can illustrate something important about how people think more generally about evidence and science.

Discounting discomfiting evidence

Genomicist Neil Gemmell, who will lead the international research team in Scotland, says he looks forward to “[demonstrating] the scientific process”. The team plans to collect and identify free-floating DNA – environmental DNA, or eDNA – from creatures living in the waters of Loch Ness. But whatever the eDNA sampling finds, Gemmell is well aware the testing results will most likely not convince everyone.
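
As a rough intuition for how such identification works – a deliberately toy sketch with invented sequences and species, since real eDNA metabarcoding uses fuzzy alignment against large curated databases – the core logic is a lookup of sampled fragments against a library of known organisms:

```python
# Toy illustration of the idea behind eDNA identification, not a real
# bioinformatics pipeline. The sequences and species are entirely made up.

reference_library = {
    "ATCGGCTA": "Atlantic salmon",
    "GGATCCAT": "European eel",
    "TTGACCGA": "Brown trout",
}

water_sample_reads = ["ATCGGCTA", "TTGACCGA", "CAGTTAGC"]  # invented reads

for fragment in water_sample_reads:
    species = reference_library.get(fragment, "no match in reference library")
    print(f"{fragment} -> {species}")
```

Note the asymmetry this toy makes visible: a read with no match does not demonstrate a monster, it only marks a gap in the reference library – exactly the kind of wiggle room that the rationalizations discussed later in the piece exploit.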

A long-standing theory in social psychology helps explain why. According to cognitive dissonance theory, first developed by Leon Festinger in the 1950s, people seek to avoid the internal discomfort that arises when their beliefs, attitudes or behavior come into conflict with each other or with new information. In other words, it doesn’t feel good to do something you don’t value or that contradicts your deeply held convictions. To deal with this kind of discomfort, people sometimes attempt to rationalize their beliefs and behavior.

It’s hard to stop waiting for an expected UFO. Joseph Sohm/Shutterstock.com

In a classic study, Festinger and colleagues observed a small doomsday cult in Chicago that was waiting for a UFO to save its members from the impending destruction of Earth. When the prophecy didn’t come true, instead of rejecting their original belief, members of the sect came to believe that the God of Earth had changed plans and no longer wanted to destroy the planet.

Cult members so closely identified with the idea that a UFO was coming to rescue them that they couldn’t just let the idea go when it was proven wrong. Rather than give up on the original belief, they preferred to lessen the cognitive dissonance they were experiencing internally.

Loch Ness monster true believers may be just like the doomsday believers. Giving up their favorite theory could be too challenging. And yet, they’ll be sensitive to any evidence they hear about that contradicts their conviction, which creates a feeling of cognitive discomfort. To overcome the dissonance, it’s human nature to try to explain away the scientific evidence. So rather than accepting that researchers’ inability to find Nessie DNA in Loch Ness means the monster doesn’t exist, believers may rationalize that the scientists didn’t sample from the right area, or didn’t know how to identify this unknown DNA, for instance.

Cognitive dissonance may also provide an explanation for other science-related conspiracy theories, such as flat Earth beliefs, climate change denial and so on. It may help account for reckless descriptions of reliable media sources as “fake news.” If one’s deeply held convictions don’t fit well with what media say, it’s easier to deal with any inner discomfort by discrediting the source of the new information rather than revising one’s own convictions.

Philosophy of knowledge

If psychology can explain why Loch Ness monster fans believe what they do, philosophy can explain what’s wrong with such beliefs.

The error here comes from an implicit assumption that to prove a claim, one has to rule out all of the conceivable alternatives – instead of all the plausible alternatives. Of course scientists haven’t and cannot deductively rule out all of the conceivable possibilities here. If to prove something you have to show that there is no conceivable alternative to your theory, then you can’t really prove much. Maybe the Loch Ness monster is an alien whose biology doesn’t include DNA.

So the problem is not that believers in the existence of the Loch Ness monster or climate change deniers are sloppy thinkers. Rather, they are overly demanding thinkers, at least with respect to certain selected claims: they adopt standards that are too high for what counts as evidence, and for what is needed to prove a claim.

Philosophers have long known that excessively high standards for knowledge and rational belief lead to skepticism. Famously, the 17th-century French philosopher René Descartes suggested that only “clear and distinct perceptions” should function as the required markers of knowledge. So if only some special inner feeling can guarantee knowledge, and we can be wrong about that feeling – say, due to brain damage – then what can be known?

This line of thought has been taken to its extreme in contemporary philosophy by Peter Unger. He asserted that knowledge requires certainty; since we are not really certain of much, if anything at all, we don’t know much, if anything at all.

One promising way to resist a skeptic is simply not to engage in trying to prove that the thing whose existence is doubted exists. A better approach might be to start with basic knowledge: assume we know some things and can draw further consequences from them.

A knowledge-first approach that attempts to do exactly this has recently gained popularity in epistemology, the philosophical theory of knowledge. British philosopher Timothy Williamson and others including me have proposed that evidence, rationality, belief, assertion, cognitive aspects of action and so on can be explained in terms of knowledge.

This idea is in contrast to an approach popular in the 20th century, that knowledge is true justified belief. But counterexamples abound that show one can have true justified belief without knowledge.

Say you check your Swiss watch and it reads 11:40. On this basis you believe that it is 11:40. What you haven’t noticed, however, is that your typically super-reliable watch stopped exactly 12 hours earlier. By incredible chance, it happens that when you check your watch it is in fact 11:40. In this case you have a true and justified (or rational) belief, but it still doesn’t seem that you know it is 11:40 – it is only by pure luck that your belief happens to be true.

Our newer knowledge-first approach avoids defining knowledge altogether and instead posits knowledge as fundamental – its own basic entity – which allows it to undercut the skeptical argument. One may not need to feel certain, or to have a sensation of clarity and distinctness, in order to know things. The skeptical argument doesn’t get off the ground in the first place.

When it comes to science versus the skeptic, evidence doesn’t always matter. AP Photo, File

Knowledge and the skeptic

The eDNA analysis of Loch Ness may not be enough to change the minds of those who are strongly committed to the existence of the lake’s monster. Psychology may help explain why. And lessons from philosophy suggest this kind of investigation may not even provide good arguments against conspiracy theorists and skeptics.

A different and, arguably, better argument against skepticism questions the skeptic’s own state of knowledge and rationality. Do you really know that we know nothing? If not, then there may be something we know. If yes, then we can know something and, again, you are wrong in claiming that knowledge is not attainable.

A strategy of this kind would challenge the evidential and psychological bases for true believers’ positive conviction in the existence of Nessie. That’s quite different from attempting to respond with scientific evidence to each possible skeptical challenge.

But the rejection of a few true believers doesn’t detract from the value of this kind of scientific research. First and foremost, this research is expected to produce much more precise and fine-grained knowledge of biodiversity in Loch Ness than what we have without it. Science is at its best when it avoids engaging with the skeptic directly and simply provides new knowledge and evidence. Science can be successful without ruling out all of the possibilities and without convincing everyone.

Artūrs Logins, Visiting Postdoctoral Researcher in Philosophy, University of Southern California – Dornsife College of Letters, Arts and Sciences

This article was originally published on The Conversation. Read the original article.

The post Why won’t scientific evidence change the minds of Loch Ness monster true believers? appeared first on Sciblogs.
