
Written by Anri Asagumo

Oxford Uehiro/St Cross Scholar

Although more and more people see the importance of diversity in academia, language diversity is one kind of diversity that seems to be diminishing: English is increasingly dominant in both academia and the business world. I would like to argue that people who are born and raised in an English-speaking country should be required to acquire a second language, to the level at which they can write a rudimentary paper and give a presentation in that language, in order to apply to international conferences and submit papers to international journals. The purpose of this requirement would be to address the significant inequality between native English speakers and everyone else. I focus on academia here, but ideally the same requirement would apply to the business world, too.

It is almost always the case that academics and graduate students who were born and grew up in a non-English-speaking country learn English as a second language, if they have not done so already, in order to advance their careers. In science, English is the standard language for almost all international journals and conferences. The need to learn English places non-native English speakers at a considerable disadvantage. Not only must they invest time in learning the language, but even then, they will often find it more difficult to participate in discussions, and they will often have to invest more time and effort in writing papers and preparing talks.

It might sound absurd, but my proposed requirement would mitigate the disadvantages that non-native English speakers currently endure. It would do this primarily by making native English-speakers more cognisant of the difficulties which non-native speakers face, but it might also, over time, encourage some conferences and journals to allow contributions in languages other than English. A positive side-effect would be that speakers of other languages could more easily find work as language teachers all around the world; this is an advantage currently enjoyed primarily by native English-speakers.

A more radical solution would be to artificially create an environment in which other languages are as dominant as English, though attempts to do this, such as Esperanto, have failed. My proposed solution would have fewer costs, since English could remain the dominant language, and native English speakers could choose which second language to master. They could even choose a language which shares some words and grammatical rules with English. There are ten languages listed in Category I of the Language Difficulty Rankings compiled by the US Foreign Service Institute, which are said to be the easiest languages for native English speakers to learn.

There might be a concern that such a policy would slow down scientific progress, because scientists would be spending more time learning languages instead of doing science. This is not, however, a valid concern, because it is exactly what people in non-English-speaking countries already go through. One of the reasons I stipulated a level of proficiency sufficient to write a rudimentary paper or give a rudimentary presentation is that this reflects the current situation of non-native speakers, who are required to have moderate competency in English, but not native-level mastery, in order to present at an international conference. One might object that my proposal is a case of levelling down: a measure that aims to achieve greater equality only by making the currently better-off worse off. However, it is worth noting that it might help English-speaking scientists to take the views of researchers with poor English more seriously and encourage communication between them, and eventually it could contribute to the faster development of new ideas and technologies.

One question is whether a similar policy should apply in other academic communities where a different language is dominant. For example, French is the dominant language of academic philosophy in France; should French philosophers working in France therefore also have to learn another language? This falls outside the scope of my argument, because as things currently stand French speakers are also almost always the ones who have to work on their English before they attend an international conference.

Even if every scholar in an English-speaking country learned a second language well enough to give a presentation in it, English would continue to serve as the world's standard language. Yet implementing the policy would bring fruitful benefits for everyone: for non-native speakers, the alleviation of the current inequality; for native English speakers, the well-known advantages of being multilingual; and for academia as a whole, improved equality and the potential for faster development.


Written by Rebecca Brown

There has been recent concern over CRUK’s (Cancer Research UK) latest campaign, which features the claim ‘obesity is a cause of cancer too’ presented so as to look like cigarette packets. It follows criticism of a previous, related campaign which also publicised links between obesity and cancer. Presumably, CRUK’s aim is to increase awareness of obesity as a risk factor for cancer and, in doing so, encourage people to avoid (contributors to) obesity. CRUK may also hope to encourage public support for policies which tackle obesity, pushing the Overton window in a direction which is likely to permit further political action in this domain.

The backlash has mostly focused on the comparison with smoking, and the use of smoking-related imagery to promote the message (there is further criticism of the central causal claim, since it is actually quite difficult to establish that obesity causes cancer).

An open letter to CRUK, signed by academics and healthcare professionals, makes the point: 

Given that the dominant public perception is that weight gain is caused by a lack of willpower and that weight can be reduced easily and rapidly, when you frame people’s weight as the problem, instead of directly addressing the environmental factors you intend to change through policy, you are effectively telling people that cancer is their fault. Through making a direct comparison between smoking and weight, your campaign contributes to these assumptions, suggesting that it is a lifestyle choice. This belies the reality.

The signatories are correct to point to the need to address environmental factors in order to tackle obesity, and to question the usefulness of suggesting that, in order to reduce weight, individuals just need to make better choices and exercise some more willpower. But why say that the comparison with smoking suggests that obesity is a lifestyle choice? It is disappointing to see those concerned about the stigmatisation of obese people apparently content, in criticising CRUK for this transgression, to reinforce the stigma currently experienced by smokers. According to this thinking, it seems, smoking is a lifestyle choice, and smokers who develop cancer (and other diseases) as a result are at fault. Smokers, and those suffering from smoking-related diseases, are of course already the targets of plentiful stigma.

The authors might like to remember that many of the same socio-economic factors that are correlated with obesity are also correlated with smoking. Many people who smoke do so for a range of reasons, and some smokers would prefer not to smoke but find it difficult to stop, much as many people who overeat do so because of an interplay of personal, social and environmental factors, and struggle to reduce their weight despite wanting to do so. It is also worth noting that obese people are more likely to smoke (as well as to engage in various other health-risking behaviours). So at least some of the time, when we are discussing those who smoke and those who are obese, we are talking about the same people.

Unless it is clear that individuals are behaving in morally wrongful ways (e.g. causing significant and avoidable third party harm) it is inappropriate to use stigmatising campaigns in order to raise awareness and encourage behaviour change. To the extent CRUK do this, they should be criticised. But by reinforcing the stigmatisation of smokers in the course of defending obese people, the authors of the open letter are guilty of much the same thing.


Hannah Maslen, University of Oxford, @hannahmaslen_ox

Colin Paine, Thames Valley Police, @Colin_Paine

Police investigators are sometimes faced with a dilemma when deciding whether to pursue investigation of a non-recent case of child sexual abuse. Whilst it might seem obvious at first that the police should always investigate any credible report of an offence – especially a serious offence such as sexual abuse – there are some cases where there are moral reasons that weigh against investigation.

Imagine a case in which a third party agency, such as social services, reports an instance of child sexual exploitation to the police. The alleged offence is reported as having occurred 15 years ago. The victim has never approached the police and seems to be doing OK in her adult life. Although she had serious mental health problems and engaged in self-harm in the past, her mental health now appears to have improved. She does, however, remain vulnerable to setbacks. Initial intelligence gives investigators reason to believe that the suspect has not continued to offend, although there are limits to what can be known without further investigation. Should this alleged offence be investigated?

The police will almost always pursue investigation when a victim reports a serious crime him or herself. But cases like the one sketched above, in which the victim has not approached the police, and in which the victim is vulnerable and may not want an investigation, challenge the assumption that all alleged cases of non-recent child sexual abuse should be investigated. In the case sketched, a few factors are immediately salient amongst the reasons that may be relevant to deciding whether to investigate: the lack of approach by the victim (do they even want an investigation?), the vulnerability of the victim (are they at risk of psychological harm if the police approach them?), the lack of evidence of continued offending by the suspect, and the opportunity cost of the investigation (how many other investigations cannot be undertaken as a consequence of pursuing this one?). How should these be weighed in deciding whether or not to investigate? Does it make a difference that the offence is not recent? Do the reasons ever stack up to justify not investigating an allegation of non-recent child sexual abuse?

In a new open access paper, published in Criminal Justice Ethics, Detective Chief Superintendent Colin Paine and I identify the reasons that weigh in favour of investigation and those that weigh against. We argue that there is always a presumption to investigate, which is grounded by reasons generated by the value of dispensing criminal justice and the general deterrent effects of investigation and prosecution. Since these are consistently served by investigation, they will apply in all cases. However, this presumption is tempered to some extent if it is unlikely that there will be sufficient evidence to prosecute or secure conviction.

The starting point, therefore, is that all suspected offences should be investigated. We then argue that there are various further reasons that either strengthen or weaken the overall justification for pursuing investigation. The threat that the offender poses will vary considerably from case to case. If evidence suggests that an offender presents a high risk of reoffending, then investigation should proceed regardless of any other considerations: preventing further sexual abuse generates a decisive moral reason to investigate. If, however, evidence indicates that the suspected offender is not likely to re-offend, or if the suspect is prevented from offending due to imprisonment or death, then the reason generated by the threat that the offender poses is much weaker.

Reasons generated by potential harm to the victim will weigh against investigation. Particularly if the victim is very vulnerable, investigators simply making initial contact with the victim can cause considerable harm. It is not uncommon to hear victims in these circumstances say things to the police like ‘everything was fine until you lot showed up’. There have been cases of suicide and of the breakdown of relationships precipitated by unwanted investigation into non-recent child sexual abuse cases. Such harm, where sufficiently likely, generates a strong moral reason not to investigate.

Other reasons bearing on a decision whether or not to investigate relate to resourcing and societal considerations. Investigating non-recent child sexual exploitation cases is unavoidably resource intensive. Analysis within Thames Valley, UK, suggests that the average complex CSE case has seven victims, eighty-eight witnesses, twenty-one suspects and ultimately ten defendants. On average, a complex CSE investigation takes nine investigators two years to complete and will cost £885,140 to resource from start to finish. Such cases arise with remarkable regularity, on average every six months in Thames Valley alone. Where resources are limited there will be inevitable opportunity costs. Although it is not simply a matter of taking money and investigators from one investigation and moving them elsewhere, the reality is that trade-offs have to be made when deciding how to allocate limited funds. If an investigation is hugely resource intensive and other pressing policing activities would be affected, a weak reason against investigating is generated.

Societal considerations are multifaceted and can weigh weakly for or against investigation. Although it is not the case that justification stands or falls on the basis of public opinion, the legitimacy of the police is affected by public support and the fulfillment of obligations. If the police in a given community had made a statement that indicated that they would pay greater attention to offences of this type, then this would generate a weak reason to investigate, on top of those generated by deterrence, justice and public protection. If, however, police had a reputation within a community as prone to trawling for offences without securing convictions, considerations around trust might generate a weak reason not to investigate, especially if there are clear impediments to solving the case.

These various reasons, stacking up and weighing against each other, might seem impossible to adjudicate. How can any decision be made when there’s so much going on and when each case looks different with respect to these various reasons? Although there will not be a definitive answer, we can make some progress by carefully assessing the relative weights of the reasons (which will partly be dependent on facts of the case) and showing how they might be considered alongside each other, to see where the weight of reasons falls. In an attempt to do this, D/C/Supt Colin Paine and I developed a decision-making framework to assist investigating officers with these decisions (see the appendix). This framework is currently being piloted by Thames Valley Police.
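To make the idea of weighing reasons concrete, here is a minimal, purely hypothetical sketch in Python. It is emphatically not the Oxford Framework itself: the reason names, weights and threshold below are invented for illustration, and any real weighting must be sensitive to the facts of each case.

```python
# Purely hypothetical illustration of a weighted-reasons decision aid.
# This is NOT the Oxford Framework: reason names and weights are invented.

REASON_WEIGHTS = {
    "criminal_justice": 3,       # presumption to investigate (always present)
    "general_deterrence": 2,     # always present
    "victim_harm_risk": -4,      # strong reason against if the victim is vulnerable
    "opportunity_cost": -1,      # weak reason against
    "public_commitment": 1,      # weak societal reason for
    "community_distrust": -1,    # weak societal reason against
}

def initial_indication(case_features):
    """Give a provisional indication of where the weight of reasons falls."""
    # A decisive reason (high risk of reoffending) settles the matter on its own.
    if "offender_threat_high" in case_features:
        return "investigate"
    score = sum(REASON_WEIGHTS.get(f, 0) for f in case_features)
    return "investigate" if score > 0 else "refer for further review"

# The case sketched earlier: vulnerable victim, no sign of continued offending,
# significant opportunity cost.
print(initial_indication({"criminal_justice", "general_deterrence",
                          "victim_harm_risk", "opportunity_cost"}))
```

A toy score like this can only give an initial indication; as the paper stresses, a decisive reason (such as a high risk of reoffending) is not traded off against others but settles the question on its own.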

It is important to reiterate that although a framework necessarily has to structure and assign weights to considerations, this is not an exact science. Other reasons may also be relevant in individual cases. But in so far as reasons vary in strength they will, in combination, point towards one course of action rather than another. Our ‘Oxford Framework’ assists by making an initial indication as to whether the reasons to investigate are stronger than the reasons not to investigate in a particular case. It is possible that frameworks such as this could apply, not just in cases of non-recent CSA, but across a range of morally challenging policing decisions.


Written by Julian Savulescu

Today, the Journal of the American Medical Association published an article entitled “Three Identical Strangers and The Twinning Reaction—Clarifying History and Lessons for Today From Peter Neubauer’s Twins Study”, written by Leon Hoffman and Lois Oppenheim. It provides background to a documentary, Three Identical Strangers, which gained a lot of attention earlier in the year, about “the lives of Edward Galland, David Kellman, and Robert Shafran, triplet brothers who stumbled upon each other in their college years and enjoyed a brief period of celebrity before emotionally confronting the implications of their separation.” One triplet ultimately committed suicide.

The triplets were part of a covert research study by child psychiatrist Peter Neubauer, who followed them for many years to study gene–environment interactions in triplets separated at birth. The article alleges that Neubauer was wrongly blamed by the triplets and the filmmakers for their separation at birth. The authors argue it was “Viola Bernard, then a prominent child psychiatrist from Columbia University and consultant to a now-defunct adoption agency” who was responsible for their separation, because she “believed that children born of the same pregnancy and placed for adoption would fare better if they were raised by separate families.” The authors review some evidence from the child development literature of the time that supported the idea that twins or triplets would fare better if adopted separately, experiencing less sibling rivalry and having greater access to parental resources.

Importantly they argue that Bernard and Neubauer acted independently of each other. Moreover, the secrecy was required by laws at the time. “It was illegal at the time of the study to provide information about biological families to adoptive parents, a practice that did not begin to be modified until the late 1970s and 1980s.”

Hoffman and Oppenheim conclude:

“So the study was ethically defensible by the standards of its time—principles of informed consent and the development of institutional review boards lay in the future. However, these documentaries demonstrate how unsatisfactory that defense is to the study’s families who live with its legacy.

The films’ message for today’s child specialists and researchers is thus something other than their surface themes of outrage and restitution. They rather provide an unusually dramatic example of the potential for harm from human participant research, even if only observational.”

I would like to draw four other lessons from this episode.

  1. More Research

Paradoxically, this is a call for more but better research – research that would have stopped the separation. The separation policy was introduced on the back of a fad, or ideological thinking, and was never systematically or properly studied (such research might, for example, have recruited twins who in any case had to be separated in childhood for other reasons). If proper research had been conducted earlier on the separation of twins and triplets, maybe this separation would never have occurred. There is a moral imperative to conduct research on hypotheses, no matter how well-intentioned or plausible they might sound. Until you do the research, you can’t know whether the intervention will do more harm than good, or whether the status quo is better than some available alternative.

A contemporary example is drug policy and the decriminalization of some recreational drugs. Instead of doing ongoing research on which policy best promotes human welfare, participants on both sides of the debate simply assert that their approach is the right one. Instead, we should do carefully conducted research, or design interventions with audit, being prepared to revise our policy in light of emerging evidence, even to the point of reversing it.

  2. The Right to Know

Researchers were wrong not to disclose the conduct of their research (even if the law required the secrecy). People have a right to know if they, or their data, are part of a research project. To use people without informing them is to treat them as a mere means. It is to demean them, to treat them as less than individuals.

But it is important to realise this isn’t a thing of the past. Today, Facebook, Instagram and Google have been accused of conducting research without informing people in a comprehensible and intelligible way. One striking example was Facebook’s experiment manipulating people’s feeds to show more positive or more negative emotional content and measuring the change in the emotional tone of their own posts. The paper was later updated with an explanation that the authors’ university ethics committee had not reviewed the project because, as the study was undertaken by Facebook for “internal purposes”, it fell outside their remit. There was no explicit consent process for the study, and users did not know that they were enrolled. Participant consent was taken to be covered by the Data Use Policy, which users agree to as part of the sign-up terms and conditions.

There is no way of knowing what research is taking place on these kinds of platforms. The Cambridge Analytica scandal, where a company harvested Facebook data to create profiles of millions of Facebook users and their friends via a free quiz, and used it for political advertising, shows that this data is useful for a number of ends, including political. There is no requirement for the research to be aimed at understanding and promoting human wellbeing.

  3. More Medical Research

While social media has essentially free rein over our data in many parts of the world, often without our knowledge and for purposes that are not necessarily in our interests, medical research is now tightly regulated and controlled. Because of abuses like this, much higher standards are now applied to medical research than to other areas of social life, and this hampers research. In some ways, the pendulum has swung in the other direction when it comes to medical research. Numerous precise consents are required just to use data generated in the normal course of the medical encounter, which makes research difficult or impossible to implement.

Big data provides enormous opportunities for research. We need to gather more and more information about people, drugs, behaviours and so on to design better health care and better lives. But people also need to understand and have appropriate control. We could use new technologies like blockchain (the technology underlying cryptocurrencies such as Bitcoin) to achieve this, or create new entities to protect people by providing anonymised data (Sebastian Porsdam Mann, Julian Savulescu, Philippe Ravaud & Mehdi Benchoufi, Blockchain, consent and prosent for medical research, in progress).

  4. Ethics Is NOT Relative

Hoffman and Oppenheim conclude that the research was ethically defensible by the standards of its time. But this is misleading. The standards of the time and the research were both unethical. It was no defense of Nazi practices that they were ethical for their culture. It is no defense of slavery that it was ethical for its time. It was not ethical that women did not have the vote in the 19th century. Ethics is not relative to time and culture.

What this shows is that we have made ethical progress. Bernard and Neubauer acted wrongly, but sometimes it is difficult or impossible to act ethically. Rather than being sanctimonious and critical of their behaviour, we should look at our own. In 50 years, it is possible that future people will look at our treatment of nonhuman animals, particularly in agriculture, just as we now look at the slave owners of the 19th century.

We should put the microscope not on these historical actors but ourselves. How confident are we that we are not the Bernards or Neubauers of our time?

Thanks to Sam Wong from New Scientist for drawing my attention to this article. His New Scientist article gives further analysis.


Rebecca Brown and Julian Savulescu

Cross-posted from the Journal of Medical Ethics blog, available here.

There is a rich literature on the philosophy of responsibility: how agents come to be responsible for certain actions or consequences; what conditions excuse people from responsibility; who counts as an ‘apt candidate’ for responsibility; how responsibility links to blameworthiness; what follows from deciding that someone is blameworthy. These questions can be asked of actions relating to health and the diseases people may suffer as a consequence. A familiar debate surrounds the provision of liver transplants (a scarce commodity) to people who suffer liver failure as a result of excessive alcohol consumption. For instance, if they are responsible for suffering liver failure, that could mean they are less deserving of a transplant than someone who suffers liver failure unrelated to alcohol consumption.

These are challenging practical questions, but philosophy – in combination with other disciplines – can help. This involves a combination of intuition-mining through the use of thought experiments, and logical reasoning about justifiable principles, concepts and theories, a process the political philosopher John Rawls called reflective equilibrium. Gradually we can make our judgments about what responsibility is, what conditions must be present, why it matters, and so on, more consistent. One upshot of this is the identification of two conditions thought necessary by many philosophers to judge an agent responsible for some action. These are the control condition and the epistemic condition. The control condition requires that, for an agent to be responsible, she must have been able to control her action (she wasn’t forced or experiencing an epileptic fit or something similar). The epistemic condition requires that the agent could foresee the likely consequences of her action, including their moral significance (she knew that pulling the trigger would send a bullet into the victim’s leg, causing him serious injury; she didn’t believe it was a fake gun or loaded with blanks).

If either of these conditions is not fulfilled, it is common to judge people not responsible. But how does this work in the health context? We can take the same approach – ask whether or not the agent had control over her health-affecting actions, and whether or not she understood the health-affecting consequences of those actions. However, lots of health harms result not from a single, discrete action with clear likely consequences (firing a gun at another person), but from the accumulation of numerous actions, each of which makes only a small, probabilistic contribution to the eventual health harm. Moreover, one agent’s actions are heavily influenced by the environment they inhabit, including the actions of other agents. For instance, people are more likely to smoke if those around them smoke. The same goes for diet – people tend to eat similar things to those they live and socialise with.

Theories of responsibility have been developed more or less with discrete, morally significant behaviours, performed by identifiable individuals, in mind. But in the health context the behaviours are repeated, may appear morally benign, and are performed by agents influenced by those around them. If we are to assess responsibility for these health-related behaviours, we need to consider how our theories of responsibility should be adapted to apply properly in these contexts.

In a recent paper in the Journal of Medical Ethics, we argue that two areas need particular attention: how we should assess responsibility over time, and how we should assess it across agents. The first must tackle the question of how often the control and epistemic conditions must be fulfilled when considering complex (repeated) behaviours. For example, if we are interested in whether or not a smoker is responsible for developing a smoking-related disease such as heart disease, we might first ask ‘was the smoker responsible for smoking?’ But this seems to require consideration of whether the smoker was responsible on each occasion she smoked a cigarette. That means considering whether the control and epistemic conditions were fulfilled every time she smoked. It is not clear what the threshold should be for considering someone responsible for a smoking habit – whether they must fulfill the conditions of responsibility on every occasion they smoke, or only on most, or some, of those occasions.

The second consideration is to ask who the ‘agent’ is that is responsible for smoking. This might seem obvious – we typically identify agents with the human bodies they inhabit, as bounded by the skin. But there are plausible arguments for extending agency beyond the skin: if we make use of various prosthetics or assistive technologies to enhance our physical functioning and cognitive powers, why not include these artefacts within the bounds of the agent? This raises the question of whether the ‘agent’ responsible for some action could be spread across multiple bodies. If so, the most likely instances seem to be intimate dyads – couples who live together, have special obligations towards one another, have significant influence over one another’s behaviour, and who share important goals. In these cases, someone who seeks to quit smoking might be significantly aided by a supportive partner. Equally, the partner may scupper those efforts by continuing to buy them cigarettes or mocking their attempts to quit. In such cases, we think responsibility might plausibly be ‘dyadic’ – distributed across the dyad, rather than located only with the individual.

The answers to questions about how responsibility should be evaluated over time and across people could have practical implications, in terms of targeting healthcare interventions and distributing resources appropriately. There appears to be political will to ‘hold people responsible’ for their health. Taking account of diachronic responsibility could result in those more frequently fulfilling the conditions of responsibility for their behaviour being more blameworthy than those whose capacity and foresight have historically been more variable. Dyadic responsibility might justify requiring those who contribute significantly to a partner’s health outcomes to share in the responsibility for those outcomes. More work is needed to identify what the implications of responsibility are in the healthcare context – most importantly, whether it is ever legitimate to deny or delay treatment on the basis of responsibility. But if responsibility is to play any role in healthcare, it is vital that it reflects the reality of many health-affecting behaviours. Our current accounts of responsibility are poorly equipped to guide us here. It is essential we adapt them in the light of the reality that humans are creatures whose behaviour changes over time and is influenced by others.

This post relates to Brown, RCH and Savulescu, J (2019) ‘Responsibility in Healthcare Across Time and Agents‘ Journal of Medical Ethics, 10.1136/medethics-2019-105382

Written by Charles Foster

In a recent blog post on this site Dom Wilkinson, writing about the case of Vincent Lambert, said this:

‘If, as is claimed by Vincent’s wife, Vincent would not have wished to remain alive, then the wishes of his parents, of other doctors or of the Pope, are irrelevant. My views or your views on the matter, likewise, are of no consequence. Only Vincent’s wishes matter. And so life support must stop.’

The post was (as everything Dom writes is) completely coherent and beautifully expressed. I say nothing here about my agreement or otherwise with his view – which is comfortably in accord with the zeitgeist, at least in the academy. My purpose is only to point out that if he is right, there is no conceivable justification for a department of medical ethics. Dom is arguing himself out of a job.

If he is right, autonomy is the only ethically relevant principle. There are no queasy questions about identity, personhood or authenticity. And our boundaries are easily defined: we bleed into nobody, and nobody bleeds into us.

By ‘autonomy’ is evidently meant the expressed wishes of the capacitous and the previously expressed or presumed wishes of the incapacitous. If the principle is so self-evidently true for end-of-life situations, it is hard to see why it is not the key that unlocks all problems in clinical ethics. Questions about embryo manipulation evaporate once one observes the benefit that can accrue to already autonomous creatures from the use of non-autonomous creatures. Questions about abortion are similarly trivial. Questions about organ donation and other post-mortem use of tissue are determined in exactly the same way as Dom urges that Lambert’s fate should be decided. There are no remaining philosophical questions. Even resource allocation questions become ethically easy. The value system relevant for the utilitarian calculus is autonomy: one simply has to work out how to maximise the amount of autonomy in the world.

There is still plenty to discuss, of course. But the remaining discussion is for lawyers, public policy makers, and health economists. The lawyers will need to devise procedures to enable capacity to be assessed, to ensure that those all-important wishes have been made freely and expressed with sufficient clarity, and so on. The public policy people will agonise about the tension between individual rights and societal interests. The health economists will create indices for autonomy and plug them into their Bayesian algorithms. But the philosophers’ work is done. It would be kind, but hardly essential, to invite them to meetings (in the Law Faculty) about medical law. But a department of their own? It can’t be justified in these straitened times.


Written by Neil Levy

A dad joke is a short joke, often turning on a pun or a play on words. Here are a couple of examples:

  • Did you hear about the restaurant on the moon? It’s got great food, but no atmosphere.
  • A sandwich walks into a bar and orders a beer. “Sorry,” says the bartender, “we don’t serve food here”.

They are called ‘dad jokes’ because they are stereotypically told by fathers. The term is a somewhat backhanded compliment. Like the words “daggy” (in Australian and New Zealand English) and “naff” (in British English) – both of which could appropriately be used to describe jokes in the genre – calling something a dad joke at once conveys that it is extremely uncool, but also indicates grudging affection for the target. Dad jokes are bad, in many people’s eyes (not mine, as it happens), but they’re so bad that they’re a kind of artform all of their own, and we convey affection and grudging respect for those who tell them.

I like dad jokes. Some of them are inventive and I find them amusing. But I worry about the name. Calling them “dad jokes” seems to me to be sexist.

Roughly, to call behavior sexist is to say that it expresses attitudes that classify people on the basis of sex when sex is irrelevant. It’s not sexist to offer maternity leave to women only, because sex is relevant to pregnancy. It is sexist to offer advanced mathematics to men only, because sex is not relevant to mathematics. Sexist behavior is not always directly wrong. It is directly wrong to offer advanced mathematics to men only, because it closes down possibilities that would otherwise have been open to women. But sexist behavior may nevertheless be indirectly wrong, even when it is not directly wrong.

Calling this genre of jokes ‘dad jokes’ doesn’t strike me as directly wrong. It may nevertheless be indirectly wrong. I think it expresses – and perhaps plays a small role in reinforcing – attitudes that are bad for men and for women.

First, for men. The idea that dads are daggy (to use the Australian word) associates fatherhood with a loss of masculinity. Before children, men are real men; after children, not so much. Both stereotypes are constraining. The first may play a role in the epidemic of male suicide, with men implicitly thinking that real men face their problems alone, and real men bear prime responsibility for the wellbeing of their families. The second stereotype, insofar as it is emasculating, may play a role in men attempting to avoid parental responsibilities. Better stereotypes would avoid the suggestion that masculinity is compromised by childrearing or by emotional dependence on others.

But the idea that dads are daggy may be bad for women too. While the “dad” stereotype is mixed, it contains positive elements. It associates dads with intellectual capacities worth having. With their plays on words, and often their display of knowledge, dad jokes associate men with intelligence. This association reflects, and may reinforce, our implicit biases: our implicit assumption that a male candidate is likely to be better at certain tasks than a female candidate with equal qualifications. So the association is likely bad for women too.

I don’t think the sexism of “dad joke” is a burning issue. I doubt the harms arising from this linguistic quirk are major. Nevertheless, as part of a pattern of gendering language and thought, this kind of sexism is best avoided. At worst, we lose nothing by referring to them as corny jokes, or daggy jokes, or what have you. At best, we may thereby play a small role in improving the future for both men and women.


A Guest Post Written by Jonny Anomaly

It’s been 20 years since Allen Buchanan and his colleagues published From Chance to Choice: Genetics and Justice. The book was a landmark, and it repays careful reading.

But there is at least one kind of question that has been largely (if not entirely) ignored in discussions about whether we should regulate parental choice, once parents have access to technologies that allow them to sculpt the genetic endowment of their children. How should we think about reproductive choices that are good for each but not for all? What should we do when there is a conflict between parents selecting the best traits for their children, when a different distribution of traits might be better from a social standpoint? Another way of asking the question is this: how should we think about situations in which there is a potential conflict between the principle of procreative beneficence and the principle of procreative altruism?

Bioethicists like Dan Brock have argued that, although there should be a presumption in favor of reproductive liberty, there may be reasons to regulate parental choice when failure to do so would produce serious harm to the child, or would undermine a public good. For example, assuming that some degree of cognitive diversity helps groups of people solve complex problems, cognitive diversity in a population is a public good. Other reproductive public goods include maintaining a balanced sex ratio, and preserving immuno-diversity in a world of rapidly evolving microbes.

Let’s take a specific case of a psychological trait. Suppose studies tell us that extraverts tend to have more friends, more sexual partners, and report slightly higher subjective satisfaction than introverts. Now suppose that introverts are more likely to creatively solve important problems when they are left alone, but don’t perform as well as extraverts in social settings. To the extent that it’s possible, individual parents might select for more extraverted children even if it’s socially beneficial to have introverts in a population.

I’m not arguing that these are binary personality traits, or that they’re purely genetically determined. I’m only arguing that traits like these are influenced by genes, and that for any personality trait we can think of, we cannot simply assume that the ability to select or alter our children’s genes will always produce a socially optimal distribution of traits.

In thinking about cases like this, it’s worth mentioning a principle named by Amy Gutmann: Regulatory Parsimony. Gutmann worries that bioethicists are often too quick to call for rules against using novel biomedical technologies. As Gutmann says, “the blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards.”

By contrast, the principle of regulatory parsimony recommends “only as much oversight as is truly necessary to ensure justice, fairness, security, and safety while pursuing the public good.” While the principle of parsimony is vague, on one interpretation it is the familiar principle of the least restrictive alternative, repackaged in a form that applies to synthetic biology in general and genetic enhancement in particular.

I’d like to give a few reasons to endorse this approach to enhancements for cases in which there is a conflict between what is (believed to be) good for each and what is (believed to be) good for all:

First, complex laws are often easier for powerful people to navigate, and tend to increase unjust inequalities by raising the relative cost of accessing new technologies. For example, medical tourism is already thriving for organ transplants and surrogacy, and it is likely to happen for gene editing and embryo selection as well. Too much regulation can harm the worst off by making access prohibitively expensive, and by creating black markets that are harder for poor people to navigate.

Second, too many laws can crowd out social norms, which are more sensitive to local conditions than laws are. For example, as more people use IVF and PGD, local sex ratios may come to deviate from 50/50, favoring one sex or the other in different places. Norms are better than laws at influencing these choices, in part because there is likely to be more value in choosing the opposite of whichever sex happens to be in the majority at a particular place and time.

Finally, regulators have their own biases. They often lack the information needed to find a socially optimal distribution of traits, and lack the incentives to implement it. Past eugenics programs ceded too much authority to the state even if, as Allen Buchanan has argued, states do have a role in promoting informed choice and distributing biomedical technology to parents who wish to select the traits of their children.

I expand on these themes in a new paper, written with Chris Gyngell and Julian Savulescu, and in a recent talk at Duke University.

Is there anything I’m missing?

Jonny Anomaly is an Academic Visitor at Oxford’s Uehiro Centre for Practical Ethics.


In a fascinating presentation hosted in March by the Oxford Uehiro Centre for Practical Ethics, Professor Seumas Miller spoke about what is now known as ‘moral injury’ and its relation to PTSD, especially in the context of war fighting and police work.

Miller began by explaining the standard view of moral injury, as used in the work of, for example, Nancy Sherman and Jonathan Shay. Like PTSD, it arises from highly stressful events, such as events threatening one’s own life or the loss of a comrade. PTSD can be said to consist in extreme mental distress, involving fear, depression, and other negative states, along with physical or cognitive impairments, such as memory loss or insomnia. Moral injury is usually taken to be a species of PTSD involving experiences, such as the killing of an innocent, that violate the individual’s own moral values.

Miller, however, argued that PTSD is in effect a species of moral injury. He first outlined what is involved in caring about something which, or someone who, is worthy of being cared about. Objects of such care might include one’s own life, the lives of one’s comrades, one’s own autonomy, and the approval of others. If any of these is taken away, the moral identity of the ‘caring self’ will itself be damaged.

At this point, Miller focused in particular on war fighters and police officers. These individuals deeply care about the things most human beings deeply care about, but they are also willing to put themselves in harm’s way and to use potentially harmful methods to achieve their goals. Consider in particular the concern such people often have for honour, as it consists in loyalty, self-discipline, and so on. These virtues are clearly elements of their moral identity.

We can now understand how, on a care-based account, the relation between moral injury and PTSD can be differently understood. Moral injury might involve the sufferer’s acting wrongly, but it need not. The traumatic event causing PTSD undermines core elements of the sufferer’s moral identity. PTSD, then, turns out to be a species of moral injury, rather than the other way round.

Miller then turned to the issue of ‘dirty hands’ and moral injury. First he noted the possibility of conflict between an individual’s role identity (as, say, war fighter) and their interpersonal identity (as, say, father). Consider cases in which a war fighter kills an enemy terrorist, guilty of terrible war crimes, or an innocent civilian, believing the killings to be both morally and legally prohibited. (Structurally similar cases can be imagined for police work, involving the treatment of criminals and innocent parties.) Such actions might be placed in the category of ‘dirty hands’ – legally and morally prohibited actions performed for good ends. It is easy to see how considering such actions and then performing them can lead the agent down a slippery moral slope, in which such actions become habitual, and also how – given the conflicts between identities mentioned above – this can lead to moral injury and indeed PTSD.

Miller closed with several recommendations for reducing moral injury of this kind in war-fighting and police work: Recruit the resilient; provide ongoing training; avoid disastrous failure; reduce stressors; provide supportive leadership, ethics training, and opportunities to consult psychological injury professionals; communalize moral injury through collective moral responsibility.

There may be responses available to the defender of the ‘standard’ view of moral injury and its relation to PTSD. Miller himself allowed that we can distinguish two types of PTSD, mirroring the phenomena described in the standard view as PTSD and moral injury – that is, those which involve moral responsibility in some respect, and those which do not. But it may well be that the moral/non-moral distinction at work in the standard view is less significant than it appears at first sight, and so the pressure Miller puts on that distinction, and his articulation of a more capacious evaluative alternative in the notion of moral identity, constitute a very welcome intervention in the debate.


Written by: Carl Tollef Solberg, Senior Research Fellow, Bergen Centre for Ethics and Priority Setting (BCEPS), University of Bergen.
Espen Gamlund, Professor of Philosophy, Department of Philosophy, University of Bergen.

In 2015, there were 56.4 million deaths worldwide (WHO 2017).[i] Most people would say that the majority of these deaths were bad. If this is the case, why is it so, and are these deaths equally bad?

Death is something we mourn or fear as the worst thing that could happen—whether the deaths of close ones, the deaths of strangers in reported accidents or tragedies, or our own. And yet, being dead is not something we will ever live to experience. This simple truth raises a host of challenging philosophical questions about the negativity surrounding our sense of death, and about how, and for whom, exactly it is harmful. The question of whether death is bad has occupied philosophers for centuries, and the resulting debate in the philosophical literature is referred to as the “badness of death.” Are deaths primarily negative for the survivors, or does death also affect the decedent? What are the differences between death in fetal life, just after birth, and in adolescence? When is the worst time to die? These philosophical questions, although of considerable theoretical interest, are particularly relevant to how we evaluate deaths in global health, and policy-makers spending money to finance different health programs need to know how to answer them.

Two Disconnected Debates

The ancient philosopher Epicurus (341–270 B.C.) would not have thought that the deaths of the 56.4 million people mentioned above were bad for them. Epicurus argued that death is not prudentially bad for us because “as long as we exist, death is not with us; but when death comes, then we do not exist” (Epicurus 1940, 30–34). Many contemporary philosophers disagree with Epicurus and believe that death can be bad for those who die. Most notably, Thomas Nagel (1970) argued that death can be bad for those who die when and because it deprives them of the good life they would have had if they had continued to live. This marks the beginning of the debate in the philosophical literature referred to as the “badness of death.” Moreover, Nagel’s so-called Deprivation Account has come to be regarded as the orthodox view of why death is prudentially bad.

There are a few trends in the current philosophical debate that are worth mentioning. First, the debate is secular in tone, with the assumption that permanent non-existence follows death. Second, the focus is on the instance of death rather than on the process of dying. Third, most of the discussion is concerned with whether death can be prudentially bad for those who die, rather than bad for others, such as family, friends and society.

Up until the 1940s, epidemiology was primarily concerned with mortality rates, such as the crude death rate and age-specific death rates (Dempsey 1947). The crude death rate is simply the number of deaths per year per 1,000 people, while an age-specific death rate is the crude death rate restricted to a particular age group. On these measures, none of the 56.4 million deaths is ranked as worse or better than any other. Descriptive measures have their virtues: they are simple, transparent, and inherently universal. It can be argued, however, that they leave out something of importance. First, some deaths clearly are worse than others, and these descriptive measures are silent about that fact. Second, one may question whether descriptive mortality measures—without further adjustments—are suited to comparison and aggregation with morbidity measures. These and similar concerns can be addressed by mortality measures that are to some extent evaluative.
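As a simple illustration of these two descriptive measures, here is a short sketch in Python; the figures are invented for the example, not WHO data.

```python
# Illustrative only — invented figures, not WHO data.

def crude_death_rate(deaths, population):
    """Deaths per year per 1,000 people."""
    return deaths / population * 1000

def age_specific_death_rate(deaths_in_group, population_in_group):
    """The crude death rate restricted to one age group."""
    return deaths_in_group / population_in_group * 1000

# A hypothetical country of 5 million people with 40,000 deaths in a year:
print(crude_death_rate(40_000, 5_000_000))        # 8.0 per 1,000
# ...of which 1,200 deaths occurred among 600,000 people aged 0-4:
print(age_specific_death_rate(1_200, 600_000))    # 2.0 per 1,000
```

Note that nothing in these calculations ranks one death as worse than another; that is precisely the point made above about purely descriptive measures.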

Evaluating Deaths

Consider the deaths that occurred worldwide in 2015. Of these 56.4 million deaths, 2.7 million were those of infants, and 5.9 million were those of children from birth up to 5 years of age. The deaths of people from 5 to 14 years of age numbered 1 million. The majority of the 56.4 million deaths were those of older adults. Furthermore, there were roughly 2.6 million stillbirths that are not included in this WHO statistic of the total number of fatalities. Do we want to say that all these deaths are equally bad?

It would seem that our answer to this question depends on our theoretical starting point. If deaths are bad for those who die primarily because of what they are deprived of, then it would appear that the earlier in life death occurs, the worse it is. Newborn deaths, for instance, would be worse than adolescent deaths, because newborns are deprived of a greater future than adolescents. While many philosophers seem to accept this conclusion (Marquis 1989; Feldman 1992; Broome 2004; Bradley 2009), others seek to defend intuitions that conflict with it. The latter group considers the death of an adolescent to be worse than the death of a newborn, even if the newborn is deprived of a longer future (see, e.g., Dworkin 1993; McMahan 2002). Most philosophers would say that stillbirths should be included in WHO’s statistics, because the death of a late-term fetus, although not a great misfortune for it, is nevertheless bad enough to be counted.

How we should evaluate quality of life or well-being has been thoroughly discussed in both philosophy and medicine. How we should evaluate deaths has not received similar attention. The question of the harm of death for the individual who dies is undoubtedly complex. However, by answering this question carefully, we can seek to design appropriate evaluative measures that can guide health policy around the world. To avoid the question, or to answer it rashly, is to risk getting global health priorities wrong. If we are mistaken in our evaluation of death, then our monitoring and assessment of the burdens of different diseases become impaired. For example, contrary to current practice, there are decisive arguments for including the annual 2.6 million stillbirths (2015) in the evaluation of deaths. Moreover, illuminating and improving the way we evaluate deaths can have consequences for how organizations such as WHO monitor health in the Global Burden of Disease study (GBD) and, not least, it may give us a better tool for prioritizing between major health programs that are intended to prevent deaths in different age groups. Ultimately, it will have consequences for the clinical work to prevent premature deaths in general and stillbirths in particular.

Future Discussions

We have recently edited and published an anthology—Saving People from the Harm of Death—which discusses how to evaluate deaths. In this volume, leading philosophers, medical doctors, and economists discuss different views on how to evaluate death and its relevance for health policy. This includes theories about the harm of death and its connections to population-level bioethics. For example, one of the standard views in global health nowadays is that newborn deaths are among the worst types of death, while stillbirths are neglected. This raises difficult questions about why birth is so significant, and several of the book’s authors challenge this standard view.

This is the first volume to connect philosophical discussions on the harm of death with discussions on population health, adjusting the ways in which death is evaluated. Changing these evaluations has consequences for how we monitor health and compare health outcomes, prioritize different health programs that affect individuals at different ages, as well as how we understand inequality in health. Our hope is that academics and policy-makers alike will be much more concerned and engaged in the question of how to evaluate deaths in the future.

References

Bradley, Ben. 2009. Well-Being and Death. New York: Oxford University Press.

Broome, John. 2004. Weighing Lives. New York: Oxford University Press.

Dempsey, Mary. 1947. “Decline in Tuberculosis: The Death Rate Fails to Tell the Entire Story.” American Review of Tuberculosis 61, 2: 157–164.

Dworkin, Ronald M. 1993. Life’s Dominion: An Argument about Abortion, Euthanasia, and Individual Freedom. New York: Vintage Books.

Epicurus. 1940. “Letter to Menoeceus.” In the Stoic and Epicurean Philosophers, edited by W. J. Oates, translated by C. Bailey, 30–34. New York: Modern Library.

Feldman, Fred. 1992. Confrontations with the Reaper: A Philosophical Study of the Nature and Value of Death. New York: Oxford University Press.

Gamlund, Espen and Carl Tollef Solberg (eds.). 2019. Saving People from the Harm of Death. New York: Oxford University Press.

Marquis, Don. 1989. “Why Abortion is Immoral.” The Journal of Philosophy 86, 4: 183–202.

McMahan, Jeff. 2002. The Ethics of Killing: Problems at the Margins of Life. New York: Oxford University Press.

Nagel, Thomas. 1970. “Death.” Noûs 4, 1: 73–80.

WHO. 2018. “The Top 10 Causes of Death.” http://apps.who.int/mediacentre/factsheets/fs310/en/index.html (Visited 14.05.19).

[i] This essay is a shortened and revised version of the introduction in Gamlund and Solberg (2019).
