There is a famous Seinfeld joke about public speaking. It's based on an old opinion poll result that reported that people fear public speaking more than death. Seinfeld used this to make the wry observation that the next time you are at a funeral you should reflect on the fact that the person giving the eulogy would rather be in the coffin.
Suffice to say, I don't feel that way about public speaking. I have many social anxieties but speaking in front of a large (or small) audience is not one of them.1 That's not to say I'm any good at it, of course. But I have at least done a lot of it and grown accustomed to its rhythms and its demands. Furthermore, I have learned from the mistakes that I have made over the years so that even if I am not particularly good at it, I am at least better than I used to be.
This is all by way of justifying what you are about to read. I get asked quite often for advice on giving talks (by students) and I am frustrated that I have still not got around to formalising my thoughts on the matter. What follows is my first attempt to do so. If you are in a hurry and are just interested in reading my 'tips' on how to give a talk, then you can find them summarised in the poster that accompanies the text. If you have more time, and are willing to tolerate the occasional diversion, then I hope you will read the full thing because I'm not just going to explain the methods I follow when giving talks, I'm also going to reflect on things I love and hate about the process, give some rants about academic conferences, and consider the larger purpose and philosophy behind the practice of giving talks.
As always, what follows is my own take on things. I am not claiming that the things I find useful when giving talks will be useful to others, or that I have undertaken a detailed survey of the evidence concerning what works and doesn't. I'm just distilling the lessons I have learned from my own experiences. This means, inevitably, that my reflections are geared toward giving academic-style presentations. I have some experience giving other kinds of talks too, so I hope what I say is of more general interest. I'll include links to examples of talks I have given along the way and I will also include several as an appendix at the end.
For ease of analysis, I am going to structure the discussion around a timeline that corresponds to the major steps involved in preparing and delivering a talk. The timeline is illustrated below along with the 'tips' that correspond to each step in the timeline. As you can see, it starts at the point in time at which you accept an invite to give a presentation, then proceeds through to preparation, delivery and follow-up. The preparation step is, proportionally, the longest and this is because I think it is the most important.
1. The Invite or Acceptance
The journey to a talk starts when you accept an invite to give one. This will either be because you have deliberately sought an invitation (e.g. by submitting a paper to a conference) or because someone contacts you out of the blue asking you if you would be interested in giving one. The former is common early in an academic career; the latter common later in a career.
When I was young and eager to establish myself, I accepted all invitations to give talks without hesitation (assuming I could afford the travel or someone else was going to pay for it). Nowadays, I take a bit more time reflecting on whether it is something I really want to do. There are several reasons for this. The most obvious is that preparing a talk takes a lot of time (or, at least, it should take a lot of time) and I need to figure out whether I have that time to spare. But that's only part of the picture. There are other, less practical and more existential, reasons that loom larger for me now.
I have developed quite a cynical attitude toward academic conferences and gatherings over the years. Academic conferences are strange affairs. They are made up of hordes of earnest scholars gathering together in brightly-lit meeting rooms and poorly-catered conference suites, to speak at each other in 10-20 minute timeslots. Most of the talks are poorly attended and poorly delivered. The speakers assume that their audiences are interested in what they are saying. The attendees repay this assumption by appearing bored and listless, busily scrolling through their phones or checking email from their real jobs back home.
Having attended dozens of these events over the years, I have turned into something of a 'conference nihilist', at least when it comes to the talks delivered at them (I'll say more about the social aspects of conferences later on). I think conference talks generate a lot of sound and fury but ultimately signify nothing. I see them as a holdover from a bygone era. At one point in time, attending conferences and listening to papers may have been the only way to 'keep in touch' with what was happening in your field. It also may have been the only way to contribute and get attention for the work that you do. Nowadays I read about the lore surrounding the Solvay conferences on quantum physics in the early part of the 20th century and they sound like exciting affairs. Groundbreaking work was presented and debated, and the frontier of human knowledge was expanded.
I have never attended a conference like that. It seems to me that it is clearly no longer true that attending conferences is essential to academic work. I can access more working papers and preprints than I have time to read at the click of a button, and I can interact with and solicit feedback from academics all over the world from the comfort of my home. Indeed, the experience of reading, writing and deliberating over ideas from the comfort of my home is usually (though not always) superior to the experience I get at a conference. So I have really started to question the value of attending and participating.
My commitment to conference nihilism tends to vary depending on the size of the event. Very large conferences tend to generate the most profound sense of nihilism. I'm talking here about conferences with hundreds (maybe even thousands) of attendees where your talk takes place in one of half a dozen parallel streams. At such an event, your contribution will feel like a small drop in a large ocean: you'll be lucky if anyone notices a ripple. Smaller events generate less nihilistic feelings. My sweet spot is the 'workshop' with 15-20 participants, each of whom is given a decent amount of time to talk, and all of whom are curious and interested in what the others have to say. But sometimes those events are lacklustre too because they are poorly organised and poorly run. An event where I am the sole speaker (e.g. a guest seminar or lecture) can seem quite attractive and less nihilistic on paper, but my experience of these is mixed too. Guest seminars and guest lectures are often poorly attended (maybe it's just me?) and, having organised a few myself, I know that there is sometimes a desperate, last-minute attempt to get 'bodies in the room'. This means attendees are less engaged and interested than you might initially expect, and the talk can generate less useful discussion as a result.
All of this might make it sound as though I hate giving talks and blame others for the nihilistic nature of academic conferences - as though the problems all stem from the organisation, format and attendees, and not from the speaker and their inability to say anything interesting or valuable. That's not the case. As you'll see below, I do think you can enjoy the process of giving talks, and I do think the speaker has a heavy burden to discharge: they have to try to make their talk as good as possible. My point here is simply that before you accept an invitation to give a talk you have to know what you are getting yourself into. You have to realise that most talks, at most events, are relatively pointless. You have to embrace that pointlessness.
In this sense, conference nihilism can be quite liberating. Once you acknowledge that most conferences are nihilistic affairs, you are freed from the ordinary expectations and obligations associated with attending and delivering talks at them. You are free to shape your own conference destiny, at least to some extent, by being a little more picky and selective in what you are doing.
In this respect, I have three 'tips' when it comes to accepting invitations to give talks:
Don't over-leverage yourself: Don't accept too many invitations to give too many talks. Only agree to do as many as you feel able to do to the best of your ability. This is a lesson I have learned the hard way. Realising that most talks I give won't change the intellectual landscape, or be life-changing or career-shaping, gives me the courage to be more selective.
Limit expectations: Don't expect too much from the process or experience of giving a talk. Don't be surprised if no one attends, or seems to care about what you have said. Make sure you are comfortable with that possibility before accepting.
Focus on the process not the outcomes: Before accepting, ask yourself whether you will enjoy the process of preparing and delivering the talk. Is it going to be on a topic that you want to talk about? Will you enjoy the challenge of preparing and refining the talk? If so, do it. If not, and you only see the talk as a stepping stone to future success, maybe reconsider.
There are other reasons to accept an invitation too. Sometimes I accept invitations because it allows me to visit a place I have never visited, or meet up with people I would like to meet up with, but these reasons have become less compelling as I have aged. I find that the obligation of preparing and delivering a talk tends to suck energy away from any wider enjoyment of the trip or the destination, and if I'm at an event with other participants and speakers, I feel an obligation to attend those talks too (more on the reason why a bit later on). Finally, when you travel a lot for specific events, it all tends to get a bit monotonous. You see the world through airports and hotel rooms. Sometimes these are nice places, but they can be a bit same-y.
2. Preparing a Talk
The common cliché is that preparation is paramount. I try to avoid clichés, but when it comes to giving talks this is one that I wholeheartedly endorse. Most talks I attend (and give!) are bad. I can't say for sure why this is the case, but my guess is that the majority of the time the problem is that the speaker hasn't prepared properly. They haven't thought about the audience and their expectations; they haven't rehearsed and refined what they want to say; they haven't given due consideration to the time constraints of the talk; and they haven't put a proper structure on what it is they want to say.
I understand why this is. Proper preparation takes a lot of time and, given the low stakes of most talks, it's hard to justify that temporal investment. Other deadlines intervene and, before you know it, it is the night before your talk and you are frantically pulling together some slides and jotting down some bullet points so that you will have enough content to fill your time-slot.
I've been there.
The problem is that this under-investment of time and frantic last-minute preparation just feeds the cycle of nihilism: you don't expect much from your talk, so you don't put much effort into it and, sure enough, your talk is a flop and this confirms your worst suspicions about the process. This is another reason not to over-leverage yourself and commit to giving too many; and another reason to only agree to give talks when you are willing to invest the time and effort required to make the talk as good as possible.
There is an odd paradox to this. I am aware of it. Once I embraced conference nihilism, I found that I was able to take the process of preparing and giving talks seriously once again. This was because I was free to reject invitations that I might otherwise have accepted out of some sense of professional obligation or personal ambition, and free to focus on accepting the few to which I was willing to dedicate myself. This has enabled me to enjoy the preparation process once again, to see it as something that can be intrinsically rewarding and fascinating, not just an unwelcome chore. The net result seems to be that I live the opposite of conference nihilism, while still being committed to conference nihilism in the abstract. I am happy to live with that paradox.
But what of the preparation process itself? Through trial and error, I have hit upon the following method that I try to follow when giving a talk. I don't always succeed, but when I do, I find that the end result is better.
Write it out and learn your speed limits
The first thing I like to do is write out the content of my talk in full. I don't aim for perfection. I try to come up with a rough first draft that I will subsequently refine. I do this for two reasons. The first, and most important, is that it allows me to control the length of my talk. Over the years, I have learned how many words I can say in a given period of time. I find this to be a powerful tool when preparing talks. For example, I know that if I have to give a ten minute talk, I will need to produce approximately 1200 words of text; if I have to give a twenty minute talk, I will need to produce about 2500; and so on. Writing it out gives me a clear sense of whether I am within those limits and whether something needs to be cut out or included to make it work (this is what I mean by learning your speed limits).
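To make the word-budget idea concrete, here is a minimal sketch (my own illustrative Python, not something from the original post) that turns a time slot into an approximate script length. It assumes a speaking rate of roughly 120 words per minute, which is what the 1,200-words-per-ten-minutes figure above implies; you should substitute your own measured rate:

```python
# Rough word-budget calculator for a scripted talk.
# Assumes ~120 words per minute (implied by the figures above); measure your
# own rate by timing yourself reading a passage of known length aloud.

def word_budget(minutes, words_per_minute=120, ad_lib_buffer=0.0):
    """Approximate number of scripted words that fit in the time slot,
    optionally reserving a fraction of the slot for ad libs and pauses."""
    return round(minutes * words_per_minute * (1 - ad_lib_buffer))

if __name__ == "__main__":
    for slot in (10, 20, 30):
        print(f"A {slot}-minute talk allows roughly {word_budget(slot)} scripted words.")
    # Leaving 10% of a 20-minute slot free for ad libs, at 125 words per minute:
    print(word_budget(20, words_per_minute=125, ad_lib_buffer=0.1))  # 2250
```

The budget is personal: a slower, more deliberate speaker needs fewer scripted words for the same slot, and anyone who likes to ad lib should reserve a buffer rather than scripting the slot to the last second.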
The other reason why I write it out is that it helps me to remember the content of my talk. I very rarely 'read' a talk, though sometimes I do refer to notes. Indeed, I find talks that are read out to be quite dull (even if some people can do it very well). This was one reason why I resisted writing anything out for years. I thought talks should be more casual, spontaneous monologues, and I worried that writing them out would make me a slave to a script. But I now realise that this isn't true. If you have a written script, and you learn it off and rehearse it, you can still be quite natural in your delivery and include some spontaneous ad libs and remarks. You can, however, do this safe in the knowledge that you know what you want to say and how long you have to say it.
Build an Enticing and Transparent Structure
When writing out the talk, I try to ensure that it has an enticing and transparent structure. In other words, I try to ensure that it says something that the audience might want to hear and that it is clear about its aims and objectives. I appreciate that this is a very generic 'tip', but it is hard to be more specific since the content of a good talk is highly variable. A dense, data-rich talk might go down well at a scientific conference, but not so much at a meeting of local politicians. My sense is that you should try to meet your audience's expectations as much as possible, but not at the expense of sacrificing your own values and competencies. So, for example, you might think it is important for the local politicians to hear the data-rich talk. That's fine. You just have to do more work to make them willing to hear it.
From my own perspective, there are three things I try to do when structuring a talk:
I try to build rapport and/or intrigue at the outset. In other words, I try to lead with an interesting story or example that sets up the problem I am going to discuss in the remainder of the talk. I also then try to explain where I want to bring the audience by the end of the talk. What proposition or thesis am I going to defend? Do I expect them to agree with it? Are they likely to be resistant to what I say? What assumptions will I make that they might not share? I see talks as an attempt to bridge the gap between different minds. My working hypothesis is that the gap between my mind and the mind of others is quite large and so I have to do a lot of work to bridge it. I have adopted this working hypothesis based on my own experiences when listening to other people talk. I find they assume that I know more than I do about the topic they are talking about, that I will find it just as interesting as they do, and that we share similar methodological or theoretical assumptions. I try not to make those assumptions (though we all have our blindspots).
I try to include 'memorable moments' within the structure of the talk. These memorable moments could be interesting stories, thought experiments, visuals, statistics and so on. I like to pepper these throughout the talk and have at least one towards the start and one towards the end. How many I have in the middle depends on the length of the talk. The basic rationale behind this is that including such moments draws in the attention of those whose minds may have wandered away from the talk. Trying to have some audience participation can be a good way of doing this too. But I'm often too cowardly to do this.
I try to be provocative/interesting and not comprehensive. My academic instinct is to be comprehensive. When I'm writing something, I want to address every objection I can think of, to identify all the gaps in what I am saying, and acknowledge all the complexity and nuance. The problem is that I cannot do that in a talk, particularly a short talk of ten to fifteen minutes. I have to resist the urge to hedge every argument and highlight every nuance. I have to get the core idea across and I have to make that core idea (at least somewhat) provocative and interesting. This is because I want the audience to engage with it. If they have objections, great -- we can discuss them in the Q&A -- but I have to get them excited enough to even bother raising those objections. That said, I will admit that this is a bit of a balancing act. You don't want to be needlessly provocative and you don't want to come across as being naive or cavalier about the complexities of the issues you are talking about. So it is a judgment call, but my judgment is that academics tend to lean too far in favour of hedging and complexity when giving talks. This means they never get to the interesting idea within their allotted time.
Remember Less is More, Particularly with Visuals
A good heuristic for preparing a talk is to cut about a third from your initial draft. At least, that's always been a good heuristic for me. I try to stuff too much into my talks. This might be okay if I stuck to the script but I like to ad lib and wander when it seems appropriate to do so. This usually results in a rushed presentation and I end up sacrificing some of the interesting ideas I wanted to include anyway. I find it's better to nip this problem in the bud by murdering some of my darlings before I finalise the script, even if this is hard to do.
I've found that preparing for formal debates has been a good training ground for this. I'm not a huge fan of the debate format, but I've participated in a few (you can see one of my debate contributions on YouTube, here, listen to the audio of one here, and read another here), and the one thing they do have going for them is that they force you to condense your key arguments and ideas into a short timeframe. You typically get 10 minutes in a formal debate (sometimes more; sometimes less). If you want to present a robust argument for a proposition in that space of time, you have to cut out a lot of the fluff and nuance. I find this strict time limit to be creatively liberating as it counteracts my natural tendency to prolixity (he says 3,500 words into this article).
Less is more is definitely true when it comes to visual accompaniments to talks. I know it's a cliché to talk about 'death by powerpoint' but it amazes me that people still produce horrendous powerpoint presentations to accompany their talks. You know the type: densely packed slides, with 12-point font, that the speaker proceeds to read out with their back facing the audience. I've always tried to avoid that, but I have gone through different phases on how best to design a powerpoint.
For years, I adopted a powerpoint style that was similar to the one Lawrence Lessig employed (the so-called 'Lessig Method'). You can see what this is like in this video. The classic Lessig method is to have lots of slides, often consisting of single words or sentences, that serve as visual emphasis to the spoken words. The slides and the speaker thus perfectly complement one another, like different instruments in an orchestra. I never quite approached the staccato-esque style of Lessig, but I aimed for something similar.
I now look on this as a mistake. I now think that you shouldn't use slides or visuals if they don't genuinely complement and accentuate what you are saying. This belief is born out of my experiences at conferences where the speakers have learned the 'less is more' lesson but have taken it too far. For example, I was recently at a conference where one speaker produced a slide deck that contained four slides, each of which consisted solely of a numbered heading for the sections of their talk, and another speaker produced a slide deck that consisted solely of photographs that served as obscure (and often unexplained) visual metaphors for what they were saying. I went away with the sense that both talks would have been better if the speakers had dropped the slides altogether and made themselves the focus of attention.
Nowadays, when I produce slides to accompany a talk, I try to have just a few images that help to explain the key concepts or ideas and I cut out all the other fluff. I've embedded two example slide decks from talks I have given in the past 12 months that hopefully illustrate my approach. The first is from a talk I gave about robotics and discrimination and the second is from a talk about algorithmic domination (written versions of both talks were previously published as blog posts here and here). Both talks were about 25-30 minutes in length.
In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including Artificial Superintelligence: A Futuristic Approach. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare.
[This is a slightly expanded version of a talk I gave at the SIENNA workshop on the ethics of human enhancement in Uppsala, Sweden on the 13th June 2019. The talk was intended to be a provocation rather than a comprehensively reasoned argument.]
I've been asked to say a few words about the challenges that emerging enhancement technologies might pose for how we define human nature (with a nod towards how this might also interact with the 'dual use' nature of technology). I didn't say this to the organisers when they asked me, but this is a difficult topic for me to talk about. That's because I am a sceptic of human nature. I tend to agree with Allen Buchanan (2009; 2011) that discussions of 'human nature' in the enhancement debate tend to obscure more than they clarify. This is because the term 'human nature' usually functions as a proxy for something else that people care about. My feeling is that people should talk about that something else instead, and not about human nature.
That said, I'm clearly in a minority in taking this sceptical view. People are hungry for discussions of human nature. The library shelves groan under the weight of scholarly volumes dedicated to the topic. Just to illustrate, there was a book I read many years ago as a student by Leslie Stevenson called Seven Theories of Human Nature. It was first published in 1987. In 2017, the seventh edition of the book was published, now titled Thirteen Theories of Human Nature - apparently the number of theories of human nature had doubled in the intervening 30 years. At that rate of growth, the number of theories of human nature will exceed the total number of humans in just over 900 years. Clearly people are obsessed with this topic.
What is it that obsesses them? Obviously, I can't do justice to the diversity of thinking on this matter -- I'm just setting up a conversation -- but I can at least help to structure that conversation by considering three senses in which people use the term 'human nature' and by explaining what I find problematic and interesting about them.
The first sense is as a descriptive-explanatory theory, i.e. as a theory that describes some fundamental truth(s) about what it is to be a human being. The classic descriptive theories of human nature are essentialist in nature. They try to identify the characteristics that are both necessary and sufficient for belonging to the kind 'human being'. They usually do this by engaging in human exceptionalism: i.e. by focusing on characteristics that distinguish members of human kind from other animals. Typical examples of such characteristics include things like the capacity for self-consciousness, altruism, language, laughter, art, complex tool use and so on.
These essentialist theories are scientifically dubious. In this regard I find myself swayed by an old argument by the philosopher David Hull to the effect that modern evolutionary biology undermines essentialistic theories of human nature. This is because modern evolutionary biology endorses the view that the world is filled with genetically varying individuals that occasionally form stable reproductive populations that we call 'species', but these 'species' are temporary and, at least partially, linguistic facts. As he put it:
[I]t is simply not true that all organisms that belong to Homo Sapiens as a biological species are essentially the same… periodically a biological species might be characterised by one or more characters which are both universally distributed among and limited to the organisms belonging to that species, but such states of affairs are temporary, contingent and relatively rare.
Even if you don't buy that argument, there are two other fatal flaws with the essentialist theory. Whatever characteristic you pick as being distinctive of humans (self-consciousness, altruism etc) you can (a) find animals that share primitive or proto-versions of those traits (with perhaps the exception of true language) and, more importantly, (b) find individuals (or groups of individuals) that we would like to call 'human' that lack them, either due to disability or disease or some other factor.
These problems with the essentialist theory have led some scientists and philosophers to endorse non-essentialist theories of human nature. These theories do not pretend to identify distinctively human characteristics but, rather, try to identify characteristics that tend (statistically) to be shared by humans in virtue of their evolutionary and developmental origins. Edouard Machery, for example, has defended a 'nomological' theory of human nature that focuses on traits that have their origins in our shared evolutionary history. Similarly, Michael Tomasello, in his recent trilogy A Natural History of Human Thinking, A Natural History of Human Morality, and Becoming Human: A Theory of Ontogeny, has defended a theory of human nature that focuses on characteristics that emerge from our shared evolution and ontogenetic development (although I find that Tomasello leans too far in favour of the human exceptionalism that is typical of essentialist theories of human nature). Related to this, it is also worth noting that some people argue that we should move away from theories of human nature that expect it to be a stable and unique 'thing' and should, instead, favour theories that view it as a 'process'. This is because an individual human being is not a stable thing but is, rather, a process that develops and changes over time (proponents of this view include John Dupré and Paul Griffiths).
These non-essentialist theories strike me as being much more plausible, but after reading about them I tend to wonder how useful they are, even as scientific theories. The problem is that they all tend to allow for a lot of individual and cultural variation in the traits that are supposed to define our natures. Furthermore, I often get the sense that their proponents pick and choose characteristics that they think are important and interesting and use those to define what it means to be human. In this sense, I worry that proponents of these theories are like dog breeders that measure each individual dog relative to an 'ideal breed type', which, as best I can tell, is an arbitrary construction. In other words, just as there is no ideal dalmatian or poodle, so too is there no ideal human. The problem is that even these non-essentialist theories of human nature tend to assume that there is.
This brings me to the second sense in which people use the term 'human nature', namely: as a normative theory of what is good/bad (and permissible/impermissible) for 'creatures like us'. This normative approach to human nature is probably the approach that we are most interested in here today. We are all presumably familiar with the way in which normative theories of human nature get weaponised in debates about the ethics of enhancement. Some people claim that enhancement is against human nature and so ought to be stopped; some people claim that it is expressing our most human traits and so ought to be celebrated. Neither side persuades the other.
Normative theories of human nature could be thought of as being entirely distinct from descriptive theories of human nature. If they were, I would probably find them unobjectionable, but that's only because they would then be indistinguishable from theories of human well-being and flourishing (which, though contested, do provide some genuine normative guidance with respect to enhancement). The problem is that many people try to ground their normative theories of human nature in descriptive theories, presumably to give them some extra normative 'oomph'. Suffice to say, I find this practice highly dubious because I find those grounding theories highly dubious. There is the same 'pick-and-mix' mentality at play: people select characteristics they happen to like and then reify them into this descriptive-normative theory of what it is to flourish as a human being.
A clear example of this mentality in action, at least based on my reading, is the theory of human nature that the conservative philosopher Roger Scruton puts forward in his book On Human Nature (Princeton University Press, 2017). Scruton, who has always been a controversial figure, has been much maligned recently due to his apparent sympathy for the right-wing governments in Poland and Hungary and the fact that he has been favoured by both governments. I mention this not to poison the well but because I expect people reading this would find it odd if I didn't make some allusion to this ongoing controversy. Anyway, in the book Scruton argues against reductionist/scientific theories of human nature and in favour of an emergentist/Kantian theory. Roughly, he claims that what is distinctive about humanity is that we understand ourselves and our fellow humans to be moral agents, who possess a unique first-person perspective on the world, and are capable of grasping and acting for moral reasons. A couple of quotes will give you a flavour of his approach:
I want to take seriously the suggestion that we must be understood through another order of explanation than that offered by genetics and that we belong to a kind that is not defined by the biological organization of its members.
We are animals certainly; but we are also incarnate persons, with cognitive capacities that are not shared by other animals and which endow us with an entirely distinctive emotional life--one dependent on self-conscious thought processes that are unique to our kind.
I don't want to dismiss these thoughts entirely. Clearly, there is a sense in which it is true that this mode of self-understanding and interpersonal relationality is central to the human experience1, but there is also a sense in which these are simply the properties that Scruton would like to associate with what it means to be human. Shining the spotlight on these characteristics obscures the fact that some humans don't express or exemplify these properties in the form that Scruton imagines, and that most (all?) humans are more than just these properties.
Despite my scepticism of theories like this, I do think that (non-essentialist) descriptive theories of human nature can provide some important normative heuristics to those of us interested in the enhancement project. They might help us to identify practical limits to what it is possible to change about most humans without doing harm. This might be useful when it comes to setting policies at a population level (while acknowledging exceptions at an individual level). Nick Bostrom and Anders Sandberg discussed this point several years ago in their paper 'The Wisdom of Nature: An Evolutionary Heuristic for Enhancement'. They accepted that there was some room for a form of Burkean conservatism in the enhancement debate: the human body was a complex, evolved system and we should be cautious about tinkering with it too much and too quickly -- though they certainly didn't rule out that tinkering entirely.
That said, one problem with the enhancement project is that -- at its most speculative limits -- it threatens to entirely destabilise any descriptive theory of human nature. What I mean here is that if we achieve near perfect technological control over every aspect of our biology, then there will be no practical constraints on what we can do to ourselves and hence nothing to provide even heuristic normative guidance to our policy-making. It will be entirely up to us to decide what form of life we want to live. Some people find this idea deeply disturbing. It is almost like they want to cling to a mythical form of human nature in order to avoid the burden of choosing what kind of life they want to live for themselves (and, yes, there is something redolent of Sartrean existentialism in this but I don't have time to explore it further in these remarks).
This brings me to the final sense in which people talk about 'human nature', namely: as a catch-all explanation (and maybe excuse) for the 'darker' things we do. This is human nature as an 'anti-normative' theory. What I have in mind here is people who say things like 'it is in our nature to be violent' or 'it is in our nature to be jealous/envious'. These sentences, which are common, all seem to be wistful lamentations about the dark side of what it means to be human. They are designed to caution us against ourselves.
This third sense of the term suffers from the same basic flaws as the second sense of the term. To the extent that it is a theory of human evil or human badness it is relatively unobjectionable; to the extent that it tries to ground itself in a descriptive-essentialist theory, it has some problems. That said, this third sense of human nature has obvious implications for debates about the dual-use nature of technology. If humans tend (statistically) to have a dark side, and if it is relatively fixed and stable, then it will pose regulatory and strategic challenges when it comes to the development of technologies that can be used for good or ill. I think one of the best recent expositors of these challenges is Phil Torres. Phil has written some thought-provoking and terrifying essays about the threat that 'omnicidal' or 'apocalyptic' agents pose to the future of humanity (I interviewed him about his work here). These are human beings whose dark side is, for whatever reason, turned up to eleven. Phil's point (echoed by Nick Bostrom in his paper 'The Vulnerable World Hypothesis' and Ingmar Persson and Julian Savulescu in their series of books and papers on the 'unfit for the future' idea) is that as powerful technologies become more widely dispersed, the probability that one of these apocalyptic agents will misuse them starts to get unnervingly high.
Unless we can do something to identify, reform and/or neutralise these individuals, then human nature (whatever we take it to be) doesn't have much of a future. What is interesting to me is that Phil and the others who raise this point often suggest technological solutions to the problem. The idea seems to be that powerful and widely dispersed technologies, when combined with the dark side of human nature, could lead to our doom. The problem is that we cannot (or are highly unlikely to) stop the development and dispersal of powerful technologies. Therefore, we need some technological fix that will either (a) identify and neutralise potentially threatening humans (Phil's suggestion) or (b) correct for the dark side of humanity using some kind of moral enhancement technology (Persson and Savulescu's suggestion). As best I can tell, all proponents of this argument admit that their technological solutions are highly speculative and unlikely to work in practice. They are, in a sense, a 'hail Mary pass', a last desperate attempt to stop humanity from sliding over the cliff. But if that's the case, I'm not sure that anti-technological solutions (i.e. solutions focussed on preventing the development and dispersal of powerful technologies) can be dismissed so quickly2.
Either way, it seems that if you believe that human nature is dark and relatively fixed, you should be very worried about the future.
For what it is worth, these properties also feature strongly in Tomasello's scientific theory of human nature ↩︎
Phil and I went back and forth on this point in the podcast I did with him. You can listen here, if you are interested. ↩︎
Here's a new preprint. It's a penultimate draft of a chapter I have contributed to the upcoming edited collection Algorithmic Regulation (edited by Karen Yeung and Martin Lodge), which is due to be published by Oxford University Press later this year. As per usual, more details and links are below.
Abstract: We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory systems respond? This chapter defends three claims by way of response. First, it argues that autonomy is indeed under threat in some new and interesting ways. Second, it evaluates and disputes the claim that we shouldn’t overestimate these new threats because the technology is just an old wolf in a new sheep’s clothing. Third, and finally, it looks at responses to these threats at both the individual and societal level and argues that although we shouldn’t encourage an attitude of ‘helplessness’ among the users of algorithmic tools, there is an important role for legal and regulatory responses to these threats that go beyond those currently on offer.
This audio essay looks at the Epicurean philosophy of death, focusing specifically on how the Epicureans addressed the problem of premature death. They believed that premature death is not a tragedy, provided it occurs after a person has attained the right state of pleasure. If you enjoy listening to these audio essays, and the other podcast episodes, you might consider rating and/or reviewing them on your preferred podcasting service.
Most of us want to feel good about ourselves. We want to think that what we do is worthwhile. We want others to think well of our efforts. Some people claim to be indifferent to the opinions of others, but I think we should be sceptical of their claims. Their indifference is often an act — something they do, paradoxically, to attract attention and good will. They want other people to look at them and say ‘I envy their lofty indifference.’ Even if it is true that some people are genuinely indifferent to the opinions of others, I suspect they still want to feel good about what they do themselves, i.e. even if they don’t care about gaining the esteem of others, they care about gaining their own self-esteem.
But self-esteem can be a tricky thing to cultivate, particularly in a world of competitive endeavours. The philosopher Robert Nozick once argued that self-esteem was necessarily competitive and comparative: one person could gain self-esteem only if another person lost out. He also argued that the best way to overcome the injustice this might entail was to create a society in which there are many different ways to win self-esteem.
In what follows, I want to look at Nozick’s arguments in a bit more detail. I do so through the lens of Andrew Mason’s short article “Nozick on Self-Esteem”. As we shall see, Mason challenges some of Nozick’s foundational assumptions about the nature of self-esteem.
1. Is Self-Esteem Necessarily Comparative and Competitive?
To understand where Nozick is coming from, we need to understand self-esteem. Self-esteem is a positive self-assessment. It is a belief that you score highly on some valuable attribute. For example, you might attach your self-worth to skill in writing or artistic expression. In that case, you will have high self-esteem if you believe that you score highly on those skills.
But how do you know that you score highly on those skills? This is the critical question. Nozick’s claim is that the only way you can tell whether you score highly is by looking at how your performance ranks relative to that of other people. I will only know that I am a good writer by comparing what I write with other writers. Do I sell more books than they do? Do more people read my work? Do more top critics shower my prose with praise? Ditto if I am an artist. My assessment of myself is always conducted on a comparativist basis: am I better than others who do the same thing?
If Nozick is right about this, then it has some unpleasant consequences. It means that self-esteem is a necessarily competitive game. I only gain self-esteem if I am better than someone else at a particular skill. And my gain in ranking is their loss in ranking. Esteem rankings are thus a scarce resource: something that people fight over in order to gain self worth.
But is Nozick right about this? There is undoubtedly some truth to what he has to say. Sometimes the only way to tell whether you are good at something is by comparing your performance with others; and there is undoubtedly a lot of competition and comparison in the contemporary economy of esteem. But it is not the full picture. There are two reasons to think that Nozick’s endless war of all against all is avoidable.
The first reason is that some activities have objective standards of success: you can tell if you are good at them by comparing your performance to the objective standard. You don’t have to compare it with other people. Mason, working off an example provided by the philosopher Anthony Skillen, uses boat-building to illustrate this point. If you want to know if you are any good at boat-building, you see whether the boat you build floats, whether it gets from A to B and so forth. You don’t have to compare your boat-building with that of other people.
It’s not that simple and straightforward, of course. Even in the case of boat-building, you could, if you liked, start comparing your boat with the boats built by others. You might do this if you have attained the minimum threshold of success in boat-building (the boat floats!) and start to focus on other attributes of a boat such as its aesthetic beauty. You might do this to further hone your skills. But even then there will be a blending of objective and comparative standards at play. The important point, and the one that needs to be emphasised, is that Nozick’s mental model of self-esteem draws too much from the world of competitive sports, and this skews his reasoning into thinking that comparative assessments of worth are the only game in town.
The second reason for doubting Nozick’s argument is that many people accept that they will not score highly, relative to others, on some skills and attributes. Nevertheless, they can take pride in performing those skills to the best of their own ability. For example, I know I will never be as good a golfer as Tiger Woods, but I can take pride in playing the best round of golf that I am capable of playing. This optimisation of my own talents and abilities can be a source of self-esteem, even if my ranking compared to others is quite low.
Some people might argue that even this is implicitly comparative. In determining whether I have done something to the best of my ability, I am, implicitly, comparing myself to someone else who has not done something to the best of their ability (e.g. another golfer of a similar level of ability, or myself at an earlier point in time). But this can’t be true. As Mason points out:
…if it were true that the only way in which one could judge whether a person had done something to the best of their ability was by comparing their performance with others, then there would be grave difficulties in determining whether a person had done as well as they could. How could one tell whether a person had developed their talents to a greater extent than others, or had merely a greater initial endowment of them?
The bottom line then is that although self-esteem is often comparative and competitive, it doesn’t have to be. Indeed, ideally, self-esteem should derive from a rational self-reflection on one’s own abilities, relative to absolute standards of success. This doesn’t have to be immune to the opinions and attitudes of others, but it doesn’t have to be hostage to them either.
2. The Distribution of Esteem
Even if we accept Mason’s critique, there are problems. He could be right, and yet it could still be the case that (a) many people continue to derive esteem from comparative rankings with others (i.e. they get trapped in the competitive ethos) and (b) they struggle to gain self-esteem even when following this approach because they cannot win the comparative ranking. What do we say to them? Can we help them to gain self-esteem? There are two things worth considering here.
First, it is worth considering Nozick’s own argument about how to deal with people who lose out in competitions for self-esteem: create more competitions. The tendency to measure one’s self worth relative to the skills of others is going to be particularly problematic in a world in which there is only a handful of ‘games’ one can play to gain self esteem. To illustrate, imagine being a man in Ancient Sparta. Let’s suppose that in that culture the primary metric of self worth is courage and skill in battle (I’m sure this does a disservice to the historical reality but it conforms to the stereotype). If you aren’t a brave and skilful warrior, then you will probably lack self esteem. This could be true despite the fact that you have many impressive skills that can distinguish you from other people. Perhaps you are a skilled dancer or artist, for example. Unfortunately for you, there are no opportunities to compare your skills in those domains against others because no one else is being incentivised to participate in dancing or art competitions. If they were, you might find a way to gain self esteem.
This is why Nozick recommends that we create a society in which there is a diversity of competitions for self-esteem. This will allow individuals to find their own niche. I should say I use the term ‘competitions’ somewhat loosely here. The idea is not that there are lots of formal competitions taking place; rather it is that the society celebrates many different skills and allows for lots of informal comparative rankings to take place. As Nozick puts it:
The most promising ways for a society to avoid widespread differences in self-esteem would be to have no common weighting of dimensions; instead it would have a diversity of different lists of dimensions and weightings. This would enhance each person’s chance of…[making] a non-idiosyncratic favorable estimate of himself.
Now you could argue that this solution misses the point because we shouldn’t be deriving our self-esteem from these comparative and competitive rankings. We should try to shift to more objective assessments of ability. But that may not be easy in all cases, and I think there is something to what Nozick is arguing. A society that celebrates and tolerates many ‘competitions’ for self-esteem is going to be happier than one that channels everything through a few competitions.
This brings me to the second point that is worth bearing in mind. Nozick might argue that a free market, capitalist society is the best one for enabling lots of competitions for self-esteem. There are lots of jobs one can perform that are valued by the market, so you have an opportunity to find your own niche and gain self-esteem. But as Mason points out, one major problem in our society is that there is differential access to these different niches. Not everyone gets into the competition that suits them best. They are forced, through economic necessity, into jobs that do not suit their skills and aptitudes, but are a means to an end. Furthermore, the market doesn’t value all skills equally. Indeed, some skills have no economic value at all. People aren’t incentivised to develop those skills because of this. The net result is that many people end up in competitions that slowly gnaw away at their self-esteem.
This problem of differential access to esteem-raising activities has been a major theme of my own recent research on workplace automation (although I never talk directly about ‘self-esteem’, preferring instead to focus on meaning and well-being). I have argued that workplace automation can exacerbate the problems alluded to in the previous paragraph, but also provide the means to escape them. In other words, I have argued that Nozick’s ideal society in which everyone can find their own niche is more likely to arise in a post-work economy than in one committed to the capitalist work ethic. Defending this post-work ideal is a major focus of my forthcoming book Automation and Utopia.
[Note: This is (roughly) the text of a talk I delivered at the bias-sensitization workshop at the IEEE International Conference on Robotics and Automation in Montreal, Canada on the 24th May 2019. The workshop was organised by Martim Brandão and Masoumeh (Iran) Mansouri. My thanks to both for inviting me to participate - more details here]
I never quite know how to pitch talks of this kind. My tendency is to work with the assumption that everyone is pretty clever, but they may not know anything about what I am talking about. I do this from painful personal experience: I've sat through many talks at conferences like this where I get frustrated because the speaker assumes I know more than I do. I'm sorry if this comes across as patronising to some of you; but I'm hoping it will make the talk more useful to more of you.
So, anyway, I am going to talk about discrimination and robotics. More specifically, I am going to talk about the philosophical and legal aspects of discrimination and how they might have some bearing on the design of robots.
Before I get started I want to explain how I approach this problem. I am neither a roboticist nor a computer scientist; I am a philosopher and ethicist. I believe that there are three perspectives from which one can approach the problem of discrimination and fairness in the design and operation of robots. These are illustrated in the diagram below.
The diagram, as you can see, illustrates three kinds of relationships that humans can have with robots. The first, which we can call the 'design relationship', concerns the relationship that the original designers have with the robot they create. Discrimination becomes a worry here because it might leak into that design process and have some effect on how the robot looks and operates. The second relationship, which we can call the 'decision relationship', concerns the decisions the robot makes with respect to its human users. Discrimination becomes a worry here because the robot might express discriminatory attitudes toward those users or unfairly treat users from different groups. The third relationship, which we can call the 'reaction-relationship', concerns the reactions that human users have to the behaviour of the robot. Discrimination becomes a worry here if the humans discriminate against the robot or if they learn and normalise such attitudes from their interactions with the robot and carry them over to humans.
A comprehensive analysis of the problem of discrimination in the design of robots would have to factor in all three of these relationships. I do not have time for a comprehensive analysis in today's talk so, instead, I'm going to focus on the second relationship only. That said, unless the robot is itself a fully autonomous agent, focusing on the second relationship inevitably entails focusing on the first relationship too since the robot's decision algorithms will be created by a team of designers. There is, however, a difference between them. Whereas the first relationship concerns the look, appearance and general behaviour of the robot, the second is concerned specifically with its decision-making practices and how they might affect human users.
With that caveat out of the way, I want to do three things in the remainder of this talk:
(a) I want to give a quick overview of how philosophers think about the concepts of fairness and discrimination.
(b) I want to look at the debate about algorithmic discrimination/fairness and consider some of the key lessons that have been learned in that debate.
(c) I want to make two specific arguments about how we should think about the problem of discrimination in social robotics.
1. A Brief Primer on Fairness and Discrimination
Let's start with the philosophical overview and consider the nature of fairness. Fairness is a property of how social goods (i.e. money, food, jobs, opportunities) get distributed among the members of a population. A common intuition is that a fair distribution is an equal one. But what does that really mean?
To think about this more clearly, it will help if we have a simple model scenario in mind. Consider the image on the screen. It represents a highly stylised social system. On the bottom of the image we have a population of individuals. These individuals are divided into three social groups (you can think of these as 'races' or identities, if you like). In the middle of the image we have what I am somewhat awkwardly calling 'outcome makers'. These are properties attaching to the members of the population that make them more likely to achieve certain socially desirable outcomes. These properties can take many forms. Some might be innate characteristics of the individuals (e.g. race, sex) and some might be more contingent or acquired properties (e.g. income, a good education, good health and nutrition). All that matters is that they make it more likely that the individuals will achieve relevant outcomes. As you can see from the image, different individuals have different outcome makers and they are not evenly distributed across the population. Finally, on the top of the image, we have the outcomes themselves, i.e. the 'buckets' where the individuals in the population end up. For illustrative purposes, I've imagined that the outcomes are jobs but they could be anything at all (e.g. income, number of friends, access to credit and housing, number of intimate relations, whatever it is you care about). As you can see from the image, different proportions of the three main social groups have ended up in different outcome buckets. In fact, there is something oddly skewed about the outcomes since all the 'blue' members of the population have ended up in one bucket.
With this simple model in mind, we can explain more clearly some of the different ways in which philosophers think about fairness and equality.
Equality of Outcome: We can start with the concept of "equality of outcome", which is widely touted as a desirable goal for social policies. Following our model, this could mean one of two things. It could mean, in the extreme case, that all members of the population, irrespective of their social group, share the same outcome (in this case, they all have the same job but it could also mean they all have the same number of friends or income). This understanding is extreme and counterintuitive -- why would you want to live in a society in which everyone had the exact same outcome? -- so an alternative interpretation, which is more plausible, is that equality of outcome arises when all social groups are equally or proportionally represented in the different social outcomes. This corresponds to what some people call a principle of fair representation.
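To make the 'fair representation' reading a little more concrete, here is a minimal sketch (my own illustrative Python, not part of the original talk) that checks whether each group's share of a given outcome bucket roughly matches its share of the overall population. The groups, outcomes and tolerance threshold are all assumptions for illustration:

```python
from collections import Counter

# Each person is a (group, outcome) pair; groups and outcomes are illustrative.
population = [
    ("blue", "job_A"), ("blue", "job_A"), ("blue", "job_A"),
    ("red", "job_A"), ("red", "job_B"), ("red", "job_B"),
    ("green", "job_B"), ("green", "job_C"), ("green", "job_C"),
]

def representation_gaps(people, outcome, tolerance=0.1):
    """Compare each group's share of `outcome` with its share of the population.
    Gaps bigger than `tolerance` suggest a failure of proportional representation."""
    group_sizes = Counter(group for group, _ in people)
    outcome_counts = Counter(group for group, o in people if o == outcome)
    total = len(people)
    total_in_outcome = sum(outcome_counts.values())
    gaps = {}
    for group, size in group_sizes.items():
        population_share = size / total
        outcome_share = (outcome_counts.get(group, 0) / total_in_outcome
                         if total_in_outcome else 0.0)
        gaps[group] = outcome_share - population_share
    return {g: round(gap, 2) for g, gap in gaps.items() if abs(gap) > tolerance}

print(representation_gaps(population, "job_A"))
# {'blue': 0.42, 'green': -0.33}: blue is over-represented in job_A, green is absent.
```

On this proportional reading, equality of outcome does not require that everyone end up with the same outcome, only that no group is systematically over- or under-represented in the desirable buckets.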
Equality of Opportunity: Even though equality of outcome is a popular idea, it is also widely criticised. People worry about a society that forces people into different outcomes in the interests of fairness. So instead of achieving equality of outcome, they think we should focus on equality of opportunity. This is a function of how the 'outcome makers' get distributed among the population. In terms of our model, equality of opportunity would arise when each member of the population, irrespective of social grouping, is given a mix of outcome makers that enables them to achieve any of the different possible outcomes. This doesn't mean that they all have the exact same mix of outcome makers; it just means that whatever mix they have is such that they each have the same opportunity of achieving the different possible outcomes (the playing field has been levelled between them).
Theories of equality of opportunity are often complicated by the fact that different philosophers take different attitudes toward different outcome makers. A common assumption is that you cannot and should not equalise all outcome makers. For example, you cannot make all people have the same level of physical strength or general intelligence. Nor should you force people to acquire abilities that they don't really want (e.g. forcing everyone to take high-level quantum physics). You have to respect people's autonomy and responsibility for choosing their own path in life. This means that when thinking about equality of opportunity, you should equalise with respect to certain kinds of outcome maker, but not all.
[A brief aside: you may notice from this discussion that I don't think much of the distinction between equality of outcome and equality of opportunity. My view is that opportunities are really just outcomes of a particular kind: they are outcomes that are steps on the road to other outcomes. But it would take a bit longer to justify this position, and the distinction between equality of outcome and equality of opportunity is a popular one, so I am working with it.]
This brings us to our second key topic -- discrimination. To understand how philosophers think about discrimination, we just need to add some details to our model. First, we need to think about how the members of the population access the different possible outcomes. I've assumed that this is just a function of the outcome makers they possess, but that's not very realistic. In any real society, there will probably be some set of actors or institutions that decide who gets to access the different outcomes. We can call these actors or institutions 'the gatekeepers'. They act as screeners and sorters, taking members of the population and assigning them to different outcomes. To make it more concrete, and to continue with our example, we can imagine people interviewing candidates for different jobs and deciding who should be assigned to which job. Discrimination is a phenomenon that arises from this gatekeeping function. More precisely, it arises when gatekeepers rely on criteria that we deem to be unjust or unfair in screening and sorting people into different outcomes.
To understand this problem more clearly, we need to add a second complication to the model. This complication concerns the properties of the members of the population who get sorted into the different outcomes. Each member of the population will be a bundle of different characteristics and properties. Some of these characteristics will be 'protected' (e.g. race, age, religion, gender) and others will not be (e.g. income, educational level, IQ). The core idea in discrimination theory and practice is that gatekeepers should not use protected characteristics to sort people into different outcomes. They should only rely on unprotected characteristics.
Actually, it's a bit more complicated than that and we need to introduce several conceptual distinctions in order to think clearly about discrimination. They are:
Direct Discrimination: This arises when gatekeepers use protected characteristics to guide their decision-making, e.g. an interviewer explicitly refuses to hire women for a job.
Indirect Discrimination: This arises when, even though gatekeepers do not explicitly use protected characteristics to guide their decision-making, they rely on other characteristics (proxies) that have the effect of sorting people according to their protected characteristics, e.g. an interviewer refuses to hire anyone with more than one career break, which disproportionately excludes women if women are more likely to have taken career breaks.
Individual Discrimination: This arises when individual gatekeepers act in discriminatory ways (be they direct or indirect).
Structural Discrimination: This arises when social institutions, as opposed to individual gatekeepers, work in such a way that members of some social groups are systematically discriminated against when compared to others. Structural discrimination could arise with or without individual discrimination.
Positive Discrimination: This arises when gatekeepers are incentivised to use protected characteristics in decision-making in order to achieve a fairer representation of different social groups across the possible outcomes. This is usually done to correct for historic unfairness in social sorting (e.g. affirmative action hiring policies).
Impartiality: This is when gatekeepers show no favouritism or bias toward certain social groups or individuals in their decision-making. This is often the long-term aim of anti-discrimination policies.
I appreciate that this is a lot of conceptual distinctions, but they are all important when it comes to understanding the debate about fairness and discrimination.
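To make the first two distinctions vivid, here is a deliberately crude sketch in the same invented toy setting. The candidate records, hiring rules and thresholds are hypothetical, chosen only to mirror the interviewer examples given above:

```python
# Two toy gatekeeper rules, mirroring the interviewer examples above.
# The candidate records and thresholds are invented.

def direct_gatekeeper(candidate: dict) -> bool:
    # Direct discrimination: the protected characteristic itself drives the decision.
    return candidate["gender"] != "female"

def indirect_gatekeeper(candidate: dict) -> bool:
    # Indirect discrimination: a facially neutral proxy (career breaks) is used,
    # but it correlates with the protected characteristic and has a similar effect.
    return candidate["career_breaks"] <= 1

candidates = [
    {"name": "A", "gender": "female", "career_breaks": 2},
    {"name": "B", "gender": "male", "career_breaks": 0},
]

for c in candidates:
    print(c["name"], "hired under the indirect rule?", indirect_gatekeeper(c))
```

The second rule never mentions gender, yet it can reproduce the same pattern of exclusion, which is precisely what makes indirect discrimination harder to detect.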
You might ask: "How do we prove that discrimination has occurred?" That is a good question and it is often difficult to do. Sometimes we have clear and unambiguous evidence of discriminatory intent, but more often we see that different social groups have been sorted disproportionately into different outcomes and we infer from this that some discrimination might have occurred. A more thorough investigation might confirm this suspicion. Another question you might ask is "how do we decide what counts as a protected characteristic?" This is also a good question and there is no single answer. Different moral considerations apply in different cases. Sometimes we designate something to be a protected characteristic because we believe it has no actual bearing on whether someone would be a good fit for a particular outcome, but people mistakenly think that it does, and we want to stop this from influencing their decision-making; other times it is because we don't want to punish people for characteristics that are outside of their control; sometimes it's a combination of factors. There is an interesting phenomenon nowadays of something we might call 'protected characteristic creep', which is the tendency to think that more and more characteristics deserve to be protected against discriminatory decision-making, which often has the net effect of making it more difficult to avoid discrimination.
2. Lessons from the Algorithmic Fairness Debate With that overview of the philosophy of fairness and discrimination out of the way, let's consider the implications for robotics. And let's start by considering the lessons we can learn from the algorithmic fairness debate. As some of you will know, algorithmic decision processes have been used for some time in the public and private sector, for example, in credit scoring, tax auditing, and recidivism risk scoring. This usage has been growing in recent years due to advances in machine learning and big data. This has generated an extensive debate about algorithmic fairness and discrimination. Looking to that debate is an obvious starting point for anyone who cares about fairness and discrimination in robotics. After all, the decision-making algorithms used by robots are likely to be based on the same underlying technology.
Some of the lessons learned from this debate are important but relatively unsurprising. For example, it is now very clear that decision algorithms can work in biased and discriminatory ways. This may be because they were designed to rely on discriminatory criteria (directly or indirectly) when making decisions, or it may be because they were trained on biased or skewed datasets. Trying to recognise and correct for this problem is an important practical concern. But, as I say, it is relatively unsurprising. I want to focus on two lessons from the algorithmic fairness debate that I think are more surprising and still practically important.
The first lesson is that, except in very rare circumstances, there is no way to design an algorithmic decision process that is perfectly fair and non-discriminatory.
This is a lesson that was first learned by investigating risk-scoring algorithms in the criminal justice system. Some of you will be familiar with this story already, so please forgive me for sharing it again. The story is this. For some years, an algorithm known as 'COMPAS' has been used in the US to rate how likely it is that someone who has been prosecuted for a criminal offence will commit another offence in the future. This rating can then be used to guide decisions regarding the release (on parole) of this person. The COMPAS algorithm is somewhat complex in how it works, but for present purposes, we can say it works like this: a risk score is assigned to a criminal defendant and this score is then used to sort defendants into two predictive buckets: a 'high risk' reoffender bucket or a 'low risk' reoffender bucket.
A number of years back a group of data journalists based at ProPublica conducted an investigation into how this algorithm worked. They discovered something disturbing. They found that the COMPAS algorithm was more likely to give black defendants a false positive high risk score and more likely to give white defendants a false negative low risk score. The exact figures are given in the table below. Put another way, the COMPAS algorithm tended to rate black defendants as being higher risk than they actually were and white defendants as being lower risk than they actually were. This was all despite the fact that the algorithm did not explicitly use race as a criterion in its risk scores. This seems like a textbook case of indirect discrimination in action: we infer from the lack of fair representation in outcome classes that the algorithm must be relying on proxies that indirectly discriminate against members of the black population.
Needless to say, the makers of the COMPAS algorithm were not happy about this finding. They defended their algorithm, arguing that it was in fact fair and non-discriminatory because it was well calibrated. In other words, they argued that it was equally accurate in scoring defendants, irrespective of their race. If it said a black defendant was high risk, it was right about 60% of the time, and if it said that a white defendant was high risk, it was right about 60% of the time. It turns out this is true. If you look at the figures in the table you can see this for yourself. The reason why calibration can coexist with the disparity in false positives and false negatives is that the recorded rate of reoffending is higher among black defendants than among white defendants -- an unfortunate feature of the US criminal justice system that is not caused by the algorithm but is, rather, a feature the algorithm has to work around.
So what is going on here? Is the algorithm fair or not? Several groups of mathematicians analysed this case and showed that the main problem here is that the makers of COMPAS and the data journalists were working with different conceptions of fairness and that these conceptions were fundamentally incompatible. This is something that can be formally proved. The clearest articulation of this proof can be found in a paper by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan, which is like a version of Arrow's impossibility theorem for fairness.
The details are important and often glossed over. Kleinberg et al argued that there are three criteria you might want a fair decision procedure to satisfy: (i) you might want it to be well-calibrated (i.e. equally accurate in its scoring irrespective of social group); (ii) you might want it to treat both groups equally when it comes to false positives (more technically, you might want both groups to have the same average score in the negative class, i.e. among those who do not go on to reoffend); and (iii) you might want it to treat both groups equally when it comes to false negatives (more technically, you might want both groups to have the same average score in the positive class, i.e. among those who do go on to reoffend). They then proved that, except in two unusual cases, it is impossible to satisfy all three criteria. The two unusual cases are when the algorithm is a 'perfect deterministic predictor' (i.e. it always gets things right) or, alternatively, when the base rates for the relevant populations are the same (e.g. black and white defendants reoffend at exactly the same rate). Since no algorithmic decision procedure is a perfect predictor, and since our world is full of base rate inequalities, this means that no plausible real-world use of a predictive algorithm is likely to be perfectly fair and non-discriminatory. What's more, this holds generally and not just for risk scores in the criminal justice system, even though that was the initial test case.
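If you want to see the shape of the problem without working through the proof, here is a toy numerical sketch. The figures are invented for illustration and are not the real COMPAS numbers; the general result is the one proved by Kleinberg et al:

```python
# Invented figures showing why calibration and equal error rates cannot
# coexist once the base rates of the two groups differ.

def error_rates(n, reoffenders, true_pos, false_pos):
    """Return (precision, false positive rate, false negative rate)."""
    non_reoffenders = n - reoffenders
    precision = true_pos / (true_pos + false_pos)   # the 'calibration' figure
    fpr = false_pos / non_reoffenders               # false positive rate
    fnr = (reoffenders - true_pos) / reoffenders    # false negative rate
    return precision, fpr, fnr

# Group A: base rate 50% (500 of 1,000 reoffend); 600 flagged 'high risk'.
print("Group A:", error_rates(n=1000, reoffenders=500, true_pos=360, false_pos=240))
# Group B: base rate 20% (200 of 1,000 reoffend); 240 flagged 'high risk'.
print("Group B:", error_rates(n=1000, reoffenders=200, true_pos=144, false_pos=96))

# Both groups get a precision of 0.6 (equally 'well calibrated') and the same
# false negative rate (0.28), yet the false positive rates diverge (0.48 vs
# 0.12) purely because the underlying base rates differ.
```

This, in essence, is why the makers of COMPAS and the ProPublica journalists could both be 'right': they were measuring fairness along different, mutually incompatible dimensions.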
This result has significant practical implications for designers of decision-algorithms. It means they face some hard choices. They can have a system that is well-calibrated or one that achieves a fair representation, but not both. Plausibly, you might want different things in different decision contexts. For example, when deciding who would make a good doctor, you might want an algorithm that is well-calibrated: because you want some confidence that the people who end up becoming doctors are good at what they do and you want to stop people from assuming some people aren't good at what they do for irrelevant reasons. Contrariwise, when deciding who would make a good politician and should be put forward for election, you might want a system that achieves a balanced representation of the different social groups. This is to say nothing of the further complexities that arise from the fact that fairness is just one normative goal of social policy: there are other goals that can compete with it and crowd it out, e.g. security and well-being.
That's the first lesson to be drawn from the algorithmic fairness debate. What about the second? This lesson is that although there is a lot of concern about discrimination in decision algorithms, there is good reason to think that algorithmic decision procedures can be less discriminatory than traditional, human-led decision procedures. There are two reasons for this. The first is that we are prone to status quo bias when it comes to assessing the normative implications of any novel..
In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research programme 'Data, Privacy, and the Individual' at the IE's Center for the Governance of Change. We talk about the problems with online speech and how to use pseudonymity to address them.
There’s an oft-repeated ‘fact’ thrown around in debates about retirement and old age. The details can vary but it’s something to the effect that when the pension entitlement age was set at 65 in the early part of the 20th century, very few people could expect to collect it, and those that did could only expect to collect for a few years (probably no more than 5). This was because life expectancy was so much lower back then. Hence setting pension entitlement at 65 was a relatively low cost gesture for the government. But what was low cost back then has turned into a major expenditure today, now that people are living so much longer and life expectancy has shot up. Whereas most people could only expect to live to their early 60s in the early 1900s, nowadays the majority can expect to live into their late 70s/early 80s. This places considerable strain on public finances and means more people are spending more of their lives in a ‘retired’ and ‘non-productive’ (from an economic/tax-paying perspective) state.
Having done some digging, I can report that this fact is not quite true. While it is true that life expectancy was much lower back then, that was mainly due to high infant and early adult mortality (due to infectious disease and war). If you cleared those early-life hurdles, and made it all the way to 65, you could expect to live a good bit longer, upwards of 13 years in fact (more if you were a woman). Post-65 life expectancy has gone up since then, but by much less than life expectancy as a whole. This doesn't mean that costs are not increasing — the huge drop in early life mortality means a lot more people are making it to their late 60s. It also doesn't detract from the fact that more and more people are entering this 'retired' phase of life.
But what does it mean to enter that phase of life? Many of us, myself included, have a negative perception of ageing and retirement. We see ‘old age’ as a period of inevitable decline and senescence. It is a phase of life marked by narrowing horizons, and a fall from grace and prowess. Although death looms large in old age, it is not the only negative aspect of the ageing process. If the Epicureans are right, then death itself is nothing to us; it is the period of life just before death — when we retreat from public view and lose our sense of significance, purpose and social meaning — that is the most existentially terrifying phase of life.
Is there anything to be said to quell these fears of ageing? Can we live valuable and meaningful lives in old age? There is a surprising lack of philosophical commentary on this issue. One of the more prominent contributors to the debate (Jan Baars) has argued that Western philosophy’s obsession with death has tended to suck attention away from old age. Nevertheless, there has been some work done on the topic and in what follows I want to share my own, poorly structured thoughts on it. This is a partial and selective take on old age, reflecting my own interests and biases. Still, some of you might find it interesting.
I start by looking at what I take to be the standard, ‘decline’-oriented view of old age. I then consider the alternatives to it.
1. The Decline Argument: A General Schema There are some cultures where the elderly are afforded a lot of respect. Indeed, there is a famous adage stating that ‘with age comes wisdom’. Nevertheless, a common view in Western societies is that old age is a period of decline and devaluation. Simone de Beauvoir comments on this in her book The Coming of Age. She notes that many people are uncomfortable around the elderly. They see them as a social nuisance — a burden on the productive working population. They see them as something ’other’ or ‘foreign’. They try to marginalise and obscure them from sight through hollow gestures of charity:
Society appears to think that they belong to an entirely different species: for if all that is needed to feel that one has done one’s duty by them is to grant them a wretched pittance, then they have neither the same needs nor the same feelings as other men.
De Beauvoir links this attitude to the productivist ethos that underscores the modern economy:
The economy is founded upon profit; and in actual fact the entire civilization is ruled by profit. The human working stock is of interest only insofar as it is profitable. When it is no longer profitable it is tossed aside.
This resonates. Certainly in debates about pension entitlements one often hears mention of the ‘burden’ that the elderly place on the working population. To be clear, this often comes with a sense of duty to the elderly, i.e. with a sense that they have done their bit for the economy and so deserve some protection, but, still, there is some begrudgery to the arrangement.
We should not, however, get too hung up on the economic devaluation of the elderly. It is significant but there is also a wider sense of decline and devaluation at play. There is the general belief that old age is a state in which you inevitably lose the capacities that make you valuable to yourself and your society (creativity, innovation, productivity, moral foresight, aesthetic beauty, physical prowess and so forth). Consequently, there is the sense that there is an inevitable general devaluation in old age.
This suggests that the following argument scheme undergirds the negative attitudes toward ageing:
(1) A life is valuable only if it has properties P1, P2…Pn. [The value premise]
(2) In old age, you inevitably lose (or experience some decline in) properties P1, P2…Pn. [The decline premise]
(3) Therefore, old age is inevitably a period in which your life becomes devalued.
I will evaluate the merits of this argument below. Before getting to that, however, it’s worth making a few comments on how it ought to be interpreted and understood.
First, note that this argument is a template that can be filled in with specific examples of the relevant value-conferring properties. You could, for instance, argue that life is valuable only to the extent that it is artistically creative; that old age inevitably brings about a decline in artistic creativity; and hence conclude old age results in an inevitable loss of value. Different combinations of value-conferring properties might make the argument more or less persuasive. Furthermore, these value-conferring properties could emanate from very different philosophical perspectives. For example, one could make the argument with personal value in mind (i.e. the value of a life to the one that is living it), or objective moral value in mind (i.e. the value of the life to the universe/humanity as a whole). Selecting one perspective over the other could make for very different arguments. A life could, after all, lack value from the personal perspective without lacking value from an objective perspective, and vice versa. This is to say nothing of socially constructed metrics of value (such as fame or economic value) and how they could be worked into the argument.
Second, note that there is some fuzziness to premise (2). This must be factored into the interpretation. I’ll say a bit more about the nature of ‘old age’ below, but here I want to point out that — at the limit — the decline premise is almost always true: people must lose some capacities in old age (after all, ultimately they must lose their lives). The only way this could fail to be true is if we invent perfect anti-ageing technologies that mean we can restore any lost capacity. This remains a pipe dream for now. This is important because, given the inevitable (at the limit) association between old age and loss of capacity, it might be tempting to simply define old age in terms of that loss of capacity. We must not succumb to that temptation. Doing so would make the argument trivially true. Old age must be defined as something other than the loss of properties P1, P2…Pn if the argument is to be interesting.
2. What is old age? But then how should we define ‘old age’? We could lose a lot of time to this question. Jan Baars has some fascinating meditations on the ontology of old age in his work. He notes there are multiple different measures of old age and they don’t necessarily coincide on a common definition.
There is, for example, the standard chronometric measure of age. This is the measure of the number of minutes, hours, days, months and years since a person was born. This is a simple objective fact about the person that is easy to track and record. The problem is that this doesn’t tell you exactly when a person becomes old (if that even makes sense). You would have to pick some arbitrary cutoff point in the chronological measure (e.g. age 65 or 70) and define as ‘old age’ anything above that cutoff point. But that’s not particularly helpful since it doesn’t provide any reasoned justification for the choice of cutoff point.
This arbitrary chronometric approach to old age can quickly lead to trouble. Here's one example: it is common for statisticians to associate clusters of capacities and abilities with chronometric ages (e.g. mental acuity, reading ability, physical dexterity). This allows them to say things like "If you are aged 18, you should expect to have properties X, Y and Z" and "if you are aged 65, you should expect properties P, Q and R". But these are just statistical averages. You may not have those properties. This can lead to all sorts of odd statements being made about your age relative to the statistical average. For example, when I was younger it was common for students to be told their 'reading age' after standard assessments of reading comprehension. I remember I was quite chuffed when I heard that my 'reading age' was far in excess of my chronometric age. I was less chuffed, years later, when I was told that the age typically associated with my level of physical fitness was in excess of my chronometric age. This mismatch between one's actual capacities and the statistical norms associated with chronometric age can lead to ageism, and makes articles like this one (on the ethics of 'trans-ageism') inevitable.
The other problem with the chronometric approach is that attitudes toward different chronometric ages are highly variable. The social and biological facts of ageing can change, depending on culture and technology. While 65 might have seemed like an appropriate retirement age 100 years ago, nowadays it doesn't. That's one reason why people call for increases in the retirement age (or a complete rejection of the concept). At the same time, as Baars notes, there is a tendency within certain groups to bring the chronometric cutoff for old age forward in order to promote certain interests. For example, an athlete is considered old in their 30s, 'older workers' are often relatively young (50ish), and so-called 'mature students' in universities can be very young indeed (early 20s).
As I say, we could waste a lot of time trying to figure out what old age actually is. I don't want to go there because I don't think there is a wholly satisfying answer. I do, however, think there are useful paradigm cases of old age that can guide our analysis. Thus, even though there is disagreement around the margins, I suspect most of us would agree that someone in their late 70s or 80s would count as being in old age. Why so? I presume the answer lies in a combination of biological and chronometric reality and socially constructed norms and attitudes. Thus, I don't think being old is a simple objective fact about a person — associated with their chronometric age — but rather is a complex bio-social-physical fact. It is not a fact that can be entirely self-determined (you cannot 'will' yourself to be younger), but it is a fact that is somewhat contingent and open to renegotiation (because of changing technological and medical realities as well as changing social perceptions).
With that clarification out of the way, I will spend the remainder of this article assessing the merits of the decline argument, focusing in particular on ways to object to its two premises. In doing so, I will have paradigmatic cases of old age in mind.
3. Rejecting the Decline Premise One obvious way to object to the argument is to reject its second premise: the decline premise. The claim that old age is inevitably associated with some decline or obsolescence in value-conferring properties P1…Pn is, undoubtedly, going to be shaped by the statistical averages and social perceptions attached to certain chronometric ages. These averages and perceptions can be challenged.
To make this concrete, let's consider a specific example. Suppose that within the world of mathematicians it is common to hear claims like "no mathematician over the age of 40 makes a significant breakthrough". Any mathematician unlucky enough to be over the age of 40 (or should that be 'lucky enough' since the alternative fate is presumably worse?) would be devalued by other members of their profession as a result of this belief. But is the belief accurate? Any particular mathematician could undermine it by pointing out that the belief does not hold true in their case (i.e. that they have made significant breakthroughs despite being over the age of 40), or by pointing out that it is based on a statistical misperception or error. In other words, they could either argue that (a) they are an exception to the perceived rule or (b) the perceived rule is false.
One or both of these strategies may work, depending on the individual case. In his article on successful ageing, Howard Harriott uses the example of the artist Matisse to illustrate how it is possible for an older person to live a life of significance. In Matisse’s case, his life’s mission centred around art and artistic creativity. He battled against the perception that art is a ‘young man’s game’ and dramatically illustrated how untrue this was through his own example. Despite being lambasted by the critics and suffering from several illnesses and frailties, he embarked on a ‘second life’ in his 70s and produced some of his most memorable work as a result:
In this late phase of his life, he embarks on a series of new works such as Florilége des Amours de Ronsard, Thèmes et variations, collages and innovative cutouts (papiers découpés). He embraces the “colors” of jazz as he transforms the vibrancy of jazz sounds and rhythms into a visual medium and produces his final triumph: the glorious chapel at Vence. Viewing Matisse’s later works, as for instance recently convened at the Musée de Luxembourg, in Paris, one gets the full sense of why Matisse’s work so illustrates the new paradigm of the creative life as seriously possible in old age.
Some people may object that Matisse and others like him are unusual figures — the exceptions that prove the general rule — but it's not clear if that is accurate. It could be that general perceptions of decline in old age are misguided and that elderly people are much more capable than is believed. I have no doubt that there are a lot of ageist, unjustified assumptions made about them. Still, there are limits to this. For at least some value-conferring properties, such as athletic prowess and physical fitness, it will be true that old age is inevitably associated with decline and loss. At best, elderly people can minimise the losses they suffer with respect to those value-conferring properties: they cannot completely avoid them. So even if some formulations of premise (2) do not work, others probably will, and the resultant decline and loss of value will be painful.
4. Finding alternative sources of value A more promising strategy for objecting to the decline argument is to take issue with the first premise. Of course, as noted above, the first premise has no content in its abstract form. You need to identify specific value-conferring properties for it to make sense. Furthermore, it would be odd to reject all potential variations on the first premise: that would be tantamount to nihilism. If you want your life and the lives of others to have value you need to accept the existence of some value-conferring properties. So what you need to do is find variations on premise (1) that identify value-conferring properties that are resilient to, or unaffected by, old age.
This is what Howard Harriott recommends in his article on successful ageing. He argues that life has value to the one who lives it (personal value) when it is characterised by some commitment to ideals. These ideals can take many forms, e.g. commitment to artistic excellence, scientific discovery and so on. If you want to retain personal value into old age then you need to focus on ideals that can sustain your commitment into old age. Again, Matisse is Harriott's go-to example, but others easily spring to mind. I think of Einstein a lot in this regard. He remained steadfastly committed to developing a unified theory of physics right up until his final days. According to reports, he was scribbling equations in his hospital bed just hours before he died. Admittedly, Einstein's unified theories weren't successful in the objective sense, but at least they gave him a sense of purpose right up until his death. He was committed to an ideal — scientific discovery — that was relatively impervious to the vicissitudes of old age. It is also worth mentioning here that studies of elderly populations suggest that they derive most meaning from sustained social and family relationships, both of which can be sustained despite ageing (although both can suffer too).
The important lesson to learn from both Einstein and Matisse is that retaining value in old age is a function of choosing ideals that are resilient. But what if your ideals are not so resilient? What if, for example, your ideals are built around physical fitness and athletic prowess? The answer is that you could probably sustain some version of those ideals into old age, but it might require some modification. If you were once an elite athlete, you will have to accept that you won't be able to compete at the highest levels into your dotage. But you could continue to be the best within your age range (injuries permitting) or you could switch to training and educating future generations of athletes. In other words, if you have some flexibility of mind, you can sustain value in the face of changing circumstances.
This last point is worth emphasising. In classic Stoic and Epicurean philosophy, people were encouraged to ‘thanatise’ their desires so as not to be so afraid of death. In other words, they were encouraged to accept the unavoidability of death and factor that into how they structured and planned their lives. Jan Baars recommends a similar strategy when it comes to ageing. He thinks we need to accept that everything in life (peak productivity; cognitive capacity; physical prowess etc) is finite and subject to decay. We need to build a conception of a meaningful life that recognises and makes space for that finitude.
That sounds good in theory, but it might be hard in practice. One reason it might be hard is that this whole line of argument assumes that people can simply pick and choose their own values rather than having them imposed from the outside. The great tragedy of ageing in the modern world is that devaluation results from the imposition of values and standards from the outside. How do you deal with that problem?
5. The Value of Escaping Imposed Ideals One way to cope with that problem is to take solace in the fact that if you are no longer perceived to be valuable in the eyes of others you are both (a) more free to determine your own value in life and (b) exempted from the burdens and expectations that are imposed on younger people. This can be liberating and uplifting.
Consider once more the productivist ethos that pervades much of modern life. According to this ethos, you are valuable only to the extent that you make some productive contribution to society. This could take a number of different forms, but for most people it is economic productivity that matters most. While making economic contributions can be very meaningful to some people, it can also be exhausting and dispiriting. Instead of following your passions and talents, you have to fit within the demands of the labour market. You have to do something ‘useful’ and avoid idle luxuries. You have to compromise on your values and sell yourself to others. You have to impress them and suck up to them. You have to ingratiate yourself with the powerful and shower them with false praise. In return, they might do the same for you. As a result, you all benefit from increased perception of social value. But at what personal cost? No longer being seen as productively valuable might give you a nice excuse to be rid of all this fakery and flattery.
There could also be a significant gendered aspect to this. Simone de Beauvoir comments on the different expectations of age in her work. And while researching this article, I randomly came across a piece written by the philosopher Andreas Blank about ageing and self-esteem in the writings of Anne-Thérèse de Lambert. Lambert was an 18th century French intellectual and essay writer who wrote about the 'economy of self-esteem' and ageing in women. Blank argues that her views provide a contrast with those of the well-known maxim-writer La Rochefoucauld. Whereas he essentially accepted the modern view that old age was a period of decline from former prowess, she argued that it could be liberating, particularly for women. In a society that did not value women for intellectual ability or economic productivity, but essentially valued them only for looks, charm and fertility, there was something to look forward to in old age. Freed from the burden of erotic expectation, and from the need to impress powerful men, women could cultivate a more intellectual and satisfying mode of life. As she put it:
[Old age] liberates us from the tyranny of opinion. When one is young, one only dreams of living in the idea of someone else; one must establish one’s reputation and create for oneself an honorable place in the imagination of..
I have a new paper. This one is set to appear in the Oxford Handbook of the Ethics of AI, which is edited by Markus Dubber, Frank Pasquale and Sunit Das. The book isn't out yet. I believe it is due out in the Autumn/Fall. You can access the penultimate draft at the links below.
Abstract: Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people's self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those 'whose primary sexual identity comes through the use of technology', particularly through the use of robotics and AI. While agreeing that this phenomenon is worthy of greater scrutiny, the chapter questions whether it is necessary or socially desirable to see this as a new form of sexual identity. Second, it looks at the role that AI can play in facilitating human-to-human sexual contact, focusing in particular on the use of self-tracking and predictive analytics in optimising sexual and intimate behaviour. There are already a number of apps and services that promise to use AI to do this, but they pose a range of ethical risks that need to be addressed at both an individual and societal level. Finally, it considers the idea that a sophisticated form of AI could be an object of love. Can we be truly intimate with something that has been 'programmed' to love us? Contrary to the widely-held view, this chapter argues that this is indeed possible.