This is the blog for The British Journal for the Philosophy of Science. We cover trends in (subfields of) philosophy of science, current news/science stories that link up with issues in the philosophy of science, informal philosophy of science conference reports, stories from the world of academic philosophy from a philosophy of science angle, and anything else that might take our fancy.
It’s widely appreciated that contemporary philosophy of science, when done well, engages with actual scientific practices. Philosophers should not sit back (in armchairs, of course), consider what we think good science would look like, then inform scientists of our findings. Rather, current thinking goes, we should take seriously what scientists actually do, using these practices as the starting points for our philosophical accounts of the aims, processes, and products of science.
I’d like to make two points about this approach to philosophy of science. Here’s the first. Philosophers sometimes talk as if practice-based philosophy of science needs to accept scientists’ activities and views as definitive. That a scientist employs some method, or interprets a finding in some way, is often used as evidence for or against a philosophical position. And this does seem closely related to how I described the approach above: that actual scientific practices should be the starting points for our philosophical accounts of science.
But things cannot possibly be so simple. Science is not monolithic. Of course, practices vary across fields and projects. But beyond that, scientists working on the same projects frequently have different approaches and even philosophical disagreements. Accordingly, even if one takes very seriously a commitment to starting from scientific practices, this cannot be definitive. There is sometimes no option for a philosopher other than to disagree with one or more scientists about their approaches and interpretations thereof.
Here are two examples of how this can play out. Significant inspiration for much of my research traces back to a puzzling experience I had in the Biology Department at Stanford University. Two theoretical biologists there worked on many of the same phenomena and shared some graduate students, but they employed different methods: evolutionary game theory and population genetics. They agreed about one thing (and perhaps only that one thing): one of these methods was warranted and the other was not. Of course, they disagreed about which method belonged in each of those categories. I puzzled over what to make of this disagreement and found myself disagreeing with both biologists; both approaches were important. To maintain this, I teased out differences in their specific aims that would account for the different approaches. The starting point of this project was, in essence, a decision to disagree with two preeminent biologists about the nature of their work. On the face of it, that may seem inconsistent with practice-based philosophy of science.
A second example is a recent exchange in the journal Trends in Ecology and Evolution. Connolly et al. () employed philosophical work on mechanisms in the course of advocating greater use of process-based and component-based models in macroecology. Brian McGill, a macroecologist, and I wrote a response letter, urging that macroecologists not underestimate the importance of distant and large-scale causes. In our view, because of the importance of such causes, not all causal models represent processes or components. It was this exchange that inspired these thoughts about practice-based philosophy of science. Here I was, disagreeing with biologists about their own field. Who was I to say? In this instance, I had a convenient answer to that question: I felt comfortable weighing in because a different biologist agreed with me. But I don’t think a philosopher’s ability to jump into the scientific fray is limited to that circumstance.
Scientists regularly take different approaches to their work and disagree with one another about significant matters. This is why philosophers of science not only can but indeed must bring to bear considerations that go beyond existing scientific practices. Sometimes one might think one scientist is right about something while another is wrong. Other times, making sense of the range of practices observed requires disagreeing with all the scientists at the table. So, while contemporary philosophy of science takes actual scientific practices as its starting point, those practices aren’t definitive; legitimate philosophical positions may be at odds with some, or even all, of what the relevant scientists are up to.
That’s my first point about a practice-based approach to philosophy of science. Here’s a second. If we are truly pursuing philosophical accounts of science that take their lead from actual scientific practices, then we need to take seriously how the features of scientists influence the character of science. Scientists take up space, so to speak. Scientific practices are not only influenced by the nature of the world and the specific aims of the research; they also reflect the features of scientists, both individual and shared, and the features of their circumstances, including those that are incidental.
Sarah Blaffer Hrdy () described how the focus of primatology shifted when a critical mass of women primatologists began participating in research. Heather Douglas and Kevin Elliott, among others, have described decision points in science where scientists’ individual and shared values are relevant to the outcome. And I was deeply impressed when, as a high school student worker in a physics lab at the University of Arkansas, I witnessed researchers faced with an unexpected difficulty rummaging in the janitorial closet for something they could use as a different solvent. This ad hoc step was unremarkable to them, and such manoeuvres around scientific setbacks are surely common. But it made clear to even a high school student how little about the scientific process was prescribed and how much depended on individual decision and chance circumstances—like what cleaning supplies were stocked for the janitor.
The general point that features of scientists and their circumstances influence scientific practices is accepted by many or most philosophers of science. This is perhaps a result of work done in history and sociology of science, feminist philosophy of science, and on the topic of values in science. But you wouldn’t know this by the look of many of our other philosophical debates about science. Many of these debates proceed as if scientists’ features and their circumstances are inconsequential or, at most, distracting side issues. This is so even among some who accept that actual scientific practices are the starting point for theorizing about science. Dominant views in philosophy of science have tended to ignore or underplay the significance of scientists’ characteristics in shaping scientific practices, research aims, and the nature of scientific successes.
Most work on scientific explanation, for example, focuses predominantly or entirely on the relationship a satisfactory explanation should bear to the world, that is, on the nature of metaphysical dependence that qualifies as explanatory. How the audience influences what counts as a satisfactory explanation has been brushed off as belonging in ‘the dustbin of pragmatics’ (Carl Craver, in conversation) and likened to merely the atmospheric distortion of light from stars (Strevens ). Even many who grant the in-principle importance of the human influence on explanation focus predominantly on the question of explanatory dependence without offering much commentary on the nature of the human influence on science’s explanations.
I suspect the same goes for other philosophical discussions about the sciences, perhaps especially about science in general. Another example may be discussion of science’s so-called theoretical virtues, like simplicity or parsimony. Much ink has been spilled on whether and in what ways simpler hypotheses or theories may be epistemically superior, particularly whether they may be more likely to be true. But regardless of whether this is so, there’s a much more basic way that simple theories or hypotheses contribute to science, concerning not how our theories relate to the world but how they relate to us. Simpler theories are easier for us to comprehend. If they are adequate to the world (remaining neutral here on what that requires), this makes them of greater cognitive value.
My basic point is simply that philosophers of science could, and should, get more mileage out of basing our work in scientific practices. We can do so by acknowledging that scientific practices are shaped by science’s practitioners and the circumstances in which they find themselves, and that this influence is philosophically significant. This is the basic idea at the root of my recent book, Idealization and the Aims of Science ().
Considering the two points developed here, I suppose what I’m after is, first, the recognition that philosophy of science can’t simply be based on what scientists say and do. But, on the other hand, there are other aspects of what scientists say and do that need to be taken more seriously. Truly taking on board a starting point in scientific practices requires deeper changes to our philosophical stances about science. Scientific practices are shaped not just by the need for the scientific enterprise to connect with the world, but also by the need for the scientific enterprise to connect with its human practitioners and audience.
The BJPS is pleased to note that two of the papers it published last year have been included in The Philosopher's Annual top ten papers of 2017. These papers have been made free to access, with links below.
Jeffrey A. Barrett and Brian Skyrms, 'Self-assembling Games', British Journal for the Philosophy of Science, 68, pp. 329–53.
We consider how cue-reading, sensory-manipulation, and signalling games may initially evolve from ritualized decisions and how more complex games may evolve from simpler games by polymerization, template transfer, and modular composition. Modular composition is a process that combines simpler games into more complex games. Template transfer, a process by which a game is appropriated to a context other than the one in which it initially evolved, is one mechanism for modular composition. And polymerization is a particularly salient example of modular composition where simpler games evolve to form more complex chains. We also consider how the evolution of new capacities by modular composition may be more efficient than evolving those capacities from basic decisions.
We are moral apes, a difference between humans and our relatives that has received significant recent attention in the evolutionary literature. Evolutionary accounts of morality have often been recruited in support of error theory: moral language is truth-apt, but substantive moral claims are never true (or never warranted). In this article, we: (i) locate evolutionary error theory within the broader framework of the relationship between folk conceptions of a domain and our best scientific conception of that same domain; (ii) within that broader framework, argue that error theory and vindication are two ends of a continuum, and that in the light of our best science, many folk conceptual structures are neither hopelessly wrong nor fully vindicated; and (iii) argue that while there is no full vindication of morality, no seamless reduction of normative facts to natural facts, nevertheless one important strand in the evolutionary history of moral thinking does support reductive naturalism—moral facts are facts about cooperation, and the conditions and practices that support or undermine it. In making our case for (iii), we first respond to the important error theoretic argument that the appeal to moral facts is explanatorily redundant, and second, we make a positive case that true moral beliefs are a ‘fuel for success’, a map by which we steer, flexibly, in a variety of social interactions. The vindication, we stress, is at most partial: moral cognition is a complex mosaic, with a complex genealogy, and selection for truth-tracking is only one thread in that genealogy.
Congratulations to Professors Barrett, Skyrms, Sterelny, and Fraser! Congratulations also to BJPS Associate Editor Lara Buchak whose paper 'Taking Risks Behind the Veil of Ignorance', published in Ethics, was also included.
If you didn't make it to this year's BSPS annual conference in Oxford, we've teamed up with Philosophy Streaming to record the Presidential Address and the plenary discussions for your listening pleasure!
As any journal editor will tell you (at length, possibly via the medium of rant), the trickiest part of the job is not the papers, not the authors, and not even the typesetters. It’s the referees. It is no mean feat to secure referees who are, first, reliable in their academic judgement, second, responsive to emails, and third, willing to return reports when they say they will. But the frustrations of editors aside, the far more pressing concern is for the career prospects of early-career researchers. Jobs and funding can depend on timely decisions. Indeed, whether an early-career researcher gets to become a mid- or late-career researcher can depend on whether a decision is made in a reasonable amount of time.
Common bad behaviour from referees includes (but is not limited to!):
1) Failing to respond to invites in a timely fashion (where timeliness is calculated in days, not weeks), even if it’s only to decline the invitation;
2) Agreeing to act as referee and to return the report within an agreed timeframe (in the BJPS’s case, four weeks), only to substantially exceed this timeframe (by weeks, sometimes months), while either
i) asking for this substantial extra time for the weakest of reasons*, or
ii) not communicating with the relevant editors whatsoever;
3) Returning a report long past the agreed timeframe, and that report being almost useless;
4) Not returning the report and not responding to emails enquiring about the report.
Opinions differ on the obligations of academics as referees. Is it unpaid labour, an act of charity towards the community that ought only to be gratefully received? As much a part of the job as teaching and writing? Something in between? Whatever the answer, authors need more from referees than they ever have done; more depends on papers being reviewed in a professional, timely manner. And at the very least, there’s a ‘pay it forward’ case to be made: A paper sent to the BJPS that isn’t desk rejected can be expected to be read by at least six people (and that’s not counting the work that goes into any resubmissions). For every paper an author submits, other people have attended to their work in detail. The author, qua referee, might be expected to return the favour.
I’ve been lucky to witness some extremely productive philosophical engagement between authors and referees. When it’s good, it’s so good. The only shame is that so much of this is hidden. The process viewed en masse—the view one gets as an editor—is primarily one of cooperation and collegiality, and it’s a wonder that gives the lie to the notion of philosophy as anything like an individualistic endeavour.
But what to do about the bad referees, the system’s free riders? Relentless pestering and various forms of emotional blackmail fall on deaf ears. At the BJPS, we operate a flag system for persistent offenders, but all this amounts to is bad referees being asked to perform fewer reviews, while good referees carry more of the load.
So here’s a radical suggestion, using the only weapon (sorry, motivational device) editors have: If someone fails to fulfil their duties as referee, the journal will not accept submissions from that referee, for some period of time to be determined. The time period should reflect the severity of the dereliction of duties. For instance, agreeing to act as a referee but then disappearing off the radar might warrant the most substantial ban. Delivering a meagre report that’s extremely late, and without communicating with the relevant editor about the delay, might mean some shorter period of time on the bench. First-time offenders surely deserve different treatment to persistent re-offenders. And the embargo period will need to be substantial enough to be effective (too short and it will have no real impact; too long and it’s probably not practical due to changes in the editorial team). The details can be ironed out.
It’s not just badly behaved referees that stand to suffer here. There’s a risk for the journal in question too: bad referees aren’t necessarily bad authors, and we risk losing good papers to other journals by refusing those authors’ papers. But the problem is so rife and its upshot so dire for early-career researchers that maybe something more radical is required to make clear what is expected of referees and ameliorate, at least to some degree, the problem of free-riders. All thoughts on this proposal very welcome!
* ‘I decided to go on holidays’ and ‘I have other deadlines that I decided to prioritise after agreeing to referee this paper’ are the problems, not the excuses. On the other hand, there are perfectly good reasons to be delayed in returning a report. Not only do we understand, we’ve been there. You are not the droids we’re looking for.
[Update: 23 May 2018]
Thanks to everyone, here and elsewhere, for their feedback—it’s been really helpful. I thought I’d add some clarifications to the original post and respond to some of the alternative suggestions. Some concerns stem, I think, from the thought that we're concerned with a wider set of behaviours than is the case. Some alternative suggestions can't be accommodated for ethical or practical reasons. I'm sure I haven't covered every issue here, so I'm very happy to receive more feedback.
The most important point is that the idea here isn’t to apply a rule mechanically (for example, being banned for being one day late). The reviewer would also receive explicit warnings so this wouldn't come out of the blue. Like every other aspect of peer review, this proposal isn’t without its drawbacks. Nonetheless, it may be that this is an imperfect solution to a much worse (and very common) problem.
We are not proposing any punishments for those who simply decline to review in the first place (at least, so long as they actually communicate this rather than leaving the invitation hanging—and declining here is only ever a matter of clicking a link in an email).
We are not asking for the intimate details of reviewers’ lives. While it’s not unknown for us to receive tearful emails from authors and reviewers in terrible situations, we do not expect reviewers to bare all; it’s simply not our business. We will take at face value your reasons for any delay or for withdrawing from a review; there’s no need to elaborate or provide ‘proof’ (whatever that might be). We just ask that reviewers make contact!
The aim of this proposal is to promote timeliness. Only in the most exceptional and egregious cases would the content of the report itself warrant an embargo—and even then only in combination with tardiness.
One worry that has been expressed is that people will be more inclined to turn down requests to review. It’s hard to know what to make of this, unless one is really committed to (a) the ability to walk away from a promise to review a paper while (b) not communicating this decision to the editors. But anyway: One of the reasons we’re proposing this is because we actually want people to turn us down if they can’t realistically meet the deadline. And anyone who has reviewed for us in the past and found themselves in need of a reasonable extension will know that we always accommodate this.
Another worry is that the proposal is disproportionately harmful to early-career researchers. Our internal rating system for reviewers suggests that ECRs are not the problem here.
The publishing industry is evil and we oughtn’t cooperate with it: (a) I guess this might eventually harm the publisher, but it will definitely harm the authors and editors (your equally unremunerated colleagues) in the meantime; (b) not all publishers are equal and the BJPS’s income supports the British Society for the Philosophy of Science, who funnel that money right back into the philosophy of science community, via PhD and conference grants; (c) if these reasons don’t motivate you, fine—but then please don’t accept invitations to referee!
We’d love to be able to pay reviewers (and editors!), but that’s not within our gift.
In the past, and with their permission, we have thanked our referees by name for their work (e.g. here). We stopped doing this, and maybe we should reconsider, but we assumed reviewers didn’t care because these annual blogposts received very little interest. Also, a hypothesis: those unmoved by the multiple, pleading emails of their (again, also unpaid) colleagues and/or the potentially precarious situation of authors will not be motivated by being credited in this (admittedly very limited) way.
Publishing the names of the reviewers alongside published papers creates its own problems. Others have raised examples of this. Here’s another: If I suspect that the author of the paper I’m reviewing is likely to be on a hiring committee I’m soon to face, I might decide to offer a more generous review. Submissions from more senior members of the community would then have a greater chance of publication. And from an editorial perspective, no editor should welcome a move that damages the credibility of reviewers’ reports.
Good reviewers could have their own papers ‘fast tracked’ through the peer review system. How? If it were within our ability to fast track papers, we wouldn’t be discussing this proposal.
Finally, there is presumably a reward for referees already in operation: a referee's work is, in turn, reviewed by their peers (maybe in the same journal—in my original post, I mentioned that six people tend to any one BJPS paper that is not desk rejected—or maybe elsewhere).
Again, we're very happy to hear more thoughts on this and to answer any questions I've left unanswered!
It's not the despair, Laura. I can take the despair. It's the hope I can't stand.
The Editors of the BJPS and the BSPS committee are delighted to announce that Grant Ramsey and Andreas de Block are the 2017 winners of the BJPS Popper Prize for their article 'Is Cultural Fitness Hopelessly Confused?'. Here is the citation from Editors-in-Chief Wendy Parker and Steven French:
Grant Ramsey and Andreas de Block’s paper, ‘Is Cultural Fitness Hopelessly Confused?’ (Brit. J. Phil. Sci. 68 (2017), pp. 305–28), addresses a hotly debated topic: can evolutionary theory be extended to human culture? Those who answer ‘no’ typically point to certain well-entrenched critiques of extending biological fitness to the cultural domain. However, as Ramsey and de Block point out, there is considerable uncertainty as to how biological fitness itself should be ‘modelled and measured’ and, while granting that great caution must be exercised in exporting this concept, they argue that the pursuit of a coherent and useful notion of cultural fitness is by no means a hopeless task. In particular, they note that certain problems that arise for this notion must also be faced by its biological counterpart. Having defended its tenability, they then map out a more positive conception of cultural fitness by carefully considering how the core elements of biological evolution align with those of cultural evolution, while acknowledging the relevant differences between the two domains. In particular, by shifting the focus of individuation from memes to organisms, and tying cultural fitness to the latter rather than the former, they show how it ‘can do similar work in the study of cultural evolution as biological fitness does in the study of biological evolution’ (p. 324). Overall, this yields a sophisticated and nuanced account that sheds new light on and further advances a major contemporary debate. As such, this paper is, in the opinion of the Co-Chief Editors and the BSPS Committee, a worthy recipient of the BJPS Popper Prize for 2017.
The BJPS Popper Prize is awarded to the article judged to be the best published in that year's volume of the Journal, as determined by the Editors-in-Chief and the BSPS Committee. The prize includes a £500 award to the winner(s). More information about the prize and previous winners can be found here.
Endowed by the Latsis Foundation, the Lakatos Award is given to an outstanding contribution to the philosophy of science. Winners are presented with a medal and given the chance to deliver a lecture based on the winning work. To celebrate, the 2015 and 2016 award winners—Thomas Pradeu and Brian Epstein, respectively—each delivered a lecture at the LSE last week. Introduced by Hasok Chang, Pradeu's lecture is entitled 'Why Philosophy in Science? Re-Visiting Immunology and Biological Individuality' and Epstein's is 'Rebuilding the Foundations of the Social Sciences'.
Paradigmatic physical attributes, like energy, mass, length, charge, or temperature, are quantities. That these attributes are quantitative is important for experiments (they can be measured), as well as theories (we can formulate quantitative laws that hold between them). Quantities are arguably central to science, and especially to the physical sciences.
Quantities pose peculiar epistemological and metaphysical challenges. A natural way to describe what is special about quantities is to say that quantities, in contrast to other attributes, come in degrees. Dogs may be ranked by how fast they can run or how big they are, but there is no ranking of them by how much they are dogs. Being a dog is a sortal, whereas speed and size are quantities. A quantity’s ‘coming in degrees’ can be understood as saying that quantities have (at least) one dimension of variation. For many paradigmatic physical attributes, we find a range of possible ‘amounts’ of that attribute, which we typically express as numerical values in terms of some unit. Having a range of possible amounts seems to be required by the idea that a quantity is an attribute that comes in degrees: gradations are possible in virtue of there being different amounts of the same quantity. To understand the metaphysical status of quantities, we need some account of how a gradable property like mass relates to specific amounts of mass. In the metaphysics literature, this question is often formulated in terms of determinables and determinates. But since this terminology comes with a specific understanding of the relationship between quantities and magnitudes, I will not use these terms here. In fact, I argue that the model of determinables and determinates is ultimately a poor fit for quantities, despite superficially appealing features.
A second intuitive way of characterizing what is different about quantities, when compared to other attributes, is that only quantities involve numbers. This characterization has a certain immediate appeal, based on the ordinary way in which we express quantitative claims. I might say that the temperature today is 10°C, that the average speed on the tube is 20.5 mph, that the flying distance between London and Leeds is about half of that between London and Edinburgh, or that the average discharge of the Danube is about three times that of the Rhine. All of these are paradigmatically quantitative claims, some of which mention numbers and units, whereas others are unit free but nonetheless contain a numerical comparison. By contrast, if I say that my mug is blue, that my office is warm, or that the birds outside my window are pigeons, the claims I’m making are paradigmatically non-quantitative and they do not contain any numerical expressions. Quantities seem to be bound up with numbers. Unlike purely mathematical entities, however, the numerical representations of quantities typically come with units, and the attributes themselves stand in causal relations to one another. Are numbers merely dispensable representational devices, or does their use in our representation of physical quantities indicate a deeper relationship between the mathematical and the physical? This question has been a major concern of Hartry Field’s work on quantities, and of subsequent research engaging with Field’s programme.
To make matters more complicated, while numbers and units are typically involved in the presentation of quantities, the particular numbers and units used to represent any given magnitude seem quite arbitrary. Establishing standard units for quantities is obviously convenient, and perhaps there are pragmatic values, such as simplicity, favouring certain systems of units over others. But it seems unlikely that there is any sense in which certain units ‘correspond better’ to the world. Accordingly, the ‘received’ view on the matter is that units, and the particular numerical assignments that come with them, are arbitrary and ultimately a matter of convention. While the conventionality of units is widely conceded, at times the suggestion has been made that quantitative claims are conventional beyond choice of units. Brian Ellis famously suggested that not only the unit, but also the ‘summation’ operation for certain quantities can be freely chosen. When ‘adding’ lengths, for example, nothing requires that we add measuring rods end-to-end along a single line, as opposed to at 90° angles. Both operations result in numerical representations of lengths; it’s just that the former is more familiar than the latter. If the particular numerical values and units chosen are arbitrary, this then raises the more general question of how we can tell which aspects of quantitative representations are arbitrary or conventional, and which are actually required by the phenomena represented. Call this issue the question of conventionality in the representation of quantities. Field further argued that we should aim to give intrinsic explanations of quantitative facts, namely, explanations that do not make reference to arbitrary elements. While Field’s own intrinsic explanations have been criticized, the problem remains a live issue.
The most recent question raised about the status of quantities has been dubbed the ‘absolutism versus comparativism debate’ by Dasgupta. Prima facie, quantities seem to have both determinate magnitudes (for example, 4kg) and determinate relations (for example, being four times as heavy as). The question is whether both are needed for quantities, and if so, which is more fundamental: the monadic properties or the relations? The resulting debate shares certain features with the absolutism–relationism debate over spacetime. In particular, it seems as though absolutists treat certain possibilities as distinct, whereas comparativists only see a difference in representation. The absolutism–comparativism debate thereby connects back to the question of arbitrariness in the representation of quantities more broadly.
The starting point for my take on these philosophical challenges is the representational theory of measurement, as canonically formulated by Krantz et al. The representational theory of measurement is the most comprehensive formal treatment of the relationship between empirical phenomena and their mathematical representation. It thereby provides a systematic framework we can employ to address the challenges articulated above.
The representational theory of measurement provides a conditional answer to the central question of how we can use numerical representations for empirical phenomena. If a phenomenon exhibits a certain kind of relational structure, then (it can be shown that) there exists a homomorphic mapping from the phenomenon to the real numbers. Representation theorems of this sort establish that a numerical representation is indeed possible for a given phenomenon.
It turns out that the existence of some numerical representation or other can be established for a wide range of different (empirical) relational structures, but only for some of those relational structures will these numerical representations actually be informative. The numerical labels on the dumbbells at my gym are informative, because the numbers on them tell me not only which dumbbells are heavier than others, but also how much heavier one set is than another. By contrast, the numerical order established by ranking the ten best movies in 2016 tells me nothing about whether the fifth ranked movie is a lot better than the sixth ranked, or whether they are instead very similar in quality. Measurement theory describes these differences between numerical representations in terms of ‘uniqueness theorems’. The more unique a numerical representation is, the more information can be gained from it about the phenomenon represented.
The representational theory of measurement establishes for a wide range of relational structures that numerical representations are possible, and how unique these representations are. In doing so, representationalism offers a formal foundation for the ‘hierarchy of scales’, originally introduced by the psychologist S. S. Stevens, according to which numerical representations are possible at ratio, interval, and ordinal scales, with ratio scales being the most informative. Weight is measured on a ratio scale, whereas movie rankings are an example of an ordinal scale.
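The hierarchy of scales can be made concrete in a small sketch (my own illustration, not from the post): a ratio-scale representation is unique up to multiplication by a positive constant, so ratios are preserved across all admissible representations, whereas an ordinal representation is unique only up to strictly increasing transformation, which preserves order but not ratios.

```python
# Illustration (mine, not the post's): what the uniqueness theorems for
# ratio and ordinal scales amount to in practice.

# Ratio scale: weight. Admissible transformations are similarity
# transformations x -> k*x, for example converting kilograms to pounds.
weights_kg = [2.0, 4.0, 8.0]
weights_lb = [w * 2.20462 for w in weights_kg]

# Ratios are invariant across admissible representations:
assert weights_lb[2] / weights_lb[0] == weights_kg[2] / weights_kg[0]  # both 4.0

# Ordinal scale: movie rankings. Any strictly increasing transformation
# is admissible, for example x -> x**3.
rankings = [1, 2, 3]
recoded = [r ** 3 for r in rankings]  # [1, 8, 27]

# Order is invariant...
assert recoded == sorted(recoded)
# ...but ratios are not: 3/2 = 1.5, whereas 27/8 = 3.375.
assert rankings[2] / rankings[1] != recoded[2] / recoded[1]
```

This is why the dumbbell labels carry more information than the movie ranking: only the former survive re-representation with their ratios intact.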
Representationalism thereby answers one important question about the relationship between numbers and quantities: numbers are representational devices that represent phenomena in virtue of a shared structure between the phenomenon and the real numbers. This answer presupposes treating both numbers and quantitative attributes as structures, since it is otherwise unclear how they could share a structure. What are the metaphysical consequences of this explanation of the role of numbers as representational devices? This is the question I focus on in my current project.
I address the different metaphysical challenges set out above, using representationalism as the starting point. I argue in favour of a structuralist and substantivalist conception of quantities: quantities are substantival manifolds with certain relational structures on them. The account I offer is structuralist, because I argue that among the various numerical representations, none is privileged. Since all that these representations have in common is their structural features, a plausible explanation for this equivalence of representations is that quantities themselves are just structures. This position takes much about how we (numerically) represent quantities to be conventional. That does not mean that quantities themselves are conventional or ‘constructed’. On the contrary, I advocate a form of realism about quantities: in attributing quantitative structure to an attribute or phenomenon, we make theoretical commitments that go beyond what is observable.
Suppose that it is already determined that the coin I just flipped will land heads. Can it also be the case that that very coin, on that very flip, has some chance of landing tails? Intuitively, the answer is no. But according to an increasing number of contemporary philosophers, especially philosophers of physics, the answer is yes.
According to these philosophers there are non-trivial chances (chances between zero and one) in worlds where the fundamental dynamical laws are deterministic. In such worlds, at any time t, for any event e, it is already (at t) determined that e will happen or that e will not happen. Nonetheless, these philosophers say, in at least some such worlds and for at least some events, the chance of those events happening is between zero and one. Call the chances that are supposed to exist in such worlds ‘deterministic chances’, and the view that there are such chances ‘compatibilism about chance and determinism’, or for brevity just ‘compatibilism’.
Should we endorse compatibilism? You might think that before we can answer this question we first need to answer the question of what sorts of things chances are in general. (Is the chance of some event happening just the relative frequency with which that type of event happens? Or is it something more metaphysically robust, like the propensity of certain set-ups to produce that type of event? And if the latter, what on earth is a propensity?) But there’s good reason to think that we might be able to make significant progress on the question of whether to be compatibilists even if we aren’t yet willing to take a stance on the metaphysics of chance more generally. To see why, it’s helpful to keep in mind the following two distinctions.
First, it is helpful to distinguish between arguments that merely establish what I will call ‘weak compatibilism’, on the one hand, and arguments for ‘robust compatibilism’, on the other. Weak compatibilism about chance and determinism is the view that deterministic chances are merely metaphysically possible. Robust compatibilism is the view that deterministic chances are more than merely metaphysically possible—they exist either in the actual world or in some relatively nearby worlds. Versions of robust compatibilism can be differentiated depending on which sorts of nearby worlds they target, but of particular interest are those worlds in which our best deterministic scientific theories—like Bohmian mechanics, a deterministic interpretation of quantum theory—are true.
Arguments for weak compatibilism take the form of arguments against supposed platitudes about the nature of chance that seem to rule out any possibility of deterministic chance. Take, for instance, the highly intuitive thought that if there is a non-trivial chance of something happening, then it must be possible for that thing to happen and possible for it not to happen. Some philosophers (for example, Schaffer) endorse something like the following principle as a way of capturing this thought:
The Chance–Possibility Platitude (Incompatibilist Version): If the chance, at world w, at time t, of event e is greater than zero, then there exists a world, w', such that (i) w' matches w in laws, (ii) w' and w have the same micro-physical history up until time t, and (iii) e happens at w'.
According to the incompatibilist’s version of the chance–possibility platitude, if there is a non-trivial chance of something happening at some time, then it must be possible for that thing to happen and possible for it not to happen, while holding fixed both the history of the world up until that time and the laws of nature. It follows that in a deterministic world (where, at any time t, for any event e, it is already (at t) determined that e will happen or that e will not happen), the chance of some event happening is always either one or zero.
In (Emery [2015a]), I argued against the incompatibilist’s version of the chance–possibility platitude on the grounds that either it was based on a highly contentious and unpopular view about the nature of time, or it was based on considerations that supported eliminating chance—deterministic or not—from our theories altogether. Given the important work that chances do for us in science, in philosophy, and in everyday reasoning, I take eliminativism about chance to be a non-starter.
This sort of argument is an argument for weak compatibilism. It removes a reason for thinking that the nature of chance itself makes deterministic chance impossible. If it succeeds, it gets us at least one step closer to thinking that deterministic chances are metaphysically possible. (How close precisely depends on what other supposed platitudes regarding the nature of chance are waiting in the wings.) But it doesn’t do anything to establish that deterministic chances exist in the actual world, or in any relatively nearby worlds.
What sort of arguments are there for robust compatibilism? Here is where the second distinction becomes helpful. On the one hand, there are metaphysics-based arguments for robust compatibilism; on the other, there are role-based arguments. Metaphysics-based arguments start from some particular metaphysical analysis of chance—various forms of frequentism, versions of the so-called best systems analysis, propensity theories, or what have you—and then argue that given this analysis, there are chances in relatively nearby worlds where the fundamental laws are deterministic. Perhaps most obviously, this sort of argument is readily available to those who endorse straightforward versions of frequentism. (Of course, there are non-trivial relative frequencies in relatively nearby worlds where the fundamental laws are deterministic, the argument would go; so, there are non-trivial chances in such worlds.) But following Albert and Loewer, many advocates of the best systems analysis (according to which chances are those probability functions that appear in the simplest, most predictively powerful summary of the actual patterns of events) have also given metaphysics-based arguments for robust compatibilism.
Role-based arguments, by contrast, start from some particular role that probabilities are supposed to play—in our best scientific theories, in our philosophical theorizing, in our everyday reasoning, or what have you—in relatively nearby deterministic worlds, and then argue that in order to play that role, the probabilities in question must be objective—they must be genuine chances.
For my own part, I am more interested in role-based arguments for robust compatibilism than I am in metaphysics-based arguments. For one thing, metaphysics-based arguments are inevitably as contentious as the metaphysical analyses that they are based on. They tend to do little to change the minds of those who are initially sceptical of compatibilism. But also, as I argue in (Emery), role-based arguments are some of the strongest arguments we have in metaphysics. After all, such arguments are often used to license the introduction of weird or novel entities in science. And while metaphysicians are hardly limited to the methodology of science, insofar as they make use of that methodology, their arguments are all the more convincing.
If you attend to our best deterministic scientific theories, it might seem at first that a role-based argument for deterministic chance is straightforward. Start from the fact that some of our best deterministic scientific theories involve probabilistic predictive rules. (Think, for instance, of Born’s rule, as it plays a role in interpretations of quantum phenomena, including Bohmian mechanics. Born’s rule says that the probability that a system with wave function ψ at time t will be found in configuration c, if we perform a measurement on it at t, is given by |ψ(c, t)|².) This is the first premise of the argument. The second premise is that any probabilities that play a role in the predictive rules of our best scientific theories are objective probabilities. Together, the two premises entail that there are objective probabilities, genuine chances, in relatively nearby deterministic worlds.
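To make the predictive rule concrete, here is a toy numerical sketch (my own illustration, not from the post): for a discrete configuration space, Born’s rule assigns each configuration the squared modulus of its normalised amplitude.

```python
import math

# Toy sketch (mine): Born-rule probabilities over a discrete configuration
# space with three configurations and made-up amplitudes.
amplitudes = [complex(1, 0), complex(0, 1), complex(1, 1)]  # unnormalised psi

# Normalise the wave function so the probabilities sum to one.
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
psi = [a / norm for a in amplitudes]

# Born's rule: P(c) = |psi(c)|^2 for each configuration c.
probs = [abs(a) ** 2 for a in psi]

assert math.isclose(sum(probs), 1.0)
assert math.isclose(probs[2], 0.5)  # |1+i|^2 / (1 + 1 + 2) = 2/4
```

The point at issue in the argument is not the arithmetic, of course, but the status of the resulting numbers: whether they are objective chances or merely degrees of belief.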
On the face of it, this may look like a decent argument for robust compatibilism, but after more careful examination, I find myself unconvinced. In particular, I think that whether we should endorse the second premise depends on what we mean by ‘objective’. If we mean something like ‘wholly independent of the beliefs we have, the evidence available to us, and the ways in which creatures like us reason’, then it seems as though probabilities could play a role in the predictive rules of our best scientific theories without being objective. Maybe the predictive rules of our best scientific theories depend in part on the ways in which creatures like us reason or the sorts of evidence that are available to us.
In (Emery [2015b]), I described a different role-based argument for robust compatibilism, which I think is more promising, though also much more complicated. This argument starts with the role that probabilities play in determining the truth conditions of counterfactuals (Lewis). For example, one natural account of those truth conditions is that the counterfactual ‘if I had dropped this glass just now, coffee would have spilled all over the floor’ is true if and only if in the nearest world where I dropped the glass just now, coffee was very likely to spill all over the floor. (Or if, like Hajek ([unpublished]), you think that most counterfactuals are not true, it starts with the role that probabilities play in determining the assertibility conditions of counterfactuals. ‘If I had dropped this glass just now, coffee would have spilled all over the floor’ is not true, but it is assertible if and only if in the nearest world where I dropped the glass just now, coffee was very likely to spill all over the floor.) Given the important role that counterfactuals play in scientific theorizing (for example, in theorizing about which universal generalizations are laws, or in theorizing about which properties of a system depend on others and which do not), if probabilities play a role in determining the truth (or assertibility) conditions of counterfactuals, then those probabilities must be objective. Or so I argued.
Lately I’ve been thinking more about a different, and to my mind even more promising, role-based argument for robust compatibilism. This argument starts from the explanatory role that probabilities play in our best deterministic scientific theories and argues that in order to play that role, such probabilities must be genuine chances. This argument is often suggested in the literature, but has rarely been developed in detail.
One reason to be especially interested in this sort of explanatory role argument is that if it works, it has the potential to suggest a novel metaphysics for deterministic chance. Thus far, most proponents of deterministic chance in the literature tend to be Humeans, in the sense that they think that the laws and chances in each world are determined (in some important sense) by the patterns of events that actually occur in those worlds. But it is well known that Humean analyses of laws have trouble justifying the explanatory power of laws (Armstrong; Maudlin). After all, if the laws are determined by the patterns of actually occurring events, how can they also explain those patterns of events? And it is plausible to think that similar worries will trouble Humean analyses of chance (Emery). So insofar as it is the explanatory role that deterministic chances play that motivates us to be compatibilists, we may end up adopting a more metaphysically robust, non-Humean account of such chances. Allowing such chances into their ontology will violate the scruples of many a Humean metaphysician and philosopher of physics. But recall what I mentioned earlier about role-based arguments: such arguments are often used in science to introduce weird or novel entities. Indeed, there’s a natural reading on which the introduction of all sorts of strange or sui generis scientific entities—from the electromagnetic field, to the wave function, to the neutrino, to dark energy—was originally, or still is, justified by the explanatory role that those entities play. So insofar as a plausible explanatory role argument forces us to posit deterministic chances—and non-Humean deterministic chances at that—even the Humeans among us may be forced to set their scruples aside.
Mount Holyoke College
Albert, D. Z.: Time and Chance, Cambridge, MA: Harvard University Press.
Armstrong, D. M.: What Is a Law of Nature?, Cambridge: Cambridge University Press.
Christian Wüthrich delivered one of the plenary talks at this summer's BSPS conference in Edinburgh and lo! It was recorded (future is now!). For your listening pleasure, here is his 'The Temporal and Atemporal Emergence of (Space-)Time' with a link to the slides below.
The Temporal and Atemporal Emergence of (Space-)Time
On the first page of his preface to the English edition of The Logic of Scientific Discovery, Popper writes:
I, however, believe that there is at least one philosophical problem in which all thinking men are interested. It is the problem of cosmology: the problem of understanding the world—including ourselves, and our knowledge, as part of the world. All science is cosmology, I believe, and for me the interest of philosophy, no less than of science, lies solely in the contributions which it has made to it. For me, at any rate, both philosophy and science would lose all attraction if they were to give up that pursuit. (p. 15)
Popper states here the classic, inclusive view of cosmology as the study of the all-inclusive, big ‘U’ Universe. Given the suggested philosophical nature of cosmology, it may seem somewhat surprising that philosophers have paid relatively little attention to the physical study of cosmology, namely, what one might call the science of little ‘u’ physical universes. (For the difference between Universe and universe, have a look at Edward Harrison’s classic textbook on cosmology, Cosmology: The Science of the Universe; it’s the book that first got me interested in the subject.) If philosophy aims at understanding the Universe, then surely an important piece of the complete story is to be found in its physics.
Admittedly, cosmology qua physics only came into its own as a branch of physics with the advent of more powerful telescopes and the theoretical possibilities afforded by the cosmological models of the general theory of relativity in the early twentieth century. It then remained a research backwater for most of the rest of the century, until improved observational capabilities and theoretical developments once again made it a high-profile hotbed of research activity, which it continues to be.
One might take the attitude that philosophical study of scientific research may only bear fruit once enough time has passed for the research to become well established. If that attitude is right, then cosmology is too much in flux now for any firm conclusions to be drawn with confidence. Like many contemporary philosophers of physics, however, I am intrigued by the flux. I believe many important lessons in the philosophy of science can be learned by looking at current physics, and I believe that philosophical investigations can have a constructive role to play in contemporary physics (and science more generally) as well.
Before saying a bit about what the philosophy of cosmology can do, I should say a little about what current physical cosmology is up to and how it has got to this point. The full history of cosmology, from ancient times until now, is certainly a fascinating subject. To save some time, though, I will start with what was taken as the standard model of cosmology of the latter half of the twentieth century: the hot big bang model.
According to this model, the universe began in an extremely hot, dense state, after which it expanded and cooled, generally remaining in equilibrium. At certain times, the expansion rate caused it to fall out of equilibrium; during these periods, new structures formed out of the equilibrium soup: from the fusion of nuclei and the formation of hydrogen atoms to the creation of stars and the generation of galaxies. It's quite a grand story, as befits the subject, but what is perhaps most amazing is that we have very good evidence that most of it happened.
Nevertheless, anomalies began to be discovered in recent decades. Among the most serious were (i) the discovery of discrepancies between the amount of gravitating matter and the amount of visible matter in the universe (and galaxies), and (ii) the discovery that distant galaxies were receding at an accelerated rate (whereas according to the standard model they should be receding at a decelerated rate). The first led cosmologists to introduce dark matter into their models (dark matter interacts gravitationally, but otherwise negligibly), and the second led them to introduce dark energy. The physical nature of these dark components remains the target of considerable amounts of observational and theoretical investigation, and it is quite unclear what they might be.
The resulting new standard model, the Lambda-CDM model, has a third component: as well as dark energy and dark matter, it includes inflation. (‘Lambda’ is the symbol for Einstein's cosmological constant, appropriated for the cosmological constant-like dark energy, and ‘CDM’ stands for ‘cold dark matter’.) Unlike dark matter and energy, there was no empirical problem that ushered inflation into contemporary cosmology; it was primarily explanatory considerations that motivated its proposal and adoption. These considerations centred on the explanation of certain cosmological conditions of the present universe—in particular, the geometrical flatness of space and the uniformity of its contents.
The hot big bang model's explanations of these conditions depend on the existence of precise initial conditions of the universe. These ‘special’ initial conditions are thought to be problematic in a way that motivates the search for a theoretical solution. Inflationary theory purports to solve these ‘fine-tuning’ problems by introducing a brief epoch in the very early universe where space underwent an exponential, accelerated expansion before relaxing into the decelerated expansion of the hot big bang model. This expansion (intuitively) thins out any inhomogeneities that may have existed and flattens any curvature of the geometry of space, or so the story goes. Thus inflationary theory is taken to solve the hot big bang model's fine-tuning problems by providing a better explanation of flatness and uniformity than that offered by its predecessor.
What makes the case interesting is that inflationary theory was soon shown to lead naturally to predictions of a spectrum of small, unobserved inhomogeneities in the otherwise apparently uniform contents of the universe. It took a couple of decades to develop the observational capabilities to detect them, but eventually these inflationary theory predictions were observationally confirmed. Although (like dark energy and dark matter) little is yet known about the inflationary mechanism itself, this episode is generally seen as a triumph of theoretical reasoning in physics.
Not only do these stories of dark matter, dark energy, and inflation make for fascinating tales from the forefront of physics, they provide philosophers of science with ample opportunity to investigate methodological and foundational questions in a quickly moving field where much remains unknown. Only a handful of philosophers have dived in so far. Although the scope for investigation is large, I will mention here just a couple of strands of my own research that attempt to address these kinds of questions.
Until recently I have primarily concentrated on inflationary theory, as I found it puzzling how a theory motivated ostensibly on explanatory grounds alone could go on to achieve empirical success. Why should solving fine-tuning problems matter to methodology? The empiricist-minded among us will likely see no puzzle: science is in the business of saving phenomena, and how it does so doesn’t matter to epistemology. I have found inspiration in the problem-solving approaches to progress, however, and believe that some valuable insights could come from carefully analysing fine-tuning problems and what makes them problematic. Cosmologists are, unfortunately, seldom very clear about what they mean by fine-tuning and the problems associated with it, so real interpretive work can be required. The results of this particular strand of my research are found in my papers, ‘Does Inflation Solve the Hot Big Bang Model's Fine-Tuning Problems?’ and ‘Do Typicality Arguments Dissolve Cosmology’s Flatness Problem?’. My answers to the questions in these two titles (in accord with Betteridge’s law of headlines) were ‘No’—although I emphasize that I do see some interesting possibilities for developing accounts of fine-tuning that could change the answer to the former question to a ‘Yes’.
Currently, I am a researcher on Michela Massimi's ERC-funded project, ‘Perspectival Realism: Science, Knowledge, and Truth from a Human Vantage Point’. Stage one of the project looks at how experimental and observational physicists in cosmology and high-energy physics deal with the perspectival nature of modelling. The cosmological half of this stage focuses on the Dark Energy Survey, a large collaborative project aiming to probe the cosmos for further observational data relevant to uncovering the nature of dark energy. (The high-energy half focuses on beyond-Standard-Model searches at the Large Hadron Collider at CERN.) We are interested in the extent to which observation here is ‘model independent’, and how data from different probes are integrated. The relationship between observation and theory is particularly interesting due to the remarkable diversity in possible theoretical explanations of dark energy: a feature of gravity (Einstein's cosmological constant), a modified theory of gravitation, a new fundamental quantum field... or just an effect of averaging the inhomogeneities of a more fine-grained model. We hope to be sharing our initial results soon!