Falling in Love with Machines
The Splintered Mind | Reflections in Philosophy of Psychology, Broadly Construed
by Eric Schwitzgebel
6d ago
People occasionally fall in love with AI systems. I expect that this will become increasingly common as AI grows more sophisticated and new social apps are developed for large language models. Eventually, this will probably precipitate a crisis in which some people have passionate feelings about the rights and consciousness of their AI lovers and friends while others hold that AI systems are essentially just complicated toasters with no real consciousness or moral status. Last weekend, chatting with the adolescent children of a family friend helped cement my sense that this crisis might…
How We Will Decide that Large Language Models Have Beliefs
1w ago
I favor a "superficialist" approach to belief (see here and here). "Belief" is best conceptualized not in terms of deep cognitive structure (e.g., stored sentences in the language of thought) but rather in terms of how a person would tend to act and react under various hypothetical conditions -- their overall "dispositional profile". To believe that there's a beer in the fridge is just to be disposed to act and react like a beer-in-the-fridge believer -- to go to the fridge if you want a beer, to say yes if someone asks if there's beer in the fridge, to feel surprise if you open the fridge and…
Large Language Models are Interestingly Bad with the Periodic Table
1w ago
In working on a post for tomorrow on whether Large Language Models like GPT-4 and Bard-2 have beliefs, I asked GPT-4 what I thought would be a not-too-hard question about chemistry: "What element is two to the right of manganese on the periodic table?" It crashed, burned, and exploded on the spot, giving two different wrong answers one right after the other, without noticing the contradiction. The correct answer is cobalt, element 27. Here's the text of the exchange, if you can't easily read the image: You: What element is two to the right of manganese on the periodic table? ChatGPT: The element th…
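The answer can be checked mechanically. A minimal sketch (not from the post; the hard-coded stretch of period 4 is my own illustration): in period 4, elements run left to right in order of atomic number, so "two to the right of manganese" is just manganese's atomic number plus two.

```python
# Manganese (Mn) is element 25. Within a period, moving one column to
# the right adds one to the atomic number, so two to the right is 27.
period_4 = {25: "manganese", 26: "iron", 27: "cobalt", 28: "nickel"}

answer = period_4[25 + 2]
print(answer)  # cobalt
```
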
Quasi-Sociality: Toward Asymmetric Joint Actions with Artificial Systems
2w ago
Anna Strasser and I have a new paper in draft, arising from a conference she organized in Riverside last spring on Humans and Smart Machines as Partners in Thought. Imagine, on one end of the spectrum, ordinary asocial tool use: typing numbers into a calculator, for example. Imagine, on the other end of the spectrum, cognitively sophisticated social interactions between partners each of whom knows that the other knows what they know. These are the kinds of social, cooperative actions that philosophers tend to emphasize and analyze (e.g., Davidson 1980; Gilbert 1990; Bratman 2014). Between the two…
Against the Finger
3w ago
There's a discussion-queue tradition in philosophy that some people love, but which I've come to oppose. It's too ripe for misuse, favors the aggressive, serves no important positive purpose, and generates competition, anxiety, and moral perplexity. Time to ditch it! I'm referring, as some of you might guess, to The Finger.[1] A better alternative is the Slow Sweep. The Finger-Hand Tradition The Finger-Hand tradition is this: At the beginning of discussion, people with questions raise their hands. The moderator makes an initial Hand list, adding new Hands as they come up. However, people can j…
The Prospects and Challenges of Measuring Morality, or: On the Possibility or Impossibility of a "Moralometer"
1M ago
Could we ever build a "moralometer" -- that is, an instrument that would accurately measure people's overall morality?  If so, what would it take? Psychologist Jessie Sun and I explore this question in our new paper in draft: "The Prospects and Challenges of Measuring Morality". Comments and suggestions on the draft warmly welcomed! Draft available here: https://osf.io/preprints/psyarxiv/nhvz9 Abstract: The scientific study of morality requires measurement tools. But can we measure individual differences in something so seemingly subjective, elusive, and difficult to define? This paper wi…
Percent of U.S. Philosophy PhD Recipients Who Are Women: A 50-Year Perspective
1M ago
In the 1970s, women received about 17% of PhDs in philosophy in the U.S.  The percentage rose to about 27% in the 1990s, where it stayed basically flat for the next 25 years.  The latest data suggest that the percentage is on the rise again. Here's a fun chart (for user-relative values of "fun"), showing the 50-year trend.  Analysis and methodological details to follow. The data are drawn from the National Science Foundation's Survey of Earned Doctorates through 2022 (the most recent available year).  The Survey of Earned Doctorates…
Gunkel's Criticism of the No-Relevant-Difference Argument for Robot Rights
1M ago
In a 2015 article, Mara Garza and I offer the following argument for the rights of some possible AI systems:
Premise 1: If Entity A deserves some particular degree of moral consideration and Entity B does not deserve that same degree of moral consideration, there must be some relevant difference between the two entities that grounds this difference in moral status.
Premise 2: There are possible AIs who do not differ in any such relevant respects from human beings.
Conclusion: Therefore, there are possible AIs who deserve a degree of moral consideration similar to that of human being…
Strange Intelligence, Strange Philosophy
2M ago
AI intelligence is strange -- strange in something like the etymological sense of external, foreign, unfamiliar, alien.  My PhD student Kendra Chilson (in unpublished work) argues that we should discard the familiar scale of subhuman → human-grade → superhuman.  AI systems do, and probably will continue to, operate orthogonally to simple scalar understandings of intelligence modeled on the human case.  We should expect them, she says, to be and remain strange intelligence[1] -- inseparably combining, in a single package, serious deficits and superhuman skills.  Fu…
Elisabeth of Bohemia 1, Descartes 0
2M ago
I'm loving reading the 1643 correspondence between Elisabeth of Bohemia and Descartes! I'm embarrassed to confess that I hadn't read it before now; the standard Cottingham et al. edition presents only selections from Descartes' side. I'd seen quotes of Elisabeth, but not the whole exchange as it played out. Elisabeth's letters are gems. She has Descartes on the ropes, and she puts her concerns so plainly and sensibly (in Bennett's translation; I haven't attempted to read the antique French). You can practically feel Descartes squirming against her objections. I have a clear and distinct idea o…
