Valid

by Daniel A. Kaufman

If someone who looks like a man and has XY chromosomes tells me he feels female – I cannot tell her she is ‘wrong’. Would you?

–Prof. Alice Roberts, University of Birmingham (1)

____

The “thought” behind the idea of gender self-identification is about as confused as any in contemporary public discourse, which explains why the conversation on the topic is so fraught. Those in the vanguard fighting on behalf of the rights of people who fall under the “trans” umbrella are convinced that the concept of gender self-identification is absolutely essential to their success, which means not only that they are incapable of recognizing its (ultimately disqualifying) problems, but that they are inclined to double, triple, and quadruple down when confronted with them, lending a desperate, shrill aura to any discussion of the issue and inducing aggressive protests, histrionic public displays, no-platformings, attacks on people’s livelihoods, and even outright violence. (2)

Elsewhere, I addressed the issue of gender self-ID from the perspective of the “self” portion of the concept and suggested that social identities, of which gender is one, are not self-made, but publicly negotiated. (3)  Along this vector, the problem with gender identity activism is that it misunderstands what kind of identity a gender identity is, construing it as personal, when it is, in fact, social. My interest here, however, is in a somewhat different problem, having to do with the “identify” portion of gender self-ID.  Sexes are not things one can “identify with,” and identifying with genders is tantamount to embracing sexist stereotypes, something that any genuinely feminist – and more generally, liberal – philosophy and politics must oppose.

The use of ‘identify’ in the context of sex and gender is odd.  A judge can order that a plaintiff not be identified, meaning that the person’s identity should be kept a secret. One can identify oneself with a political movement, by which one means that one should be associated with it.  One can identify with the plight of a people, meaning that one has sympathy for – or even empathizes with – them.  A doctor can identify the cause of a cough, meaning that he has found the bacterial or viral or other thing that is responsible for it.  A suspect can be identified by the police, meaning that they have determined who he is.

But what could it mean to “identify as a man/woman”?  From what I can discern from gender self-ID theorists and activists, it could mean one of two things, both of which strike me as untenable.

The first is reflected in the Alice Roberts quote, above: for me to identify as a man/woman is to feel like I am one.  Now, I am a man, but if you asked me what it feels like to be one, I couldn’t answer; while I am a man, there is no sense in which I feel like one.  Being a man is a matter of belonging to a certain sex-category, specifically, the male one, but there is nothing that it feels like to be male. Certainly there are things that only males can feel: in my middle age, I know what it feels like to have an enlarged prostate – a distinctive sort of discomfort that only males experience.  But to feel something that only males can feel is not the same as feeling male, and certainly, it is not something that makes you male.  After all, there  are any number of males that don’t feel it, because their prostate glands are not enlarged or because, perhaps, their prostates have been surgically removed as part of a cancer treatment. (4)

So “feeling” male or female is not going to help us make sense of “identifying as a man or woman,” because there is no such thing: one is male or female, but one doesn’t feel male or female, just as one is a mammal, but one doesn’t “feel like a mammal.”

The second pursues the line of gender: to feel like a man/woman is to feel masculine or feminine; manly or womanly.  And certainly, there is something that it feels like.  I might feel manly after an especially tough workout or while moshing at a Slayer concert or upon realizing that women are checking me out (I haven’t felt the latter since middle-aged decrepitude set in), but such feelings of manliness are little more than the products of sexist expectations. It is because males are expected to be macho and muscular and aggressive and “players” that the experiences associated with these things are deemed “manly” experiences. And because the opposite sorts of expectations are held of females, males who are not into working out or thrash metal or strutting in front of women are deemed unmanly and even effeminate or womanly.

It would be regressive, then, to take this tack in trying to make sense of “identifying as” a man/woman and even worse to suggest that meeting these sexist expectations makes a person one or the other. For decades, feminists and other forward-thinking people have been fighting against precisely these sorts of expectations and rejecting the idea that such notions of manliness or womanliness should determine what one is or what one should do.  In a previous essay, I referenced Marlo Thomas’s seminal Free to Be You and Me, which my parents gave me as a young child and the entirety of which is devoted to opposing these sexist conceptions of manhood and womanhood and making the case that beyond our sexual identities as males and females, which are determined by nature, the rest should be up to the individual person and the course he or she chooses to pursue in life. Particularly effective in this regard is the wonderful opening skit, in which Thomas and Mel Brooks play infants, trying to figure out which one is the boy and which one is the girl.

Boy Meets Girl - YouTube

The notion of “identifying as” a man/woman, then, is either incoherent or retrograde. It is the farthest thing from being liberatory or progressive, and I find it hard to understand why anyone interested in advancing the cause of trans people would want to have anything to do with it, let alone plant their flag in it. As I have argued many times, everything required to make a complete and compelling case for trans civil rights is already contained within the liberal tradition. And beyond the advantage of being grounded in a stronger, more rigorous, more universal set of principles, to pursue such a course would avoid the sexist logic and tropes that have done so much to put trans activism in conflict with its feminist and gay/lesbian counterparts.

Notes

(1) Alice Roberts, Professor of Public Engagement in Science at University of Birmingham and President of Humanists UK.

https://twitter.com/theAliceRoberts/status/1141261931556286464

https://www.birmingham.ac.uk/schools/biosciences/staff/profile.aspx?ReferenceId=122726&Name=professor-alice-roberts

(2) https://www.bbc.com/news/world-us-canada-47301007

https://www.thetimes.co.uk/article/trans-goldsmiths-lecturer-natacha-kennedy-behind-smear-campaign-against-academics-f2zqbl222

https://www.standard.co.uk/news/crime/transgender-activist-tara-wolf-fined-150-for-assaulting-exclusionary-radical-feminist-in-hyde-park-a3813856.html

https://www.thetimes.co.uk/article/julie-bindel-the-man-in-a-skirt-called-me-a-nazi-then-attacked-8dfwk8jft

(3) https://theelectricagora.com/2017/05/25/self-made/

(4) Our own E.J. Winner was also quite critical of this notion of “feeling like” a man/woman, although on somewhat different grounds.

https://theelectricagora.com/2017/12/07/sex-gender-politics/


by Kevin Currie-Knight

__

Does postmodernism spell the death of reason? If you have been caught up in some recent online discussions of it, you’d think so. Postmodernism, its critics (often of the so-called Intellectual Dark Web) say, poses an existential threat to reason, a bedrock value of “the West.” An article on how French postmodernist intellectuals “ruined the West” explains that thanks to postmodernism, “the need to argue a case persuasively using reasoned argument is now often replaced with references to identity and pure rage.” Jordan Peterson, talking on the Joe Rogan Podcast, suggests that postmodernism is, among other things, a complete assault on “everything that’s been established by the Enlightenment,” namely “rationality, empiricism, [and] science.”

Wow! Any philosophy that uses reason to argue against reason must be not only awful and dangerous, but self-contradictory. I want to argue that it isn’t necessarily so. My goal isn’t to convince people that postmodernism can’t be taken in irrationalist or even dangerous directions, but that it need not be, and probably wasn’t originally intended to be. If anything, I do not see postmodernism, as Peterson does, as a “complete assault” on “the” Enlightenment (there were several Enlightenments, not just one). I see it as a potentially valuable corrective to some of the Enlightenment’s excesses.

Here is an admittedly corny, but possibly helpful, example to show how I conceive of postmodernism’s relationship to reason. Imagine an infomercial for a tool a company wants to sell. For the sake of simplicity, just imagine that the tool is a hammer. Like all infomercials, this one is pitched to put the product – the hammer – in the best possible light. And like all infomercials, that means not only showing what the product can do, but maybe exaggerating a bit about what it can do. We’ll imagine that this particular infomercial gives a long list of things this nifty hammer can help you do: “It helps pound nails…. like, really well; it helps extract nails too.” Fine so far. “But there’s more! It is an excellent mallet for cracking crabs and lobster, is a great back-scratcher, it can help you saw wood (just smash the wood really hard until it breaks), and has tons of other everyday uses!”

Now, certainly this infomercial is overly generous toward the hammer and its uses. The first two uses – hammering in and extracting nails – are surely on point, but the others are probably exaggerations. So, imagine that a truth-in-advertising campaign comes along geared toward potential consumers: “Our independent tests indicate that the hammer is really good for some of the things listed, but buyers should beware that hammers are not so good at the other things. The hammer itself is a good tool, but only when confined to its proper uses.”

What does this have to do with postmodernism and its relationship to reason? Well, imagine that the hammer is reason, the exaggerated infomercial is what happened to reason under (the excesses of) the Enlightenment, and the truth-in-advertising campaign is postmodernism. The way I see it, the problem is not that the Enlightenment advocated for things like reason and science. Those things were and are good things. The problem is that some of the more enthusiastic Enlightenment figures, mainly within the French and German Enlightenments, made some really extravagant claims for what reason and science could do. Postmodernism is not trying to kick reason to the curb any more than the truth-in-advertising campaign wants us not to buy hammers. Rather, like the truth-in-advertising campaign, postmodernism is just trying to check some possible excesses and exuberance about what reason and science can do.

To see this, let’s look at some of the claims that certain Enlightenment figures made about what reason could do. In his study of the Enlightenment and his reaction to it, Isaiah Berlin depicted the message sent by some of the Enlightenment’s more enthusiastic champions, the French Encyclopedists:

A wider thesis underlay this: namely, that to all true questions there must be one true answer and one only, all the other answers being false, for otherwise the questions cannot be genuine questions. There must exist a path which leads clear thinkers to the correct answers to these questions, as much in the moral, social and political worlds as in that of the natural sciences, whether it is the same method or not; and once all the correct answers to the deepest moral, social and political questions that occupy (or should occupy) mankind are put together, the result will represent the final solution to all the problems of existence.

This is certainly not a view that all Enlightenment figures held, and doesn’t come close to describing Hume, Smith, and a host of others we readily recognize as participants in the Enlightenment. But certainly, the view Berlin depicts has left a big cultural mark. However big you think that cultural mark is – and there is room for debate there – it is that mark that I see postmodernists as intending to call into question. They’re not taking aim at hammers, but at being led by overeager advertisers to expect more of hammers than they can actually provide.

The big message in postmodern thinking is that there are many ways to understand and interpret the world, and when those ways battle for supremacy, there isn’t a neutral way to adjudicate between them. Anyone who attempts to adjudicate between ways of seeing the world is doing so against the backdrop of some non-neutral way of interpreting the world. If you are arguing that God exists and is omnipresent and I am arguing that God is a fiction, anyone who hears our debate and wants to decide which side is right will be doing so with some non-neutral framework for making the decision. It may be that she is an atheist and, as such, puts more burden of proof on you than I do (or vice versa if she is a theist). It may even be that she has no existing view on whether God exists, but even then, she is not appraising neutrally. She probably has some idea of how to appraise arguments. Should I give weight to personal testimony, or should I only consider evidence that can be independently verified? How do I make sense of what ‘God’ means in this debate? How much weight do I give to arguments appealing to logic, or arguments from authority? What criteria make an argument convincing? Here’s neopragmatist-cum-postmodernist philosopher Richard Rorty’s way of explaining:

Philosophy, as a discipline, makes itself ridiculous when it steps forward at such junctures and says that it will find neutral ground on which to adjudicate the issue. It is not as if the philosophers had succeeded in finding some neutral ground on which to stand. It would be better for philosophers to admit there is no one way to break such standoffs, no single place to which it is appropriate to step back. There are, instead, as many ways of breaking the standoff as there are topics of conversation.

A more concrete way of describing this situation comes from the Taoist philosopher Zhuangzi, who envisions a disagreement between interlocutors:

Whom should we have straighten out the matter? Someone who agrees with you? But since he already agrees with you, how can he straighten it out? Someone who agrees with me? But since she already agrees with me, how can she straighten it out? Someone who disagrees with both of us? But if he already disagrees with both of us, how can he straighten it out? Someone who agrees with both of us? But since he already agrees with both of us, how can he straighten it out? So neither you nor I nor any third party can ever know how it is.

Devotees of the Enlightenment might retort with: “But of course, there is a neutral way. Just look at the facts and deduce from there/Just go where reason takes you/Just look at the situation objectively.” (And not coincidentally, we all think that we are the ones doing this and our interlocutors are not.) Yet, facts must be interpreted (Is this fact decisive in refuting the claim?), reason must proceed via some method (Are appeals to authority acceptable?), and the interlocutors almost certainly all believe that they, not their opponents, are looking at the world objectively. (No debate was ever settled by a third party coming in and saying: “Hey, I got an idea; let’s just all look at the world objectively. The correct answer will stare us in the face!”)

Yet, none of this means that we must abandon reason. At best, it means that we must take an inventory of what is and isn’t realistic to expect from reason. Humans have to live and act in the world, and we all want to act intelligently (yes, even postmodernists). Insofar as reason is a great tool for thinking, we should use it! And even if some disagreements might be irresolvable – because there is no fully neutral way to adjudicate disputes – that doesn’t mean that reason can’t or shouldn’t be used for argument. I think my belief is a better one than yours, and I want to persuade you that adopting it would make you better off. Even if I can no longer say that my view simply represents The Way Things Really Are, or complain that if only you’d listen to objective reason you’d agree, persuading you will still mean providing you with reasons, and if I really want to persuade you, I should provide you with strong reasons.

One can be a postmodernist and recognize all of these things. But, a postmodernist might say, reason is probably not good for the things Berlin writes about: leading all correct reasoners to the same once-and-for-all answers, and creating consensus around what those answers are. Nor is reason good at seeing the world objectively. Surely, it can be used to detect some of our own biases, but since reason is as much a part of us as our biases are, it can’t detect those biases it can’t be aware of.

I’ll close with a quote that philosopher Stephen Hicks wrongly attributes to Foucault in his book Explaining Postmodernism. Even though the author was really philosopher Todd May, May is describing what I think is an accurate read of Foucault: “it is meaningless to speak in the name of — or against — Reason, Truth, or Knowledge.” Well, that sounds ominous, doesn’t it? Okay, we can hand it to Foucault (or May) that it is meaningless to speak against reason, but see, he wants to end all talk in defense of it, too! In context, however, Foucault (or May) is saying that “There is no Reason; there are rationalities.” It is not that we should banish all attempts to give and listen to reason. It is that when we do, we are always working within one of many possible sets of rules (such rules as what types of arguments are and are not acceptable, and what will and won’t be deemed convincing).

Whether you agree with the postmodernists on this is beside my point. My point is that this postmodernist vision would only undermine reason if you believe that reason is either the type of thing Berlin describes or nothing at all. Either the hammer works as a back scratcher and a saw, or it is nothing. By my interpretation, postmodernism isn’t against reason any more than our fictitious truth-in-advertising campaign is against hammers. Postmodernists are simply trying to give us a better understanding of what we should and shouldn’t realistically expect reason to do. Moreover, I wonder if tempering our expectations in this way might actually help us appreciate reason more: a hammer will likely be better appreciated if one doesn’t buy it expecting it to be a good saw.

Kevin Currie-Knight is a Teaching Associate Professor in East Carolina University’s College of Education. His teaching and research focus on the philosophy and history of US education. His more popular writings – on a range of issues from self-directed education to philosophy – have appeared in venues like Tipping Points Magazine and the Foundation for Economic Education.


by Nathan Eckstrand

The “climate kids” strike is inspiring. At an age when students are traditionally focused on studies, relationships, and hobbies, these teenagers are advocating for a solution to perhaps our most intransigent problem. Their words are equally encouraging. 16-year-old Nobel Peace Prize nominee Greta Thunberg describes the situation by saying, “There is a crisis in front of us that we have to live with, that we will have to live with for all our lives, our children, our grandchildren and all future generations…We won’t accept that, we won’t let that happen and that’s why we go on strike. We are on strike because we do want a future, we will carry on.” The world’s children say they won’t accept the future the adults are currently leaving them, and they are right to do so. It should shame all of us that we knowingly created a deadlier world for them and did relatively little to fix it.

Their refusal to stay silent compels me to do the same in support of a similar issue: the tragic state of the humanities. For decades they have been shrinking throughout parts of the world, quite drastically so since the Great Recession. While we should not conflate an issue that poses an existential threat to the earth with the loss of institutions, practices, and ideas that help us understand the human experience, neither should we avoid clearly stating what we are losing. As philosophy, history, theology, languages, classics, linguistics, and art disappear from society, we give up our soul—the very things that make us who we are. Without philosophy we lose our ideas; without history, our past; without theology, our spirituality; without English, our communication; and so on. Ending the humanities is the death of what it means to be human.

On a basic level, the value of the humanities is undeniable. Anyone who enjoys reading a book, appreciates good speaking, is fascinated by new ideas, or treasures knowing their ancestry benefits from the humanities. But the institution of the humanities adds to society as well. Humanities courses are among the most common courses taken at college, and several polls show certain humanities fields are among the most popular college majors (one site says the humanities, as a group, are more popular than business, nursing, and health professions). During times of middle-class economic growth, humanities class enrollment normally increases, implying that when economic security is not an issue, people desire a humanities education. Additionally, research shows that humanities undergraduates feel more support from their mentors than others, that the arts produce economic wealth, that the public appreciates the qualities art cultivates, that critical thinking is an essential part of every college’s pedagogical model, and that humanities majors do very well in the workforce (in part because employers prize those trained in it).

The damage done when the humanities disappear is stark. People aren’t trained in critical thinking, communication, and basic facts about society. Rational discussion based on a common understanding of the world vanishes, replaced by dogma and antagonism.[1] Despite this near universal acclaim for the work the humanities do, the budgets for the National Endowment for the Humanities and the National Endowment for the Arts have been slashed numerous times over the last few decades (and Trump’s suggested budget eliminates them entirely), though they have risen slightly in the last few years. Academic jobs have disappeared across the board while more PhDs are leaving academia entirely. Multiple articles report on the attempts of governors, presidents, and other administrators to consolidate or completely slash humanities departments (two of the most recent examples are the University of Wisconsin-Stevens Point’s attempt to drop 13 majors, causing protests that led to a complete abandonment of the cuts, and the University of Tulsa’s proposed elimination of the philosophy and religion departments). The only way to square these two facts is to realize just how much the adjunct crisis has affected higher education, which, as data shows, harms both students and professors. Part-time faculty have become more common than tenured, tenure-track, or full-time non-tenure-track faculty, a fact made even more disturbing when you note the disproportionate number of women and people of color occupying those positions. As the American Association of University Professors notes, this trend threatens academic freedom, a bedrock of the educational system—and democracy—for generations.

There are similar problems abroad. Arts and humanities graduates in Britain are more likely than any other type to be under-using their degree. Last March, the UK government suggested a two-tiered system for funding different degree programs—STEM programs on one tier, and arts and humanities on another—that would see the latter lose a significant amount of government money and shrink even further. The amount of money the UK’s Arts and Humanities Research Council spends each year has been relatively stable for about a decade, though inflation means this money does less now than it could previously. And the UK’s Education Secretary recently called for an end to “low value” degrees (defined as any degree where students aren’t making enough in five years to begin loan repayments). While perhaps not targeting the humanities directly, this final policy makes education serve the needs of business alone, and it’s questionable whether many humanities degrees would survive this culling.

Many schools in Russia are seeing significant cuts in both faculty and budgets, and the Russian State University for the Humanities has lost many professors and administrators. Brazil’s new president Jair Bolsonaro is trying to eliminate philosophy and sociology departments altogether because of the “Marxist rubbish” they teach.  Combine these facts with the numerous examples of states persecuting humanities scholars for their speech or work, and it’s clear the problem is not limited to one country.

The near universal consensus that the humanities should play a role in society, accompanied by their continuing atrophy, would at first glance seem to provide us with one of those rare easy-to-solve problems. If most people want the humanities and recognize they are in crisis, it should be easy to muster the necessary will to reinvest in them. Unfortunately, several factors currently prevent this. The first and most obvious is the neoliberal economy that demands liberal arts departments be financially sustainable. The argument that there are long-term and not directly monetizable values that the humanities provide falls on deaf ears, especially when the board of trustees or other investors compel schools to focus on the bottom line. Another problem is that helping the humanities takes a backseat to other troubles. The suffering of underpaid and overworked teachers seems minor next to the world’s worst humanitarian crisis, big data’s invasions of privacy, and environmental threats. But is it wise to separate these problems? Fixing our broken politics, society, and environment requires communication, reasoned argumentation, and intellectual thought—the very things the humanities cultivate. The best path forward is not always the most obvious. While fixing the humanities is not a panacea, it may be a necessary condition for solving our other problems.

The third issue is that while there is consensus that the crisis should be solved, there is none about the form a solution should take. The humanities are criticized for their lack of gender and racial representation (among other things), for being too political, and for not serving the vocational needs of society. Groups articulating each view often disagree with the other views, leading to fights about the best way forward. Some would prefer that the humanities fail rather than that another perspective succeed. Though the humanities aren’t perfect, they are undoubtedly better to have—even when imperfect—than to do without. The humanities are fundamental for democracy, as citizens must use them to do their civic duties. Even if the humanities aren’t configured as we desire, they should be given as much consideration as we give to other democratic practices.

Some say the humanities aren’t disappearing, only settling into a new role. Advocating for them is unnecessary, since if we wait they will find a more appropriate place. The naiveté of this belief is revealed by history. The desires that produce the humanities may not disappear, but we can still lose something vital. These desires didn’t prevent Europe from falling into the Dark Ages for centuries, nor did they prevent many instances of human oppression. While the humanities have at times contributed to subjugation, abandoning them increases the likelihood we will revisit, rather than learn from, such negative experiences. When you include climate change, totalitarianism, influenza outbreaks, and more, it becomes clear we cannot trust natural or social systems to save us from disaster. And while other disasters can be observed as they happen, we may not realize what losing the humanities costs until it’s too late.

Suggestions for improvement—like recognizing that non-tenure-track positions aren’t going to disappear, understanding the power structure of your university, and organizing adjuncts on campus—are helpful but inadequate. The crisis of the humanities is global, and linked to many highly interconnected systems. Working on the local level with one’s administration, department, and colleagues will only be marginally effective, for it doesn’t raise the issue with those who need to hear about it: the public and both political and economic decision makers. These groups need to know what they are losing and why it should be preserved. College administrators must also be pressured to support the humanities. Often, they are compelled by the college’s investors to produce profitability at the expense of a well-rounded education. Personal experience has taught me that while university officials speak glowingly of the liberal arts and how they changed their lives, they follow that up by saying economic demands are forcing cuts and layoffs. Similarly, politicians I have tried to speak with (as part of Humanities Advocacy Day) fail to show up, instead sending members of their staff to express concern without making any promises. While politicians cannot attend every meeting or specialize in every issue (and some are concerned about the humanities), the haste and attention which some issues get, compared with that which the humanities receive, indicate a set of priorities that renders this problem perpetually unaddressed. Clearly the urgency of this crisis must be conveyed.

As humanities teachers lack access to the mass media, the best way to communicate this concern may be through a strike. Though many questions need to be discussed (Who would strike? What would the demand be? To whom would it be issued?), it may be the best option. The loss of education that will occur if this happens, while regrettable, is a small price to pay to ensure the humanities will remain. It is time to say that we’ve had enough of seeing friends and colleagues leave behind studies they love; of signing contracts with so many responsibilities a long-term career is impossible; of reading about teachers being unable to afford basic necessities; of families being broken up because someone is following a calling that makes staying in one place difficult. Perhaps most of all, we must reject a system that refuses to allow students to explore their passion. It is time to see what a society lacking the humanities looks like, and ask whether we find it tolerable. Now is a good time; teacher strikes are already happening in many parts of the US and the UK. And strikes have been effective in many parts of the world.[2] While it is too early to tell if the Children’s Climate Strike will be effective, the fact that they’ve received international attention is an achievement in itself. If we are not willing to pay the price necessary to ensure the humanities’ success, then this crisis will become one our children, grandchildren, and future descendants will have to live with. And to paraphrase Greta Thunberg, that is something we must not accept.

Notes

[1] Many have realized this fact, as shown by articles in The Atlantic, The Telegraph, The Washington Post, Times Higher Education, Huffington Post, Forbes, The New Republic, The Guardian, and The New York Times. Even more conservative magazines like The National Review and Weekly Standard don’t advocate for ending the humanities, only de-politicizing them.

[2] As exemplified by the US Coal Strike of 1902, the 1946 Montreal Cottons Strike, the 1946 Pilbara strike in Australia, the US Steel Strike of 1959, the UK Miners’ Strikes of 1972 and 1974, the UPS Strike of 1997, and the 2007 South African public servants’ strike.

Nathan Eckstrand is a Visiting Assistant Professor at Fort Hays State University and editor of the APA Blog. His primary research project at the moment is the question of how to conceive of revolution and resistance without making revolution advocate for one type of political state.

This piece was originally posted at the London School of Economics’ Higher Education Blog and the Blog of the APA.


by Daniel A. Kaufman

___

After bearing witness to the train wreck that was the “White Paper on Publication Ethics,” I was convinced that woke philosophy couldn’t possibly get any worse.  I was wrong, of course, and in hindsight, it was foolish of me even to have imagined such a thing.  After all, I had similar thoughts after reporting on Justin Weinberg’s essay celebrating the Daily Nous’s anniversary, which was separated from the White Paper only by a few months.  I won’t make the same mistake again.  When I describe what is happening now as being “peak,” my point is not to suggest that there will be less of it in the future – in fact, I’m absolutely certain that there will be much more of it – but rather that we’ve now seen everything that woke philosophy’s got.  Its hand is revealed.  Its tactics are exposed.  Its shills are clearly identified.  Its self-serving logic is out in plain sight. (There is still one thing I remain unsure of, about which more later.)  The only thing left now is for the rest of philosophy to decide what to do about it, and I would be lying if I said I was hopeful, given that woke philosophy already has succeeded in capturing our profession’s primary institutions, something I also have spoken and written about at some length. (1)

So what’s happening now?  Woke philosophy’s most recent moves can be found in an “open letter” to the profession, published anonymously (by “t-philosopher”) and entitled “I am leaving academic philosophy because of its transphobia problem,” as well as a lengthy essay, written by none other than our intrepid Weinberg, “Trans Women and Philosophy: Learning from Recent Events” and published at the Daily Nous. The two pieces are an exquisite pairing: T-philosopher is wounded and empowered and terrified and accusatory and defeated and defiant, all at once – sometimes, even in the same sentence – and then, suddenly, thankfully, as if out of a puff of smoke, Weinberg appears on the scene to help us sort it out so that we all might become Better People.

T-philosopher announces to the profession – all of it – that she is leaving because of philosophy’s “transphobia” and the terrible harm she has suffered at the hands of “bigots” like Kathleen Stock (who else?), whose presence renders her no longer “safe in professional settings.” Then comes the inevitable “call to action”: Journals must refuse to publish articles critical of gender identity theory and activism; conferences must no-platform philosophers seeking to present gender critical arguments; gender critical thinkers must be barred from public discourse, whether on blogs, discussion boards, social media sites, comments sections, or other online venues; and anyone and everyone who is going to engage in both professional and public philosophical discourse on the subject had better accept that “any trans discourse that does not proceed from this initial assumption — that trans people are the gender that they say they are — is oppressive, regressive, and harmful” and that “trans discourse that does not proceed with a substantial amount of care at amplifying trans voices and understanding the trans experience should not exist.”

If you’ve raised a teenager, as my wife Nancy and I have done, you’ll immediately recognize this as very typically adolescent behavior. The clueless narcissism (“to the academic philosophy community…”); the catastrophizing (I know Kathleen Stock. You can watch video of Kathleen Stock.  One cannot possibly be “unsafe” because of Kathleen Stock); the empty (because toothless) demands; the emotional blackmail (You see what you’re making me do!); even the proverbial running away from home (I’m leaving and never coming back!).  It’s all there.

Weinberg, of course, does as only Weinberg can do, and I’ve written enough about him that it’s unnecessary to do so again in any detail.  Suffice it to say that Weinbergism is alive and well and holding court: the phony even-handedness (a not-very-effective trick he employs is to repeatedly suggest that those on his side of the issue are likely as dismayed by what he has said as his opponents); the credulous embrace of the testimony of those with whom he is already sympathetic (“Reader, what do you do when you are confronted with the anguish of another person?”); the breathtaking hypocrisy (“Be attentive to hostile rhetoric in work you are considering hosting or publishing”); the false modesty (“Yes, that’s my name up there. No, I’m not going to defend myself in this post. That’s not the point of this”); the obligatory swipe at Brian Leiter, with the equally obligatory misrepresentation of things that anyone with a pulse, two fingers, and an internet connection can check for themselves (“a well-known philosophy-blogger’s obsession with belittling graduate students who use Twitter to discuss trans issues” (2)); the by-now legendary lack of self-awareness (“Note the venues. Much of the trans-exclusionary writing by philosophers that has fueled recent controversies has been self-published (e.g., at Medium) by philosopher-activists…,” published on Weinberg’s personal site, in an essay about a politically-soaked letter published on Medium).  It’s classic Weinberg; Weinberg as only Weinberg can be.

The essential thing to realize is that woke philosophy isn’t philosophy at all, but politics by another name.  Philosophy, for the most part, is conducted by way of arguments and aspires to relative dispassion and (in the modern era) is largely an intellectual endeavor, the purpose of which is to raise tough, serious questions with regard to a highly diverse set of topics. Its mode is essentially critical.  The aim is not to win or to feel good about oneself or to obtain a particular policy outcome or to identify and punish wrongdoers of one stripe or another.  These are the aims of social and political activism and agitprop. And yet, this is what woke philosophy is all about: specifically, the advancement and establishment of contemporary identitarian politics within the profession and the society at large. It’s what Rachel McKinnon is doing when she orchestrates Twitter flame-wars against Martina Navratilova and goes after her sponsors and those of other gender-critical athletes; it’s what the signatories to the “open letter” attacking Rebecca Tuvel are doing when they demand that Hypatia retract her already-published article (3); and it’s what t-philosopher is doing, when she slanders Kathleen Stock and Brian Leiter and advocates for the censoring and de-platforming of those in the profession who are not on board with (in this case trans) identitarian politics.

What I’m not sure of is whether woke philosophy represents the tip of a premeditated political spear – whether the hyperventilating and censoring and character assassination and attacks on people’s livelihoods, etc., are a tactic – or whether it is the expression of an essentially “adolescent Id”; a function of arrested psychological development on the part of a group of philosophers, almost all of whom, it should be noted, are part of the Millennial and I-generations, the unending juvenility and emotional brittleness of so many of whom has been discussed at length by social scientists and cultural critics. (4) There is much to be said for this second account, insofar as it avoids (a usually fallacious) conspiratorialism and explains so much of the infantile behavior of woke philosophers over the last several years, one of the more memorable examples of which occurred during l’affaire Swinburne, when a notable woke philosopher told an 80-plus-year-old Christian philosopher (who, to her shock, had said Christian things at a Christian conference) to “suck my giant queer cock.” (When I was in graduate school, it was quite clear that Jerrys Fodor and Katz couldn’t stand one another, but even given the copious amount of mind-altering substances we’d all ingested over the years, none of us ever could have hallucinated a scenario in which we would hear this sort of talk out of them.)

Woke philosophy is reminiscent of those histrionic, scripted WWF feuds I used to watch on WPIX in the early 1980s, by which I mean that it’s such transparently melodramatic bullshit, performed by such a manifestly absurd group of clowns, that a six-year-old should be able to see through it. But this is academic philosophy in 2019, where it seems that there is nothing so stupid or disingenuous or juvenile that a sizable portion of the current members of a discipline that once counted the likes of Wittgenstein, Quine, and Rawls among its leading lights won’t embrace it.

Notes

(1) https://meaningoflife.tv/videos/41435

https://theelectricagora.com/2018/10/10/time-for-a-change/

https://theelectricagora.com/2016/12/02/provocations-8/

(2)  The graduate students of whom Leiter was critical had been part of the vicious online attacks on Kathleen Stock.

https://leiterreports.typepad.com/blog/2018/09/complete-and-utter-dickhead-watch-nathan-oseroff-edition.html

(3) https://en.wikipedia.org/wiki/Hypatia_transracialism_controversy

(4)  I’ve both written and talked about this phenomenon quite a bit.

https://theelectricagora.com/2019/02/12/adolescent-politics/

https://bloggingheads.tv/videos/55928


by Mark English

Lee Smolin is a respected physicist who has always had strong philosophical interests and convictions. He recently articulated his realist views in a public lecture. What follows are my notes on his lecture mixed in with a few comments and observations.

Smolin is strongly opposed to postmodernists who reject the notion of objective truth and who see reality as a social or historical construct. He draws parallels between the anti-realism of postmodernists and the anti-realism of the generation of physicists who, from the late 1920s, embraced and promoted quantum mechanics (QM) and the so-called Copenhagen interpretation.

Because the measurement process is built into QM, Smolin claims (as Einstein did) that QM is an incomplete theory and so, in a real sense, wrong. It is not just that a particular interpretation of the theory is wrong. The basic structure of the theory itself is flawed because it has at its heart two laws which are in tension with one another.

This, then, is the key problem with QM as Smolin sees it. It is often called the “measurement problem”, and it relates specifically to the notion of wave-particle duality and the two laws or rules that QM provides to describe how things change over time. Rule 1 or Law 1 says, in effect, that (except during a measurement) the wave evolves smoothly and deterministically (somewhat like a wave on water). This allows the system to simultaneously explore alternative histories which lead to different outcomes all of which are represented by the smooth flow of the wave. Rule 1 applies when you are not making a measurement. Rule 2 applies only when you make a measurement and it tells you, when you make a measurement of position, say, that the probability that you will find a particle in a particular position is correlated with the height of the wave at that position.

Smolin argues that the 2nd rule means that QM is not a realist theory. If we (or other observers) were not around, only Rule 1 would apply.
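
To make the two rules concrete, here they are in standard textbook notation. (This rendering is mine, not Smolin’s, and it states the second rule a bit more precisely than “height”: the Born rule ties the probability to the squared magnitude of the wave’s amplitude.)

\[
i\hbar \, \frac{\partial}{\partial t} \, \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle
\qquad \text{(Rule 1: smooth, deterministic evolution between measurements)}
\]

\[
P(x) = \lvert \psi(x) \rvert^{2}
\qquad \text{(Rule 2: probability of finding the particle at position } x \text{ upon measurement)}
\]

The tension is visible in the formalism itself: Rule 1 is continuous and observer-free, while Rule 2 is invoked only when a “measurement” occurs, and the theory never says what physically counts as one. That is the measurement problem in a nutshell.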

One of the main developers of the theory, Erwin Schrödinger, was uncomfortable with the theory and its implications. He crystallized his doubts in the form of the famous live/dead cat-in-the-box thought experiment, which Smolin explains in his talk (starting at 39:54).

Niels Bohr, in contrast to Schrödinger, embraced the paradoxical nature of QM, partly because it fitted in with ideas which he had developed previously. The notion of wave-particle duality predated QM by many years and marked the beginning of the quantum revolution, and Bohr’s notion (or philosophy) of complementarity was inspired by these ideas and by the observed behavior of elementary particles. Sometimes such particles seem to behave as if they are waves, sometimes as if they are particles and, crucially, how they are observed to behave depends on the details of how we go about observing them.

Smolin takes an unequivocally negative view of Bohr’s metaphysical views as well as of the views of Bohr’s protégé, Werner Heisenberg. Here he is on the former:

Now, of course, Bohr had a lot to say about things being complementary and in tension all the time and you always have to have two or more incompatible viewpoints at the same time to understand anything, and that especially goes […] for knowledge and truth and beauty. And he got off on the Kabbalah, of course. Anyway [long pause] … it doesn’t cut it with me.

Smolin’s claims are more than just an expression of a particular metaphysical disposition, however. It is generally accepted that QM doesn’t “make sense.”

“I think I can safely say,” Richard Feynman ventured, “that nobody understands quantum mechanics.” And according to a long line of physicists, from Einstein and Schrödinger through to contemporary figures such as Smolin and Roger Penrose, the reason QM doesn’t make sense is because it is incomplete. Or, to put it in a stronger way, because it is wrong.

How do we know that it is incomplete? Well, apart from the problems mentioned above, there is the small matter of gravity. QM in its current form can’t deal satisfactorily with gravity.

But Smolin is convinced that its weirdness and incompatibility with realism also count against it. QM is not consistent with realism because the properties it uses to describe atoms depend on us to prepare and measure them.

“A complete theory,” insists Smolin, “should describe what is happening in each individual process, independent of our knowledge or beliefs or interventions or interactions with the system.” He is interested in understanding “how nature is in our absence.” After all, we were not around for most of the history of the universe.

Smolin defines realism as the view that nature exists independently of our knowledge and beliefs about it; and that the properties of systems in nature can be characterized and understood independently of our existence and manipulation. Our measuring, etc. “should not play a role in what the atoms and elementary particles are doing.” What he means, I think, is that our interventions should not play an essential or crucial role in the descriptions and explanations that our theories provide.

“A theory can be called realist,” Smolin explains, “if it speaks in terms of properties whose values do not require us to interact with the system. We call such properties ‘beables’.”

By contrast, a theory whose properties depend on us interacting with a system is called operational. Such properties are called “observables.”

Observables are defined as a response to our intervention. Beables, by contrast, are not defined as a response to our intervention. They are just there, it seems.

But how do we get to know the values of these properties unless we interact with the system? Also, there is the framework question. Properties and values arguably only exist within the context of a particular perspective or theory. In order for properties and values to be properties and values, we need to conceptualize them as such. I will ignore this broader question, however, and focus on what Smolin means by interaction.

Even ordinary observations (like seeing or hearing or recording something electronically) involve us or our measuring devices interacting in some way with the system we/they are observing/recording. Smolin appears not to be concerned with such interactions here because, although the nature of the observer’s perceptual apparatus and/or the nature and settings of the equipment being employed determine or pick out what is and what is not being observed or recorded, the results are otherwise quite independent. The type of datum is determined by the nature of the observer or the observing or recording process, but not the data themselves.

In the case of experiments with elementary particles, however, the situation is subtly – and sometimes dramatically – different. Interactions are such that they determine, or play an active role in determining, the values in question.

Arguably, ordinary cases of measurement and observation do not pose problems for the commonsense realist. But if our observations alter in a material way whatever it is which is being observed – as appears to be the case in the quantum realm – problems arise.

Operationalism was first defined by the physicist Percy Bridgman (1882–1961). The book in which he elaborated his views, The Logic of Modern Physics, was published in 1927, the same year QM was put into definitive form. Bridgman’s philosophical approach has much in common with the instrumentalism which characterized the views of the majority of thinkers (physicists, logicians, philosophers) associated with logical positivism. Bridgman was in fact personally involved in the activities of the Vienna Circle.

It was the physicist John Bell who introduced the concept of beables. According to Bell – and according to Smolin – it should be possible to say what is rather than merely what is observed. This is all very well but – quite apart from philosophical arguments questioning the notion of a noumenal world – experimental results continue to come out against the realists. Experiments with entangled particles, for example, seem to exclude the possibility of any form of local realism. Some form of non-local realism is still very possible, however.

Smolin is at his weakest when he talks history. The story he tells about the generation of physicists who grew up during the Great War is hard to swallow. It seems that they were predisposed to anti-realism by virtue of the unusual circumstances of their early lives. They had witnessed at an impressionable age the destruction of the social optimism of the 19th century, and so were skeptical of rationality and optimism and progress. They had lost older brothers and cousins and fathers and uncles and had “nobody above them …” No wonder they didn’t believe that elementary particles etc. have properties which are independent of our interactions with them!

You would think that the fact that Niels Bohr, the father of the Copenhagen interpretation, was not a part of this generation would sink Smolin’s generational explanation from the outset. As would even a cursory knowledge of the history of 19th-century thought, which is shot through with various forms of idealism, anti-realism and radical empiricism. The phenomenalist philosophy of science of Ernst Mach (1838–1916) is a case in point. At the end of the 19th century, Mach articulated ideas which were later picked up by the thinkers Smolin is criticizing.

Smolin explicitly recognizes that Bohr’s main ideas were formed well before the development of quantum mechanics and that he was influenced by 19th century thinkers, including by Kierkegaard, whom Smolin clearly does not hold in high esteem.

Smolin quotes some of Bohr’s early claims:

“Nothing exists until it is measured.”

“When we measure something we are forcing an undetermined, undefined world to assume an experimental value. We are not measuring the world, we are creating it.”

“Everything we call real is made of things that cannot be regarded as real.”

Heisenberg followed the same general approach:

“The atoms or elementary particles themselves are not real: they form a world of potentialities or possibilities rather than one of things or facts.”

“What we observe is not nature itself but nature exposed to our method of questioning.”

Bohr said: “We must be clear that when it comes to atoms, language can be used only as in poetry. The poet […] is not nearly so concerned [with] describing facts as [with] creating images and establishing mental connections.” What he meant, presumably, is that the normal referential function of natural language cannot be used in relation to the quantum world, and anything we say about that world (using natural language) will necessarily be a creative construct shot through with metaphor and paradox.

Maybe so. Or maybe not. It is not something we can know a priori. It all depends on how our models develop and on the results of experiments. So far, it must be said, QM remains intact and Bohr’s view still looks plausible.


by Daniel A. Kaufman

My dialogue with Robert Gressis, on his essay, “Is Philosophy OK?” and on the recent “white paper” on publication ethics.  First aired on Sophia, MeaningofLife.TV, May 30, 2019.

Is Philosophy OK? | Daniel Kaufman & Robert Gressis [Sophia] - YouTube

by Paul Austin Murphy

___

There certainly is a specific prose style when it comes to much analytic philosophy. Of course there’s a general academic prose style (or prose styles) too. The analytic philosophy prose style can therefore be taken to be a variation on that.

Academics will of course say – and justifiably so – that this style is required for reasons of objectivity, clarity, the formal requirements of academic research, stylistic uniformity and whatever. However, there’s clearly more to it than that.

One qualification I’ll make is that the better known (or even famous) analytic philosophers become, the more likely they are to take liberties with that prose style. In parallel, postgraduates and young professional analytic philosophers will take the fewest liberties with it. (Possibly that’s a good thing too.) This basically means that if you’ve gone through the academic mill and proved your credentials, then you can relax a little in terms of your prose style.

One point I’d like to stress is how the academic style is used to hide the philosophical, subjective and even political biases of the academics concerned. That means if you employ the right self-consciously dry academic style, then very few on the outside will detect any (obvious) biases. Indeed such academics are often seen by many laypersons as algorithmic machines devoted to discovering the Truth.

As for the philosophy under the prose.

The Cambridge philosopher Hugh Mellor (D.H. Mellor) once described Jacques Derrida’s work as “trivial” and “willfully obscure”. Mellor did so in his attempt to stop Derrida receiving an Honorary Degree from the University of Cambridge. Of course a lot of analytic philosophy is also trivial. It’s also the case that some analytic philosophers hide that triviality under prose that is wilfully obscure. Having said that, such analytic philosophy won’t of course be trivial or wilfully obscure in the same way in which Derrida’s work is (if it is). That is, it won’t be poetic, vague and oracular. Instead, analytic triviality is often hidden within a forest of jargon, schema, symbolic letters, footnotes, references, “backward Es” (to quote Hilary Putnam), terms like ceteris paribus and the like. In other words, basic analytic academic prose will be used to hide the trivialities and increase the obscurities. So, again, in Derrida’s case it’s a different kind of obscurity. (Though, in the continental tradition, it can be equally academic.)

How To Write an Analytic Philosophy Paper

What postgraduates of analytic philosophy tend to do when they write a paper is focus on an extremely narrow problem and on an extremely narrow take on that extremely narrow problem. Then they’ll read everything that’s been written on that subject over the last five or ten years (at least by the big or fashionable players). They’ll then make notes on – and collect quotes from – what they’ve read. Thus the resultant paper will also be chock-a-block with references, footnotes, etc. (though not necessarily chock-a-block with quotes). It will be written in as academic (or dry) a style as possible, indeed, self-consciously so. That will mean that there’s often a gratuitous use of symbols, lots of numbered points, schema, and other stylistic gimmicks that sometimes have the effect of making it look like a physics paper.

In crude and simple terms, what often happens is that analytic postgraduates attempt to write like the older academics and contemporary philosophers they've only just read. In that sense, they're ingratiating themselves with a professional academic tribe.

In terms specifically of references.

Take William G. Lycan’s medium-length paper “The Continuity of Levels of Nature,” which includes fifty-two references to other philosophers’ texts. We can also cite Jaegwon Kim’s “Supervenience as a Philosophical Concept,” which has fifty-one such references.

Then there’s the sad case of footnotes.

Footnotes often make analytic-philosophy papers very difficult to read because, on any given page, they may take up more space than the main text. (Click this link for an example of what I'm talking about.) In addition, if the reader were to read all the footnotes as and when they occur, then he'd lose the "narrative thread" of the central text. (For that reason, notes are best placed at the end.) Doesn't this excessive use of long and many footnotes verge on academic exhibitionism?

Postgrad students will also focus on the fashionable/up-to-date issues or problems and read the fashionable/up-to-date papers on those issues or problems – even if such things are simply new stylistic versions of what older philosophers have already said. (Though with endless examples of Derrida's "sign substitutions": that is, when an old word/concept is given a new name.) Indeed it has been said (e.g., by A.J. Ayer way back in the 1950s) that many postgrad students rarely read anything older than twenty years. And many postgrads are so convinced that what is new is always better than what is old that they don't feel at all guilty about their fixation with the very recent academic past.

In terms of the philosophizing itself.

When a postgraduate student (of analytic philosophy) thinks about the nature of some aspect of the philosophy of mind (to take an arbitrary example), primarily all he does is read and think about what Philosopher X and Philosopher Y (usually very recent philosophers) have said about that aspect of the philosophy of mind. This often means that he may well be caught in an intertextual trap. (Though, of course, it's unlikely that any student would rely on only two philosophers of mind.) Indeed all the student's responses, reactions and commentaries on that aspect of the philosophy of mind will also be largely intertextual in nature.

So in order to get a grip on why I've used the word "intertextual" (a word coined by the Bulgarian-French semiotician and psychoanalyst Julia Kristeva), here's a passage from the French literary theorist and semiotician Roland Barthes:

Any text is a new tissue of past citations. Bits of code, formulae, rhythmic models, fragments of social languages, etc. pass into the text and are redistributed within it, for there is always language before and around the text. Intertextuality, the condition of any text whatsoever, cannot, of course, be reduced to a problem of sources or influences; the intertext is a general field of anonymous formulae whose origin can scarcely ever be located; of unconscious or automatic quotations, given without quotation marks.

Thus when students study philosophy at university, reading texts often seems far more important than independent thinking or reasoning. Indeed, isn't that called "research"?

On the other hand, many philosophers (or wannabe philosophers) would like to flatter themselves with the view that their own philosophical ideas have somehow occurred ex nihilo. Yet genuine ex nihilo philosophical thought is as unlikely as ex nihilo mental volition or action (i.e., what philosophers call “origination”). 

In Praise of Style and Clarity

The analytic philosopher Simon Blackburn has little time for those philosophers who glory in the complexity of their own philosophical writings. Blackburn believes that philosophy's "difficulties were compounded by a certain pride in its difficulty." It's ironic, then, that some of the great philosophers were also good writers. Blackburn himself cites Bertrand Russell, Gilbert Ryle and J.L. Austin. (I strongly disagree with Blackburn's final choice, but there you go.) I would also cite Plato, Descartes, Berkeley, Hume, Schopenhauer, and others. As for the 20th century: Hilary Putnam, Richard Rorty, Thomas Nagel, Jaegwon Kim, John Searle and various other American analytic philosophers (as opposed to English ones).

Bad writing, technicality and sheer pretentiousness, however, shouldn't imply that all work on the difficult minutiae of philosophy should be shunned or limited in any way. Some papers are bound to be complex and difficult, not necessarily because of the subject's difficulty, but because the issues will be technical in nature and employ a number of unfamiliar terms. Oftentimes, however, technical terms are used gratuitously, though that depends on the philosopher concerned.

Blackburn makes some other interesting points about philosophical prose – at least in its bad guise. He quotes John Searle: “If you can’t say it clearly you don’t understand it yourself.”

This position is backed up by the science writer Philip Ball (who writes about scientists, not philosophers):

When someone explains something in a complicated way, it’s often a sign that they don’t really understand it. A popular maxim in science used to be that you can’t claim to understand your subject until you can explain it to your grandmother.

Perhaps this is where Searle got his view from.

So when we criticize ourselves for not understanding a particular analytic philosopher’s prose, we should consider whether, perhaps, he may not have understood his own prose. Or, perhaps, he didn’t understand the philosophical ideas he was trying to express. Or perhaps it’s just a matter of the philosopher concerned being a poor writer, regardless of the complexity of his ideas. Or maybe he’s just pretentious!

Certainly, such people don’t follow the Quintilian dictum (as quoted by Blackburn): “Do not write so that you can be understood, but so that you cannot be misunderstood.”

Of course, literally speaking, if one writes "so that you cannot be misunderstood," then one must also be writing "so that you can be understood." The two things go together. Despite that, the philosopher Bernard Williams dismissed the dictum as an impossible ideal. As Blackburn recounts:

Williams snapped at that and said it was “an impossible ideal. You can always be misunderstood,” and of course he’s right. But I think the point of Quintilian’s remark isn’t ‘write so as to avoid any possible misunderstanding’ but to remember that it’s difficult and that it’s your job to make it as easy as you can.

It’s interesting to note here that Williams’ impossible-ideal argument can also be used in favor of the idea that there will always be someone in one’s own culture (or even profession) – no matter how rational – who’ll misinterpret at least something you write or say. Indeed perhaps everyone who reads or listens to you will misinterpret you in some small or large way. The idea of a perfect communication of a complete and perfect meaning to a perfect interpreter seems to be a ridiculous ideal. It seems to be almost – or even literally – impossible… and for so many reasons.

So philosophers will always be “misunderstood” by someone in some way. Indeed each person will misunderstand a philosopher in some way, whether that way is large or small. All we have left (as writers or philosophers) is to realise that “it’s [our] job to make it as easy as [we] can.” We can’t be expected to do more than this. We can’t guarantee the perfect communication of our ideas or the perfect understanding of our ideas by other people (as anyone who uses social media already knows). And even if we allow this slack, perhaps, in the end, it simply doesn’t matter that much because communication doesn’t require either completely determinate meanings or completely determinate interpretations. We seem to manage quite well in most situations without perfect languages or other philosophical ideals. So perhaps we can’t (to use a term from Derrida again) “mathematicise” meaning, interpretation and understanding.

Paul Austin Murphy is a writer on philosophy and politics, living in the north of England. He’s had pieces published by Philosophy Now, New English Review, Human Events, and others. His philosophy blog is called Paul Austin Murphy’s Philosophy and can be found here: http://paulaustinmurphypam.blogspot.com/


by Robert Gressis

Lately, I’ve been wondering whether it’s OK for me to be a philosophy professor.

You might wonder, “Why on earth should anyone wonder whether it’s OK to be a philosophy professor?” I have a simple argument. It goes like this:

The Conceptual Claim: Professors do three things as part of their jobs: produce research, teach, and provide service to their departments, colleges, or universities.

The Research Claim: I don’t produce valuable research, so I’m not making the world a better place with what I publish.

The Teaching Claim: My teaching probably doesn’t make much positive difference to most of my students.

The Administrative Claim: I can’t engage in the kind of administrative work that would make an important, positive difference to my university.

The Permissibility Claim: It’s OK to practice a profession only if it makes a positive difference (unless you have no realistic option but to be in that profession).

The Conclusion: Since, insofar as I'm a professor, I don't make much, if any, positive difference, I shouldn't remain a professor.

That’s the argument, in a nutshell. Let me provide some justification for each of the premises. Well, except for the conceptual claim. So far as I know, no one really contests what it is that professors do, so I’ll leave that undefended.

According to the research claim, I don’t produce valuable research. Is that true? Of course, it depends on what it takes for research to be valuable. But rather than present a disquisition about the nature of value or what it takes for research to count as valuable, I’ll just say this much to motivate the research claim: while it’s true that some scholars have read my research, to the best of my knowledge, my work has served merely as something that may have been of a little help to a few scholars of Kant. In other words, some philosophers who spend their time trying to interpret very precisely what Kant meant may have found some of what I have written on that subject to be a little helpful. If that’s what my life’s work amounts to, can you blame me for feeling like I haven’t really produced valuable research?

Things could change; I could write something that gets lots of attention and helps a wide variety of people think through or notice some difficult problem. Maybe. But that seems pretty unlikely to me. The safe bet is that that’s not the kind of research career I’m going to have.

What about the teaching claim? Why do I think that my teaching has been of little value to most of my students? Here I rely on Bryan Caplan’s book, The Case Against Education. In that book, Caplan cites and summarizes a lot of meta-studies from educational psychology about how much students learn during college. In a nutshell, the story is this: most students don’t learn much during college. Most students don’t retain most of what they learn for long. And most students aren’t able to transfer what they learn in the college classroom to non-college contexts.

I could present to you the studies and numbers that Caplan cites (would it make you feel better if I wrote "97%" or "74%" instead of "most"?), but I think just some reflection on the claims makes them prima facie plausible. If you're a professor at a state school like mine, how impressed are you by your non-majors' (or for that matter, your majors') understanding of the material you teach to them? If you teach, say, thirty-six students in a class, how many of them leave your introductory philosophy course understanding philosophy as well as you would like?

If you’re like me, the answer to the question I just asked is “between three and five.” OK, of those three to five, how much will they remember from that class a year later? How about two years later? Four years later? You get the drift.

Finally, even if some small cadre of students retain some valuable core of insights from their philosophy (or whatever) classes, how many of them are good at applying these insights to circumstances outside of the classroom? If you’re a professor, it may help to think of your colleagues: how many of them do you think are extremely good at critical thinking? If you’re like me, your answer will be “most of them, most of the time, except when it comes to particularly freighted ideological issues.” But here’s the thing: anyone who is a professor is, relative to the population of college students, extremely good at academics; moreover, they practice their profession every day for decades. So it’s no wonder that they’re good at critical thinking.

That’s the thing, though: our colleagues practice. How many of our students, even our very good students who retain a lot, practice what they’ve learned every day? In most jobs, they either don’t get the opportunity, or it’s actively discouraged. So, most of our students will forget most of what they learn and won’t be able to or won’t be allowed to apply what they’ve learned to their everyday life.

At this point you might wonder, “well, the problem, Rob, is that maybe you’re not a very good teacher.” For what it’s worth, I get very positive teaching evaluations and I spend a lot of time researching and practicing evidence-based pedagogy. I’ve read a fair number of books like Make It Stick, Cheating Lessons, Small Teaching, The Spark of Learning, What the Best College Teachers Do, and others, along with lots of articles from Teaching Philosophy. I also performed improvisational comedy for seven years, in Detroit, Ann Arbor, and New York City. I’m very comfortable talking to people, I’m funny, and I really want my students to learn.

But when I try these new techniques, many of my students don’t like them. This is not the kind of teaching they’re used to, and they rebel. Moreover, a lot of the best teaching techniques require a ton of grading and very quick feedback on my part. So, my students prefer me not to teach well, and after a while it’s just easier for me not to teach well.

This takes us to the administrative claim, for as Jason Brennan and Phil Magness document in their recent Cracks in the Ivory Tower, the incentives in higher education just don’t support good teaching or, for that matter, good administration. Administrators want to please their customers (students and parents), so they want to make things easier on students, which includes increasing their (students’ and administrators’) power over professors. The most plum university jobs go to those who produce the best research rather than do the best teaching, so professors are incentivized to focus on research rather than teaching well. And, plausibly, the reason why getting a college degree increases students’ income is not so much that it builds students’ skills, but rather that it signals to employers that the holder of the degree is smart enough, conscientious enough, and conformist enough to succeed in most jobs. Thus, most students want the degree more than building skills, so they’ll prefer an easy “A” that leaves no traces on their minds over a “C” that makes them into better thinkers.

I bring this up because the kind of administration I would want to do would be administration focused on disseminating good teaching and study techniques throughout my university. But no one is interested in that, and many people are positively opposed to it. After all, getting professors to employ better teaching techniques is not only directly opposed to their self-interest (which lies in producing research that makes them more valuable), but also intrudes on their autonomy. And not only would it make students have to spend more time on their studies, but it would also require them to really buy in to the idea that learning is more important than a grade. Finally, it would require lots of university-, college-, and department-wide assessment of the “value” that professors “add” to their students. But everyone hates assessment.

To wrap it up into a neat little package, it seems like by being a professor I’m wasting my time and my talents. But maybe that’s OK. After all, it’s not like I’m harming people by being a professor; I’m just not doing particularly much good. Isn’t it OK to do something that doesn’t make a difference, either positive or negative?

This takes us to the last claim of the argument, the permissibility claim. Recall that it goes like this: “It’s OK to practice a profession only if it makes a positive difference (unless you have no realistic option but to be in that profession).” I include that parenthetical because some people have no choice – the only professions they can do are ones that are either actively harmful or not beneficial. About those unfortunates, I have no gripes. Although I get the sense that it’s no longer very fashionable, I still accept the claim that “ought implies can” and its contrapositive, “not can implies not ought.”

But what about me? I think it’s plausible that I could do something else for a living, something that makes more of a positive difference. I don’t mean the Peter Singer style “become an investment banker and then donate most of your money to charity” but simply some profession I enjoy that does more good than being a philosophy professor. I honestly don’t know what that is, but I would bet it’s out there.

Still, assuming that there is a job out there that I would like, and in which I could make more of a positive difference, is it immoral for me to stay in this job? Despite all my gripes, I do enjoy it – I enjoy doing my research even though few read it because it’s enjoyable for me to work my thoughts out. And despite the seeming impotence of my teaching, there are occasions where students swear to me that I’ve made a positive difference in their lives (though I admit that it’s rare that I believe them). So, even though what I do for a living may not be great (and how many people can truly say that they do something great?), surely it’s at least OK?

But I worry about two things. First, it may be that I'm harming my students in ways I don't see. Laugh if you will, but there is one professor who is clearly making a difference right now: Jordan Peterson. And surely, at least some academics think he is doing exactly what Socrates was accused of doing, namely, corrupting the youth. Perhaps I am doing the same, albeit corrupting some students only about as much as I elevate others.

Second, if academia is as bad as it sometimes seems (to me) that it is, is it wrong for me to be a part of this whole system?

I haven’t come to a conclusion about all this. The only thing I’ve resolved to do as a result of these considerations is take a deep dive into the educational psychology literature; perhaps Caplan has presented a cherry-picked version of the findings. It could be that I’ll learn that things aren’t nearly so bleak as Caplan has made them out to be.

But if they are as bad as they seem, then I should probably stop thinking about this and do something instead.

Robert Gressis is a professor of philosophy at California State University, Northridge, where he has been teaching since 2008. He received his Ph.D. from the University of Michigan, Ann Arbor in 2007. His areas of research cover Kant’s ethics and philosophy of religion, Hume’s philosophy of religion, the philosophy of education, metaphilosophy, and the epistemology of disagreement. 


by E. John Winner

____

Coup d’état

In April 1653, Oliver Cromwell led some twenty of his troops into the House of Commons. Berating the Parliamentarians with the strongest invective yet heard in that House, he had the soldiers boot them out without ceremony. The Rump Parliament had reached an inglorious end, and Cromwell had led what amounts to one of the first military coups d'état in modern history. The reasons behind this sudden overthrow of government are not as clear as they once seemed. In 1973, Antonia Fraser gave what was the general understanding of the time in her once-standard biography of Cromwell. [1] Since then, it is not the context of the decision that has been radically challenged, but rather its interpretation.

The problems that had developed in the years after the execution of Charles I arose out of relations between Parliament and the Army. First, the Army seemed never to be paid on time, and always begrudgingly. Parliamentarians had grown unhappy with maintaining a standing army, no matter how useful it was, and saw it (correctly) as a political danger. The soldiers had acquired a voice in the period leading up to the Revolution and had pronounced ideas on both religion and politics. They wanted the religious pluralism they had enjoyed in the New Model to follow them into civilian life and some sort of religious toleration to be effected by law, much to the distaste of the Presbyterians and conservative members of the Church of England. They wanted reform of the law, of property rights, and of the electoral process, so that these would not so heavily favor established gentry with large land holdings. Parliament was uneasy with these demands and stalled in its negotiations with the Army. There were rumors that rather than dissolve and face new elections, the members would move to a Bill of Recruitment, which would allow them to retain their seats while controlling the expansion of the House so that incoming members would be favorable to those already seated.

The standard narrative of what came next is that members of the Rump met with the Army Council of Officers and promised delay on any vote for such a Bill. The very next morning, word came to Cromwell that the Commons was about to vote on one, and he flew into a rage and without change of dress rode to the House with his troops. But Blair Worden suggests otherwise: that Cromwell learned that the Commons was about to grant concessions to the Army in the matter of elections, with a Bill to that effect, including dissolution and new elections. [2] The elections would almost certainly have brought back the Presbyterians and Royalist sympathizers kicked out of Parliament through Pride's Purge. Tensions with the Army would have worsened, and Cromwell's reputation with the Army would have suffered. The clouds of civil war might once again darken English skies.

We'll never know which of these narratives is exactly on point, unless some decisive report from parliamentarian participants surfaces: Cromwell confiscated the available copies of the bill the Commons was to vote on, and burnt them. Several aspects of Cromwell's character make Worden's narrative plausible. To begin with, Cromwell was never the radical that extremist Royalists and their later Tory apologists made him out to be. He was of the gentry, and he knew it. He wanted no major expansion of suffrage, no land reform that would disturb the status quo. He recognized the need for laws to be reformed, for existing laws had become archaic and inaccessible to all but highly trained lawyers. This had served the monarchy well, but boded ill for a young republic trying to earn the trust of the people. And he certainly wanted legalization of liberty of conscience. But even in these matters, Cromwell was willing to err on the side of conservatism. He was never capable of outright lying, but he could be shrewd at dissimulation and was capable of convincing himself that a later post-hoc explanation had been the propter-hoc intent behind his actions. Certainly, by this time Cromwell knew that his political survival depended on the admiration the rank-and-file soldiers in the Army held for him.

On the other hand, contrary to the impression of Cromwell as a "man of action" – capable of immediate decisions predicated on long-range personal ambitions – that one can get from rough outlines of history or biography, especially given his performance on the battlefield, the details suggest a different man entirely: a procrastinator who sought advice; who leaned one way and then the other on possible options; who would pause to pray for divine guidance. Once he got his hackles up, however, Cromwell acted without hesitation, often driven by the impulse or emotion of the moment. (Indeed, his temperament was so mercurial that some have suggested he suffered from manic depression, or perhaps some form of mild chronic malaria.) But as far as his understanding of religion and its place in the lives of the English was concerned, he was a visionary. This one constant thread of faith holds together the many seemingly contradictory opinions and reversals of political action: it forms the consistency of his character.

New Jerusalem

Nowhere is this vision on greater display than in a speech at the opening of what became known as the Barebones Parliament only three months after the sudden end of the Rump. The Barebones (technically, the Nominated Assembly; nicknamed for one of its members, a leather-goods dealer named Praisegod Barebone), although it was to be structured with evident religious intent, had obviously pragmatic origins.  Without some minimally representative body – which after all had been the “Good Old Cause” for which the Civil Wars and the Revolution had been fought – legislation and taxation would have to be undertaken by fiat. This would have been in violation of the constitution as it had developed from Magna Carta, and such violations attempted by Charles I had proved immensely unpopular during the eleven years Charles had ruled without a Parliament.  Further, there is plenty of evidence that Cromwell didn’t want to be in the position of personally authoring legislation.  Even in the most autocratic periods of the Protectorate, beyond occasional temporary ordinances, Cromwell authored no laws, and pursued policy based partly on precedent and existing law, partly on economic or military exigency of the moment.

The Barebones Parliament was supposed to achieve representation of the people through nomination by the leaders of their local churches. If one of the hopes of the Protestant faithful was that the government of England could be used to bring about a real reformation, politically, socially and religiously, it would seem to make sense to have churches actively participate in the establishment of government. This was the announced intention; however, the Army, having so recently rid itself of political opposition in the Rump, was not about to let the roll of the political die produce any new opposition. Thus, only a handful of the 140 members of the Assembly actually arrived through church nomination; the rest were personally chosen by members of the Army Council of Officers.

Nonetheless, seemingly blind to the implications of such a jiggered nomination process, Cromwell first spoke to the Barebones as if he were addressing a parliament of saints. I suppose the primary reason for this blindness was Cromwell's profound faith in Providence: in the active participation of God in the affairs of men. From this perspective, it might seem as if what I've referred to as "the roll of the political die" would be the mechanism by which God brought forth the wisest leaders of the nation to serve in the Assembly. Certainly, that is how Cromwell spoke to that Assembly on its opening day.

In his Oliver Cromwell, Barry Coward writes: "What did Cromwell mean by that vague phrase ['a godly reformation']? Of all the problems that make up the enigma of Cromwell's whole political career this is the one that is most difficult to resolve, largely because Cromwell never gave a clear and specific definition of what he understood by godly reformation." [3] Coward is an able historian, yet I find this remark odd. In his speech to the Nominated Assembly, Cromwell makes it clear what he hoped would be a godly reformation. It's a remarkable document, often reading more like a sermon than a political oration. Nonetheless, it has a classical oratorical structure: welcoming introduction, apologia reciting recent history leading up to the Assembly, exhortation to perform expected duties, closing summary and final argument. (The version we have is apparently a transcript written by a Cromwellian pamphleteer, who doesn't hesitate to embellish the material with parenthetical asides praising Cromwell and ridiculing his critics.)

Before considering the speech directly, let's pause to consider the way the Protestant faithful – or at least confirmed Reformists like Cromwell – understood the Bible as an analogic program operating historically in human affairs. Catholic theology during the Middle Ages had developed a reasoning by analogy that is easy to grasp. Laws and relationships explicated and elaborated in the Bible could be seen as establishing fundamental structures of the universe and of human society: God is to Creation as the sun is to the earth, as the king is to his kingdom, as the father is to the family, and so on. Such an analogic system made the universe easier to understand, while grounding human relationships. It also provided the structure of metaphor in sermons, speeches, and poetry. Although the Canterbury Tales is quite a different poem from the Divine Comedy, they both operate with this structural device for their deployment of tropes and symbolism. The device relied on the theory of interpretation in Augustine's On Christian Doctrine, which developed the structure as central to the proper reading of the Bible.

By Cromwell's day, the Reformist Protestants had developed an entirely different analogical code for reading the Bible. The grounding principle was no longer structural, but programmatic. Most of the Reformers held that personal reading of the Bible was a must, although they differed as to the appropriate interpretation or the need for expert guidance in this. A primary theorist for this new reading practice was John Calvin. In his Institutes of the Christian Religion, Calvin chalked off Augustinian structural analogic tropology as resulting from a superstitious fear of being overly literal (since some Biblical passages seem almost self-contradictory). Calvin insisted that the Bible be read to the letter. If there was a passage claiming God simply blinded those He didn't care for, then, even if this appears rather cruel for a loving God, this is what He does. The analogical application of such a passage to contemporary life, then, has to do with people who have been blinded somehow. Was it punishment by the Almighty? Was it a warning to change their ways? If they had changed their ways, surely they would be worthy of pity; but if not, to hell with them. Literally. Calvin's God could choose those whose spirits He would move towards their salvation, and those whose spirits He would allow Satan to move.

The Bible effectively describes a drama of daily life and historical action that sees predestined, analogical repetition down to this day. If "Joshua fit the battle of Jericho," as the American folk hymn has it, then surely Cromwell had likewise "fit" the battle of Worcester, and its Royalist walls "came tumbling down." This is where the Protestant understanding of the analogical semblance between the Bible and life is interesting: life becomes a trope for the reality of Biblical historiography, and a lesson for the truth of God's Word. Thus, Cromwell could claim that his decisive military success at Worcester was a demonstration of the Providence of God, as described in various passages in the Bible concerning triumph in battle by God's chosen: "And yet what God wrought in Ireland and Scotland you likewise know; until He had finished these Troubles, upon the matter, by His marvellous salvation wrought at Worcester." [4]

As he begins his exhortations to the members of the Assembly regarding their duty, we find Cromwell tossing aside what little of practical politics he had invested in the first half of his speech:

And I hope, whatever others may think, it may be a matter to us all of rejoicing to have our hearts touched (with reverence be it spoken) as Christ, “being full of the spirit,” was “touched with our infirmities,” that He might be merciful. So should we be; we should be pitiful. Truly, this calls us to be very much touched with the infirmities of the Saints; that we may have a respect unto all, and be pitiful and tender towards all, though of different judgments. And if I did seem to speak something that reflected on those of the Presbyterial judgment,-truly I think if we have not got an interest of love for them too, we shall hardly answer this of being faithful to the Saints. In my pilgrimage, and some exercises I have had abroad, I did read that Scripture often, Forty-first of Isaiah; where God gave me and some of my fellows encouragement ‘as to’ what He would do there and elsewhere; which He hath performed for us. He said, “He would plant in the wilderness the cedar, the shittah-tree, and the myrtle and the oil-tree; and He would set in the desert the fir-tree, and the pine-tree, and the box-tree together.” For what end will the Lord do all this? That they may see, and know and consider, and understand together, That the hand of the Lord hath done “this;”-that it is He who hath wrought all the salvations and deliverances we have received. For what end! To see, and know, and understand together, that He hath done and wrought all this for the good of the Whole Flock (Even so. For “Saints” read “Good Men;” and it is true to the end of the world). Therefore, I beseech you,-but I think I need not,-have a care of the Whole Flock! Love the sheep, love the lambs, love all, tender all, cherish and countenance all, in all things that are good. And if the poorest Christian, the most mistaken Christian, shall desire to live peaceably and quietly under you, I say, if any shall desire but to lead a life of godliness and honesty, let him be protected. I think I need not advise, much less press you, to endeavour the Promoting of the Gospel; to encourage the Ministry; such a Ministry and such Ministers as be faithful in the Land; upon whom the true character is. Men that have received the Spirit, which Christians will be able to discover, and do ‘the will of;’ men that “have received Gifts from Him who is ascended up on high, who hath led captivity captive, to give gifts to men,” even for this same work of the Ministry!

Since Cromwell sings variants of this song for some three thousand words, it seems hard to miss his agenda. Of course, he is asking for religious toleration (at least for all Protestant sects), but the oratorical pitch is strung high. The admonition is not simply for some bill of liberty of conscience. Especially given the way religious tolerance is linked to overt ministry, Cromwell is proposing nothing less than a Christian democracy, predicated on the understanding that the truly faithful (who share the same desire for God's grace, no matter what church they attend) could, through spiritually guided conversation, be led to conformity of mind, rather than of church membership. Totalitarianism always smacks of fear-induced conformity, so it is easy to forget that it can sometimes begin with a hope that all citizens in the given community share so many of the same values that politics, of any kind, would simply prove unnecessary. The representative government of such a society would not need election; rather, any man skilled in articulating the common need of the moment would do. Cromwell's England could be transformed into an earthly New Jerusalem, populated entirely with embodied spirits, all seeking to live according to the Word of God (except for a handful of the predestinate damned, who would have to be carefully watched). Pretty heady stuff, which is why the Christian-Utopian theme of the speech might be taken as mere rhetoric. Yet everything we know about Cromwell and the way he interpreted events as signs of Providence suggests that his goal had always been an England united, not by blood, history or economics, but by faith. Such an England (evidenced by some of Cromwell's musings over foreign policy) could then stand as an example to – and help construct – a supra-national union of like-minded Protestant states, which could then engage in a final reckoning with the Antichrist in Rome. Not apocalypse now, but apocalypse soon.

Paradise Lost

Of course, it didn't happen. Some minor reforms were achieved, but mostly the members of the Barebones disagreed in a most unsaintly manner, quarrels erupting between radicals and moderates. By December 1653, the moderates resigned (whether voluntarily or at Cromwell's instigation is unclear), and the remaining members voted dissolution. Shortly after, the Army, through the newly formed Council of State, established the Instrument of Government as constitution and declared Cromwell the Lord Protector. Based on the Instrument, two Parliaments would be called during Cromwell's lifetime. Although his speech was always laced with Biblical references, he would never speak before Parliament with the kind of religious fervor and utopian optimism he had exhibited to the Barebones. His first speech to the First Protectorate Parliament began with complaints against the radicals (thus insisting on the very divisions of faith he had previously sought to heal), then turned its attention to such earthly matters as treaties with Portugal, France, the Dutch, and the Danes; the spiraling costs of naval warfare; and deficit spending and rising debt, with an increasing need for greater taxation. The "godly reformation" of England was coming to an end. A couple of years later, the English suffered a major military defeat in the Caribbean, which Cromwell naturally took as a show of God's disapproval of England's sinfulness. This he sought to address with the rule of the Major-Generals in the counties outside of London, as close to a "thought-police" as one could get in the 17th century. It proved such an unpopular failure that Cromwell disavowed ownership of it. Increasing political anxiety can be read in the last years of the Protectorate. Cromwell's powerful personality and charisma had held it all together, but with his death in 1658, the Army itself splintered into opposing factions. The strongest, under General Monck, at last intervened, and invited Charles II to the throne.

In 1655, the Swedish envoy to England reported to his King:  “The country… [feels] it to be a matter of indifference to them by whom they are ruled, if only they be preserved in the free enjoyment of their law and religion.” [5]

What Cromwell didn't understand – and most visionary politicians don't – is that the temporal window of opportunity for any visionary politics is very narrow, maybe three or four years. Most people do not want or expect the world to achieve perfection. They want to take care of business, take care of their families, and take care of themselves. Any politics that makes these pursuits easier will appeal to them. They are fond of rhetorical promises of heaven, but soon will lose interest. Few wish to live as saints, who never worry about paying bills, whose orgasms are all symbolic, and who have no children to raise. Cromwell's "godly reformation" crashed against the same rock that the Reformation itself did: the rock of the everyday; of matters secular and earthbound; of the desire to live, not as God commands, but as the human condition dictates. Bread, home, children, community. Too often have cynical politicians tried to use these to manipulate people. But ultimately, they form the bulwark that no utopianism can overcome, and that no cynicism can undo.

Notes

[1] Fraser, Antonia; Cromwell the Lord Protector; Dell, 1975 (originally Oliver Cromwell, Our Chief of Men, Weidenfeld & Nicolson, Great Britain, 1973).  The discussion is in Chapter 15, “A settlement of the nation.”

[2] Worden, Blair; "Oliver Cromwell and Parliament," olivercromwell.org (2014) http://www.olivercromwell.org/wordpress/wp-content/uploads/2014/10/Cromwell%20and%20Parliament.pdf

[3] Coward, Barry; Oliver Cromwell; Profiles in Power; Longman, 1991, page 105.

[4] http://www.olivercromwell.org/Letters_and_speeches/speeches/Speech_1.pdf

[5] From: Smith, David L.; Oliver Cromwell: Politics and Religion in The English Revolution, 1640-1658; Cambridge UP, 1991, page 39.


by Daniel A. Kaufman

____

On a number of occasions, I have defended what I’ve been calling “procedural liberalism” on the grounds that in large pluralistic societies (a) one cannot expect one’s fellow citizens to share a common, substantive conception of the good, and (b) one cannot expect that one’s “community,” in the sense of the word that implies a shared set of values, will always maintain a hold on the levers of state power. [1] It is in everyone’s interest, then, to embrace an essentially formal liberalism, according to which (c) we allow one another significant latitude in the pursuit of our private lives, constrained only by the harm principle, and (d) we rigorously maintain state neutrality with regard to such pursuits.  Such an arrangement permits people to engage with what they find significant and meaningful in life, among their family and friends, and in the broader civil society among the like-minded.  It also makes it possible for them to trust that they will be treated fairly within “political society,” by which I mean those sectors of society that are governed by the formal institutions and powers of the state, such as the courts, the federal, state, and local bureaucracies, and the like.

If we assume (as I think we should) that our ability to pursue what is meaningful to us is a precondition for a satisfying – or even bearable – life, then this procedural liberalism presupposes that a person has access to family, friends, and to an open and free civil society, meaning one in which one’s capacity to associate with people of one’s choosing is largely unrestricted.  The diminishment of any significant number of these elements fosters feelings of emptiness and futility, except among those rare souls whose capacity to find satisfaction in life is consistent with solitude.

It is a common refrain in the developed world today that liberalism is either in trouble or already in the process of dying, and while the reasons commonly given vary widely in terms of their plausibility, the claim – or as in my case, the worry – is a fair one. Not because the arguments for liberalism are any weaker today than they were yesterday (if anything, they are even stronger) and not because anyone has thought up a better arrangement (they haven’t), but because of certain developments in modern industrial and post-industrial societies and especially, Western ones.

For one thing, the presupposition I just discussed – that in our search for meaning, we have access to a network of family, friends, and acquaintances, with whom we can freely engage within the largely unconstrained space of civil society – can no longer be assumed.  Indeed, all the evidence, be it anecdotal or social-scientific, suggests that these critical personal and civil associations are diminished and diminishing. [2] This dimension of liberalism’s troubles has been much remarked upon and is relatively well understood (which is not the same as having a clue how to reverse or otherwise address it), and is at the heart of much of the last century’s discussion of the crisis of the modern individual who, with the great urban migrations effected by the Industrial Revolution, was deprived of the psychic moorings that had been provided previously by extended family-networks, a shared culture, and near-ubiquitous religiosity.  As Carl Jung put it in Modern Man in Search of a Soul (1933):

The modern man has lost all the metaphysical certainties of his medieval brother, and set up in their place the ideals of material security, general welfare, and humaneness.  But it takes more than an ordinary dose of optimism to make it appear as if those ideals are unshaken. [F]or the modern man sees that every step in material progress adds just so much force to the threat of a more stupendous catastrophe.

Jung's reference to the modern ideals of "material security, general welfare, and humaneness" suggests a second reason for liberalism's plight, one that is less frequently remarked upon but equally significant: the overwhelming tendency of societies in the advanced stages of capitalism to commodify our relationships and pursuits, our identities, and even happiness itself. The result is that they have become "kitsch" and we have become consumers of kitsch, which means that they no longer have the power to satisfy us, and we no longer have the capacity to be satisfied.

Kitsch is that mimic of things of depth and substance that is produced so as to allow people, whether out of indolence or incapacity, to purchase spiritual depth without the need for substantial investment, struggle, or sacrifice; a cheap simulacrum futilely taken up for the purpose of sating the unfocused yearnings of a jaded sensibility and a shallow character. Clement Greenberg, in his landmark essay, "Avant-Garde and Kitsch" (1939), restricted his analysis of this socio-cultural development to the artworld, but as Roger Scruton has pointed out, under late capitalism, virtually every dimension of life can be – and is being – kitschified, for kitsch indicates a spiritual, rather than an aesthetic deficiency:

Kitsch reflects our failure not merely to value the human spirit but to perform those sacrificial acts that create it. It is a vivid reminder that the human spirit cannot be taken for granted, that it does not exist in all social conditions, but is an achievement that must be constantly renewed through the demands that we make on others and on ourselves. [3]

One example of this "kitschification" and its effects, not just on art but on whole forms of life, is contemporary, mainline religion, where the rigorous, meticulous demands that religion once made upon our lifestyles and beliefs have been abandoned, so that religious and spiritual life might be easier and more congruent with popular mores and tastes. As a result, mainline religion has become generic to the point that one church is largely indistinguishable from the next. (I used to serve on my synagogue's Beit Din (Jewish court) and would ask prospective converts why they wanted to be Jewish. The answers I got inevitably involved a benign mishmash of progressive platitudes, so I would always ask the same follow-up question: "That's a great reason to become an Episcopalian. What I asked was why you want to be Jewish," to which I never received anything better than a baffled look.) The predictable result has been the collapse of the mainline churches and an upsurge in fundamentalist religion, the crude harshness of which at least makes it possible for people to feel something in the conduct of their religious lives.

Another example of kitsch that lies well beyond the artworld is our society's treatment of old age and retirement – one's "golden years" in kitsch-speak – whose commodification and its effects were described by Nathanael West in The Day of the Locust (1939), as a prelude to what remains one of the most terrifying depictions of mob violence in American literature:

All their lives they had slaved at some kind of dull, heavy labor, behind desks and counters, in the fields and at tedious machines of all sorts, saving their pennies and dreaming of the leisure that would be theirs when they had enough.  Finally that day came… Where else should they go but California, the land of sunshine and oranges?

Once there they discover that sunshine isn’t enough.  They get tired of oranges, even of avocado pears and passion fruit… They don’t know what to do with their time.  They haven’t the mental equipment for leisure… They watch the waves come in at Venice, [but] after you’ve seen one wave, you’ve seen them all.

Their boredom becomes more and more terrible. They realize that they’ve been tricked and burn with resentment.  Every day of their lives they read the newspapers and went to the movies.  Both fed them on lynchings, murder, sex crimes, explosions, wrecks, love nests, fires, miracles, revolutions, wars…  Oranges can’t titillate their jaded palates.  Nothing can ever be violent enough to make taut their slack minds and bodies.  They have been cheated and betrayed.  They have slaved and saved for nothing.

Today, in the age of social media and advanced communications, it is our relationships and identities that have been the principal objects of commodification and which are being sold to us by Facebook, Twitter, Instagram, and the like as empty simulacra of the things they once were: "friends" for the friendless; "followers" for those with no real influence; "Likes" for those whose statements fail to carry any genuine weight or whose posted images are bereft of any actual interest or appeal. This virtual world of ersatz interactions and relationships is inhabited by equally unreal people, encouraging us all, as it does, to misrepresent ourselves, so as always to appear in the most positive and interesting light. It is small wonder, then, that those who are most dependent upon these platforms – those for whom social media has essentially replaced civil society – are also the most obsessed with their identities and with the validation of those identities by others, demonstrating a level of personal insecurity and desperation that is simultaneously pitiable and pathetic.

It is this combination of social atomization and kitschification that I am suggesting poses the greatest threat to the liberal consensus, for together they undercut the capacity to enjoy a satisfying life in the private and civil spheres, which, as I said earlier, is a fundamental precondition for liberal society. With that precondition no longer met, our need to feel that our lives are significant in some meaningful sense remains unsatisfied, so we seek fulfillment publicly, politically, and by way of the law. The person who has no real friends enlists the power of the state to compel others to act as if they were his friends. The person who finds himself unfulfilled by the identities he has embraced appeals to the law to force everyone to genuflect before them. The person who is frustrated by the impotence and ineffectualness that follow from a lack of investment in real people or causes will bolster himself by joining in professional ruination, public ostracism, and all the other mobbish behaviors that currently fall under the banner of "canceling."

A successful liberal society consists of people whose lives and relationships and pursuits are substantial and for the most part satisfying, for this is what sustains the live-and-let-live ethos on which liberalism is predicated and which ultimately protects us all. But in a society of shallow, anxious, disconnected, inchoately yearning recluses, the kind of generosity of spirit necessary to create and sustain the liberal consensus is not only absent, it can never be born, and the rational self-interest presupposed by liberal political philosophy can no longer be credibly ascribed.

Special thanks to the students in my Aesthetics course (Spring 2019), a class discussion with whom served as the inspiration for this essay.

Notes

[1] https://theelectricagora.com/2018/03/10/the-liberal-consensus-and-the-orthodox-mind/

[2] See, for example, Robert Putnam, Bowling Alone (New York: Simon and Schuster, 2000).

https://en.wikipedia.org/wiki/Bowling_Alone

[3]  Roger Scruton, “Kitsch and the Modern Predicament,” City Journal (Winter, 1999).

https://www.city-journal.org/html/kitsch-and-modern-predicament-11726.html

http://sites.uci.edu/form/files/2015/01/Greenberg-Clement-Avant-Garde-and-Kitsch-copy.pdf

[4] https://en.wikipedia.org/wiki/The_Day_of_the_Locust

https://ebooks.adelaide.edu.au/w/west/nathanael/day-of-the-locust/index.html
