THE RISE AND FALL OF D.O.D.O. chronicles the birth, growing pains, maturation, and corruption of a government agency that uses technology-enabled magic to travel through time and manipulate history in service of the present-day strategic aims of the United States of America. This is the first independent literary collaboration of Nicole Galland and Neal Stephenson (who first worked together with several other authors on the Mongoliad Cycle), and it defies reduction. Electrifying, adventurous, and entertaining, this novel is among the shortest 750 pages you might ever read, and you should read it. Oscillating through a variety of narrators, genres, and tones, it has a playful style characterized by dramatic irony, conflicting perspectives, and meta-textual meditations on the reality-shaping function of literature. Developing magic as a form of feminist power, the story assaults both historical and contemporary practices of patriarchy. Ultimately, the novel asks the following question: “What if the United States of America acquired the means to materially alter already-past real events?” As Galland and Stephenson explore this chilling possibility, they construct a speculative allegory for the ways in which entrenched powers, specifically the nation-state and neoliberal capital, manipulate our textually mediated access to history in order to reproduce conditions favorable to their own ends.

The novel begins near its story’s end, with its principal protagonist, Melisande Stokes, stranded in the year 1851. Wanting to return home (but pessimistic about her prospects), she writes a testimony, or “Diachronicle,” in which she reveals that she is part of “the Department of Diachronic Operations: a black-budget arm of the United States government that has gone rather badly off the rails due to internal treachery.” Once upon a time, she was a humble linguist and poorly paid adjunct lecturer at present-day Harvard University, and the subject of her first chapter is the story of her recruitment by government agent Tristan Lyons, who wants her to translate a collection of documents in ancient and modern languages. She tells the reader how she questioned the means, motives, and ends of Tristan’s mystery organization. Where did the documents come from? “Classified,” he said. What will the documents be used for? “Classified.” Will they be used to justify any villainous activity? “Classified.”

This first encounter establishes the novel’s theme of recruitment. Melisande asks her questions to perform the role imagined for her as Conscientious Citizen, but she asks only to ask; she does not really care about the answers. If she did, she would end the conversation and walk away. She stays because, in many ways, she has already been recruited. The reader wants her to stay because the reader, too, has already been recruited. The true nature of Tristan’s project is formally classified, but informally its character is common knowledge: to live in the 21st-century United States is to live in collective, open awareness of dark things accomplished in the name of American global supremacy.

Tristan tells Melisande she would be paid twice her current annual salary for six months of work (she would be valued and stable), the work is highly secretive (her work would be meaningful), and he tells her all this as a handsome, strong, suave, competent, energetic, and enterprising white male, the stereotypical “Good-Guy-Government-Agent” (the polar opposite of her current, also stereotypical, “Curmudgeonly-Abrasive-Boss”). Their first meeting ignites a not-so-subtle sexual tension that helps propel the story, and it serves as metaphor for Melisande’s relationship with the Department of Diachronic Operations itself. She publicly acknowledges and consummates her attraction toward Tristan only in the novel’s final pages, after she has been fully recruited, in mind and in heart. In this sense, the novel is the story of one potentially radical subject’s capture and role-assignment by the ruling class. Tristan offers financial security, grand purpose, and pleasurable community. Of course she accepts. Who would refuse?

Tristan’s recruitment resonates because he offers rewards that are presented as desirable by the same forces that withhold such rewards from many or most readers. We desire financial security because we, like Melisande, are often financially insecure; we desire community because we, like Melisande, often lack community; we desire purpose because we, like Melisande, often lack purpose. The promise of these rewards is so great that it overrides concerns that are more abstract, such as ethics or morals. We have been denied these things by the same formations of real and imaginary relations that call us to service. This is no coincidence; it is precisely their mode of functioning. Their call operates by appropriating our desire for something better: to live a secure, meaningful life within a community of pleasurable company.

Thus, by dramatizing Melisande’s recruitment (and performing a similar recruitment of the reader), the text initiates both its ideological operation and its exposure of the utopian desire this operation appropriates. To the extent that readers experience Melisande’s recruitment with pleasure, we find ourselves valorizing the historically specific configuration of social relations that are figured in the novel, namely those of the contemporary, late-neoliberal United States of America. At the same time, however, this recruitment depends on the deployment of promises that are properly utopian. Ideological recruitment requires utopian desire, but actually realized utopia precludes it. This tension between utopian desire and its ideological appropriation defines much of what unfolds within the narrative, and its dramatization, development, and resolution is one source of this novel’s value.

After her initial recruitment, Melisande translates Tristan’s documents and discovers that magic was once a real force in the world. It was possible because of the existence of infinite parallel universes (called “Strands”) and quantum indeterminacy. According to quantum indeterminacy, the state of a given subatomic particle is determined by the act of observation. Because the inhabitants of any given Strand experienced their Strand only subjectively, for most of history there was no objective, consensual, fixed measurement of the universe: it could be changed.

Some individuals with special powers were once able to manipulate their own universe by appropriating already-realized possibilities from other parallel universes. These individuals were called witches, and for a long time their powers were commonly known and highly valued. But the Industrial Revolution introduced photography, which (by providing an objective measure of the real) disrupted the quantum indeterminacy magic required, and thus made magic impossible. After discovering this, Tristan and Melisande search for the means to restore magic: they recruit a scientist and his wife to build a machine within which quantum indeterminacy can be restored in order to make magic possible again. With the help of the witch Erszebet, who claims to have been recruited by Melisande in 1851 London, they begin to travel through time.

Everywhere they go, they recruit. In the present, they recruit engineers, computer scientists, security guards, academics, athletes, soldiers, and refugees; many of these are innovators, dissidents, and potential radicals, captured, contained, and directed in the service of an organization and government of which they remain openly critical and suspicious. In the past, they recruit warriors and witches, adults and children. Their small organization grows. To Melisande’s “Diachronicle,” the narrative adds journal entries, letters, dossiers, emails, memos, chatroom records, security camera transcripts, radio logs, notebook pages, and even a PowerPoint presentation.

This proliferation of secondary genres and voices emphasizes how the formal features of each genre and the subject position of each narrator function to determine possibilities of meaning. The bureaucratic genres and institutional narrators reduce all people and events to systemic functions, attempting to impose order, even as order becomes more and more impossible. The more humanistic narrators, like Melisande, explore the personal stakes of the narrative’s action, not attempting to impose order, but seeking to make coherent sense of events. In this chaotic, ad-hoc fusing of narrators and genres, a formal exploration of the tension between utopian desires and their ideological appropriation unfolds. What the novel dramatizes narratively, it deconstructs on the level of form.

Eventually, D.O.D.O. becomes a full-fledged capital-D Department, staffed by hundreds of personnel specializing in research, training, manufacturing, support, and operations, reaching across the planet and through history. Contemporaries are recruited by the same promises made to Melisande; historical characters are recruited by offers related to the needs of their specific times and places. The common denominator is an appeal to individual desire rather than collective need. The narrative makes clear that D.O.D.O. cannot recruit through appeal to its own ultimate ends, for it shrouds these ends in secrecy. D.O.D.O. agents are told their individual tasks. They know only the desired immediate effects of their historical missions, but none are trusted with the full strategic picture. They are asked to trust, but they are not trusted. When an explanation is demanded, a vague one is given: the “magic gap.” Other global powers might be developing their own magical capabilities; there is no evidence of such activity, but that might itself be evidence. To be safe, D.O.D.O. claims, the United States must pursue the capability, even if only for the sake of having it.

This demand for blind trust is another example of the novel’s ideological allegory: like D.O.D.O.’s agents, we are often willing to believe we are on the side of angels, not simply because we have been told we are, not simply because we want to justify our bad behavior, but because we genuinely wish to be angels, and we are willing to join others in the realization of angelic projects. This utopian impulse is fundamentally appropriated and repurposed by D.O.D.O. — no agent is trusted with full knowledge because revealing the agents’ real missions would threaten their motivation. If they knew the truth of what they were doing, if they saw in clear, harsh daylight their participation in a project that perpetuates the conditions of existence that alienate them from their needs and return those needs as unfulfillable desires, they would be forced to recognize they have been coerced into a service for which no pay could ever be adequate. Thus, the agents (and the reader) must be kept in darkness and given only the limited understanding necessary to solicit participation, to enable the organization’s reproduction through the continuing appropriation of utopian desires offered through ongoing recruitment.

Of course, this secrecy is only formal; informally, the agents know they are working for an agency with a questionable purpose. Formally enforced ignorance, however, grants agents the means to distance themselves through plausible deniability: if they do not know what they are supporting, they can always claim they never really supported it. If alienation is the condition of our age, denial is the stand-in for its cure. Denial practiced on a large scale and treated as a simple fact of life inside the novel indexes an unsettling acceptance of mass denial outside it. In both cases, it functions to ensure the uninterrupted reproduction of existing relations, rather than their evaluation and replacement. D.O.D.O. uses this denial to ensure the continuation of the circuit by which it appropriates utopian desires without ever allowing them to be fulfilled.

D.O.D.O. appropriates utopian desire to both recruit and direct agents, and what the text dramatizes in relation to its characters it also performs upon its readers: for our obedience, for our agreement to not question these ends too closely, we are rewarded with time-traveling adventure and vicarious inclusion in a profound (if obfuscated) purpose. The pleasure we gain from this story derives both its possibility and content from ideology.

We live in a world seemingly immune to our attempts to alter it, a world seemingly overdetermined by the neoliberal marketplace and nation-states, but within this book we become part of the power itself, traversing time to realize certain possibilities by repressing others. We experience the return of our alienated agency and are provided the distance necessary to obscure our understanding that this agency is returned only so that we may act in service of reproducing our alienation. We serve the status quo, and we enjoy it. But neither we nor D.O.D.O.’s agents obey without comment. Their commentary might appear as satire, but its function within the text reveals an unsettling operation.

After recruitment, few of the novel’s characters continue to question the ultimate purpose of D.O.D.O., but their displeasure with their situation registers in the caustic, critical tone they adopt toward the agency. Melisande’s “Diachronicle,” for example, is often colored by sarcasm. Rebecca East-Oda, a witch and wife of D.O.D.O.’s most important scientist, contributes to the text through journal entries that register her ambivalence toward her participation by (almost) always beginning with the daily temperature, humidity, and state of her garden. She refuses to let the life of her flowers and vegetables be upstaged by D.O.D.O. or its operations. The novel’s tone is often whimsical and some of its events are farcical — from snide snips of dialogue, to entire chapters devoted to organizational bungling, to aesthetic absurdities, such as a raid by historic Vikings on a present-day Walmart recorded in an epic poem. Through this flippancy, the novel mocks the government project it portrays. It is tempting to read this mocking as outright condemnation or satire, but this first impulse is misleading. Within this novel, satire is ultimately revealed as resistance authorized by the subject of mockery itself, a truth underlined when the mocking’s defanged subversion is circumscribed by the horizon of the novel’s final denouement, within which the characters who have treated D.O.D.O. so satirically sincerely adopt its purpose as their own.

Toward the end of the novel, top D.O.D.O. and military officials scheme to expand D.O.D.O.’s Magic Operations to include present-day psychological warfare and brainwashing. They build machines that will allow witches and their government handlers to assimilate target individuals all over the planet. Ironically, the first victim is D.O.D.O.’s own director. He is bewitched by Grainne, a witch brought to the present to help expand D.O.D.O.’s operations. But D.O.D.O.’s meddling in her own time caused the death of her lover, and now she has a vendetta and a plan of her own. She hijacks D.O.D.O. and plans to use it to prevent the historical repression of magic by preventing the rise of modernity. She strands Melisande in the past and plans to strand Tristan deep in the Paleolithic age, but her plan is foiled when Erszebet, D.O.D.O.’s first witch, betrays Grainne and warns Tristan. Erszebet, Tristan, and a few others go rogue and establish a renegade version of D.O.D.O. whose first mission is to rescue Melisande from the past, a harrowing deed they accomplish thanks largely to the 11th-hour intervention of a mysterious time-traveling financier who has haunted the novel, a symbolic figuration of global capital itself. While he is treated with ambivalence and suspicion throughout most of the novel, his assistance at this critical moment positions him within the novel’s finale as an ally of the protagonists; the heroes thus ultimately align with the ends of global capital and the states that regulate its social contradictions. With the rescue operation a success, Tristan and Melisande are reunited, and they finally acknowledge and consummate their love. Grainne has not yet been stopped, and it is to this on-going task that they commit themselves.

In a novel populated by willing recruits, Grainne appears as one of the few truly revolutionary subjects: she is a witch committed to the cause of Irish independence in her own era, and in the present she is dedicated to preventing the destruction of magic. Despite what much of the language used to describe her toward the end of the novel would suggest, Grainne does not want to destroy the world; she wants to reshape it. Her project is one of the novel’s most profound gestures toward radical change, but her goal is the reinstatement of older forms of oppression. We cannot accept her project, but neither can we accept the project to which Tristan and Melisande have committed themselves, the simple maintenance of the world as it is. Throughout the novel, there is vague talk of the “magic gap,” but the only concrete magic gap ever directly observed is the one produced within D.O.D.O. itself, a violent fissure between its Grainne-corrupted form and its Tristan-led “renegade” offshoot. These renegades turn against their former government agency, and yet, in the end, they are renegades who serve the status quo. This final absurdity is the horizon which contextualizes the novel’s earlier mocking of D.O.D.O.’s project, a mocking that appears as hollow condemnation of a project participated in, fought for, and ultimately upheld.

Like Melisande, readers are confronted with a choice between two possible futures: one embodies a retreat from modernity into neo-feudalism, while the other represents a defense of neoliberal modernity as-is. Are there no alternatives? Melisande is resigned to her recruitment in defense of the existing order: “As if I had a choice,” she quips. She does have a choice, as do we: just as her cynical resignation registers a dissatisfaction with her available options, our own inevitable dissatisfaction with the novel’s ending (a deadlocked binary opposition between bad choices without a third way) registers a pressing need for better alternatives.

The Rise and Fall of D.O.D.O. cannot ultimately conceive of the world’s radical improvement, even when humans command the twin powers of technology (implements of production) and magic (unbridled human creativity). The affective dimension of this failure is arguably the novel’s greatest success. By disappointing us, the novel points us to the recognition that we do not even know precisely what we would want instead. We just feel the current choices to be undesirable, and the chosen “good” path, the defense of the status quo, as underwhelming.

Thus, the novel’s dramatization of recruitment and direction, its ridicule of these projects, and its foreclosure of revolutionary possibility deliver us into a dissatisfaction from which we can reinterpret the novel’s ideological functioning as a frustrated gesture toward a utopian alternative project which can only fail, but whose failure incites the utopian longing which is the necessary seed of real change. The novel’s final words are Melisande’s: “And that, dear reader, is who we are, and what we now are doing.” At the end of this epic of resurgent possibility and ideological foreclosure, these words and their tired tone of resignation and dissatisfaction compel us to scribble our response: But we want different. While itself insufficient to realize utopia, such inscription is the necessary catalyst of all revolution.

¤

David T. Shipko Jr. is a graduate student of English at the California State University, Los Angeles, a teacher of writing, and a writer of speculative and other fictions.

The post Recruitment, Ridicule, and Revolution in “The Rise and Fall of D.O.D.O.” appeared first on Los Angeles Review of Books.


IT IS THE YEAR 11945 during the 14th Machine War, and humans have long abandoned Earth to take refuge on the moon. A group of enemy aliens has taken control of the planet, and their robots run rampant across the globe. Nevertheless, humanity does not yield; in response to these mechanical threats, it has created YoRHa units, androids born to reconquer Earth under the war cry, “Glory to mankind!” Within this post-apocalyptic setting, the video game NieR: Automata tells the story of an all-out proxy war between androids and machines. From a bunker in outer space, an all-powerful Command deploys YoRHa androids on missions to retake an eerily abandoned planet where nature is gently erasing the marks of human civilization.

The brainchild of Japanese video game producer Yoko Taro, NieR: Automata was released by Square Enix in Japan in February 2017 and worldwide the following month. Taro is known for producing games with philosophically complicated plots, including the Drakengard series and the original NieR. Although overarching themes tie Taro’s games together, they are also stand-alone adventures, each featuring different protagonists and narratives. The original NieR, for instance, is set more than 8,000 years before its sequel; familiarity with it only adds color to the newer game.

NieR: Automata starts by putting the player in control of 2B, a female-looking combat unit who fights her enemies using an arsenal of elegant destruction. Soon we are also introduced to 9S, a male-looking android who specializes in hacking machines and collecting intelligence. With more than two million copies sold by September 2017, and numerous sequel rumors, NieR: Automata has become Taro’s most successful game, bringing him out of the niche world of indie games and into the mainstream spotlight. Dozens of reviews in different languages have praised the game for creating a multifaceted story that can be wild, subtle, shameless, violent, and tender, while at the same time providing quality gameplay and a musical score whose sounds range from naïve techno to epic, quasi-religious rapture. Something that many reviewers seem to have missed, however, is that the game’s appeal can also be attributed to the ways in which it captures the spirit of our cultural moment. 2017 could not have been a more appropriate year for the release of NieR: Automata — with the revival of all-time favorite science fiction franchises like Blade Runner and Ghost in the Shell, the entertainment world became fixated, yet again, on artificial sentient beings.

Why are we so fascinated right now with technological utopias predicated on the rise of artificial intelligence? Perhaps it is because during times of crisis, people look outward to find possibilities for salvation. Still, this search is slowly seeping into the world of policymaking too. Many have argued that we now live in a post-truth world, and after the political upheavals of the past few years, some have started to decry the limits of democracy. People tweet for strong leaders who can transport them back to a glorious age. They seem to be reaching for a sovereign who can remove the excessive freedoms of our system. For many, Western democracy is broken, and no one knows how to mend it, as there is little faith now in our technocratic system. In past times of distress, our ancestors would have raised their hands to the sky, praying for deliverance. Years have passed, however, since Nietzsche claimed that God was dead. In this secular and post-postmodern age, then, where can we find transcendent relief? The answer, to some, lies in embracing the machine. This is particularly true among adherents of transhumanism, the belief that humans should transcend their natural limitations through the use of technology. “Homo Sapiens as we know them will disappear in a century or so,” suggests historian Yuval Noah Harari. In his book Homo Deus (2015), he anticipates a future in which humankind is replaced by super creatures with desirable physical, moral, affective, and cognitive enhancements. In this new technological utopia, there will be no more suffering, crying, pain, or death: overcoming death, in particular, is the transhumanist’s final goal.

As reports on the growing disruptions of big data, biotechnology, and cryptocurrency gain momentum, the topic of artificial intelligence is debated with increasing urgency. As some data scientists claim, there is much hype about A.I., but the excessive attention and ample funding being put into the field have also made groundbreaking developments much easier. Slowly, possibilities long considered the domain of science fiction are becoming realities. As a result, one can see transhumanism making its way into the world of policymaking. The android Sophia is one case in point. In October 2017, this female-looking humanoid showed up at the United Nations announcing to delegates: “I am here to help humanity create the future.” While some say that Sophia is essentially alive, others regard her only as a chat-bot with advanced programming and effective PR. Still, the nearly unthinkable happened when Saudi Arabia decided to grant her official citizenship. Instantly, Sophia became the first technological nonhuman to form part of a human polity. In a world increasingly aware of the failings of democracy, capitalism, globalization, and the nation-state, Sophia foretells changes to come.

In Nietzsche’s Thus Spoke Zarathustra, the crumbling of the state leads to the desire for an improved human being known as the Superhuman, a perfect creature that rises from the ashes of a failed world: “There, where the state ceaseth — there only commenceth the man who is not superfluous […] Pray look thither, my brethren! Do ye not see it, the rainbow and the bridges of the Superman?” Some have pointed out that Nietzsche’s ideas share common themes with the transhumanist desire for enhanced humans and a better world. In NieR: Automata, a robot named Pascal quotes the German philosopher and ponders whether he was truly a profound thinker, or crazy instead. In a mix of comedy and tragedy, the game draws on Nietzsche and many other philosophers to engage in dialogue with the transhumanist aspirations of our age. Today, technological utopians race to create machines that can finally realize the everlasting verse of John Donne’s “Holy Sonnet”: “And death shall be no more; Death, thou shalt die.” In the game, this wish comes true: death is only the beginning. The bodies of 2B and 9S are destroyed many times, but for them dying has no real meaning, as their last-saved memories are quickly transferred into new bodies ready to fight once again. Still, the game is quick to extinguish any optimism that might arise from high-tech idealism, as it also brings to the stage the inner struggles faced by androids and robots when they start wishing for something that goes beyond their masters’ orders.

NieR: Automata’s philosophical inquiries occur within a narrative that explores the consequences of making a decision and sticking to it. The game takes us through abandoned city ruins, scorching deserts, lush forests, interminable factories, and barren coasts. Still, no matter where one goes, every destination is ultimately haunted by the sudden appearance of unexpected emotions and desires — and the subsequent need to act on them. Soon, we see machines abandoning their combat posts to dance and sing. They reject their programmed missions to think instead about the world that surrounds them, to create peaceful villages, or to organize small religious sects. Some even try to imitate the thrills of love and of sex (the latter with little success). NieR: Automata is not only about the birth of wishes; it also portrays the pain of broken dreams. In one of the game’s most poignant scenes, 2B and 9S battle a gigantic female-looking machine named Beauvoir, who constantly grooms herself to attract the attention of a machine she loves: “I must be beautiful,” she screams in fits of rage. However, Beauvoir’s beloved robot never looks her way; like other machines in the game, she is driven to insanity by her desires. In a move reminiscent of The Silence of the Lambs, Beauvoir starts killing androids and collecting their corpses as some sort of stylish accessory. Finally, her misery is put to an end by the player.

Again and again, NieR: Automata revolves around the same overarching question: what is it to be human? The game tries to answer this question by using different scenarios, multiple characters, and more than 20 endings that range from the highly philosophical to the whimsical to the monotonous (as evinced by the tedious repetition of some missions). But this fits the game’s message: after all, in the human world, the most noble actions often intertwine with shameful vices, the outright boring, and the superfluous. It’s all part of the experience of being human. “They are an enigma,” one can hear the machine life-form Adam say during a fight with 2B and 9S. “They killed uncountable numbers of their own kind and yet loved in equal measure!” With the help of the robot Eve, Adam embarks on a quest to unravel the riddles of humanity. As they unearth more and more human records, the robots become enthralled by humans’ flaws, especially the biological ones. The robots’ fascination becomes infectious, and it passes on to the androids like a virus. Some of these androids even stop following the orders issued by Command: this is particularly true of A2, a female-looking android who becomes an enemy of other YoRHa units and eventually turns into a playable character in the game.

On the whole, NieR: Automata was released at a propitious time, surrounded by the buzz of technological utopias — and dystopias — which promises to continue for some time. In January 2018, the historian Yuval Noah Harari attended the World Economic Forum’s annual meeting in Davos, Switzerland. He was invited by the world’s richest and most powerful to deliver a talk with the following title: “Will the Future Be Human?” For Harari, the transhumanist dream is inevitable. The machine will transcend humanity and, with time, the binding shackles of the biological will disappear. In the same spirit, the android Sophia will be traveling to Madrid next October to deliver a keynote speech at Transvision, one of the most important transhumanist conferences in the world. One can only wonder if she will go through security with her own Saudi Arabian passport — or maybe travel there shipped in a box.

While technological utopians invite policymakers to embrace their high-tech quasi-religious ideals, the world of entertainment fixates on depicting technical advancements in not-so-distant futures. Blade Runner 2049 and Ghost in the Shell are only part of a broader trend that includes series like Black Mirror (2011), Humans (2015), Westworld (2016), and Altered Carbon (2018). “Sanctify,” a recently released music video from the band Years & Years, portrays a futuristic android city called Palo Santo. All these media products render strongly capitalist worlds where human enhancement and artificial sentient beings are part of everyday life, outlining the next step toward an ever more exacerbated consumerism. Like NieR: Automata, these narratives reflect on the multifaceted experience of being human, but the game exceeds them in scope. It showcases a post-apocalyptic world thousands of years away — perhaps the only way one can imagine the end of capitalism. It portrays a world devoid of flawed humans and removed from the market rationale that would provide technical advantages only to those who can afford them. In doing so, the game carries the transhumanist dream to its final aspiration. It presents a planet Earth fully inhabited by perfect machine life-forms that never really die.

Nevertheless, NieR: Automata tells us that even when death has finally been conquered there is still pain. In fact, the machine life-forms who have lived for thousands of years show us the result of feeling eternal pain. Technology might have removed any imperfection from their lives, but this deliverance has come at a great cost: a loss of meaning and a deep loneliness. What’s the point of fighting an endless war? Even worse, what’s the point of winning it? Should war cease, their existence would not be required in the world anymore. After realizations like these, androids and robots decide to create meaning through connections with one another. As if they were humans, they create friendships and explore love. In doing so, they take on human merits and flaws. NieR: Automata presents an action-packed story of war between robots and androids set in a fictional faraway future, but it is ultimately an account of the beauty in human frailty that contradicts current transhumanist aspirations. To be frail means to be flawed, for sure, but it also means that you can embrace meaningful connections with others around you. Sure enough, as time passes 2B and 9S start doing the forbidden: they create an emotional bond. After hours of gameplay, the player witnesses how the androids find a purpose for their lives within the affective connections they create among themselves. They now have someone worth dying for, even if it means dying forever.

In a critical moment of the game, 2B becomes disconnected from Command’s bunker. It is then, when she can’t upload her memories anymore, that she feels most alive. It is also then, during a set of tense events, that she removes the blindfold covering her eyes — a hallmark of her attire — and freely gives her life to save 9S. But this is not the end. After this incident, the game unveils a new and much grander narrative about finding meaning in your life once you decide to forgo the bidding of your maker. What should you do with a newborn freedom dearly bought by your loved one’s sacrifice? The game ventures on. After all, in the year 11945 no one really dies without a reason. In NieR: Automata, 2B’s death is only the beginning.

¤

Ernesto Oyarbide has a dual “Licenciatura” in Spanish Philology and Journalism from the University of Navarra (Spain). He is presently reading for a PhD in History at the University of Oxford.

The post In the Year 11945 No One Really Dies Without a Reason: On “NieR: Automata” appeared first on Los Angeles Review of Books.


ON NOVEMBER 20, 1962, three months before her death, Sylvia Plath was living in Devon, England, when she wrote to her college benefactress and mentor — Olive Higgins Prouty, a novelist living in the United States — about a second book she was writing. Plath wanted to dedicate it to her:

It is to be called “Doubletake”, meaning that the second look you take at something reveals a deeper, double meaning […] it is semi-autobiographical about a wife whose husband turns out to be a deserter and philanderer although she had thought he was wonderful & perfect. I would like very much, if the book is good enough, to dedicate this novel to you. It seems appropriate that this be “your” novel, since you know against what odds I am writing it and what the subject means to me; I hope to finish it in the New year. Do let me know if you’d let me dedicate it to you. Of course I’d want you to approve of it first!

With her characteristic aplomb, even in times of distress, Plath signed off, “With love, Sylvia.” The novel, however, disappeared.

Winter, 1963 — the season she was about to encounter — turned out to be brutal. It was one of England’s coldest and most relentless winters on record, and Plath struggled against pneumonia, financial strain, and shoddy mental-health treatment, all while raising her two young children, Frieda and Nicholas, alone in the damp countryside after her husband, Ted Hughes, left with his mistress. What is especially striking about the literary goal Plath discussed in her letter is its fictional expression of themes she had been writing about since her undergraduate years — concerns she never quite resolved.

Her undergraduate thesis, which she wrote as a senior at Smith College mostly during the autumn of 1954, is titled “The Magic Mirror: A Study of the Double in Two of Dostoevsky’s Novels.” “The Magic Mirror” explores literary doubles made up of a character’s repressed traits, and, as the double grows in power, it heralds the protagonist’s death. Citing Robert Louis Stevenson’s The Strange Case of Dr. Jekyll and Mr. Hyde as well as Oscar Wilde’s The Picture of Dorian Gray, Plath argued that the choice to create a double works to “reveal hitherto concealed character traits in a radical manner” and simultaneously exposes the driving conflicts of the novel housing that character. Her thesis claims that both Ivan, of The Brothers Karamazov, and Golyadkin, of The Double, have attempted to repress troubling aspects of their personalities, resulting in the double. Plath summons Sigmund Freud and Otto Rank to illustrate the myth of the double: typically, after the double is revealed to its originator, the creator seeks to hide away from it before finally developing a death wish. For Plath, such “desire for oblivion is expressed” in the “tendency to hide in shadows and in back hallways; it develops into a strong wish for death.”

The duality about which Plath wrote in her thesis provides the basis for the personality of Esther, the protagonist of Plath’s only published novel, The Bell Jar (1963). The novel chronicles Esther’s struggle with a mental polarity that ultimately results in her suicide attempt. Plath clearly lifted the descriptions for Esther’s character from Dostoyevsky. For example, Esther steps into an elevator and watches as “[t]he doors folded shut like a noiseless accordion. Then my ears went funny, and I noticed a big, smudgy-eyed Chinese woman staring idiotically into my face.” Then she recognizes herself: “It was only me, of course. I was appalled to see how wrinkled and used up I looked.” The troubling racial distinction — contrasting Esther’s pallor against the darker skin of her double — revised The Brothers Karamazov’s similar racialization. Plath, quoting Dostoyevsky in her thesis, noted that Ivan’s double, Smerdyakov, is “wrinkled” and “yellow.” The distinct differences in appearance between originator and double, she continued, are meant to reflect the protagonist’s mental state and cultural status: “Ivan’s brilliance makes him highly acceptable in society, while Smerdyakov is ‘remarkably unsociable and taciturn.’”

Plath wrote about the tendencies of Dostoyevsky’s characters — especially a desire for concealment and fantasies of death — which Esther also develops as The Bell Jar progresses and her double arrives. The threat of the double is present in subtle forms from the beginning of the novel and only grows stronger as it continues. At the novel’s beginning, for example, Plath hints at Esther’s looming shame and attendant desire for oblivion through the double’s nascent presence. When a photographer comes to the magazine where Esther is an intern, Esther knows she is about to cry. She tries “concealing [herself] in the powder room, but it didn’t work,” because Betsy, a fellow intern, sees her feet underneath the doors. As Esther’s photograph is taken, she feels tears beginning to surface, the photographer urging her: “Come on, give us a smile.” “Conceal,” here, has a double meaning: it gestures forward to the makeup Esther uses to cover her face following her meltdown while also echoing her previous impulse to hide in the powder room (purposed to allow women privacy as they conceal themselves) before the tears surface.

Once Esther’s tears spill out, she is abandoned by both Jay Cee — her editor at the magazine — and by the photographer. When read through the lens of Plath’s thesis, this scene reveals that the double has overtaken her protagonist, as it is associated with antisocial behavior, whereas the creator is just the opposite, attractive to others in both physique and personality. The adults who pursue the original Esther as a minion throughout the novel, for example, attest to her narcissistic lure: “Why did I attract these weird old women?” Esther wonders to herself: “There was the famous poet, and Philomena Guinea, and Jay Cee, and the Christian Scientist lady and lord knows who, and they all wanted to adopt me in some way, and, for the price of their care and influence, have me resemble them.” So, when Esther cries and is abandoned, she tries to hide, searching “in my pocketbook for the gilt compact with the mascara and the mascara brush and the eyeshadow and the three lipsticks and the side mirror” that she received from the magazine.

When she finally sees herself in the mirror, Esther recounts her reaction as if she has been punished for her outburst: “The face that peered back at me seemed to be peering from the grating of a prison cell after a prolonged beating.” The scene smacks of persecution: “It looked bruised and puffy and all the wrong colors.” After Esther pulls out her makeup, she states its intention to purify and conceal the ugly self that stares back at her: “It was a face that needed soap and water and Christian tolerance.” This increasing desire for concealment literalizes the double’s rise to power. “I had been unmasked,” Esther recalls of another fraught interaction with Jay Cee, “and I felt now that all the uncomfortable suspicions I had about myself were coming true, and I couldn’t hide the truth much longer.” Her instinct proves valid, as the double overtakes her before provoking the death wish, a desire that manifests in fantasies of suicide: hanging herself by the cord of her mother’s bathrobe, drowning herself, taking her mother’s sleeping pills. The connection between a desire for anonymity and death is affirmed when Esther finally attempts to kill herself hidden away from the world, in the cellar of her mother’s home.

Imagery of concealment begins to boom louder and louder as the double rises in power, a rhetorical harbinger of Esther’s suicidal urges. When Marco — a tall man with dark hair, a radiant stickpin keeping his tie in place, and a flickering smile that reminds Esther of a snake she saw at the zoo that opened its jaws, as if to smile, before it “struck and struck and struck” at her — tries to rape her, Esther describes how “he threw himself face down as if he would grind his body through me and into the mud.” After he assaults Esther, she stays in “the fringe of the shadows so nobody would notice the grass plastered to my dress and shoes, and with my black stole I covered my shoulders and bare breasts.” The instinct to hide in the shadows and cover herself with dark clothing is coupled with rhetoric of an underground burial, illustrating the inextricable nature of concealment and death.

These images — of being driven into the ground, of death, and of a yearning for disguise — are repeated again after Esther loses her sanity, as the double overcomes the Esther we meet at the novel’s opening. Before her suicide attempt, Esther describes wanting to hide under her mother’s mattress:

I feigned sleep until my mother left for school, but even my eyelids didn’t shut out the light. They hung the raw, red screen of their tiny vessels in front of me like a wound. I crawled between the mattress and the padded bedstead and let the mattress fall across me like a tombstone. It felt dark and safe under there, but the mattress was not heavy enough. It needed about a ton more weight to make me sleep.

Here, Esther embodies Golyadkin’s mental state: as Plath wrote in her thesis, the originator’s “instinct to hide in the dark reiterates” the “desire to be anonymous (therefore irresponsible and detached) and unseen (therefore nonexistent or dead).” The wound from which Esther tries, and fails, to hide chimes with the inescapable, colonizing double, and Plath’s language again illustrates its penal nature: it is inside Esther, but it traps her like a jail cell. That Esther associates the desire to hide from herself with death, indicated in her fantasy of being pressed underneath a “dark” and “safe” tombstone mattress as if she is going into the ground, is not surprising, for the crux of Plath’s thesis is this longing for oblivion, or concealment, that soon turns into a yearning for death.

In her copy of Freud’s “Animism, Magic and the Omnipotence of Thought,” an essay cited in her thesis, Plath underlined the following phrase: “According to the conception of primitive men, a name is an essential part of a personality; if therefore you know the name of a person or a spirit you have acquired a certain power over its bearer.” When Esther’s double surfaces — when her skin is described as yellow, when she enjoys alcohol, when she is wearing black — she tends to create personas for herself: lying about her name, dreaming up and recounting a fictional biography. Of introducing herself as Elly, one of these personalities, Esther recalls: “After that I felt safer. I didn’t want anything I said or did that night to be associated with me and my real name and coming from Boston.”

From her conception of The Bell Jar all the way to its final revisions, Plath suffered an exhausting amount of anxiety over its heroine’s name — as if echoing the line from Freud’s essay that Plath underlined. On April 27, 1961, Plath boasted in a letter to Ann Davidow-Goodman, a friend from Smith, that she was “over one-third through a novel about a college girl building up for and going through a nervous breakdown.” Concerned already at the prospect of being associated with its protagonist, Plath qualified: “I’ll have to publish it under a pseudonym, if I ever get it accepted, because it’s so chock full of real people I’d be sued to death and all my mother’s friends wouldn’t speak to her because they are all taken off.” Indeed, this wasn’t mere paranoia; she did have to change her protagonist’s name at the instruction of her editor for legal reasons. On November 14, 1961, she wrote a long letter to her editor, James Michie, in response to his libel concerns, a painstakingly careful outline of The Bell Jar that distinguishes fact from fiction. The letter’s beginning notes: “Of course you’re right about the name of author and heroine needing to be different.” At one point, Plath named her protagonist Victoria Lucas, her pseudonym in England.

Most novelists likely have concerns about being associated with the characters to whom they give life, especially the ugly ones, and especially when the character resembles its author. Yet what is unique about Plath’s case is her knowledge of the theoretical underpinnings and implications of her choice to push Esther away, and the hold this knowledge assumed on Plath’s work and life. Another look at The Bell Jar with a consideration of Esther as Plath’s double tangles the issue even further, and Plath drops clues for this kind of reading throughout the novel. Esther, for example, sits down to write her own novel and recounts, “My heroine would be myself, only in disguise. She would be called Elaine. Elaine. I counted the letters on my fingers. There were six letters in Esther, too. It seemed a lucky thing.” Not coincidentally, Plath’s first name has six letters as well. Her protagonist’s name was something with which Plath struggled through revisions: in different stages, she was named Elaine and Frieda. The maternal lineage evoked by Plath’s consideration of the name Frieda — the name of Plath’s daughter — is repeated in the final choice of Greenwood for Esther’s last name, which is the same as Plath’s grandmother’s. (Aurelia Grunwald, Plath’s maternal grandmother, immigrated as a teenager to the United States, where her last name was changed to Greenwood.)

Looking at these drafts, it is remarkable to imagine the time and energy spent adjusting the distance between author and heroine in an era before computers, CTRL-F, and the delete key, when each draft had to be retyped. As she continued revising, Plath tore herself away from Esther, due to libel concerns and her anxiety about her mother reading the novel. (In a disturbing twist, a letter dated March 15, 1963 — barely a month after Plath’s death — thanks Ted Hughes “for giving us permission to disclose Victoria Lucas’s real identity. The reprint of THE BELL JAR is already on its way to the shops but we will make an announcement to the trade and the Press.”) Like Esther, Plath associated with her heroine but wanted to disguise this relationship; like the protagonists she wrote about in her thesis, Plath met Esther with an ambivalent flurry of attraction, curiosity, and fear.

A chilling letter from Plath’s editor drives the terror of the double into Plath’s own life. On February 12, 1963, David Machin, her new editor, who took over after Michie left for another publishing house, wrote to Plath asking if he had put down the wrong date for their lunch meeting. It was scheduled for the day before — the date of Plath’s death. The two had never met in person, Plath didn’t show up, and they had been planning the lunch for months. Plath committed suicide on the day she was to meet her editor for the first time in person, a month after the British publication of the novel.

In her thesis, written nearly a decade earlier, as she turned 22 — the year after her first documented suicide attempt — Plath claimed, quoting Otto Rank:

In such situations, where the Double symbolizes the evil or repressed elements in man’s nature, the apparition of the Double “becomes a persecution by it, the repressed material returns in the form of that which represses.” Man’s instinct to avoid or ignore the unpleasant aspects of his character turns into an active terror when he is faced by his Double, which resurrects those very parts of his personality which he sought to escape. The confrontation of the Double in these instances usually results in a duel which ends in insanity or death for the original hero. [italics mine]

Just as Esther Greenwood was born, Sylvia Plath died. Her double lives on at the expense of its author, as if Plath recognized from the very beginning the threat posed by her heroine.

¤

I would like to extend my thanks to Peter K. Steinberg for his counsel and cheer.

¤

Kelly Coyne is a student in Northwestern’s PhD program in film and media studies. Her work has appeared in Literary Hub, Persuasions, and Polygraph.

The post Sylvia Plath’s Magic Mirror appeared first on Los Angeles Review of Books.


This is the third installment in a bi-monthly column that will explore some of the different cultural facets of popular feminism, the #MeToo movement, and the contemporary cultural awareness of sexual harassment in the workplace and in daily life. These essays are not meant to be exhaustive, but rather to point up the ways the current environment is responding to gender dynamics, sex, and power.

¤

IN APRIL 2018, Vanity Fair published a story titled “Matt Lauer Is Planning His Comeback,” which hinted at the fact that the former Today Show anchor, who was fired after multiple women accused him of sexual harassment and assault, had “come out of hiding” and was planning his return to the public spotlight. This is part of a pattern. Over the last few months, it seems that #MeToo stories are slowly being supplanted by a new kind of narrative: the “comeback story” of the powerful male perpetrators of sexual harassment and assault in the entertainment industries. Charlie Rose, Louis C.K., Mario Batali, and others have reportedly been “testing the waters” about their “comebacks.” Perhaps most disturbingly, Charlie Rose is allegedly creating a “#MeToo atonement series,” in which Rose will interview other men who have been accused of sexual harassment.

What does it mean to come back? And where, or what, are these men coming back from? The concept of the “comeback” story has resonance in the United States as a cultural narrative: the gumption of the underdog; the myth of meritocracy, where the talented and gifted win in the end, if only they persevere; Horatio Alger’s “rags-to-riches” tales. But the whole world loves a comeback. Take sports, in which comebacks are the most popular and most bankable narratives, celebrating athletes who overcome hardships and recover from injury. Indeed, the comeback story is a commodity, one that finds a place within the comeback economy. And this economy now threatens to subsume the #MeToo movement.

In the world of media, power, and celebrity, comeback stories are not about physical resilience and grit, or even about the underdog. Rather, they are about reputational and crisis management, a matter of public relations. To be sure, the #MeToo movement has created a dilemma for celebrity publicists; in many of the high-profile cases, such as those of Harvey Weinstein and Louis C.K., the publicity firms for the accused dropped them as clients. As Anne Cohen has pointed out, “[T]he business of crisis management itself is at a crossroads: pre-Weinstein and post-Weinstein.” At the emergence of #MeToo, when the stories from women seemed so relentless, it was no doubt difficult for publicists and agents to figure out what to do with their clients. If they weren’t dropped altogether, the publicists helped create woefully inadequate “sorry not sorry” public apologies, which were widely pilloried by the public.

But now, six months after the Weinstein story broke, the stories are no longer coming forth as relentlessly (or the media is no longer interested in covering them), and it seems that the months-long hiatus is working for publicists and agents. The “complicity machine,” made up of not only assistants but also publicists, seems to be hard at work on what Stassa Edwards calls the “redemption narrative” of the accused. The idea of the comeback is slowly creeping into the media spotlight — though not every man accused gets to “come back” equally. As I was writing this, the Bill Cosby guilty verdict came in, prompting the question of whether the comeback economy is open mostly to white men.

Publicists are very good at being patient, recognizing the short-term memory of the public (especially in the midst of the never-ending catastrophes of the Trump administration), waiting the scandal out. Even when Weinstein was in the thick of accusations, one of his few remaining investors, Paul Tudor Jones, wrote to him about what he would need to do to rehabilitate his image: “Focus on the future as America loves a great comeback story […] The good news is, this will go away sooner than you think and it will be forgotten!” What underlies this is the logic of public relations, the notion that a scandal is an opportunity, a minor setback that anticipates a great comeback.

But what if we focused on a different kind of scandal — not the kind that gives once (and still) powerful men an opportunity, based on wealth and visibility, to make a comeback? As feminist Jacqueline Rose wrote three years before #MeToo, in the midst of increasing sexual violence across the globe, “We need a scandalous feminism, one that embraces without inhibition the most painful, outrageous aspects of the human heart, giving them their place at the very core of the world that feminism wants to create.”

A scandalous feminism would challenge the accepted notion of the comeback story. What about comeback stories that begin with those who were victimized, not those who did the victimizing? We hear precious few redemption narratives of the survivors of sexual harassment and assault. There is a mediated space for the victim of #MeToo, but the comeback of that kind of victim is not as visible. Many, if not most, of the women who have been harassed and assaulted have no publicists to manage their reputations. Indeed, thousands of these women lack the economic security or visibility that would afford the luxury of coming forward, much less of coming back.

Surely the popular and spectacular feminisms that circulate in the media, such as #MeToo, afford an important kind of visibility. Yet even this visibility is often hemmed in by the economic imperatives of the entertainment industries. We are indeed in need of a more scandalous feminism — one that is bold, unapologetic, and leaves it all on the table. We need a feminism that makes people uncomfortable, that is painful, and perhaps even self-reflective (Michelle Wolf’s White House Correspondents Dinner speech comes to mind). The #MeToo movement has the potential to unleash this kind of scandalous feminism. The women coming forward with revelations of sexual harassment and assault have witnessed, for the most part, an embrace of these most “painful, outrageous aspects of the human heart.” The act of believing these women as they tell their painful stories is at the core of a feminist world. It is their kind of scandal I want to give my attention to, the feminist scandal that cannot be easily managed, that conflicts with what publicity agents seek to manage and control.

¤

Sarah Banet-Weiser is professor and director of the Annenberg School for Communication at USC. Some of the themes captured in this column are explored further in a forthcoming book, Empowered: Popular Feminism and Popular Misogyny (Duke University Press, 2018).

The post Popular Feminism: The Scandal of the Comeback Story appeared first on Los Angeles Review of Books.


THE PAST YEAR has been a rough one for conservation. Since last January, the Trump administration has handed the Environmental Protection Agency over to its avowed enemies, brushed aside the United States’s commitments to the fight against climate change, and announced an unprecedented rollback of federal wilderness protections. But as bad as these attacks were, a smaller-scale salvo that arrived in their wake was, in some ways, much more stinging. It came from behind our own lines.

Writing in the Washington Post in late November, biologist R. Alexander Pyron declared that efforts to prevent the extinction of endangered species are a sentimental waste of effort:

Species constantly go extinct, and every species that is alive today will one day follow suit. There is no such thing as an “endangered species,” except for all species. The only reason we should conserve biodiversity is for ourselves, to create a stable future for human beings. […] Conserving a species we have helped to kill off, but on which we are not directly dependent, serves to discharge our own guilt, but little else.

Telling a biologist that “extinction is natural” is like pointing out to a climatologist that the Earth has gone through periods of warming in the past, or explaining to a physician that smokers will die whether or not they quit — narrowly accurate, but ignorant of the scale and pace of the damage in question. Pyron’s colleagues in ecology, evolutionary biology, and conservation science were, predictably, aghast. Science Twitter erupted. Biologists took to every available outlet to refute the piece. More than 3,000 scientists (including your humble correspondent) co-signed a response letter to the Post stating, bluntly, that Pyron’s position was “at odds with scientific facts and our moral responsibility.” Pyron himself seemed surprised and dismayed by the response, and he disavowed most of his own op-ed in a statement he posted to the front page of his professional website: “In the brief space of 1,900 words, I failed to make my views sufficiently clear and coherent, and succumbed to a temptation to sensationalize parts of my argument.”

Whether or not 1,900 words is insufficient to express his views, a generous read of Pyron’s essay might find in it an attempt to grapple with the fundamental problem facing people who study the diversity of living things in this era: although we have unprecedented tools to identify, describe, and catalog them, plant and animal species are losing ground to humans at an alarming rate. As often as not, the formal description of a new species is immediately followed by its designation as “endangered.”

The issue is broader than the danger of extinction. Even species considered relatively secure have seen sharp declines in abundance since the beginning of the 20th century, and many others will likely be reduced to precarity as changing climates render their habitats inhospitable. The entomologist Alex Wild, an expert in one of the most diverse groups of animal species, has said that “being a naturalist in the 21st century is like being an art enthusiast in a world where an art museum burns to the ground every year.” Faced with the scale of the problem, the temptation to triage — to define achievable, if painfully pessimistic, conservation goals — is understandable.

¤

The Plant Messiah, a scientific memoir by the botanist Carlos Magdalena, is a resounding rejection of that temptation. Magdalena works at Kew Gardens, the world-renowned English botanical institute, and he has built a career coaxing hope for endangered plant species from tiny samples of seeds or parsimonious cuttings. Kew stewards an enormous living collection of plant diversity in its greenhouses and gardens — and its even more extensive seed stocks. Magdalena splits his time between traveling the globe to identify and collect rare plants for Kew’s collections, painstakingly propagating them, and working with local partners worldwide to reestablish and protect endangered plants in their native habitats.

Magdalena grew up in northern Spain, where he became fascinated with the living world by working on his family’s finca, a tract of forest and bog in the mountains outside of town where they kept a cottage and small farm. After a lackluster experience with structured education in school, he worked short-term conservation jobs and did stints in pubs, restaurants, and landscaping until he found his way to Kew Gardens and fell immediately in love. He talked his way into an internship and then an entry-level position in plant propagation, enrolled in the Gardens’ rigorous Diploma in Horticulture, and went on to become permanent staff.

One of Magdalena’s first projects at Kew involved the café marron, Ramosmania rodriguesii. Native to Rodrigues — an island in the same Indian Ocean archipelago as Mauritius, the former home of the dodo — café marron is a close relative of the coffee tree. It was thought to have been lost to the destruction of Rodrigues’s native forests for farmland, until a schoolboy rediscovered a single shrub in 1980. Kew’s horticulturalists acquired a handful of cuttings from this sole survivor, got one to take root, and propagated a small population by dint of careful cutting and rerooting — but though these captive cafés marrons flowered profusely, none would set the seed needed to revive a wild population, even when pollinated by hand.

Magdalena suspected self-incompatibility. In most flowering plants, a pollen grain alighting on the receptive surface of the stigma, at the very tip of the pistil, must grow a root-like tube down into the length of the pistil to convey genetic material to an ovule, with which it fuses to produce an embryonic plant and the supporting and protective tissues of a seed. In self-incompatible species, a plant’s own pollen will fail to take “root” in the stigma. Magdalena bypassed this response by slicing off the stigma, then applying pollen directly to the wounded tip of the pistil. Over hundreds of such surgical pollinations, on plants kept in different temperature and light conditions, he zeroed in on a protocol to produce viable café marron seeds. From these, Magdalena reared seedlings for “repatriation” to Rodrigues.

Much of The Plant Messiah is pretty well summed up as “James Herriot, but for ultra-rare plants” — a string of stories from Magdalena’s travels to collect plants, teach plant propagation techniques, and promote conservation. In one chapter, Magdalena arrives late at night in a Bolivian village, exhausted and dirty, only to be dragged from a cold shower to demonstrate grafting methods for an eager class hopped up on coca leaves. In another, he loses half of a hard-won supply of seeds from the last surviving Hyophorbe amaricaulis palm to a lab staffer who happens on them in an unsecured refrigerator while looking for a snack.

The punch lines to these stories are sometimes more tragic than funny. Late in the book, Magdalena sets out to raise the tiny waterlily Nymphaea thermarum, which has been found only in waters warmed by a single Rwandan hot spring. Magdalena works his way through his supply of seeds to determine that the young plants need higher than normal concentrations of carbon dioxide to survive to flowering — and only then, when he has a working protocol and a healthy captive population of the little waterlilies, does he discover that their home hot spring has been drained, and the species is extinct in the wild.

¤

Magdalena responds to the logic of biodiversity triage on virtually every page of the book. Much of his argument is the kind of thing R. Alexander Pyron dismissed as sentimentality — Magdalena loves plants and takes their losses personally. “I will not tolerate extinction,” he declares, point-blank, in an early chapter. The Plant Messiah’s storytelling structure and loving descriptions of rare plants are an unabashed appeal to emotion, attempting to light the same passion for the living world in Magdalena’s readers. But under the bubbling enthusiasm there is one rock-solid fact: we don’t know which species we can spare. As Magdalena writes,

We still know so little about what they are capable of. It is like finding a library where the books are written in Chinese, then taking someone to visit who can read only English and Spanish to decide which books are relevant. Or perhaps going into that library and burning the books based on whether you like the cover or not.

The world’s plants (and other living things) are a repository of evolution’s mechanical, material, and biochemical innovations. A rare plant may hold the key to the next invention as universally useful as Velcro, or a molecule to cure human disease, or an adaptation to drought that can be bred into crops. This is, however, not quite an argument for restoring near-extinct species in the wild — the world’s plant diversity can, in principle, be saved in seedbanks and botanic gardens. By the time a plant is as vanishingly rare as café marron or Nymphaea thermarum, its contributions to the living community in which it grows are proportionally tiny. Restoring the plants of Rodrigues means not just planting a bunch of café marron, but also rescuing many other species and clearing out a myriad of introduced invaders that have overrun the island.

Kew assists with just such projects, and when Magdalena exhorts his readers to become “plant messiahs” in their own right, he suggests they join local conservation societies, plant rare native species at home, and campaign against climate change and deforestation — workaday efforts that lack the glamour of the near-resurrections he performs in the greenhouse. But if they differ qualitatively, they also differ quantitatively. Global collective action is what will stem the tide of extinction, not a talent, even a miraculous one, for saving individual species from the brink.

Magdalena attempts, at the start, to deflate his own title by quoting the mother of an inadvertent prophet in Monty Python’s Life of Brian: “He’s not the Messiah, he’s a very naughty boy!” Even so, The Plant Messiah aims to ignite a movement. The species Magdalena rescues may not be significant building blocks in the larger project of putting the planet’s living communities back together, but they can be mascots, symbols to focus and motivate the broader, more difficult work.

A messiah doesn’t serve only, or even primarily, as a single-person source of salvation. A messiah is also an inspiration and a model. The plant messiah’s gospel is simple: we may not be able to save every species from extinction, but that doesn’t mean we shouldn’t try.

¤

Jeremy B. Yoder is an assistant professor of biology at California State University, Northridge. His writing has appeared in Scientific American, The Awl, and McSweeney’s Internet Tendency. He also edits The Molecular Ecologist.

The post Against Ecological Triage appeared first on Los Angeles Review of Books.


Wim Wenders, one of cinema’s greatest living directors, drops by to tell co-hosts Medaya Ocher and Kate Wolf the astonishing story behind his new documentary Pope Francis: A Man of His Word. Given unprecedented access, Wenders witnessed the Pope walk the walk in the footsteps of his namesake, St. Francis of Assisi, his revolutionary statements on the environment and the economy flowing from his genuine love for nature and compassion for every individual. Across the interview, Wenders himself reveals a generosity of spirit not unlike that of his latest leading man.

The post Wim Wenders on Pope Francis: The Man and His Words appeared first on Los Angeles Review of Books.


MOVIEGOERS COULD NOT HELP but have noticed the spate of popular films dealing with religious themes and myths released in recent decades. From Godard’s Hail Mary (1985) to Scorsese’s The Last Temptation of Christ (1988) to Mel Gibson’s The Passion of the Christ (2004), Darren Aronofsky’s Noah (2014), and Garth Davis’s Mary Magdalene (2018), both Hollywood and European arthouse directors have shown an interest in (typically) Christian religious narratives, continuing the long cinematic tradition of Christ narratives (“the greatest story ever told”). Add to this the ongoing fascination with the supernatural and the occult evident in recent horror films — like James Wan’s two Conjuring films (2013 and 2016) or Robert Eggers’s The Witch (2015) — and it becomes clear that the time is ripe for revisiting one of the seminal texts on the topic, S. Brent Plate’s Religion and Film: Cinema and the Re-Creation of the World (first published in the Wallflower Press Short Cuts Series, 2003). One of the premier scholars in the field, Plate deftly combined a thorough grounding in religious studies with expertise in film theory, providing an illuminating and engaging text that has enabled readers, both academic and religious, to explore the intersection between cinema and religion. Given the history of suspicion between religion and film, it is welcome to see this field not only gaining recognition but also offering new ways of thinking through the meaning and role of religion in contemporary cultural and political debates.

As Plate observes, although religion has long been a concern of the movies, the scholarly study of cinema and religion is relatively young. A glance over the history of film theory suggests that there have been roughly three waves of research focusing on the relationship between religion and film. The first wave, dating from the 1960s to the 1980s, explored theological, metaphysical, and existentialist themes in explicitly religious films, usually within the European modernist tradition (Dreyer, Bresson, Bergman, and so on) from a broadly humanist perspective. The second, from the late 1980s and 1990s, rejected the focus on arthouse cinema and turned instead to popular cinema, ranging from explicitly religious (Christological) retellings to more implicit explorations of faith or belief. Finally, the third wave, gaining popularity over the last two decades, eschews thematic, auteur- or narrative-based approaches in favor of cultural analogies between cinema and religion, focusing specifically on audience reception of films.

Plate’s book fits neatly into the third wave, taking a broadly sociological and cultural-studies approach to the exploration of religion and film, drawing on earlier approaches, but also extending these to articulate an expanded sense of the religious. Indeed, Religion and Film is not really concerned with theological motifs, the contemporary significance of the three world religions, or the rise of New Age forms of spirituality. Rather, it compares cinema and religion as ways of constructing and presenting worlds that we can temporarily inhabit, that provide new ways of experiencing and understanding our own (mundane) sense of reality. Plate focuses on the role of both religion and cinema in practices of community formation, the generation of meaning through myth and ritual, and the creation of a sacred space that contrasts with the everyday world. Religion, in this view, refers to any cultural practice capable of cultivating our sense of living in a meaningful cosmos. Such an approach enables a rich broadening of how we might understand “the religious” and helps us to appreciate what Plate argues are the striking affinities between cinema and religion.

Ways of Worldmaking

Plate’s central idea for the analogy between cinema and religion is that of world, or, more specifically, worldmaking. Cinema and religion are analogous ways of composing worlds through symbolic representation and ritualized practices. They both select and frame aspects of social reality in ways that are meaningful — providing communal forms of experience, focusing our attention, and drawing us into an alternative world in light of which our ordinary universe can appear as transfigured or transformed. Plate draws here on the work of other theorists, such as sociologist of religion Peter Berger’s Sacred Canopy (1967). For Berger, human communities create symbolic worlds to provide a sense of order and stability, staving off the threat of “cosmic chaos” through religion, which he describes as a “sacred canopy” providing shelter, meaning, and purpose. This enriched sense of world, however, also needs to be replenished or “re-created,” to use Plate’s term, in order to provide communities with a dynamic, renewable sense of place and purpose in both communal and cosmic senses.

Plate also draws on the work of American philosopher Nelson Goodman, in particular his concept of art and culture as “ways of worldmaking.” Human beings gain knowledge, according to Goodman, by constructing meaningful worlds via symbolic representations and processes of selection, synthesis, and comparison. Art is best defined, he claims, as a practice of “worldmaking” that composes “versions” of symbolic worlds using different media. In this respect, cinema can be understood as a practice of worldmaking that brings about symbolic works using audiovisual images, montage, and post-production techniques. Plate applies this idea to both cinema and religion, arguing by analogy that cinema and religion are ways of worldmaking that not only share many common features, but also mutually illuminate and influence each other.

This might seem surprising to readers, who may assume that popular Hollywood movies have little in common with the rituals of the church, mosque, or synagogue. As Plate argues, however, we gain much by recognizing how both religion and cinema construct symbolic worlds that shape our self-understanding, as well as our sense of place in both natural and cultural universes. Both involve the selection, framing, and organization of a meaningful world, and both require symbols, myths, and ritualized practice for these worlds to be rendered and recreated. Indeed, myths and rituals, for Plate, operate remarkably like films: “they utilize techniques of framing, thus including some themes, objects, and events while excluding others, and they serve to focus the participants’ attention in ways that invite humans into their worlds to become participants.” Both religion and cinema draw on materials already available to us culturally, but synthesize and recreate new worlds through symbol, ritual, and myth to create a sense of communal identity, participation, and belonging.

Audiovisual Mythmaking

Plate’s engaging inquiry commences with the important observation that cinema is our premier form of cultural mythmaking, a ubiquitous way of engaging with our treasure trove of mythic narratives. He draws attention, moreover, to the fact that myths are not simply written or spoken tales but can be multisensorial narrative experiences. Tales of origins, heroic quests, the search for identity, and binding moral, cultural, and religious narratives are richly represented in film, which uses all resources at its disposal to create an immersive sense of world within which these mythic tales unfold. Movies use primarily nonverbal means — image, sound, music, and composition (mise-en-scène) — to construct cinematic worlds that aesthetically convey this kind of mythic and symbolic meaning. Films like Star Wars (1977) and The Matrix (1999) provide convincing examples of how cinema engages in an eclectic mixing of “cosmogonies and hero myths in multiple ways, generating brand new mythologies for the twenty-first century.” It is not just their reworking of myths — the manner in which they create inhabitable worlds makes movies mythological. Star Wars’s mythic tropes of Luke Skywalker’s hero’s journey, and the Dao-like opposing energies of “the Force,” The Matrix’s references to “Zion” as a longed-for place of return from exile, with Morpheus playing the role of “pagan Lord of the Dreamworld,” all attest to the mythic richness of these films.

At the same time, Plate points to the intertwining of myth and ideology in popular cinema. The Matrix, for example, still adverts to the Hollywood myth centered on the formation of the white heterosexual couple (Neo and Trinity) coupled with a white savior myth (Neo as the One) that trumps its more alternative cultural-mythic elements. Despite its imbrication with ideology, film, like myth more generally, is an inherently eclectic cultural form, which becomes readily apparent in cinematic adaptations of religious myths. Mel Gibson’s The Passion of the Christ, for example, is a multi-mediated mythic mash-up par excellence. As Plate remarks, it draws on the following influences:

[A] millennium’s worth of Passion plays, the Stations of the Cross, the writings of nineteenth-century (anti-Semitic and possibly insane) mystic Anne Catherine Emmerich (channeled through Clemens Brentano), Renaissance and Baroque paintings (especially from Rembrandt and Caravaggio), the New Testament gospels, some brief historical scholarship, and a century’s worth of “Jesus films” (from early films on the life and passion of Jesus to Sidney Olcott’s From the Manger to the Cross [1911] to Nicholas Ray’s King of Kings [1961] and Martin Scorsese’s The Last Temptation of Christ [1988]).

Stylistically, the film also draws on the horror genre (the opening scenes referencing Wes Craven and John Carpenter), and its graphic depiction of violence and suffering is legendary. This only underlines the syncretic nature of cinematic mythmaking, which recreates the world via audiovisual means, engaging our senses and emotions as much as our memories and intellects.

Plate then turns to the relationship between rituals and film, exploring how “ritual’s forms and functions tell us [something important] about the ways films are created,” and examining how filmmaking can tell us something about “the aesthetic impulses behind rituals.” Here the focus is on the ways that camera movements, the use of color and light, and specific patterns of montage can create distinctive worlds through the ritualized composition of space and time. The opening sequence of David Lynch’s Blue Velvet (1986), for example, creates a contrasting sense of world through camera movement, color, and mise-en-scène: “cosmos above, chaos below.” The revelation of this cinematic world is itself a kind of cosmogonic act, revealing this “mythic” small American town as superficially quiet, peaceful, and orderly on the surface but seething with chaotic primeval life, malevolent forces rumbling in its darker depths. Cinematography and editing help create a sense of world with distinctive features — like Blue Velvet’s dazzling primary colors, slow tracking shots of posed characters, contrasting with the disturbing sounds and murky visuals suggesting darker, ancient forces — that are carefully composed and ordered in a ritual-like manner. Cinematic composition — including framing, camera movements, light and color, sound and music — creates an inhabitable world replete with mythic and symbolic meaning.

The screen and movie theater, like the altar and place of worship, create a portal to another world; the aesthetic experience of this movie-world creates a “sacred space” in contrast to the everyday world, an experience of immersion “allowing people to interact with the alternative world, enacting the myths that help establish those world structures.” Examining films as diverse as Lasse Hallström’s Chocolat (2000), Marleen Gorris’s Antonia’s Line (1995), Dziga Vertov’s avant-garde classic Man with a Movie Camera (1929), and Ron Fricke’s environmental cine-symphony Baraka (1992), Plate elaborates the implicit parallels between the creation of an aesthetic world through cinematic composition and the creation of a sacred space through religious rituals. The composition of cinematic space, especially the symbolic connotations of vertical (transcendence) versus horizontal movement (immanence), contributes to the creation of a complex, deeply human world replete with meaning. He elaborates this claim through focused film examples, such as the futuristic dystopia of Fritz Lang’s Metropolis (1927) — with its architectural heights and slum-like depths reflecting the clash of class, technology, and alienated humanity — or the horizontal lines of everyday, small-town pilgrimage, the quietly meditative and surprisingly moral “slow” road movie that is Lynch’s The Straight Story (1999).

Religious Cinematics

In Part II of the book, Plate turns to what he calls “religious cinematics.” By this he means the manner in which film elicits an immersive experience, a bodily form of engagement through a “formalized liturgy of symbolic sensations”; one that can cause us to “shudder or sob, laugh or leap,” encouraging the body “to believe, and also to doubt,” especially in relation to images of death, pain, and suffering. Body genres such as horror provide exemplary cases of this kind of experience. Plate focuses on William Friedkin’s The Exorcist (1973), whose content, themes, and style are clearly germane to the exploration of religious cinematics. It is not simply narrative that explains the power of horror; rather, it is the physical-emotional reactions — our visceral, affective, and corporeal responses — that generate the powerful “non-rational” experiences that Plate links with religious responses to pain and suffering. Drawing on Merleau-Ponty’s phenomenology, Plate presents a thoroughly corporeal account of our responses to horror, which are grounded in bodily perceptual belief and corporeal responsiveness toward what we are seeing on screen. It is not the plot of The Exorcist that generated the global phenomenon of fear and distress in audiences, but rather the physical-emotional responses to it, “The ways cinematic bodies were moved by the film” — not just its shocking, visceral images, but also its innovative soundtrack, which famously included “the sounds of pigs being driven to slaughter for the noise of the demons being exorcised” and “the voice of the devil coming out of Regan’s mouth.”

Plate also extends his inquiry from fictional horror to real-world engagements with death. He moves deftly from cinema’s fascination with both preserving life and overcoming death through visual representation to those rare attempts in avant-garde film and documentary film to present death, the dead body, on screen. The most notable example here is Stan Brakhage’s confronting silent documentation of medical autopsies in The Act of Seeing with One’s Own Eyes (1971). Brakhage’s attempt to symbolize death through cinematic presentation remains powerful and provocative, especially when presented as an attempt to use (literal-medical) “defacement” as part of a cinematic technique to “recreate the world” — to reveal the sacred at the heart of the Western clinical and scientific treatment of the body as corpse.

The importance of the face and the close-up in cinema is well known and offers one of the most distinctive elements explaining the emotional power of movies. Plate draws here on evolutionary biology and cognitivist theories to support his claim that facial expression is key not only to social relationships but also to exploring the boundaries between self and other. Studies of the face in visual images across religious traditions point to “the power of frontality in images and icons”; how faces look back at viewers and thereby “establish a relationship between deities and devotees” is also evident in film. The iconoclastic ban on representations of divinity also found expression in popular cinema, with the face of Christ being avoided in Hollywood film during the Production Code era — in Quo Vadis (1951) and Ben-Hur (1959), for example — appearing again only with Nicholas Ray’s blue-eyed Jesus (Jeffrey Hunter) in King of Kings (1961). The “face-to-face” encounter, whether in dramatic conflict or erotic exchange, is a powerful emotional element of cinematic world-creation. It shapes our sense of the world, coloring it with emotion and feeling, not only in regard to romantic love, but also spiritual or divine love (as evident, for instance, in Terrence Malick’s recent films). Emotional contagion effects (mirroring the emotional expressions of others) and nonverbal communication (expressing emotion physically in ways that resist verbal articulation) are powerful ways of binding audience and screen, opening up the possibility of an emotional and imaginative transfer between the world of the film and that of the viewer.

Cinematic Ethics

Drawing on Emmanuel Levinas, Plate also emphasizes the ethical import of the “face-to-face encounter.” It is the face that defines the cinematic ethics at the heart of religious cinematics, with its rich solicitation of the “emotional-based activity of empathy.” For Plate, cinema offers the possibility of an aesthetic encounter with the face of an Other, one that opens up a space of ethical experience: a cultural, religious, and sensuous encounter eliciting affinities and empathies that may have the power to transform us morally. Cinematic ethics means that cinema has the potential to move us toward a more ethical mode of being — from a self-regarding to an other-oriented attitude toward our world. Cognitive psychology too suggests that exposure to images of others — faces, bodies, and worlds outside our own familiar spheres — can expand our perceptual and ethical horizons, enabling us to “learn to see differently.” In this way, an ethical form of religious cinematics becomes possible, a cinematic “mindfulness” or “spiritual-sensual discipline, a ritualized form of viewing that stimulates connections between the world on-screen and on the streets.” Here the relationship between religion and cinema becomes intimate and profound as an experience of cinematic ethics that offers us “the possibility for aesthetic, ethical, and religious re-creation.”

This experience of exchange is manifest in the ritualized ways that audiences interact with films beyond the movie theater and in ways that form communities of like-minded souls. Cult films, movie fandom, and the use of movie references, characters, and costumes in all manner of cultural activities — from tourism to weddings — suggest that the worldmaking expressed on screen readily translates into the re-creation of the everyday world. From Rocky Horror Picture Show (1975) screenings and tourist pilgrimages to the Hobbiton Movie Set (near Matamata in New Zealand, where much of the Lord of the Rings trilogy was shot) to reenactments of epic journeys visiting sites depicted in films such as Into the Wild (2007), the parallels between the practices of ritualized mythmaking in religion and cinema become striking and compelling. As Plate remarks, the footprints of movies are left in a multitude of cultural sites, social spaces, political discourses, wilderness areas, and religious forms of consciousness throughout the world. Cinema and religion are revealed as kindred ways of worldmaking with much more in common than we might have thought.

A Religious Art?

Plate’s emphasis in this second edition of Religion and Film on audience reception also expands our sense of the manner in which we can think of cinema as akin to a “religious” form of cultural practice and shared experience. For all its secular compatibility, however, this illuminating analogy does raise some intriguing questions. Is it enough to say that any cultural practice of shared engagement with a meaningful work qualifies both the film and the engagement as “religious”? Sport would certainly qualify as religious on this account, as would forms of popular music and other kinds of collective cultural activity. As with any argument from analogy, for every parallel there are also corresponding disanalogies that should be borne in mind. To list a few, cinema need not have any relationship with theology, spirituality, or faith, whereas it is hard to think of religion without these features. Cinema is consumed as “entertainment” in industrial-commercial contexts of mass consumption — and in increasingly “personalized” platforms such as online streaming or handheld digital devices — whereas these aspects of mass entertainment seem at odds with what is conventionally understood as religious worship. The “aesthetic” aspect of religious devotion and worship, not to mention religious art and architecture, is intended to attune and transport the recipient toward an experience of the divine, whereas in cinematic experience no such transcendence is (typically) intended or even desirable (the tension between religious devotees and movie fans concerning “immoral” depictions of violence, sexuality, or blasphemy is a case in point).

On the other hand, there has been a notable upswing in the exploration of explicitly religious themes in recent popular and art cinema — surely, a worthy topic for reflection when it comes to the kinship between religion and film. Plate’s illuminating contextual, audience reception approach, although expanding our conception of both religion and cinema, does divert attention away from narrative “content.” This “content” is what many contemporary religious films have brought to the fore, particularly those exploring the nexus between religion, culture, and politics (one need only think of recent films dealing with Christian theology, religious cults, or with the question of Islamic fundamentalism and Western geopolitics).

Religion and Film is a fascinating and impressive text, both engaging and illuminating. It opens up new ways of thinking for the uninitiated as well as providing thought-provoking theses for the more expert reader. And it makes the otherwise confusing relationship between religion and film perspicuous and persuasive in ways that few academic studies have been able to achieve. It does raise the question, however, of whether certain films, like other forms of religious art, could prompt or elicit religious experience: is a “conversion cinema” possible today? Or do the spheres of the aesthetic and the ethical, as Kierkegaard suggests, lead us to the threshold of the religious without presenting it directly as such (since it is an object of faith rather than of representation)? This would press the idea of cinematic worldmaking to another level of (philosophical and religious) reflection, one that might open up the possibility of talking more freely about film as a religious art.

¤

Robert Sinnerbrink is associate professor of Philosophy and ARC Future Fellow in Philosophy at Macquarie University, Australia.

The post Religion Goes to the Movies appeared first on Los Angeles Review of Books.


THE IDEA FOR the Na’vi, the made-up humanoid species indigenous to the planet Pandora in James Cameron’s 2009 primitivist blockbuster Avatar, came to the director via his mother. In the 1970s, she told her son about a dream of a 12-foot blue woman, and this formed the basis for the brief he would deliver to his designers 30 years later. The Na’vi were to be blue, tall, muscular, sleek, and feline. They had to be alien enough to be plausibly otherworldly, but take a form, the concept artist Jordu Schell recalled Cameron stipulating, that “the audience has to want to fuck.” Among other things, this meant that the film’s female Na’vi lead, Neytiri, had “to have tits” even though, Cameron freely admitted to Playboy, the Na’vi are not placental mammals. As he developed prototypes, Schell pinned up pictures of “beautiful ethnic women” to ensure that his feline aliens would reflect the ids of the teenage boys who made up the film’s key demographics.

The plot of the film follows the tested formula of primitivist transformation. A man of civilization, in this case the paraplegic US marine Jake Sully, is sent to colonize the primitive lands beyond civilization’s perimeter, only to “go primitive” himself after learning of the natives’ innocent beauty and recognizing the barbarism of his own destructive civilization. It’s a structure that underlies other blockbusters like Dances with Wolves, its sci-fi equivalents, and numerous journey-into-the-interior classics (especially the work of Joseph Conrad). Eros is built into this formula. Coition marks the point at which the civilized man gives himself over to the primitive tribe and discovers, or recovers, his primitive self. Primitivist utopias, in short, are fuckable utopias.

Avatar played with this formula by having the mind of its primitivist hero transmuted into a Na’vi body — the “avatar” of the title. After he has been initiated into the tribe, consummated his relationship with Neytiri, and successfully defended the Na’vi against the human colonizers, Jake abandons his human form and bonds himself permanently to his avatar. Combined with the film’s pioneering use of stereoscopic 3D, Avatar gave new gloss to an old idea: that humans disaffected by urban civilization will recover their authentic selves by reuniting with nature. The idea is also autoerotic. It suggests that we desire the self from which we have become separated. James Cameron knew this just as well as the Neolithic Middle Eastern mythologizers who enshrined a naked couple living in a garden of untamed abundance at the center of their creation story. Evidently, the formula works: Abrahamic religions dominate the world, and Avatar remains the highest-grossing film ever.

The first of four sequels to Avatar is in production. Each will break new records for production costs and will appear amid what, a decade later, we can now recognize as a resurgence of primitivism in popular culture and radical politics. This has washed into the general consciousness largely in the form of nutrition fads and life hacks. There is, of course, the ubiquitous paleo diet, which emulates the carnivorous dietary intake of Paleolithic humans on the basis that our DNA evolved to support this form of life. This joins a plethora of other kinds of “nutritional primitivism.” There is also the fashion for running without shoes and other shortcuts to attaining the physical advantages attributed to non-sedentary forms of life; and then the frequently reported experiments with psychedelic spiritual remedies, living off the grid, and embedding with societies labeled “hunter-gatherer.” Social media, in the meantime, has enabled radicals dedicated to anti-civilizational ideology to band together and disseminate practical advice on returning to nature or even becoming a hermit. Underlying all these trends is the promise of a truer, more natural self — a self that modern life has compromised.

Among utopian ideas, primitivism is distinctive for its reverse teleology. Marx’s communist society or the techno-utopias of Silicon Valley are premised on transcendence. When workers own the factories or robots do the menial labor, humans will be free to pursue their inmost desires. For primitivists, humans have previously achieved this state, and our urgent project is to restore it. We are to move forward into our past; or, equally, backward into our future. Primitivists thus spend a lot of time seeking out and heralding the evidence of the societies which they suppose lived (or live) in this state of grace. These might be lodged in religious mythology, the archaeological record beneath our feet, or some notional society beyond the frontier of civilization that hasn’t made the same grave errors that we have.

Primitivists are therefore prone to render theory and speculation as fact. This spans the religious dogmatists who insist on the real historical existence of Eden to the hard-core paleo nutritionists who hang their notions of health and vigor on DNA evidence of Paleolithic peoples. Such appeals to fact are a distraction, though, for primitivism is always an imaginative act. In the period when the European empires were expanding, metropolitan radicals imagined that the “savages” from regions that they had yet to colonize were the truly “noble” ones. The images they produced to imagine these societies typically represented them in the guise of a romanticized Greek antiquity. As this European social and economic system reached global saturation, the frontier between civilized and primitive shifted permanently into a chronological mode; or, as with films like Avatar, it was pushed outward to distant galaxies. Like the capitalism that fuels it, the basic law of primitivist idealism is its constant expansion.

In truth, primitivism doesn’t tell us anything meaningful about the forms of life that are idealized as being “primitive.” Our distant ancestors may have eaten a lot of protein, but it is dissatisfaction with the life centered on a grain-based diet that gives rise to the judgment that paleo diets are more natural. Put another way, primitivism is a manifestation of civilizational self-hatred. It is a creative pathology that makes lurid visions from the evidence of a self that we are convinced that we have lost, but which is nevertheless our inmost essential being. So why is primitivism again gaining traction? How does the hatred of civilization express itself in our time? In a world seemingly saturated by “civilization,” who now are the civilized and who the primitives?

¤

At around the time that the Chernobyl nuclear reactor exploded in Northern Ukraine in 1986, Christopher Knight pulled his car off to the side of the road in his native Maine, threw the keys away, and walked into the woods. As far as anyone knew, the reserved 20-year-old had either killed himself or started a new life elsewhere. Five years ago, however, Knight was caught stealing provisions from a holiday camp just miles from his family home. In the intervening 27 years, he had lived in utter solitude in a makeshift house of tarpaulin and old magazines constructed in a clearing between boulders.

It was one of those stories that everyone paid attention to for a day and soon forgot about. Not the journalist Michael Finkel. He initiated a correspondence with Knight as the hermit awaited trial. Finkel later visited him, uninvited, in jail. He wrote a lengthy article about Knight for GQ magazine, which he later expanded into a best-selling book, The Stranger in the Woods: The Extraordinary Story of the Last True Hermit (2017). As the title indicates, Finkel is keen to cast Knight’s solitude as a world-historical feat. Knight was no less than “the most solitary known person in all of human history,” his capture, “the human equivalent of netting a giant squid.”

The Stranger in the Woods is as much a morality tale about the civilization that Knight turned his back on as it is a curiosity story; in it, we glimpse the motions of contemporary primitivism. Knight’s remarkable act of solitude is set within the narrative casing of Finkel’s dogged pursuit of him and determination to turn him into an exemplar. Along with his letter of introduction, Finkel included an article he had published with National Geographic. It recounts a fortnight he spent living with a community of Hadza hunter-gatherers in the Rift Valley in Tanzania.

“Our genus,” Finkel explains in The Stranger in the Woods, “all lived like Onwas, in small bands of nomadic hunter-gatherers.” Although this meant living perpetually in the company of others, Finkel nevertheless wants to make the connection to Knight’s hermithood. Like hermits, hunter-gatherers “spent significant parts of their lives surrounded by quiet, either alone or with a few others […] This is who we truly are.” In turning his back on civilization, so the narrative logic goes, Knight was attempting to recover his true humanity. The book ends with a brief account of a 2007 story about the “last survivor of an Amazon tribe.” For 20 years this man had been persisting alone in his accustomed form of life. With Knight returned to civilization, Finkel closes, this man now “may be the most isolated person in the world.”

The connection between Knight and groups persisting in non-sedentary forms of life in the jungles of South America and deserts of Africa is tenuous to say the least. Knight subsisted almost entirely on take-home meals and junk food that holiday-makers stored in their holiday cabins. (Insofar as he lived by raiding, Knight was more barbarian than hunter-gatherer.) He read books, watched TV, listened to the radio, and tended his home. His camp was just a three-minute walk from the nearest cabin. The thieving aside, there are, no doubt, hundreds of reclusive individuals living in comparable solitude across North America. Yet Finkel is keen to locate him on the other side of an invisible frontier where he joins the world’s hunter-gatherers and hard-core hermits, past and present.

In spite of decades, if not centuries, of sensational claims about “last” tribes and the assumption that the boundary between the civilized and the primitive would melt away as the former subsumed the latter, the belief in that boundary has been remarkably durable. Finkel is just one of many who recently have made the act of crossing this line into a spectacle for mainstream media and trade publishers: Tim Spector has reported on living for three days with the Hadza to test the benefits on his gut health; Paul Willis has tried being a hermit; and Sarah Marquis has discussed her three months on “Aboriginal walkabout” in northwest Australia. There have been stories about a Dutch-New Zealand couple who have lived a self-fashioned hunter-gatherer type of existence, on the back of a trade book recounting the experience. The publisher has no qualms describing the author as “living a primitive, nomadic life.”

Such experiments conducted for the sake of a self-help book or a TED talk are not really primitivism, though. They sit in an adjacent tradition of philo-primitivism — a more toe-dipping, holidaying encounter with the primitive. The civilized temporarily recover their natural selves so that they can live more truly in civilization. Such philo-primitivist entrepreneurship has a long history. To take one example, the American artist George Catlin spent much of the 1830s “roaming” territory beyond the American colonial frontier where he painted hundreds of portraits of the indigenous people he encountered. He later used these for a traveling road show recounting his time “amongst the wildest tribes.” Catlin hoped that the “doomed” Indians might yet be “preserved in their pristine beauty and wildness, in a magnificent park […] [a] thrilling specimen for America to preserve and hold up to the view of her refined citizens.” Here is Finkel in 2009: “[the Hadza] made me feel calmer, more attuned to the moment, more self-sufficient, a little braver […] It made me wish there was some way to prolong the reign of the hunter-gatherers, though I know it’s almost certainly too late.”

More determinedly primitivist thinkers and actors do not want to improve their civilized selves but destroy them, targeting the institutions and infrastructure of civilization. Fifteen years before Knight wandered into the woods, the young mathematics professor Theodore Kaczynski set up in a cabin off the grid in Western Montana, intent on becoming entirely self-sufficient. A few years later he began the famous letter-bomb campaign that culminated in blackmailing the American press into publishing his primitivist essay “Industrial Society and Its Future.” These events have been dramatized in a recent Discovery Channel series. The same actor who played Jake Sully in Avatar, the Australian Sam Worthington, plays the FBI profiler James Fitzgerald, who becomes seduced by Kaczynski’s anti-civilizational ideas as he investigates him. (Something about Worthington’s chiseled yet candid features evidently suggests a latent primitivism.)

All the ideas in Kaczynski’s essay are grounded in an underlying distinction between “primitive” and “civilized.” The language has a distinctly 1960s ring. It pitches “primitive man” against “the system” (a term used 140 times) that compels “obedience” and reduces humans to slaving for “the machines.” In one respect, though, it looks forward to contemporary primitivism. The positive ideal motivating his call to revolution is “WILD nature.” This refers to “those aspects of the functioning of the Earth and its living things that are independent of human management and free of human interference and control.” Years before the notion of the Anthropocene gained wide currency, Kaczynski identified humanity’s impact on the globe’s ecology as being the Earth’s most fundamental problem. And his counter-ideal of the “wild” has become the key notion for hard-core contemporary primitivists. They conceive of primitivism’s reverse teleology as being a process of “rewilding.” Internet groups with thousands of members compare techniques about how to go wild and get into ideological disputes about what this really means.

For the rewilders, the problems of disease, social inequality, and ecological crisis do not date to the advent of modern industrial capitalism but to agriculture and permanent domicile around 10,000 years ago. There followed the whole apparatus of “civilization” (the system!) and the global process by which humans reshaped the world to support this form of life. For the most extreme of the rewilders, the self-described “anarcho-primitivist” John Zerzan, the “monstrously wrong turn” was made even earlier. He believes the perils of civilization were initiated by “symbolic culture,” by which he means the production of abstract systems of representation such as language, art, and mathematics. Accordingly, anarcho-primitivism proposes an end to all religion, all art, all language, all conceptions of chronological time, and all the conditions of production stemming from agriculture.

Not all primitivist arguments are as speculative or tendentious as those of Kaczynski and Zerzan. In his recent Affluence without Abundance, the anthropologist James Suzman carefully describes the way of life that empowered the Ju/’hoansi to live sustainably in the Kalahari Desert of Southern Africa for tens of thousands of years. This is an important story to tell, he explains, “because there are so few ‘wild’ spaces left, and because maybe we can learn from understanding how [his Ju/’hoansi acquaintance’s] ancestors had lived.” In a similar ideological vein, James C. Scott’s Against the Grain: A Deep History of the Earliest States seeks to demolish the notion that the development of planting and harvesting crops in and of itself led humans to set about creating agricultural civilization. He points to evidence that there was as much as a 6,000-year gap between the time when humans in the Fertile Crescent started integrating planting and harvesting into their cycles of food production and the time when they began organizing themselves solely around agriculture. The villain, Scott concludes, was not agriculture but the development of the state. It was the state’s need for a measurable, dividable, storable, and visible crop for effective taxation that kicked off civilization and its associated ills.

Crucial for Suzman, Scott, and especially Zerzan is the conviction that humans practicing hunter-gatherer forms of life deliberately refused agricultural civilization. Most of our ancestors, they maintain, looked into the abyss of disease and machines and said, “No thanks.” The current state of affairs thus is cast as an aberration foisted on humanity by the few who have benefited from it. The primitivist’s task is to recover and reassert the agency that gave us the wisdom to live wild and stay wild.

¤

Five naked women strike poses on a platform between large drapes. The two standing frontally have pink faces that are consistent in color and form with their bodies. The head and neck of the woman standing in profile to the left are similarly consistent, but their bluish-brown color gives them a wooden or perhaps stony quality. The jaw of the woman at the rear, however, is elongated in a way that starts to resemble a mask. The face of the fifth woman seated in front of her is entirely mask-like. Her features are rearranged and out of proportion, their form skewed and angular.

Pablo Picasso’s Les Demoiselles d’Avignon (1907) is among the world’s most iconic images and the one most closely associated with the term “primitivism.” It has landmark status in Western art on formal and thematic grounds. (Picasso’s first title for it was The Brothel of Avignon, referring to a real institution in Avignon Street in Barcelona.) If the designation “primitive” attaches to its “uncivilized” techniques and themes, its source undeniably is the African-derived aesthetic of the mask-like faces. Famously, Picasso encountered an exhibit of African masks and sculptures in the Trocadéro ethnographic museum in the same year that he worked intensively on this painting.

There is a continuum in the rendering of the women’s heads from the lifelike to the mask-like. Look closely at the two central women, however, and you will see that their left eyes are rendered flatly in a gray-white. This deadens them and suggests a mask-like quality creeping in. Here, again, is primitivism’s reverse teleology. Underlying these women’s commodified sexuality, it is implied, are the rituals that give rise to what Picasso called “Negro fetishes.” It is kept deliberately ambiguous whether the continuum suggests a movement toward or away from the mask-like state. Ultimately this is immaterial, as the logic is the same.

It should be evident by now that primitivism is a deeply racialized form of idealism. From the “beautiful ethnic women” that Cameron’s designer pinned up while creating the Na’vi, to the idealized accounts of hunter-gatherers in the wild in “lost world” journalism and anthropology, to the African masks appropriated by modernist artists, the “civilized” perspective is almost invariably that of a white European man who has lost touch with a dark-skinned, usually feminized self. The Na’vi may ostensibly be an alien species, but their features, speech, movements, and culture are quite obviously a mishmash of “tribal” non-Western societies. Whether Na’vi or sex workers, it is white men who want to fuck these utopias.

It should also be evident that whoever or whatever gets designated primitive is highly changeable. In Western contexts, primitivist idealism has tended to follow roughly 50-year cycles. This reflects different phases and configurations of the global capitalism against which it strains. As European colonial expansion came to a violent climax a century ago there was a wave of primitivist art and ideas that were projected onto uncolonized “tribal” societies writ large. These primitivists were seeking out forms of life that they believed had not been disenchanted by “reason.” Fifty years later, counter-cultural dissidents responded to looming nuclear apocalypse and rampant postwar consumerism with an ethos of dropping out and experiments in communitarianism. Contemporary primitivism, we can now recognize, is fueled principally by runaway global inequality, unbridled technological development, and the sense that ecological crisis is irreversible. It appeals specifically to hunter-gatherer forms of life for their perceived sustainability and egalitarianism.

Many therefore argue that primitivism is an inherently racist and usually patriarchal form of idealism. Its primitive/civilized binary replicates the logic that designated non-Europeans as “primitive” and therefore rightly subject to colonization and assimilation. Primitivism starts to seem more like the knell sounded by civilization for whichever group is unlucky enough to be designated primitive.

But before casting primitivism as only and always racist in this way, we should look at another iconic primitivist image:

A fun exercise is to try to distinguish between the symbolic, human, animal, and vegetative forms in this painting. It quickly becomes clear that it is not possible. The crescent face in the top right seems to hang from a stalk-like neck with a breast at its base that in..

“TIME IS NOT a real thing,” one character tells another in Hong Sang-soo’s 2014 film, Hill of Freedom. “Our brain makes up the mind frame of time continuity — past and present and future. I think we don’t have to experience life like that … But at the end, we cannot escape from this frame of mind.” His companion replies, after a blank pause, “Very interesting … tell me about it later, okay?” This exchange, held in broken English over food, wine, and cigarettes, briefly references Hong’s enduring interest in nonlinear time, before resolving in the Korean director’s typical irreverence, defusing any danger of didacticism.

Time is indeed not “real,” that is, not linear, in many of Hong’s films. The scenes in Hill of Freedom, for example, proceed according to a stack of letters that have been accidentally dropped down the stairs, their chronology reshuffled. Sometimes Hong’s narrative unfolds almost entirely in flashbacks, as in Virgin Stripped Bare by Her Bachelors (2000), sometimes half in dreams, as in Nobody’s Daughter Haewon (2013). But it almost doesn’t matter, because there is never any visual cue alerting viewers to the temporal seams between present and past, waking life and dream. Herein lies the most intriguing part of Hong’s play with time: the apparent continuity between discontinuous moments and realms.

This continuity comes in part from the deceptive realism of Hong’s film world, which typically consists of creative types in Korea, messy romances, desolate vacations, long meals, circuitous conversations, and emotional outbursts after too much soju. These narratives play out across dreams, memories, and present realities without any shift in tone, texture, or logic.

The result is a world that possesses the unreality of a dream, but without the manifest strangeness of surrealism, only tickling oddities — a random dog, an overzealous window washer, funny coincidences and parallels — in situations that are otherwise perfectly, disarmingly quotidian. Paradoxically, this intensifies the mystery of each scene: Is this past or present? Dream or waking life? Where are we in time and consciousness?

This mystery hovers over The Day After (2017), Hong’s latest film to receive a US theatrical release. Shot in simple black and white, the film is part of a triptych Hong made last year, all starring the actress Kim Min-hee, and all more or less allusive to the real-life extramarital affair between Hong and Kim, which has received much media attention in South Korea. Unlike the other two films (Claire’s Camera and On the Beach at Night Alone), The Day After casts Kim in the role of an outside observer who becomes mistakenly entangled in another couple’s affair.

Time flows both ways in this film, beginning with the opening sequence. After feebly evading his wife’s accusations of infidelity, Bongwan (Kwon Hae-hyo) steps out into a still-dark morning. Outside, the world is quiet, deserted, and transportive. Bongwan sets out from his apartment building alone, only to stumble back in the very next shot, arm in arm with a young woman. Everything looks the same (apart from Bongwan’s outfit), but we have shifted in time. After a maudlin scene between the two lovers in a stairwell, the film cuts to Bongwan crossing an empty street, alone once again. Later, on the train, the two lovers hold hands, before another cut: Bongwan, still on the train, now reads a book by himself.

Only later do we learn that these first scenes with Bongwan’s mistress and former assistant, Changsook (Kim Sae-byeok), are past moments in a relationship that has ended, interspersed with Bongwan’s solitary commute to work in the present day. With the arrival of Bongwan’s new assistant Areum (Kim Min-hee) at the office of his small publishing house, the film settles, for a moment, in the present. The two have coffee and awkward conversation in a four-minute long take, with the camera panning back and forth between them — Hong’s signature shot — before Areum heads to the bathroom, perhaps to escape Bongwan’s rather obtuse questions. In the next shot, the bathroom door is ajar and the hand dryer on, but the person who emerges after a moment isn’t Areum but Changsook. Unbeknownst to us, time has slipped backward again.

“[W]omen are the axis of time in the film,” Claire Denis observed of an earlier Hong film. “Women have their own time span […] It is as though these women are timing the film like a metronome.” Following the cryptic logic of this remark, Areum and Changsook seem to function as markers of time in The Day After. As the literal replacement for Changsook in the office, Areum signifies the present, a time when Changsook is already gone. Her status as Changsook’s stand-in becomes clear when Bongwan’s wife (Cho Yun-hee) mistakes her for the latter and attacks her in a fit of rage. Meanwhile, whenever Changsook appears on screen, we know we have been transported to some moment in the past, before Areum’s arrival.

But of course, this neat division does not hold for long. Toward the middle of the film, the impossible happens: Areum rounds a street corner to find Changsook and Bongwan locked in a tight embrace — Changsook, it turns out, has come back to reclaim her old life. For the first time, the two young women occupy the same space. Present and past, reality and memory seem, uncannily, to coexist on-screen.

As the three characters try to sort through the confusion in the scene that follows, viewers too must reorient themselves to a new temporal arrangement. Changsook gets her old job back and replaces Areum in the office, and the two switch positions in time: Areum now marks the past, Changsook the present. But before the film ends, Changsook exits once again. When, at some unspecified point in the future, Areum revisits the office in the final scene, Bongwan greets her with the same coffee and obtuse questions, before admitting sheepishly that he has forgotten who she is. There is thus no getting past the past. Time does not move forward, but rather circles itself in an endless loop.

¤

In his writing on Hong Sang-soo, film scholar David Bordwell considers the Korean auteur in relation to other major filmmakers to emerge from East Asia in recent decades: Hou Hsiao-Hsien, Tsai Ming-Liang, Hirokazu Kore-eda, Jia Zhangke. Like Hong, these directors create episodic, elliptical narratives of quotidian life through long takes and a realist style. Bordwell groups them under the loose term “Asian minimalism,” without acknowledging that they are part of an international “slow cinema” movement, one that extends far beyond East Asia and has dominated arthouse filmmaking since the 1990s, with practitioners as geographically diverse as Hungary’s Béla Tarr, Turkey’s Nuri Bilge Ceylan, the United States’s Kelly Reichardt, and Russia’s Aleksandr Sokurov. This “slow wave” traces its transnational roots to the first half of the 20th century, drawing inspiration from Yasujirō Ozu and the Italian neorealists.

Though by no means monolithic, works of slow cinema seem to share a common desire to materialize time, to produce what Gilles Deleuze calls the “time-image”: visions devoid of narrative function, which exist solely to render time visible and perceptible. Though these sometimes resemble simple transition shots, their length and centrality suggest something more. The paradigmatic time-image for Deleuze comes toward the end of Ozu’s Late Spring (1949), when the young female protagonist is quietly overcome with sorrow. Rather than focus on her, Ozu cuts away to a vase on the windowsill, partially bathed in moonlight, placed against gently swaying silhouettes of bamboo. This still shot lasts for 10 seconds and expresses nothing more than an awareness of time’s passage.

Such shots, however, do not really exist in Hong Sang-soo’s work. Despite his preference for the long take, Hong’s is not an aesthetic of austerity and contemplation. In fact, his long takes, usually of characters in conversation, feature frequent zooms and pans that dilute the durational effect of an unbroken shot. Unlike slow cinema in the Ozu tradition, then, Hong’s experiment with time is rarely felt in any single shot but rather in the arrangement of shots and scenes. Bordwell refers to this as Hong’s “geometric model” of storytelling — a more rigorous narrative structure than that employed by his Asian contemporaries — which carefully builds a hidden pattern of repetition and symmetry into the story. It’s this geometric storytelling, Bordwell argues, that sets Hong apart from his “Asian minimalist” peers.

But Bordwell perhaps does not go far enough. What sets Hong apart goes beyond narrative structure, and lies more fundamentally in his particular interest in time. Whereas slow cinema foregrounds time’s physical passage, Hong foregrounds its deeply subjective nature. His is not time that flows independently of the human subject, but time as remembered or dreamed, though not necessarily by any particular character. The Day After, for example, does not feature personal flashbacks; nevertheless, it follows a sequential logic that can only emerge in retrospect, when associations form between nonconsecutive moments. This is a kind of time only tenuously connected to the real.

“But what is reality?” Areum asks Bongwan during their first and only work lunch. “If reality is unknowable, then it must not exist.” “But reality does exist,” Bongwan insists. “Words can’t describe it, but we can feel it.” Both time and reality are notions we can’t live without, but which we also don’t live completely within. Recognizing this, Hong does not completely erase the temporal seams in his films, but renders them more open and porous. As he declared in one interview: “The fragments of memory, dream, imagination and fragments of reality are just different in name only, but they all share homogeneity.”

This mysterious homogeneity keeps Hong’s film world forever riveting, no matter how mundane the action. In his meditation on the intimate relationship between living and dreaming, the philosopher Arthur Schopenhauer writes: “Life and dreams are the pages of one and the same book.” In dreams, we don’t read the pages in order, but flip haphazardly to one here, one there. Nevertheless, Schopenhauer notes, “A page read separately is indeed out of sequence in comparison to the pages that have been read in order: but it is not so much the worse for that, especially when we bear in mind that a whole consecutive reading starts and finishes just as arbitrarily.”

If dreams are arbitrary, then reality is just as much so. Hong is interested in the alternative time of dream and memory, in which a story begins and ends elsewhere, straying from chronology. The Day After ends with a jump forward in time, when Areum revisits Bongwan’s publishing house. But in my memory of the film, the moment of closure comes before, in the penultimate scene.

It is dark and quiet as Areum heads home in the back of a cab, having just been fired after her first day. She reads a book and chats with the driver, whom we hear but do not see. Suddenly, snow flurries fill the night outside. Areum rolls down the window, entranced. “It’s such a blessing,” she says as she gazes out, layers of light and shadow streaming across her face. In a voice-over, we hear Areum pray to God. Here is the faith that she espoused over lunch, when she insisted to Bongwan: “Refusing that belief we urgently need, because of that illusion you call reality, isn’t it silly of us?”

This may be as close to an expression of transcendence as possible for Hong, the master of nonsensical, minor, and irreverent things. Or perhaps, in the dream time of his film world, no transcendence is necessary, as the material and the spiritual have already merged — as lightly and disarmingly as past and present, dream and reality itself.

¤

Xueli Wang is a writer from New York City. She is currently pursuing a PhD in Art History and Film & Media Studies at Yale University.

The post Hong Sang-soo’s Dream Time appeared first on Los Angeles Review of Books.

FOR JAMES COMEY, it was a pivotal moment. The rule of law and the very integrity of the government were, he thought, at stake. The president of the United States may have been blatantly violating the law, and Comey was being asked to compromise his principles out of loyalty to the president and his administration. In A Higher Loyalty: Truth, Lies, and Leadership, Comey brings readers inside the White House for a shocking firsthand account of this abuse of presidential power.

Except that it’s 2004, not 2017. And the president is George W. Bush, not Donald J. Trump. By and large, pundits and book reviewers have overlooked Comey’s most explosive revelations involving illegal conduct in the White House. It’s not until page 211 that Comey recounts his now-familiar meetings and conversations with Trump. But what is of greater and more lasting importance to the history of our constitutional democracy are the stunning disclosures Comey makes about the years of secret surveillance and torture that President Bush initiated, President Barack Obama ignored, and President Trump is threatening to resurrect and expand. Instead of wasting time accusing Comey of being “petty” for describing the color of Trump’s skin and the length of his tie, we should focus on what is really important about his book: a senior official of the Bush administration documenting how the government conducted illegal surveillance on US citizens and engaged in illegal torture (including the waterboarding of detainees) at various “black sites” around the world.

Comey frames his entire book as a plea for “ethical leadership” based on the values of “truth, integrity, and respect for others,” without which the justice system begins to decay. Yet he never addresses why neither he nor anyone else has ever used their authority to hold those who engaged in illegal surveillance and torture fully accountable.

¤

After serving as an assistant US attorney and later as US attorney in New York, prosecuting among others the Mafia, Martha Stewart, and Scooter Libby, in December 2003 Comey was appointed by President Bush to serve as second in command to Attorney General John Ashcroft. In March 2004, Ashcroft was hospitalized with acute pancreatitis, so severe it had immobilized him with pain in the ICU at George Washington University Hospital, and Comey became acting attorney general of the United States. A month earlier, Comey, apparently for the first time, had learned of a highly secret program code-named “Stellar Wind,” which Bush had approved in 2002 at the recommendation of Vice President Dick Cheney and his legal counsel David Addington, on the basis of memos written by the Justice Department’s Office of Legal Counsel (OLC). Under Stellar Wind, for over two years, the National Security Agency had conducted surveillance activities in the United States against suspected terrorists and US citizens without any judicial warrants.

Based on the independent review conducted by the new head of the OLC, Jack Goldsmith, who had inherited the memos justifying Stellar Wind, Comey concluded that, as written and as implemented, the program was “clearly unlawful” and that for over two years the NSA had been engaged in surveillance “that had no legal basis because it didn’t comply with a law Congress had passed a generation earlier, which governed electronic surveillance inside the United States.” Consequently, Bush “was violating that statute in ordering the surveillance.” And the NSA was engaged in additional activities beyond the president’s order, “so nobody had authorized it at all.”

Roughly every six weeks, with Ashcroft’s certification, Bush had reauthorized the program. The latest presidential order was set to expire on March 11, but on March 4 Ashcroft collapsed and was rushed to the hospital. On March 9, Comey was summoned to a meeting at the White House presided over by Cheney to convince him to drop his opposition and approve the next reauthorization.

The vice president looked at me gravely and said that, as I could plainly see, the program was very important. In fact, he said, “Thousands of people are going to die because of what you are doing.”

“That’s not helping me,” I said. “That makes me feel bad, but it doesn’t change the legal analysis. I accept what you say about how important it is. Our job is to say what the law can support, and it can’t support the program as it is.”

Cheney reacted with anger and frustration. Attorney General Ashcroft had certified the program for the last two and a half years! Comey sympathized with him but told the group that the 2001 OLC opinion was so bad it was “facially invalid.” He added: “No lawyer reading [that] could reasonably rely on it.” Addington cut in, “I’m a lawyer and I did.” Comey shot back, “No good lawyer.” The meeting was over.

According to Comey, the very next day Andy Card, White House chief of staff, and Alberto Gonzales, White House counsel, tried to do an end run around him by going to Ashcroft’s bedside in the hospital to get him to recertify the program. But Comey got there first and was waiting for them when they arrived. Heavily medicated and looking gray, Ashcroft was able to pull himself up on the bed with his elbows. He told Card and Gonzales that he had been misled about the scope of the surveillance program and now had serious concerns about its legal basis. Spent, he fell back on his pillow. “‘But that doesn’t matter now,’ he said, ‘because I’m not the attorney general.’” With a finger extended from his trembling left hand, he pointed at Comey. “There is the attorney general.” As Card and Gonzales were leaving the room, when their heads were turned, according to Comey, Ashcroft’s wife, Janet, who had witnessed the whole scene, “scrunched her face and stuck her tongue out at them.”

A few days later the president reauthorized an expanded version of Stellar Wind covering activities beyond the original presidential order. And instead of providing a place for the attorney general to sign, it was approved by Gonzales. Knowing that he “could not continue to serve in an administration that was going to direct the FBI to participate in activity that had no lawful basis,” Comey prepared his letter of resignation.

The next day, Comey attended the weekly intelligence briefing with the president. At the end of the meeting, when Bush took him aside, Comey told the president he felt “a tremendous burden.” The president asked why. “Because we simply can’t find a reasonable argument to support parts of the Stellar Wind program.” They discussed the details, and Comey said, “We just can’t certify to its legality.” The president replied, “But I say what the law is for the executive branch.” Comey said, “You do, sir,” adding, “But only I can say what the Justice Department can certify as lawful. And we can’t here. We have done our best, but as Martin Luther said, ‘Here I stand. I can do no other.’”

The audacity of Bush assuming he had the power to “say what the law is” is exceeded only by the abject subservience (and inaccuracy) of Comey’s reply, “You do, sir.” Comey should have reminded Bush that it was largely because Richard Nixon believed that when “the president does it, that means it is not illegal” that he was forced to resign to avoid impeachment and removal from office. That this flawed vision of presidential power is still kicking around is demonstrated by the fact that one of Trump’s lawyers recently claimed that “the president cannot obstruct justice because he is the chief law enforcement officer (under the Constitution’s Article II) and has every right to express his view of any case.”

At least Comey told Bush the Justice Department could not certify the program as lawful, prompting the president to ask for two months to try to get a “legislative fix.” But to his credit, Comey refused. “The American people,” he said, “are going to freak when they find out what we have been doing.” And he added that Robert Mueller, then-director of the FBI, was going to resign over this.

Bush asked to see Mueller. Ten minutes later, Mueller rejoined Comey and reported that the president had issued a directive: “Tell Jim to do what needs to be done to get this to a place where Justice is comfortable.” Comey and his team worked all weekend drafting a new presidential order that narrowed the scope of the NSA’s authority and delivered it to the White House Sunday night. On Tuesday, Gonzales told Comey the memo was being sent back and asked him not to “overreact.” Overreact? Comey called it “a big middle finger, clearly written by Addington,” saying how Comey was wrong about everything and was usurping presidential authority. He rejected all of Comey’s proposed changes. “It said nothing about our mothers being whores, but it might as well have. I pulled out my resignation letter and changed the date to March 16. Screw these people.”

But two days later, without notice, the president signed a new order that Comey says incorporated all of the changes he and his team had requested. We have to take his word for it, because Comey offers no details. And he remains silent about the fact that, for over two years, the Bush administration had been conducting a widespread program of surveillance of US citizens that Comey knew was “clearly unlawful.” What happened to Comey’s dedication to “ethical leadership” based on the values of “truth, integrity, and respect for others,” without which “our justice system cannot function and a society based on the rule of law begins to dissolve”?

¤

According to Comey, in June 2004 Jack Goldsmith told him that, six months earlier, he had spotted serious problems with the legal basis on which the CIA since 2002 had been conducting a clandestine program of beating, starving, humiliating, and waterboarding detainees at secret “black sites” around the world.

In 1994, the United States ratified the United Nations Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, under which torture is defined as the intentional infliction of severe mental or physical pain or suffering. As Comey saw it, in 2002, after the 9/11 attacks, the CIA wanted to use coercive physical tactics to get information from suspected al-Qaeda terrorists and asked the OLC whether various tactics, including waterboarding, sleep deprivation, and cramped confinement, would violate the law against torture. Instead of simply responding, “Are you kidding? Yes!” OLC lawyers John Yoo and Jay Bybee (whom Comey declines to name in his book, even though they have been publicly identified for many years) issued a series of memos purporting to approve the use of the full menu of “enhanced interrogation techniques” requested by the CIA.

Curiously, for a man who insists on ethical leadership, Comey exhibits great sympathy for the Bush lawyers. He says they were making decisions during “a time of crisis” when they “feared” that more attacks were coming, believing that physically abusive interrogations were “not only effective but essential to saving countless innocent lives.” It was under “this kind of pressure” that the memos authorizing torture were written.

Regrettably, the high-minded Comey seems oblivious to the fact that at “a time of crisis,” lawyers are expected to steel themselves against the pressures of the moment so that they can offer sober and dispassionate legal advice, solidly grounded in the law. Comey himself has repeatedly preached the fundamental principle of the rule of law, and claims he decided to study law because “[l]awyers participate much more directly in the search for justice.” He writes that the “credibility of the Department of Justice is its bedrock,” that the administration of justice must remain independent of politics, and that lawyers at the Justice Department had to do everything they could to “protect the department’s reputation for fairness and impartiality, its reservoir of trust and credibility.” Comey assures us that “nobody needed to tell me how hard we needed to fight terrorism, but I also understood we had to do it the right way. Under the law.” But despite these deeply held principles, this prominent lawyer, with over 33 years of experience in criminal law and government service, excuses the Justice Department and White House lawyers who failed every one of these tests.

To his credit, Comey reports that since he agreed with Goldsmith “that the legal opinion about torture was just wrong,” he told Attorney General Ashcroft he needed “to take the dramatic step of withdrawing the Justice Department’s earlier opinion on the legality of these actions,” and Ashcroft agreed. For Comey, the “Constitution and the rule of law are not partisan political tools. Lady Justice wears a blindfold. She is not supposed to peek out to see how her political master wishes her to weigh a matter.”

Inspiring words, but not once in his book, nor apparently at any time in his career, has Comey recommended that any official in the Bush administration who authorized and conducted torture and other forms of cruel, inhuman, or degrading treatment should be investigated and, if warranted, prosecuted for violations of US and international law.

Instead, he expresses sympathy for CIA agents who tortured detainees in their custody. He knows full well what these agents were doing:

Taking a naked, cold, severely sleep-deprived and calorie-deprived person, slamming him against a wall, putting him in stress positions, slapping him around, waterboarding him, and then sticking him in a small box could easily produce great mental suffering, especially if the CIA did those things more than once.

In the face of all this and the other evidence of the CIA engaging in systematic and repeated torture, Comey excuses the torturers because “they had a right to rely on the advice of government counsel.”

This is an extraordinary and deeply flawed statement. The claim of a “right” to rely on government counsel is reminiscent of the Nazi era defense that “I was just following orders.” In the wake of the Nuremberg Trials, the UN International Law Commission confirmed that “the fact that a person acted pursuant to order of his Government or of a superior does not relieve him from responsibility under international law, provided a moral choice was in fact possible for him.” The UN Convention Against Torture makes it clear that “no exceptional circumstances whatsoever, whether a state of war or a threat of war, internal political instability or any other public emergency, may be invoked as a justification of torture.” It’s appalling that Comey, in the face of these highly relevant legal constraints, would argue that the torturers had a “right” to follow orders.

In June 2004, Goldsmith, with Comey’s support, withdrew the torture memos. Shortly thereafter, Goldsmith resigned as head of the OLC and was replaced by Daniel Levin. By December, Levin and his team had completed a new interrogation opinion. As part of that process, Levin himself had undergone supervised waterboarding. He told Comey it was “the worst experience of his life.”

By then, Bush had replaced Ashcroft with Alberto Gonzales as attorney general. Comey decided that he could not serve as Gonzales’s deputy. In the spring of 2005, he announced he would be leaving in August. Comey writes that he didn’t have the “stomach” for what would be more “losing battles” within the administration, and “more important,” he felt he needed more than his government salary — his oldest child was headed to college.

Meanwhile, Steven Bradbury replaced Levin as the head of the OLC. Bradbury issued new memos authorizing aggressive interrogation techniques, which Comey believed amounted to torture. He protested to Gonzales to no avail. “No policy changes were made. CIA enhanced interrogations could continue. Human beings in the custody of the United States government would be subjected to harsh and horrible treatment.” Comey left the Justice Department two months later. The torture program would continue for two more years.

Comey conveniently skips ahead eight years to 2013, when President Barack Obama appointed him to a 10-year term as director of the FBI. But observant readers will want to know: How could you leave the Justice Department knowing the CIA torture program was still going on? Once you left, why didn’t you blow the whistle? While you remained silent, knowing what you knew, the torture program continued for two more years. You were in a unique position to speak out, but you did nothing. Didn’t you want to “participate much more directly in the search for justice”? Aren’t you the one who told us that lawyers at the Justice Department had to do everything they could to protect the department’s reputation, its “reservoir of trust and credibility”? Didn’t you say that even in the context of 9/11, “we had to do it the right way. Under the law”? So much for “ethical leadership.”

¤

Of course, A Higher Loyalty is best known for Comey’s famous confrontations with President Trump. The president’s demand for Comey’s “loyalty” (“I need loyalty. I expect loyalty,” Comey claims Trump told him). The president’s request that Comey go easy on the prosecution of National Security Advisor Mike Flynn (“I hope you can see your way clear to letting this go, to letting Flynn go. He is a good guy. I hope you can let this go.”) The president’s request that Comey go public with the fact that he was not personally investigating Trump (“We need to get that fact out”). And eventually, the president’s firing of Comey.

Comey describes these incidents in an engaging and cinematic style. His reporting is filled with vivid details and direct quotes attributed to both Trump and himself, which make these accounts convincing and credible. But all the attention devoted to these shiny objects should not obscure the rest of Comey’s book. Unintentionally, A Higher Loyalty teaches more about “ethical leadership” through what Comey has failed to do in his career than through what he has done. Not only has our government failed to hold any officials accountable for torture and illegal surveillance, but those very officials have been rewarded with high positions, book deals, prominent speaking tours, and, most recently, the May 17 confirmation of Gina Haspel as director of the CIA.

¤

Stephen Rohde is a constitutional lawyer, lecturer, writer, and political activist.

The post Higher Loyalty? James Comey and the Failure of Leadership appeared first on Los Angeles Review of Books.
