The Social Media Collective (SMC) is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York. It includes full-time researchers, postdocs, interns, and visitors. Our primary purpose is to provide rich contextual understanding of the social and cultural dynamics that underpin social media technologies.
This year our Microsoft Research New England lab is seeking to fill an open postdoctoral line, for which social media scholars are eligible. We strongly encourage applicants with expertise that complements that of the Social Media Collective and bridges between our interests and other areas of our lab (economics, ICT4D, machine learning and statistics, cryptography, algorithmic game theory, bioinformatics). We would be extremely happy to see a stellar junior SMC scholar rise to the top of this candidate pool! This is a particularly exciting position, as it is designed to recognize applicants who can demonstrate interdisciplinary connections to other areas of the MSRNE lab. So, please forward this to students and colleagues who you think might be interested, and of interest.
An application must include (a) your CV, (b) a research statement (4 pages max, including a 1-page outline of your dissertation), (c) the names of three people willing to provide letters of recommendation, and (d) two publications or writing samples. If you have questions about this position or about the application process, please feel free to email Nancy Baym and include “SMC / General Postdoc call” in the subject line.
Microsoft Research New England (MSRNE) is looking for advanced PhD students to join the Social Media Collective (SMC) for its 12-week internship program. The Social Media Collective (in New England, we are Nancy Baym, Tarleton Gillespie, and Mary Gray, with current postdoc Elena Maris) brings together empirical and critical perspectives to understand the political and cultural dynamics that underpin social media technologies. Learn more about us here.
The Social Media Collective (SMC) is a network of social science and humanistic researchers, part of the Microsoft Research labs in New England and New York. It includes full-time researchers, postdocs, interns, and visitors. Our primary purpose is to provide rich contextual understanding of the social and cultural dynamics that underpin social media technologies. Our work spans several disciplines: anthropology, communication, economics, information, law, media studies, women’s studies, science & technology studies, and sociology.
The Social Media Collective comprises full-time researchers, postdocs, visiting faculty, Ph.D. interns, and research assistants. Current projects in New England include:
How does the use of social media affect relationships between artists and audiences in creative industries, and what does that tell us about the future of work? (Nancy Baym)
How are social media platforms, through their algorithmic design and user policies, taking up the role of custodians of public discourse? (Tarleton Gillespie)
What are the cultural, political, and economic implications of crowdsourcing as a new form of semi-automated, globally-distributed digital labor? (Mary L. Gray)
How and why do industries seek out qualitative understandings of users, technology, big data, metrics and analytics, and who does this kind of ‘soft data’ work? (Elena Maris)
The ideal candidate may be trained in any number of disciplines (including anthropology, communication, information studies, media studies, sociology, science and technology studies, or a related field), but should have a strong social scientific or humanistic methodological, analytical, and theoretical foundation, be interested in questions related to media or communication technologies and society or culture, and be interested in working in a highly interdisciplinary environment that includes computer scientists, mathematicians, and economists.
Primary mentors for this year will be Nancy Baym, Mary L. Gray, and Tarleton Gillespie, with additional guidance offered by other members of the SMC. We are looking for applicants working in one or more of the following areas:
Personal relationships and digital media
Audiences and the shifting landscapes of producer/consumer relations
Affective, immaterial, and other frameworks for understanding digital labor
How platforms, through their design and policies, shape public discourse
The politics of algorithms, metrics, and big data for a computational culture
The political economies of on-demand labor
The differences between online platform cooperatives and traditional cooperatively managed markets and commons
The ethics of dataset creation and uses of large-scale social data for qualitative research
Interns are also expected to give short presentations on their project, contribute to the SMC blog, attend the weekly lab colloquia, and contribute to the life of the community through weekly lunches with fellow PhD interns and the broader lab community. There are also natural opportunities for collaboration with SMC researchers and visitors, and with others currently working at MSRNE, including computer scientists, economists, and mathematicians. PhD interns are expected to be on-site for the duration of their internship.
Some of the compensation and benefits of this position include:
highly competitive salary
travel between your university location and the internship location (for the intern and all eligible dependents)
housing costs: interns can select one of two housing options
fully furnished corporate housing covered by Microsoft, or
a lump sum for finding and securing your own housing
local transportation allowance for commuting
health insurance: coverage is not provided automatically; most interns stay covered under their university insurance, but interns are eligible to enroll in a Microsoft-sponsored medical plan
Applicants must have advanced to candidacy in their PhD program by the time they start their internship. (Unfortunately, there are no opportunities for Master’s students or early PhD students at this time). Applicants from historically marginalized communities, underrepresented in higher education, and students from universities outside of the United States are encouraged to apply.
Your application needs to include:
A short description (no more than 2 pages, single spaced) of 1 or 2 projects that you propose to do while interning at MSRNE, independently and/or in collaboration with current SMC researchers. The project proposals can be related to, but must be distinct from, your dissertation research. Be specific and tell us:
What is the research question animating your proposed project?
What methods would you use to address your question?
How does your research question speak to the interests of the SMC?
Who do you hope to reach (who are you engaging) with this proposed research?
A brief description of your dissertation project (no more than 1 page, single spaced).
An academic article-length manuscript (~7,000 words or more) that you have authored or co-authored (published or unpublished) that demonstrates your writing skills.
A copy of your CV.
If available, pointers to your website or other online presence (this is not required).
In addition to those qualifications, you’ll need to submit the names of three references for this position (one must be your dissertation advisor). After you submit your application, a request for letters may be sent to your list of references on your behalf. Note that reference letters cannot be requested until after you have submitted your application, and that they might not be automatically requested for all candidates. You may wish to alert your letter writers in advance, so they will be ready to submit your letters.
If you have any questions about the application process, please contact Tarleton Gillespie at firstname.lastname@example.org and include “SMC PhD Internship” in the subject line.
Due to the volume of applications, late submissions (including submissions with late letters of reference) will not be considered. We will not be able to provide specific feedback on individual applications. Finalists will be contacted in February to arrange a Skype interview. Applicants chosen for the internship will be informed in March and announced on the socialmediacollective.org blog.
PREVIOUS INTERN TESTIMONIALS
“The Microsoft Internship is a life-changing experience. The program offers structure and space for emerging scholars to find their own voice while also engaging in interdisciplinary conversations. For social scientists especially the exposure to various forms of thinking, measuring, and problem-solving is unparalleled. I continue to call on the relationships I made at MSRNE and always make space to talk to a former or current intern. Those kinds of relationships have a long tail.” — Tressie McMillan Cottom, Sociology, Virginia Commonwealth University
“My internship experience at MSRNE was eye-opening, mind-expanding and happy-making. If you are looking to level up as a scholar – reach new depth in your focus area, while broadening your scope in directions you would never dream up on your own; and you’d like to do that with the brightest, most inspiring and supportive group of scholars and humans – then you definitely want to apply.” — Kat Tiidenberg, Communication and Culture, Aarhus University, Denmark
“Coming right after the exhausting, enriching ordeal of general/qualifying exams, it was exactly what I needed to step back, plunge my hands into a research project, and set the stage for my dissertation… PhD interns are given substantial intellectual freedom to pursue the questions they care about. As a consequence, the onus is mostly on the intern to develop their research project, justify it to their mentors, and do the work. My mentors asked me good, supportive, and often helpfully hard, critical questions, but my relationship with them was not the relationship of an RA to a PI; instead it was the relationship of a junior colleague to senior ones.” — J. Nathan Matias, Psychology, Princeton University (read more here)
“My summer at Microsoft Research with the Social Media Collective was nothing short of transformative. My theoretical and methodological horizons broadened, and the relationships I forged continue to shape my development as a scholar.” — Shannon MacGregor, Communication, University of Utah
“It might be hard to believe that a twelve-week internship could be so integral to your professional and personal growth, but that’s exactly how I felt at the end of my time at MSRNE. I learned more about writing, critical thinking, public speaking, collegiality, and self-belief than I thought possible within such a short space of time, and I gained a group of forever friends and mentors in the process. The internship also provides you with a rare opportunity to work in a truly interdisciplinary environment and allows you to take your research proposal in a direction you might not have planned for. MSRNE was, and will continue to be, the perfect intellectual home for me.” — Ysabel Gerrard, Digital Media and Society, University of Sheffield, UK
“The internship at Microsoft Research was all of the things I wanted it to be – personally productive, intellectually rich, quiet enough to focus, noisy enough to avoid complete hermit-like cave dwelling behavior, and full of opportunities to begin ongoing professional relationships with other scholars who I might not have run into elsewhere.” — Laura Noren, Center for Data Science, New York University
“If I could design my own graduate school experience, it would feel a lot like my summer at Microsoft Research. I had the chance to undertake a project that I’d wanted to do for a long time, surrounded by really supportive and engaging thinkers who could provide guidance on things to read and concepts to consider, but who could also provoke interesting questions on the ethics of ethnographic work or the complexities of building an identity as a social sciences researcher. Overall, it was a terrific experience for me as a researcher as well as a thinker.” — Jessica Lingel, Communication, University of Pennsylvania
“The Social Media Collective was instrumental throughout the process in giving me timely, sharp, and helpful feedback for my research project. These conversations further inspired new thinking that has shaped my overall research agenda. I also felt supported by the process at Microsoft Research, to take on what may seem intimidating, especially for social science and humanities students: tackling a research project in 12 short weeks. Socially, the Social Media Collective and other interns at Microsoft Research New England were all amazingly nice and fun people, with whom I made great memories. Overall, the internship was an invaluable experience for my intellectual and professional development.”— Penny Trieu, Information, University of Michigan
“There are four main reasons why I consider the summer I spent as an intern with the Social Media Collective to be a formative experience in my career. 1. The opportunity to work one-on-one with the senior scholars on my own project, and the chance to see “behind the scenes” how they approach their own work. 2. The environment created by the SMC, which is one of openness and kindness, where scholars encourage and help each other do their best work. 3. Hearing from the interdisciplinary members of the larger MSR community, and presenting work to them, which required learning how to engage people in other fields. And finally, 4. The lasting effect: Between senior scholars and fellow interns, you become a part of a community of researchers and create friendships that extend well beyond the period of your internship.” — Stacy Blasiola, Facebook UX Research
“This internship provided me with the opportunity to challenge myself beyond what I thought was possible within three months. With the SMC’s guidance, support, and encouragement, I was able to reflect deeply about my work while also exploring broader research possibilities by learning about the SMC’s diverse projects and exchanging ideas with visiting scholars. This experience will shape my research career and, indeed, my life for years to come.” — Stefanie Duguay, Communication Studies, Concordia University, Canada
“My internship with Microsoft Research was a crash course in what a thriving academic career looks like. The weekly meetings with the research group provided structure and accountability, the stream of interdisciplinary lectures sparked intellectual stimulation, and the social activities built community. I forged relationships with peers and mentors that I would never have met in my graduate training.” — Kate Zyskowski, Facebook UX Research
“It has been an extraordinary experience for me to be an intern at the Social Media Collective. Coming from a computer science background, communicating and collaborating with so many renowned social science and media scholars taught me, as a researcher and designer of socio-technical systems, to always think of these systems in their cultural, political, and economic context and to consider the ethical and policy challenges they raise. Being surrounded by these smart, open, and insightful people, who were always willing to discuss the problems I encountered in my project, offer unique perspectives for thinking them through, and share the excitement when I got promising results, was simply fascinating. And being able to conduct mixed-method research that combines qualitative insights with quantitative methodology made the internship just the kind of research experience I had dreamed of.” — Ming Yin, Computer Science, Purdue University
“Spending the summer as an intern at MSR was an extremely rewarding learning experience. Having the opportunity to develop and work on your own projects as well as collaborate and workshop ideas with prestigious and extremely talented researchers was invaluable. It was amazing how all of the members of the Social Media Collective came together to create this motivating environment that was open, supportive, and collaborative. Being able to observe how renowned researchers streamline ideas, develop projects, conduct research, and manage the writing process was a uniquely helpful experience – and not only being able to observe and ask questions, but to contribute to some of these stages was amazing and unexpected.” — Germaine Halegoua, Film & Media Studies, University of Kansas
“Not only was I able to work with so many smart people, but the thoughtfulness and care they took when they engaged with my research can’t be stressed enough. The ability to truly listen to someone is so important. You have these researchers doing multiple, fascinating projects, but they still make time to help out interns in whatever way they can. I always felt I had everyone’s attention when I spoke about my project or other issues I had, and everyone was always willing to discuss any questions I had, or even if I just wanted clarification on a comment someone had made at an earlier point. Another favorite aspect of mine was learning about other interns’ projects and connecting with people outside my discipline.” — Jolie Matthews, Learning Sciences, Northwestern University
As I hope you’ve heard by now, the SMC is publishing books like mad. Tarleton Gillespie’s Custodians of the Internet is blazing a trail through the content moderation debate, Mary Gray and Sid Suri’s Ghost Work will be out in May, and my own Playing to the Crowd has hit the road seeking readers.
In that vein, here are some upcoming public events where I will be talking about my book in NYC and its environs:
Monday October 1 @ 3-4 pm: A small book session for people who have read the book (pre-registration required) at Data & Society
There will be a few more talks coming up elsewhere (University of Illinois Chicago 11/29, University of Michigan 12/4, London in January, Oslo in February). If you’re interested in inviting me to talk with your folks, shoot me an email.
We’re thrilled to announce our newest postdoc in the Social Media Collective, based in the New England lab of Microsoft Research!
Elena Maris, University of Pennsylvania
Elena received her Ph.D. in Communication from the Annenberg School for Communication at the University of Pennsylvania. Her research examines the ways media/tech industries and audiences work to influence each other, with a focus on their technological tactics and the roles of gender and sexuality in their interactions. She also studies how identity is represented and experienced in popular culture and online. Her dissertation explored how online audience groups construct media industries and opportunities for influencing media content, a concept she called the “imagined industry.” Elena returns to MSRNE after interning with the SMC in 2017, and will continue working on the project she began then, on industries’ use of metrics to measure fandom. She is also starting a new project about the increased demand for qualitative understandings of technology, big data, metrics and analytics in the tech industries, and the gendering of such ‘soft’ data work. Elena’s work has been published in Critical Studies in Media Communication, the European Journal of Cultural Studies, and Feminist Media Studies.
It’s of course hard to celebrate the choice of one, when we also had to say no to so many superb candidates. We are so honored and humbled by the quality and range of scholars who want to come work with us, and offer our best wishes to those we couldn’t bring in as well.
In my new book Networked Press Freedom: Creating Infrastructures for a Public Right to Hear [MIT Press | Amazon] I critically examine what press freedom means today. I argue that, as news production, circulation, and interpretation are increasingly distributed across a new and unstable set of humans and nonhumans—from journalists and algorithms to platform designers and bots—it is increasingly difficult to say exactly what press freedom means. What is the press trying to be free from? To what ends and for which versions of the public? How do we recognize a free versus an unfree press?
I define networked press freedom as a system of separations and dependencies among humans and nonhumans that helps to ensure not only journalists’ right to speak but publics’ rights to hear. Engaging with a wide range of literature and analyzing a 7-year corpus of digital news examples, I argue that the networked press earns its freedom to the extent that it creates defensible publics. Instead of only seeing press freedom as journalists’ right to pursue their visions of the public free from governments, markets, and technologies, the book tells a nuanced and historically grounded story that helps readers ask: what kind of public, what kind of freedom, and what kind of press? Below is an excerpt. (This excerpt was first posted at the Nieman Lab.)
What, exactly, is press freedom, and why does it matter? In the popular discourse of the United States, we do not ask this question very often or very deeply. The answers are obvious and almost cliché: the public has a right to know, journalists are the people’s watchdogs, they afflict the comfortable and comfort the afflicted, democracy dies in darkness, and voters need objective information to be good citizens. Popular histories of modern U.S. journalism celebrate heroes who spoke truth to power and brought down institutions—Ida B. Wells, Nellie Bly, Ida Tarbell, Edward R. Murrow, I. F. Stone, Bob Woodward, Carl Bernstein, Walter Cronkite. They often are remembered as most effective when they were left alone to pursue their visions of what they thought the public needed. These virtuous, creative, public-spirited, hard-working storytellers occupy powerful positions within the modern mythology of press freedom. If we just get out of the way of good journalists and let them tell truth to power, they will produce the information that vibrant democracies need.
This myth is somewhat true, and these heroes were indeed expert storytellers who challenged each era’s norms. But when we think about press freedom only or even mostly as the freedom of journalists from constraints, it becomes a narrow and almost magical phenomenon that depends on individuals and heroism. It says that journalists already know what the public needs, and just need freedom from the state, marketplaces, and audiences to pursue self-evident things like truth and the public interest. These brave journalists and publishers show their commitment to the public and the power of their independence by going to court and sometimes jail to protect sources and fight censorship. If journalists and publishers can get truth to the public, then individual readers and viewers will be able to make informed decisions about how to think and vote. Ultimately, the press wants to be left alone so that you can be left alone. The kind of democracy that dominates this common image of press freedom relies on a lot of independences—a lot of freedoms from.
This book tries to challenge this mythology. I want to complicate the idea of press freedom and show that it emerges not from individual heroes but from social, technological, institutional, and normative forces that vie for power, imagine publics, and implicitly fight for visions of democracy. I see press freedom as a concept to think with—a generative and constructive tool for looking at any given era of the press and public life and asking, “Is this version of press freedom giving us the kind of publics we need? If not, how do we revise the institutional arrangements underpinning press freedom and make a different thing that we agree to call ‘the press’?” Alternatively, how do we adjust our normative expectations about what publics should be, creating a different image of freedom that we then might demand from institutions that make up the press? If we see press freedom not as heroic isolations—journalists breaking free to tell truths to the publics they imagine—but as a subtler system of separations and dependencies that make publics, then we might see each era’s types of press freedom as bellwethers for particular visions of the public. Ideas of press freedom become evidence of thinking about publics. Rethinking press freedom can be a way to see how press power flows, a prompt to ask which flows produce which publics, and a challenge: what types of news, publics, or presses are we not seeing because our vision of press freedom is so narrow?
If you think press freedom is a particular thing, you will likely look for that thing when you want to see whether a democracy is healthy or whether journalists are doing their jobs. Assumptions about press freedom can shut down conversations about the press and democracy: “We have a free press, so the election result is what it should be” or “We have a free press, and corruption is still rampant!” or “If we had a free press, then we’d have a different government” or “A free marketplace is a free press because truth comes from competing viewpoints.” Statements like these—coming from journalists, audiences, politicians, advertisers, publishers—assume that we already know what we mean by a free press and that our problem is just implementing it.
But if we can liberate the idea of press freedom from these assumptions and assumptions that equate it with whatever journalists say publics need, then press freedom becomes a generative and expansive tool—a way to think about publics, self-governance, and democracy. Because, as Edwin Baker puts it, different democracies need different media, we can complicate democracy by thinking more creatively about press freedom.
Given this moment, when media systems are in fundamental flux, this book offers a way to think about press freedom as sociotechnical forces with separations and dependencies that help to make publics. I aim to engage with and use this moment of fundamental change to show what press freedom could mean. Contrary to the dominant historical myth in the United States, I argue that press freedom should not be seen simply as journalists’ freedom to write and publish. Rather, press freedom is a normative and institutional product of any given era: it is what people think press freedom should mean and how people have arranged people and power to achieve that vision.
Most simply, press freedom is the right and responsibility to create separations and dependencies that enable democratic self-governance. It is the power and obligation to know and defend the publics that its separations and dependencies create. Today these separations and dependencies live in distributed, technological infrastructures with new actors and often invisible forces, so for the networked press to claim its autonomy, it needs to show how and why it arranges people and machines in particular ways. It needs to understand how its humans and nonhumans align or clash to create some publics but not others. It needs to be able to defend why it creates such meetings, and when necessary for a particular image of the public, it needs to develop new types of sociotechnical power that let it make new types of publics.
Rather than abandoning or collapsing the idea of press freedom—seeing it as naive or anachronistic—my aim is to revive and redeploy it. I trace the idea of press freedom through theories of democratic self-governance, situate it within the press’s institutional history, argue that each era of sociotechnical change creates a particular meaning of press freedom, and ask how the contemporary, networked press might claim its freedom and make new publics. Instead of being seen as a holdover from a time that no longer exists, press freedom could be viewed as a powerful framework for arguing why and how the networked press could change.
Interspersed with this tour of institutional forces, I try to deploy my framework and use this new notion of press freedom to argue for a particular normative value—a public right to hear. I claim that the dominant, historical, professionalized image of press freedom—as whatever journalists say they need to be free from to pursue self-evident public interest—privileges an individual right to speak over a public right to hear. It confuses journalists’ freedom to publish with publics’ rights to hear what they need to hear in order to sustain themselves as publics—to realize the inextricably shared conditions under which they live, discover and debate their similarities and differences, devise solutions to predicaments, insulate themselves from harmful forces and nurture contrarian viewpoints, recognize the resources that hold them together, and reinvent themselves through means other than the rational, informational models of citizenship that dominate the traditional mythology of U.S. press freedom. For publics to be anything other than what unconstrained journalists imagine them to be, press freedom can be defensible only if it can be shown that the press’s institutional arrangements produce expansive, dynamic, diverse publics.
In an era when many assumptions about communication and information are being reconsidered, it is difficult to say exactly what journalists can or should be free from. A better question to ask might be, “How is the networked press—journalists, software engineers, algorithms, relational databases, social media platforms, and quantified audiences—creating separations and dependencies that enable a public right to hear, make some publics more likely than others, and move beyond an image of the public as whatever journalists assume it to be?”
Three stories can help illustrate the phenomenon. First, in September 2008, high in Google News’s list of results for a search on “United Airlines” was a story in the South Florida Sun Sentinel on United’s recent bankruptcy filing. The story detailed how United had lost significant revenue, could not meet market forecasts, and needed protection from creditors and time to restructure. A Miami investment adviser responsible for publishing news alerts through Bloomberg News Service saw the story and added it to Bloomberg’s newsletter; United’s stock dropped 75 percent in one day before trading was halted. Unfortunately for United, the Sentinel’s website displayed the current date (2008) at the top of its page; it did not include the story’s original date of publication (2002). Google’s Web crawler mistook the old story for a current story, creating a perfect storm of misinformation: the Sentinel displayed dates in a confusing manner; Google’s crawler read the only date it saw and made an assumption; the investment adviser assumed that Google highly ranked recent information; Bloomberg subscribers and high-frequency traders assumed that the newsletter contained timely and actionable information; and the stock market assumed that its behavior was rational and based on true information. This is a story of networked press freedom because although the Sentinel may have tipped the first domino, the failure is the fault of no single actor. A sociotechnical failure of data, algorithms, individuals, and institutions together led to the creation of false news that drove action.
Second, in 2008, the Pocono Record published an online story about Brenda Enterline’s sexual harassment lawsuit against Pocono Medical Center. In comments left by readers under the story, several people anonymously said that they had personal knowledge of incidents relevant to the lawsuit. When Enterline’s attorneys subpoenaed the newspaper for access to the commenters, the paper refused, claiming that it had a right and obligation to protect the commenters’ First Amendment rights to anonymity. The Pennsylvania district court agreed, essentially extending a de facto shield law around the Pocono Record’s reporters and commenters. In contrast, also in September 2008, a grand jury in Illinois successfully subpoenaed the Alton Telegraph for the names, home addresses, and IP addresses of anonymous commenters who left responses to an online story the paper had run about a murder investigation. The paper argued that “the Illinois reporter’s shield law protects the identities of the anonymous commenters as ‘sources,’” but the court disagreed, saying that such a shield covers only reporters and not commenters. Such cases have continued, with an Idaho judge ruling in 2012 that the Spokesman-Review had to reveal the identity of an anonymous commenter accused of libel, and a 2014 U.S. federal court ruling that the NOLA Media Group had to reveal names, addresses, and phone numbers of its anonymous commenters. Even though the First Amendment protects Americans’ right to speak anonymously and several states have shield laws designed to protect newspapers from releasing information against their will (Digital Media Law Project, 2013), it is unclear exactly where newspapers stop and audiences begin. The press may sometimes be free from compelled testimony, but there is little clarity on what exactly the press is and therefore who can claim its freedoms.
Finally, in 2016, Norwegian writer Tom Egeland posted to his Facebook account a story that included Nick Ut’s Pulitzer Prize–winning photo of Vietnamese children running away from a U.S. military napalm attack. One nine-year-old victim was a naked girl. Facebook removed the post because it contained “fully nude genitalia” and “fully nude female breast,” in violation of the company’s community standards. When Egeland appealed the removal, his account was suspended. The Norwegian newspaper Aftenposten then posted the image and a story on the censorship to its company’s Facebook site—and its post also was censored. The leader of Norway’s conservative party then posted the image and a protest against the censorship—and her post was censored. Facebook initially defended its decisions saying that although it recognized the photo’s iconic status, “it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” It relented only after the Norwegian prime minister also posted the image with her own protest. Facebook eventually stated: “Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image”.
This is a story of networked press freedom. A Facebook user posts an image that has been recognized with one of journalism’s highest awards. It triggers a review by Facebook’s vast content-moderation operation tasked with policing millions of pieces of media in near real time. The user is suspended for appealing the decision. The incident attracts the attention of a news organization, political elites, and worldwide audiences. Eventually, Facebook relents after deciding for itself that the image is iconic, historically important, and worthy of sharing. In this incident, the journalist’s right to publish and the public right to hear are not housed within any one organization or profession. They instead are distributed across an image with agreed-on historical significance, platform algorithms surfacing content, social media companies with proprietary community standards, vast populations of piecework censors implementing standards quickly, editorial protests of professional journalists and elite politicians, and an eventual reversal by a private corporation only after it thinks that an image should be shared. Here, press autonomy is not just the freedom of Nick Ut, Tom Egeland, or the Aftenposten to publish. It is the product of a network of humans and nonhumans that make it more or less likely that a public will encounter media and debate its meaning and significance.
There are many more such stories. This book is about putting them in context—to show how these seemingly idiosyncratic incidents are indicative of the larger challenge of figuring out what democratic self-governance requires, what kind of free press should help to secure it, and how such freedom is distributed across a network of humans and machines that together create publics. If nothing else, my hope is that readers will take away from this book both a skepticism about the idea of press freedom and a sense of its promise as a tool for interrogating the networked press. If someone says “We need a free press,” my hope is that this book will nudge you to ask, “What kind of freedom, what kind of press, and for what kind of public?” Inspired by Michael Schudson’s question “autonomy from what?,” I try to ask “autonomy of what and for what?”
My aim in this book is not to dismiss earlier theories of press freedom but to argue that they tell only part of the story. That the press is a product of multiple forces and many different kinds of power is nothing new. But if we want to understand the networked press’s potential to create new publics, we might use the idea of networked press freedom as a kind of diagnostic. If we do not like the publics the networked press creates, we should examine its infrastructure and make changes. If we do not like the networked press’s infrastructure, we need to show why it leads to unacceptable publics. If a new element of the networked press appears, we need to be able to say quickly and thoughtfully what its relationships are and how they create new publics. And if we have an idea for a new element that we think should be part of the networked press, we must be able to say why we need the new public it might help create.
I’m thrilled to say that my new book, Custodians of the Internet, is now available for purchase from Yale University Press, and your favorite book retailer. Those of you who know me know that I’ve been working on this book for a long time, and have cared about the issues it addresses for a while now. So I’m particularly excited that it is now no longer mine, but yours if you want it. I hope it’ll be of some value to those of you who are interested in interrogating and transforming the information landscape in which we find ourselves.
By way of introduction, I thought I would explain the book’s title, particularly my choice of the word “custodians.” This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. “Custodians of the Internet” captured, better than many, many alternatives, the aspirations of social media platforms, the position they find themselves in, and my notion for how they should move forward.
moderators are the web’s “custodians,” quietly cleaning up the mess: The book begins with a quote from one of my earliest interviews, with a member of YouTube’s content policy team. As they put it, “In the ideal world, I think that our job in terms of a moderating function would be really to be able to just turn the lights on and off and sweep the floors . . . but there are always the edge cases, that are gray.” The image invoked is a custodian in the janitorial sense, doing the simple, mundane, and uncontroversial task of sweeping the floors. In this turn of phrase, content moderation was offered up as simple maintenance: it is not imagined to be difficult to know what needs scrubbing, and the process is routine. There is labor involved, but it is largely invisible, just as actual janitorial staff are often instructed to “disappear,” working at night or with as little intrusion as possible. Yet even then, years before Gamergate or ISIS beheadings or white nationalists or fake news, it was clear that moderation is not so simple.
platforms have taken “custody” of the Internet: Content moderation at the major platforms matters because those platforms have achieved such prominence in the intervening years. As I was writing the book, one news item from 2015 stuck with me: in a survey on people’s new media use, more people said that they used Facebook than said they used the Internet. Facebook, which by then had become one of the most popular online destinations in the world and had expanded to the mobile environment, did not “seem” like the Internet anymore. Rather than being part of the Internet, it had somehow surpassed it. This was not true, of course; Facebook and the other major platforms had in fact woven themselves deeper into the Internet, by distributing cookies, offering secure login mechanisms for other sites and platforms, expanding advertising networks, collecting reams of user data from third-party sites, and even exploring Internet architecture projects. Both in the perception of users and in material ways, Facebook and the major social media platforms have taken “custody” of the Internet. This should change our calculus as to whether platform moderation is or is not “censorship,” and the responsibilities platforms bear when they decide what to remove and whom to exclude.
platforms should be better “custodians,” committed guardians of our struggles over value: In the book, I propose that these responsibilities have expanded. Users have become more acutely aware of both the harms they encounter on these platforms and the costs of being wronged by content moderation decisions. What’s more, social media platforms have become the place where a variety of speech coalitions do battle: activists, trolls, white nationalists, advertisers, abusers, even the President. And the implications of content moderation have expanded, from individual concerns to public ones. If a platform fails to moderate, everyone can be affected, even those who aren’t party to the circulation of the offensive, the fraudulent, or the hateful — even those who aren’t on social media at all.
What would it mean for platforms to play host not just to our content, but to our best intentions? The major platforms I discuss here have, for years, tried to position themselves as open and impartial conduits of information, defenders of their users’ right to speak, and legally shielded from any obligations for how they police their sites. As most platform managers see it, moderation should be theirs to do, conducted on their own terms, on our behalf, and behind the scenes. But that arrangement is crumbling, as critics begin to examine the responsibilities social media platforms have to the public they serve.
In the book, I propose that platforms become “custodians” of the public discourse they facilitate — not in the janitorial sense, but something more akin to legal guardianship. The custodian, given charge over a property, a company, a person, or a valuable resource, does not take it for their own or impose their will over it; they accept responsibility for ensuring that it is governed properly. This is akin to Jack Balkin’s suggestion that platforms act as “information fiduciaries,” with a greater obligation to protect our data. But I don’t just mean that platforms should be custodians of our content; platforms should be custodians of the deliberative process we all must engage in, the process that makes us a functioning public. Users need to be more accountable for making the hard decisions about what does and does not belong; platforms could facilitate that deliberation, and then faithfully enact the conclusions users reach. Safeguarding public discourse requires ensuring that it is governed by those to whom it belongs, that it survives, and that its value is sustained in a fair and equitable way. Platforms could be not the police of our reckless chatter, but the trusted agents of our own interest in forming more democratic publics.
If you end up reading the book, you have my gratitude. And I’m eager to hear from anyone who has thoughts, comments, praise, criticism, and suggestions. You can find me on Twitter at @TarletonG.
Information & Culture just published (paywall; or free pre-print) an article I wrote about “night modes,” in which I try to untangle the history of light, screens, sleep loss, and circadian research. If we navigate our lives enmeshed with technologies and their attendant harms, I wanted to know how we make sense of our orientation to the things that prevent harm. To think, in other words, of the constellation of people and things that are meant to ward off, prevent, stave off, or otherwise mitigate the endemic effects of using technology.
If you’re not familiar with “night modes”: in recent years, hardware manufacturers and software companies have introduced new device modes that shift the color temperature of screens during evening hours. To put it another way: your phone turns orange at night now. Perhaps you already use f.lux, or Apple’s “Night Shift,” or “Twilight” for Android.
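Mechanically, these modes amount to shifting the display’s white point toward a warmer (lower) colour temperature on a schedule. As a rough illustration (not any vendor’s actual implementation), here is a minimal Python sketch using Tanner Helland’s well-known curve-fit approximation for converting colour temperature in Kelvin to RGB; the function names and the hard-coded evening window are my own illustrative assumptions:

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate the RGB white point for a colour temperature in Kelvin,
    using Tanner Helland's curve fit (reasonable between ~1000K and ~40000K)."""
    t = kelvin / 100.0
    if t <= 66:
        r = 255.0
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        r = 329.698727446 * (t - 60) ** -0.1332047592
        g = 288.1221695283 * (t - 60) ** -0.0755148492
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307
    clamp = lambda x: int(max(0.0, min(255.0, x)))
    return clamp(r), clamp(g), clamp(b)

def night_tint(hour, day_temp=6500, night_temp=3400):
    """Pick a warmer white point during evening hours. Real night modes ease
    the transition gradually (often keyed to local sunset); this fixed
    9pm-7am window is a simplifying assumption."""
    temp = night_temp if (hour >= 21 or hour < 7) else day_temp
    return kelvin_to_rgb(temp)
```

At midday this yields an essentially neutral white, while late at night it returns roughly (255, 189, 135): the orange cast described above. A display compositor would then scale each pixel by this tint (or set an equivalent gamma ramp).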
All of these software interventions come as responses to the belief that untimely light exposure closer to bedtime will result in less sleep or less restful sleep. Research into human circadian rhythms has had a powerful influence on how we think and talk about healthy technology use. And recent discoveries in the human response to light, as you’ll learn in the article, are based on a tiny subset of blind persons who lack rods and cones. As such, it’s part of a longer history of using research on persons with disabilities to shape and optimize communication technologies – a historical pattern that the media and disability studies scholar Mara Mills has documented throughout her career.
For decades, shift workers were seen as those most vulnerable to untimely light exposure, but today everyone is potentially at risk. Over the past twenty years, the spread of screens to every nook of personal space has produced nothing less than a new geometry and luminosity of that space (the crooked elbow, the hunched neck, the glow). Accompanying this new configuration of people, things, and lights are corresponding harms, fears, and tools for preventing and warding off harm.
We are accustomed to thinking about media and their effects: we talk about the effect of media content on unsuspecting or vulnerable audiences; we talk about the physically damaging effects of too much or the wrong kind of exposure—hearing damage from concerts, queasiness from 3D movies, repetitive strain injuries from keyboard use. And we often try to position our bodies in relation to media technologies, and away from their harms.
My argument is that we are also re-positioned, constantly, towards technologies that prevent harms. These intermediaries—prophylactics—structure our space (are you at a standing desk?) and our time (will you sleep better if you read on an orange screen?). Increasingly, the site of prophylaxis is also the site of potential harm. A driving app that won’t let you type when your GPS coordinates indicate you are in a car—until you affirmatively tell it you are a passenger—unites the source of potential tragedy (distracted driving) with its very own mitigation. By thinking through the arrangement of people and things as an orientation towards prophylaxis—and not just an aversion to harm—what do we learn about compulsory technology use? What does the conspicuous prevention of harm tell us about the legibility of pain and suffering?
New prophylactics can also be the entry point for addressing the uneven distribution of harm. Ultimately, this is what I think is most important about night modes. Chronic sleep loss and fatigue are unevenly distributed problems. Those with the flexibility to control when and for how long they sleep also tend to be those with other forms of power, prestige, and control over their work environments. Night modes are just one artifact of renewed focus on a widespread social phenomenon. In essence, these new device settings individualize the responsibility to control one’s exposure to light, while simultaneously highlighting the fact that very few people have the freedom to completely switch off.
If you read it, I’d love to hear your thoughts: or find me on Twitter.
I began this project when I arrived at Microsoft Research in July 2016 and it benefitted immeasurably from the input of the Social Media Collective and our many guests. These guests include: Cait McKinney, Sharif Mowlabocus, Joan Donovan, Amy Hasinoff, Jonathan Sterne, Nick Couldry, Meryl Alper and the participants of our workshop on Disability Studies and Technology.
Here is an early response to Zuckerberg’s testimony before the US Senate today. If you want my overall score, as of 3:30 ET I think Zuckerberg is doing quite well, but some of the things being discussed need a lot of unpacking.
Those Poor Fools “whose privacy settings allowed it.”
In the beginning of his testimony, Zuck described what people are so upset about:
Zuckerberg: [The Kogan personality quiz app that shared data with Cambridge Analytica] was installed by around 300,000 people who agreed to share some of their Facebook information as well as some information from their friends whose privacy settings allowed it.
Huh — this phrasing is so careful to be technically accurate but it is right up against the limit of truth. Then I think it goes past that. I looked up what the Facebook privacy settings screen looked like in 2015. It looked like this:
If we follow the research findings in the security area, most users probably never saw this screen at all: people tend not to know about their own security settings.
But even if you did find this screen, clicking “Friends” for any column surely could not be taken to mean that your “privacy settings allowed” (Zuckerberg’s phrase) the harvesting of your data by an app that you never authorized and were not aware of.
This is presumably why Facebook disallowed this use of third-party data by apps well before this scandal. There is no third-party consent. So Zuckerberg’s claim that the Kogan/Cambridge Analytica app took information from people “whose privacy settings allowed it” seems a bridge too far.
The American Dream
A closing thought: It is hyperbole, I know, but I was struck by Sen. John Thune’s (R-SD) remark that “Facebook represents the American dream.” Didn’t The Social Network cover this ground? I don’t remember the plot that way. Did Thune just mean that Zuck got rich?
Zuck’s the Scorpion and We Are The Frog?
Zuckerberg: [investments] in security…will significantly impact our profitability going forward. But I want to be clear about what our priority is: protecting our community is more important than maximizing our profits.
This is a nice quote by Zuck because it highlights the key problem with Facebook’s position. The issue isn’t really “security” though. It’s the fact that Facebook is fundamentally in the business of harvesting user data and that negative, polarizing (and even inaccurate) ads and status updates are good for the platform. They promote engagement through outrage.
To crib from Marshall McLuhan, what is in the public interest is not necessarily what the public is interested in. Gory road accidents turn heads. But is that what our media should be showing?
Zuck’s comment is also highlighting that by asking Facebook to fix these problems, we are asking advertising-supported media to behave in a way that makes no sense for them and is opposite to their nature.
Let’s take a look at some of the ads placed by fake accounts controlled by the Internet Research Agency (Агентство интернет-исследований). They are extremely polarizing (american.veterans is a beard or sock puppet account):
Political ad spending is also a windfall that old media (radio and television stations) depended on, but there was no “click engagement” dimension with old media—old media left people with little to do. It seems possible that the new media political ad environment can create feedback loops with negative ads that might be much more significant than the old ways of doing things.
Another thing that struck me in early Q&A is the concern raised about a paid Facebook model. This was floated yesterday in a media interview and now Sen. Bill Nelson (D-FL) is asking Zuckerberg if it is true users would have to, as he put it, “pay for privacy.”
Nelson seems outraged. On the one hand, this outrage makes no sense. If Facebook were switched completely away from an advertising model, it would be great for users as it would redefine the company’s incentives completely.
However, I think what is being proposed is a half-pay, half-free system (or opt-in payments). If that’s the plan, the outrage is justified. Pay-for-privacy makes social media even more regressive.
Privacy is already regressive in the sense that only those people who have time to learn about risks and fiddle with endless (and endlessly changing) settings pages have any hope of protecting themselves. The current system rewards computer skill and free time. And even with those things users still may not be able to protect themselves, because the options just aren’t available.
But an opt-in payment system makes privacy even worse by taking these intangible regressive dimensions and, in addition, putting a payment step on top of them. It’s not that people in an opt-in privacy scheme would use either time or money to obtain privacy; rather, only people who both have the time to follow this topic closely enough to know that they need privacy in the first place and can afford to pay for it will have privacy.
Big Social Won’t Be Fixed
I need to sign off because I can’t spend the day watching this. My summary so far: “Big Social” won’t be fixed by anything that was said here. The business models, institutions, and habits are too well-established and have too much inertia for a meaningful reconfiguration to come from the things I’ve heard so far.
Another stellar crop of applicants poured in for the SMC internships this year, and another three emerged as the best of the best. Thanks to everyone who applied, it was painful not to accept more of you! For summer 2018, we’re thrilled to have these three remarkable students joining us in the Microsoft Research lab in New England, to conduct their own original research and to be part of the SMC community. (Remember that we offer these internships every summer: if you’re an advanced graduate student in the areas of communication, the anthropology or sociology of new media, information science, and related fields, watch this page for the necessary information.)
Robyn Caplan is a doctoral candidate at Rutgers University’s School of Communication and Information under the supervision of Professor Philip Napoli. For the last three years, she has also been a Researcher at the Data & Society Research Institute, working on projects related to platform accountability, media manipulation, and data and civil rights. Her most recent research explores how platforms and news media associations navigate content moderation decisions regarding trustworthy and credible content, and how current concerns regarding the rise of disinformation across borders are impacting platform governance, and national media and information policy. Previously she was a Fellow at the GovLab at NYU, where she worked on issues related to open data policy and use. She holds an MA from New York University in Media, Culture, and Communication, and a Bachelor of Science from the University of Toronto.
Michaelanne Dye is a Ph.D. candidate in Human-Centered Computing in the School of Interactive Computing at Georgia Tech. She also holds an M.A. in Cultural Anthropology. Michaelanne uses ethnographic methods to explore human-computer interaction and development (HCID) issues within social computing systems, paying attention to the complex factors that afford and constrain meaningful engagements with the internet in resource-constrained communities. Through fieldwork in Havana, Cuba, Michaelanne’s dissertation work examines how new internet infrastructures interact with cultural values and local constraints. Moreover, her research explores community-led information networks that have evolved in absence of access to the world wide web – in order to explore ways to design more meaningful and sustainable engagements for users in both “developing” and “developed” contexts. Michaelanne’s work has been published in the conference proceedings of Human Factors in Computing Systems (CHI) and Computer-Supported Cooperative Work and Social Computing (CSCW).
Penny Trieu is a PhD candidate in the School of Information at the University of Michigan. She is a member of the Social Media Research Lab, where she is primarily advised by Nicole Ellison. Her research concerns how people can use communication technologies, particularly social media, to better support their interpersonal relationships. She also looks at identity processes, notably self-presentation and impression management, on social media. Her research has appeared in venues such as Information, Communication, and Society; Social Media + Society; and at the International Communication Association conference. At the Social Media Collective, she will work on the dynamics of interpersonal feedback and self-presentation around ephemeral sharing via Instagram and Snapchat Stories.
What do we expect of content moderation? And what do we expect of platforms?
There is an undeniable need, now more than ever, to reconsider the public responsibilities of social media platforms. For too long, platforms have enjoyed generous legal protections and an equally generous cultural allowance to be “mere conduits,” not liable for what users post to them. In the shadow of this protection, they have constructed baroque moderation mechanisms: flagging, review teams, crowdworkers, automatic detection tools, age barriers, suspensions, verification status, external consultants, blocking tools. They all engage in content moderation, but are not obligated to; they do it largely out of sight of public scrutiny, and are held to no official standards as to how they do so. This needs to change, and it is beginning to.
But in this crucial moment, one that affords such a clear opportunity to fundamentally reimagine how platforms work and what we can expect of them, we might want to get our stories straight about what those expectations should be.
The latest controversy involves Logan Paul, a twenty-two-year-old YouTube star with more than 15 million subscribers. His videos, a relentless barrage of boasts, pranks, and stunts, have garnered him legions of adoring fans. But he faced public backlash this week after posting a video in which he and his buddies ventured into the Aokigahara forest of Japan, only to find the body of a young man who had recently committed suicide. Rather than turning off the camera, Paul continued his antics, pinballing between awe and irreverence, showing the body up close and then turning the attention back to his own reaction. The video lingers on the body, including close-ups of the man’s swollen hand, and Paul’s reactions were self-centered and cruel. After a blistering wave of criticism in the video comments and on Twitter, Paul removed the video and issued a written apology, which was itself criticized for not striking the right tone. A somewhat more heartfelt video apology followed. He later announced he would be taking a break from YouTube.
There is no question that Paul’s video was profoundly insensitive, an abject lapse in judgment. But amidst the reaction, I am struck by the press coverage of and commentary about the incident: the willingness to lump this controversy in with an array of other concerns about what’s online, as if they were all part of the “content moderation” problem, paired with a persistent and unjustified optimism about what content moderation should be able to handle.
Content moderation, and different kinds of responsibility
But what do these incidents have in common, besides the platform? Journalists and commentators are eager to lump them together: part of a single condemnation of YouTube, its failure to moderate effectively, and its complicity with the profits made by producers of salacious or reprehensible content. But these incidents represent different kinds of problems, they implicate YouTube and content moderation in different ways — and, when lumped together, they suggest a contradictory set of expectations we have for platforms and their public responsibility.
Platforms assert a set of normative standards, guidelines by which users are expected to comport themselves. It is difficult to convince every user to honor these standards, in part because the platforms have spent years promising users an open and unfettered playing field, inviting them to do or say whatever they want. And it is difficult to enforce these standards, in part because the platforms have few of the traditional mechanisms of governance: they can’t fire us; we are not salaried producers. All they have are the terms of service and the right to delete content and suspend users. And there are competing economic incentives for platforms to be more permissive than they claim to be, and to treat high-value producers differently than the rest.
Incidents like the exploitative videos of children, or the misleading amateur cartoons, take advantage of this system. They live amidst this enormous range of videos, some subset of which YouTube must remove. Some come from users who don’t know or care about the rules, or find what they’re making perfectly acceptable. Others are deliberately designed to slip past moderators, either by going unnoticed or by walking right up to but not across the community guidelines. They sometimes require hard decisions about speech, community, norms, and the right to intervene.
Logan Paul’s video, or PewDiePie’s racist outbursts, are of a different sort. As was clear in the news coverage and the public outrage, critics were troubled by Logan Paul’s failure to consider his responsibility to his audience, to show more dignity as a videomaker, to choose sensitivity over sensationalism. The fact that he has 15 million subscribers, many of them young, was reason for many to claim that he (and by implication, YouTube) have a greater responsibility. These sound more like traditional media concerns: the effects on audiences, the responsibilities of producers, the liability of providers. This could just as easily be a discussion about Ashton Kutcher and an episode of Punk’d. What would Kutcher’s, his production team’s, and MTV’s responsibility be if he had similarly crossed the line with one of his pranks?
But MTV was in a structurally different position than YouTube. We expect MTV to be accountable for a number of reasons: they had the opportunity to review the episode before broadcasting it; they employed Kutcher and his team, affording them specific power to impose standards; and they chose to hand him the megaphone in the first place. While YouTube also affords Logan Paul a way to reach millions, and he and YouTube share advertising revenue from popular videos, these offers are in principle made to all YouTube users. YouTube is a distribution platform, not a distribution bottleneck — or it is a bottleneck of a very different shape. This does not mean we cannot or should not hold YouTube accountable. We could decide as a society that we want YouTube to meet exactly the same responsibilities as MTV, or more. But we must take into account that these structural differences change not only what YouTube can do, but how and why we can expect it of them.
Moreover, is content moderation the right mechanism to manage this responsibility? Or to put it another way, what would the critics of Paul’s video have wanted YouTube to do? Some argued that YouTube should have removed the video, before Paul did. (It seems the video was reviewed and was not removed, but Paul received a “strike” on his account, a kind of warning — we know this only based on this evidence. If you want to see the true range of disagreement about what YouTube should have done, just read down the lengthy thread of comments that followed this tweet.) In its PR response to the incident, a YouTube representative said it should have taken the video down, for being “shocking, sensational or disrespectful”. But it is not self-evident that Paul’s video violates YouTube’s policies. And judging from the comments from critics, it was Paul’s blithe, self-absorbed commentary, the tenor he took about the suicide victim he found, as much as showing the body itself, that was so troubling. Showing the body, lingering on its details, was part of Paul’s casual indifference, but so were his thoughtless jokes and exaggerated reactions. Is it so certain that YouTube should have removed this video on our behalf? I do not mean to imply that the answer is no, or that it is yes. I’m only noting that this is not an easy case to adjudicate — which is precisely why we shouldn’t expect YouTube to already have a clean and settled policy towards it.
There’s no simple answer as to where such lines should be drawn. Every bright-line rule YouTube might draw will be plagued with “what abouts.” Is it that corpses should not be shown in a video? What about news footage from a battlefield? What about public funerals? Should the prohibition be specific to suicide victims, out of respect? It would be reasonable to argue that YouTube should allow a tasteful documentary about the Aokigahara forest, concerned about the high rates of suicide among Japanese men. Such a video might even, for educational or provocative reasons, include images of the body of a suicide victim, or evidence of their death. In fact, YouTube already has some, of varying quality (see 1, 2, 3, 4).
So what we critics may be implying is that YouTube should be responsible for distinguishing the insensitive versions from the sensitive ones. Again, this sounds more like the kinds of expectations we had for television networks — which is fine if that’s what we want, but we should admit that this would be asking much more from YouTube than we might think.
As a society, we’ve already struggled with this very question, in traditional media: should the news show the coffins of U.S. soldiers as they are returned from war? Should the news show the grisly details of crime scenes? When is otherwise too-graphic video acceptable because it is newsworthy, educational, or historically relevant? The answer is far from clear, and it differs across cultures and historical periods. As a society, we need to engage in this debate; it cannot be answered for us by YouTube alone.
These moments of violation serve as the spark for that debate. It may be that all this condemnation of Logan Paul, in the comment threads on YouTube, on Twitter, and in the press coverage, is the closest we get to a real, public consideration of what’s appropriate for public consumption. And maybe the focus among critics on Paul’s irresponsibility, as opposed to YouTube’s, is indicative that this is not a moderation question — or of a growing public sense that we cannot rely on YouTube’s moderation, that we need to cultivate a clearer sensibility of what public culture should look like, and teach creators to take their public responsibility more seriously. (Though even if it is, there will always be a new wave of twenty-year-olds waiting in the wings, who will jump at the chance social media offers to show off for a crowd, well before they ever grapple with whatever social norms we may have worked out. This is why we need to keep having this debate.)
How exactly YouTube is complicit in the choices of its stars
This is not to suggest that platforms bear no responsibility for the content that they help circulate. Far from it. YouTube is implicated, in that it affords the opportunity for Paul to broadcast his tasteless video, helps him gather millions of viewers who will have it instantly delivered to their feed, designs and tunes the recommendation algorithms that amplify its circulation, and profits enormously from the advertising revenue it accrues.
Some critics are doing the important work of putting platforms under scrutiny, to better understand the way producers and platforms are intertwined. But it is awfully tempting to draw too simple a line between the phenomenon and the provider, to paint platforms with too broad a brush. The press loves villains, and YouTube is one right now. But we err when we draw these lines of complicity too cleanly. Yes, YouTube benefits financially from Logan Paul’s success. That does not by itself establish complicity; it needs to be a feature of our discussion about complicity. We might want revenue sharing to come with greater obligations on the part of the platform; or, we might want platforms to be shielded from liability or obligation no matter what the financial arrangement; or, we might want equal obligations whether there is revenue shared or not; or we might want obligations to attend to popularity rather than revenue. These are all possible structures of accountability.
It is also easy to say that YouTube drives vloggers like Logan Paul to be more and more outrageous. If video makers are rewarded based on the number of views, whether that reward is financial or just reputational, it stands to reason that some videomakers will look for ways to increase those numbers, including going bigger. But it is not clear that metrics of popularity necessarily or only lead to being ever more outrageous, and there’s nothing about this tactic that is unique to social media. Media scholars have long noted that being outrageous is one tactic producers use to cut through the clutter and grab viewers, whether it’s blaring newspaper headlines, trashy daytime talk shows, or sexualized pop star performances. That is hardly unique to YouTube. And YouTube videomakers are pursuing a number of strategies to seek popularity and the rewards therein, outrageousness being just one. Many more seem to depend on repetition, building a sense of community or following, interacting with individual subscribers, and the attempt to be first. While over-caffeinated pranksters like Logan Paul might try to one-up themselves and their fellow vloggers, that is not the primary tactic for unboxing vidders or Minecraft world builders or fashion advisers or lip syncers or television recappers or music remixers. Others see Paul as part of a “toxic YouTube prank culture” that migrated from Vine, another way to frame YouTube’s responsibility. But a genre may develop, and a provider profiting from it may look the other way or even encourage it; that does not answer the question of what responsibility the provider has for it, it only opens it.
To draw too straight a line between YouTube’s financial arrangements and Logan Paul’s increasingly outrageous shenanigans misunderstands both the economic pressures of media and the complexity of popular culture. It ignores the lessons of media sociology, which makes clear that the relationship between the pressures imposed by industry and the creative choices of producers is much more complex and dynamic. And it does not prove that content moderation is the right way to address this complicity.
* * *
Let me say again: Paul’s video was in poor, poor taste, and he deserves all of the criticism he received. And I find this genre of boffo, entitled, show-off masculinity morally problematic and just plain tiresome. And while it may sound like I am defending YouTube, I am definitely not. Along with the other major social media platforms, YouTube has a greater responsibility for the content it circulates than it has thus far acknowledged; it has built a content moderation mechanism that is too reactive, too dismissive, and too opaque, and it is due for a public reckoning. In the last few years, the workings of content moderation and its fundamental limitations have come to light, and this is good news. Content moderation should be more transparent, and platforms should be more accountable, not only for what traverses their systems, but for the ways in which they are complicit in its production, circulation, and impact. But it also seems we are too eager to blame all things on content moderation, and to expect platforms to maintain a perfectly honed moral outlook every time we are troubled by something we find there. Acknowledging that YouTube is not a mere conduit does not imply that it is exclusively responsible for everything available there.
As Davey Alba at Buzzfeed argued, “YouTube, after a decade of being the pioneer of internet video, is at an inflection point as it struggles to control the vast stream of content flowing across its platform, balancing the need for moderation with an aversion toward censorship.” This is true. But we are also at an inflection point of our own. After a decade of embracing social media platforms as key venues for entertainment, news, and public exchange, and in light of our growing disappointment in their preponderance of harassment, hate, and obscenity, we too are struggling: to modulate exactly what we expect of them and why, to balance how to improve the public sphere with what role intermediaries can reasonably be asked to take.
This essay is cross-posted at Culture Digitally. Many thanks to Dylan Mulvin for helping me think this through.