By Sreethu Thulasi, Product Manager

Whether you’re new to Facebook or have been using it for years, you should be able to easily understand and adjust how your information influences the ads you see.

That’s why we introduced tools like “Why am I seeing this ad?” and Ad Preferences over four years ago — to provide greater transparency and control. We made updates to these tools recently, but we heard feedback from people that they can still be hard to understand and difficult to navigate. Today we’re making two additional changes to address those concerns.

First, we’ll show people more reasons why they’re seeing an ad on Facebook. In the past, “Why am I seeing this ad?” highlighted one or two of the most relevant reasons, such as demographic information or that you may have visited a website. Now, you’ll see more detailed targeting, including the interests or categories that matched you with a specific ad. It will also be clearer where that information came from (e.g. the website you may have visited or Page you may have liked), and we’ll highlight controls you can use to easily adjust your experience.

We’re also updating Ad Preferences to show you more about businesses that upload lists with your information, such as an email address or phone number. You’ll now see a tab with two sections:

  • Advertisers who uploaded a list with your information and advertised to it. This section includes advertisers that uploaded a list with your information and used that list to run at least one ad in the past seven days. For example, a fitness studio that uploaded a list of client emails and used that for advertising could show up in this section.
  • Businesses who uploaded and shared a list with your information. This section aims to help you understand the third parties and businesses who have uploaded and shared lists with your information. In this section, you’ll see the business that initially uploaded a list, along with any advertiser who used that list to serve you an ad within the last 90 days.

We’re continuing to work to make ads more transparent and easier for you to control. To learn more about “Why am I seeing this ad?” visit our Help Center. To view and use your controls, visit Ad Preferences.

By Maxine Williams, Global Chief Diversity Officer

Over the last six years, we have worked hard to make our commitment to diversity and inclusion more than just sound bites. Our company has grown a lot. So has our approach.

We are more focused than ever on creating a diverse workforce and supporting our people. They are the ones building better products and serving the communities on our platforms. Lauryn, Director of Education Partnerships, brings people together to learn more about computer science and programming through Facebook-led initiatives, such as TechPrep and CodeFwd, and partnerships with organizations like CodePath.org, the United Negro College Fund (UNCF) and historically Black colleges and universities (HBCUs). Jason, Head of Supplier Diversity, ensures that our supply chain, from cafe produce to data center construction and global event production, is both diverse and inclusive.

Our design choices are important, too. Designing for inclusivity leads to better decisions and better products. Ian, Director of Design at Instagram, has relied on diverse teams to build the best products throughout his career in the US and now in Japan. It’s also vital to be increasingly intentional not just about what we design, but how. We are committed to ethical design and responsible innovation in tech. Margaret, VP of Product Design, insists on diverse perspectives and a broad view of social and political contexts informing how we design.

These people, their work, and our work as a company, are making a difference. Facebook Resource Groups are building community and supporting professional development while programs like Managing Bias, Managing Inclusion, Be the Ally Circles, Managing a Respectful Workplace, and Efficacy Training build everyone’s skills.

Today, there are more people of diverse backgrounds and experiences, more people of color, more women in both technical and business roles, and more underrepresented people in leadership here. Most notably, we’ve achieved higher representation of women in leadership by focusing on hiring and growing female leaders within the company. Over the last several years, the majority of new female leaders were internally promoted. Also, since 2014, we have increased the number of Black women at Facebook by 25X and the number of Black men by 10X. And importantly, even as we have grown, we have worked very hard on making Facebook a more welcoming, respectful workplace.

We are advancing our efforts with veterans by offering programs including a Military Skills Translator, which helps veterans navigate Facebook opportunities, and a mentorship program through our Vets and Allies Facebook Resource Group. We’re proud that the number of veterans at Facebook continues to grow, and now makes up 2.2% of our workforce.

We are also committed to disability inclusion and are particularly proud of our top score and naming as a “Best Place to Work for Disability Inclusion” by the Disability Equality Index. We have expanded our disability and inclusion recruiting efforts globally to provide more opportunity for job satisfaction and advancement, and we have launched new initiatives. In Brazil, Israel and Japan, among other countries, we partner directly with local community organizations, universities and government agencies to connect qualified candidates with disabilities to open roles.

Our commitment to and support of the LGBTQ+ community is unwavering. We’re proud to have earned 100% on the Human Rights Campaign (HRC) 2019 Corporate Equality Index (CEI) and the designation as a Best Place to Work for LGBTQ+ Equality. This is the fifth year in a row that we have received the best score. The HRC recognition reflects the hard work that this community and its allies across the company do to make LGBTQ+ inclusion a priority for the people at Facebook and on our platform around the world. About 8% of US-based Facebook employees identify as LGBTQ+, based on a voluntary survey.

Our work creating an inclusive environment where people from all backgrounds can thrive is ongoing. We are proud of our partnerships and investments such as:

  • The Align Program: We made a $4.2 million investment in this program, which increases the number of women and underrepresented people pursuing careers in computer science by giving students who did not study computer science as undergraduates the opportunity to earn a master’s degree in the field.
  • ROAR: Through Facebook Research’s ROAR arm, we are recruiting, retaining and advancing diverse researchers and computer science professionals and developing customized inclusion programs across research areas. Partnerships include Black in AI, LatinX in AI, Women in Machine Learning, Data Science Africa, African Master’s in Machine Intelligence, and AI4Good.
  • Women LEAD and LEAP: These internal leadership programs help women who work at Facebook build community and work on their most relevant challenges, from supporting other women to scaling impact while maintaining balance.
  • Community Summits: These employee-led events build community and professional development among our Black@, Latin@, Asian and Pacific Islanders@, and Pride@ Facebook Resource Groups around the world.

Imagine what’s possible when we get this right.

We envision a company where, within the next five years, at least 50% of our workforce will be women; people who are Black, Hispanic, Native American or Pacific Islander; people with two or more ethnicities; people with disabilities; and veterans. In doing this, we aim to double the number of women globally and of Black and Hispanic employees in the US. It will be a company that reflects and better serves the people on our platforms, services and products. It will be a more welcoming community advancing our mission and living up to the responsibility that comes with it.

These are ambitious goals and incredibly important ones. They add to our tangible ways of tracking our progress and measuring success. And they also create accountability, which is absolutely key to progress.

Getting it right is critical to Facebook and to the communities and countries where we work and live. We are dedicated and willing to try new things, and we’ll get there. A proverb says, “If you want to go fast, go alone. If you want to go far, go together.” That’s the only way we’ll reach our goal – by going together. Here is the full 2019 Diversity Report.

By Travis Yeh, Product Manager 

People come together on Facebook to talk about, advocate for, and connect around things like nutrition, fitness and health issues. But in order to help people get accurate health information and the support they need, it’s imperative that we minimize health content that is sensational or misleading. 

In our ongoing efforts to improve the quality of information in News Feed, we consider ranking changes based on how they affect people, publishers and our community as a whole. We know that people don’t like posts that are sensational or spammy, and misleading health content is particularly bad for our community. So, last month we made two ranking updates to reduce (1) posts with exaggerated or sensational health claims and (2) posts attempting to sell products or services based on health-related claims. 

  • For the first update, we consider if a post about health exaggerates or misleads — for example, making a sensational claim about a miracle cure. 
  • For the second update, we consider if a post promotes a product or service based on a health-related claim — for example, promoting a medication or pill claiming to help you lose weight. 

We handled this in a similar way to how we’ve previously reduced low-quality content like clickbait: by identifying phrases that were commonly used in these posts to predict which posts might include sensational health claims or promotion of products with health-related claims, and then showing these lower in News Feed. 
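
The post doesn’t publish the phrase lists or describe how the demotion plugs into ranking, but the general shape of a phrase-based demotion is easy to sketch. Everything below is illustrative: the phrase sets, thresholds and post fields are hypothetical, not Facebook’s actual implementation.

```python
# Illustrative sketch only: the actual phrase lists, weights and ranking
# integration used in News Feed are not public.

# Hypothetical phrases seen in exaggerated health claims or health-based
# product promotion.
SENSATIONAL_HEALTH_PHRASES = {"miracle cure", "doctors hate this", "cures cancer overnight"}
HEALTH_PROMOTION_PHRASES = {"lose weight fast", "buy now and feel younger", "free trial pills"}

def health_demotion_factor(post_text: str) -> float:
    """Return a multiplier (< 1.0 demotes) applied to a post's ranking score."""
    text = post_text.lower()
    hits = sum(phrase in text for phrase in SENSATIONAL_HEALTH_PHRASES | HEALTH_PROMOTION_PHRASES)
    if hits == 0:
        return 1.0                        # no signal: rank normally
    return max(0.2, 1.0 - 0.3 * hits)     # more matching phrases -> stronger demotion

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts by base ranking score adjusted by the health demotion factor."""
    return sorted(
        posts,
        key=lambda p: p["base_score"] * health_demotion_factor(p["text"]),
        reverse=True,
    )
```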

We’ll continue working to minimize low-quality health content on Facebook. 

Will This Impact My Page?

We anticipate that most Pages won’t see any significant changes to their distribution in News Feed as a result of this update.

Posts with sensational health claims or solicitation using health-related claims will have reduced distribution. Pages should avoid posts about health that exaggerate or mislead people and posts that try to sell products using health-related claims. If a Page stops posting this content, their posts will no longer be affected by this change.

By Sheryl Sandberg, COO, Facebook

Civil rights are the foundation of a free and just society — and something we care deeply about as a company. We want to make sure we’re advancing civil rights on our platform, and today we’re sharing a second report that details our efforts.

Laura Murphy, a highly respected civil rights and civil liberties advocate, began leading an audit more than a year ago with support from the noted civil rights law firm Relman, Dane and Colfax. She’s spoken to more than 90 civil rights organizations and people from Facebook’s policy, product and enforcement teams. Last December, she shared her first update, which focused on our US election-related work, including steps to prevent voter suppression and encourage voter participation.

Today’s report, which you can read here, gives another update on our progress and points out where we need to do more. It highlights four areas where we’ve made changes:

Strengthening Our Policies and Enforcement Against Harmful Content

Our Community Standards, the policies for what’s allowed on Facebook, are key to making sure people can freely and safely connect and share with each other. In March, we built upon our longstanding ban against white supremacy after speaking with civil rights leaders, experts across the political spectrum and academics in race relations. We now ban praise, support and representation of white nationalism and white separatism. Today’s report recommends we go further to include content that supports white nationalist ideology even if the terms “white nationalism” and “white separatism” aren’t explicitly used. We’re addressing this by identifying hate slogans and symbols connected to white nationalism and white separatism to better enforce our policy.

We also recently updated our policies so Facebook isn’t used to organize events that intimidate or harass people based on their race, religion, or other parts of their identity. We now ban posts from people who intend to bring weapons anywhere to intimidate or harass others, or who encourage people to do the same. Civil rights leaders first flagged this trend to us, and it’s exactly the type of content our policies are meant to protect against.

Getting our policies right is just one part of the solution. We also need to get better at enforcement — both in taking down and leaving up the right content. For example, civil rights groups have been concerned about us mistakenly taking down content meant to draw attention to and fight discrimination rather than promote it. We’re taking steps to address this, including a US pilot program where some of the people who review content on Facebook only focus on hate speech instead of a range of content that can include bullying, nudity, and misrepresentation. We believe allowing reviewers to specialize only in hate speech could help them further build the expertise that may lead to increased accuracy over time.

Fighting Discrimination in Facebook Ads

Our ads tools help businesses reach people all over the world and we need to make sure they aren’t misused. In March 2019, we announced historic settlement agreements with leading civil rights organizations to change how US housing, employment and credit ads are run on Facebook.

Our policies have always prohibited advertisers from using our tools to discriminate. In 2018, we went further by removing thousands of categories from targeting related to protected classes such as race, ethnicity, sexual orientation and religion. But we can do better. As a result of the settlement, we’re rolling out updates so anyone who wants to run US housing, employment and credit ads will no longer be allowed to target by age, gender or zip code and will have a much smaller set of targeting categories overall. We’re building ways to make sure advertisers follow these rules with plans for full enforcement by the end of the year. We will also have a tool where you can search for and view current US housing ads by advertiser and location, regardless of whether the ads are shown to you.

We’re committed to going beyond the settlement agreement to let people search US employment and credit ads on Facebook too. These ads are crucial to helping people buy homes, find jobs, and gain access to credit — and it’s important that everyone on Facebook has access to these opportunities.

Protecting the 2020 Census and Elections Against Intimidation

With both the US Census and the US presidential election, 2020 will be a big year. An accurate census count is crucial to governments for functions like distributing federal funds, and to businesses and researchers. That’s why we’re going to treat next year’s census like an election, with people, policies and technology in place to protect against census interference.

We’re building a team dedicated to these census efforts and introducing a new policy in the fall that protects against misinformation related to the census. We’ll enforce it using artificial intelligence. We’ll also partner with non-partisan groups to help promote proactive participation in the census.

To protect elections, we have a team across product, engineering, data science, policy, legal and operations dedicated full time to these efforts. They’re already working to ban ads that discourage people from voting, and we expect to finalize a new policy and its enforcement before the 2019 gubernatorial elections. This is a direct response to the types of ads we saw on Facebook in 2016. It builds on the work we’ve done over the past year to prevent voter suppression and stay ahead of people trying to misuse our products.

Just as civil rights groups helped us better prepare for the 2018 elections, their guidance has been key as we prepare for the 2020 Census and upcoming elections around the world.

Formalizing Facebook’s Civil Rights Task Force

Perhaps most importantly, today we’re announcing plans to build greater awareness about civil rights on Facebook and long-term accountability across the company. After the first audit update in December, I created a civil rights task force made up of senior leaders across key areas of the company. Today, we’re going one step further and formalizing this task force so it lives on after the audit is finished.

The task force will onboard civil rights expertise to ensure it is effective in addressing areas like content policy, fairness in artificial intelligence, privacy, and elections. For example, we will work with voting rights experts to make sure key members of our election team are trained on trends in voter intimidation and suppression so they can remove this content from Facebook more effectively.

We’re also introducing civil rights training for all senior leaders on the task force and key employees who work in the early stages of developing relevant products and policies. The training is designed to increase awareness of civil rights issues and build civil rights considerations into decisions, products and policies at the company. We know these are the first steps to developing long-term accountability. We plan on making further changes to build a culture that explicitly protects and promotes civil rights on Facebook.

Laura’s second report includes more information about the updates we’re making. Over the past year, I’ve had the privilege of meeting with many key leaders, and our conversations have been humbling and invaluable. We will continue listening to feedback from the civil rights community and address the important issues they’ve raised so Facebook can better protect and promote the civil rights of everyone who uses our services.

By Anna Benckert, VP and Associate General Counsel

People should have clear, simple explanations of how online services work and use personal information. Today we’re announcing updates to our Terms of Service to clarify how Facebook makes money and better explain the rights people have when using our services. The updates do not change any of our commitments or policies — they solely explain things more clearly.

Several of the updates are the result of our work with the European Consumer Protection Cooperation Network. Others are based on input from ongoing conversations with regulators, policymakers and consumer protection experts around the world.

They are also part of our ongoing commitment to give people more transparency and control over their information. For example, earlier this year we introduced updates to “Why am I seeing this ad?” and we launched “Why am I seeing this post?” to help people understand and control what they see in News Feed.

Here’s a summary of the information we’ve added to our terms:

  • How we make money: We include more details on how we make money, including a new introduction explaining that we don’t charge you money to use our products because businesses and organizations pay us to show you ads.
  • Content removals: We provide more information about what happens when we remove content that violates our terms or policies.
  • Your intellectual property rights: We clarify that when you share your own content — like photos and videos — you continue to own the intellectual property rights in that content. You grant us permission to do things like display that content, and that permission ends when the content is deleted from Facebook. This is how many online services work and has always been the case on Facebook.
  • What happens when you delete: We’re providing more detail about what happens when you delete content you’ve shared. For example, when you delete something you’ve posted, it’s no longer visible but it can take up to 90 days to be removed from our systems.

The updated terms will take effect for everyone on Facebook around the world on July 31, 2019, and you can preview the changes by visiting our Terms of Service page. Beyond these updates, we’ll keep working on ways to make sure people understand how our business works, how their information is used and how they can control it.

As part of his 2019 personal challenge, today Mark Zuckerberg shared the latest in his series of discussions on the future of technology and society. Mark spoke with Jenny Martinez, Dean of Stanford Law School, and Noah Feldman, a professor at Harvard Law School and advisor to Facebook’s planned Oversight Board for content decisions. They talked about governance issues related to technology and giving people a voice.

See all of Mark’s challenge videos here.

By Brent Harris, Director of Governance and Global Affairs

In November, Mark Zuckerberg laid out a plan for a new way for people to appeal content decisions through an independent body. Earlier this year we released a draft charter outlining a series of questions that we wanted to answer through a global input process, including public consultation, to form that body.

Since that time we have traveled around the world hosting six in-depth workshops and 22 roundtables attended by more than 650 people from 88 different countries. We had personal discussions with more than 250 people and received over 1,200 public consultation submissions. In each of these engagements, the questions outlined in the draft charter led to thoughtful discussions with global perspectives, pushing us to consider multiple angles for how this board could function and be designed.

Today, we are releasing a report with appendices that summarize all of the feedback and recommendations we heard through those conversations, workshops and roundtables; internal research; white papers; media reports; and public proposals.

There are some general themes we have heard during consultation that are echoed in the report.

  • First and foremost, people want a board that exercises independent judgment — not judgment influenced by Facebook management, governments or third parties. The board will need a strong foundation for its decision-making, a set of higher-order principles — informed by free expression and international human rights law — that it can refer to when prioritizing values like safety and voice, privacy and equality.
  • Also important are details on how the board will select and hear cases, deliberate together, come to a decision and communicate its recommendations both to Facebook and the public. In making its decisions, the board may need to consult experts with specific cultural knowledge, technical expertise and an understanding of content moderation.
  • And people want a board that’s as diverse as the many people on Facebook and Instagram. They would like board members ready and willing to roll up their sleeves and consider how to guide Facebook to better, more transparent decisions. These members should be experts who come from different backgrounds, different disciplines, and different viewpoints, but who can all represent the interests of a global community.

As we close this period of consultation and turn our attention to getting the board up and running, including deciding on its membership, the feedback summarized in the report will be used to answer the questions first posed in the draft charter. Those answers will appear in a final charter, to be released in August, that will govern the work of the board.

We’re continuing to consider who will serve on the 40-person board. This process will include sourcing, vetting, interviewing, selecting and providing training for members. In sourcing potential candidates, we have been soliciting suggestions from those who have participated in the public consultation process and the in-person workshops and roundtables. In addition we have been engaging consultants and executive search firms, and will soon be opening a nomination process. We want to make sure that we’re casting a wide net, not just looking to those experts who may already be known to us. Facebook will select the first few people and those members will then help select the remaining people.

For more on this topic, Mark is releasing the next video in his series of discussions on the future of technology and society. He sat down with Jenny Martinez, Dean of Stanford Law School, and Noah Feldman, a professor at Harvard Law School and advisor on the Oversight Board, to discuss governance and what it looks like for the technology industry.

By Sarah Schiff, Product Manager

We believe that transparency leads to increased accountability and responsibility over time – not just for Facebook but advertisers as well. It’s why we continue to introduce tools that allow elected officials, those seeking office, and organizations aiming to influence public opinion to show more information about the ads they run and who’s behind them. At the same time, we’re continuing our work to combat foreign interference in elections worldwide.

Starting today, we are rolling out our transparency tools globally for advertisers wanting to place ads about social issues, elections or politics. For a full list of countries and territories where these tools are now available, visit our Help Center.

Getting Authorized 

As part of the authorization process for advertisers, we confirm their ID and allow them to disclose who is responsible for the ad, which will appear on the ad itself. The ad and “Paid for by” disclaimer are placed in the Ad Library for seven years, along with more information such as range of spend and impressions, as well as demographics of who saw it. 

The authorization process will not change in countries where we’ve previously launched, and people who previously authorized will not need to reauthorize.

Holding Advertisers Accountable

Elections are happening all over the world and some happen with very little notice. We are committed to requiring authorizations and disclaimers for social issue, electoral or political ads in more places. 

We already require that advertisers get authorized and add disclaimers to these ads in over 50 countries and territories, and now we’re expanding proactive enforcement on these ads to countries where elections or regulations are approaching, starting with Ukraine, Singapore, Canada and Argentina. Beginning today, we will systematically detect and review ads in Ukraine and Canada through a combination of automated and human review. In Singapore and Argentina, we will begin enforcement within the next few months. We also plan to roll out the Ad Library Report in both of those countries after enforcement is in place. The Ad Library Report will allow you to track and download aggregate spend data across advertisers and regions. For all other countries included in today’s announcement, we will not be proactively detecting or reactively reviewing possible social issue, electoral or political ads at this time. However, we strongly encourage advertisers in those countries to authorize and add the proper disclaimers, especially in a rapidly evolving regulatory landscape.  

In all cases, it will be up to the advertiser to comply with any applicable electoral or advertising laws and regulations in the countries they want to run ads in. If we are made aware of an ad that is in violation of a law, we will act quickly to remove it. With these tools, regulators are now better positioned to consider how to protect elections with sensible regulations, which they are uniquely suited to do.

In countries where we are not yet proactively detecting or reviewing these types of ads, these tools still give voters more information about who is trying to influence their vote, and we encourage voters and local regulators to hold elected officials and influential groups accountable as well.

Expanding the Ad Library API

We know we can’t do this alone, which is why we’re also rolling out access to our Ad Library API globally so regulators, journalists, watchdog groups and other people can analyze ads about social issues, elections or politics and help hold advertisers and Facebook accountable. Since we expanded access in March, we’ve made improvements to our API so people can easily access ads from a given country and analyze specific advertisers. We’re also working on making it easier to programmatically access ad images, videos and recently served ads.
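
The post doesn’t include a sample query, but the Ad Library API is served through the Graph API’s ads_archive endpoint. The sketch below shows roughly what a country-scoped search might look like; the API version, field list and parameter formatting are assumptions to verify against the current Ad Library API documentation, and a valid access token from the authorization process is required.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # obtained after completing the identity confirmation steps

# Query the Graph API's ads_archive endpoint for political/issue ads.
# Version, field names and parameter formats below are illustrative and
# should be checked against the current Ad Library API documentation.
resp = requests.get(
    "https://graph.facebook.com/v3.3/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['US']",   # passed as a JSON-style array string
        "search_terms": "housing",
        "fields": "page_name,ad_creative_body,funding_entity,spend,impressions",
        "limit": 25,
    },
)
resp.raise_for_status()

# Print who ran each ad and who paid for it.
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), "-", ad.get("funding_entity"))
```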

We’ll continue to partner with governments, civil organizations and electoral authorities to protect the integrity of elections worldwide. Our work to help protect elections is never done, but we believe changes like these continue to move us in the right direction.

A family of four making over $100,000 per year is considered low-income in San Francisco. In its outlying cities and regions, housing affordability remains a persistent problem for middle- and lower-middle-class people and families. In 2017, Facebook launched a rental assistance program to help professionals who serve the community, like teachers and firefighters, afford rent in the greater Menlo Park area.

We’ll continue to support the communities in which we operate, but we can’t solve the housing crisis alone. To that end, we consult with local people, government and organizations to develop programs that work for all. For example, the $75-million Catalyst Housing Fund was co-created with five local community groups, the City of East Palo Alto, and the City of Menlo Park.

Facebook has also sponsored the advocacy group Support Teacher Housing, which advocates for housing for people of all income levels including teachers and other middle-income earners. We also joined the Partnership for the Bay’s Future as a founding member to address the regional housing shortage by protecting current tenants, preserving existing affordable housing, and producing housing for people at all income levels. By partnering with community organizations to help support residents, we can help the communities that we belong to thrive.

In the video below, Konstance, a Menlo Park teacher and mother of two, talks about how being able to live within minutes of her school, rather than hours away, has given her more time for her students, her children and her own continued education.

At Facebook, a dedicated, multidisciplinary team is focused on understanding the historical, political and technological contexts of countries in conflict. Today we’re sharing an update on their work to remove hate speech, reduce misinformation and polarization, and inform people through digital literacy programs.

By Samidh Chakrabarti, Director of Product Management, Civic Integrity; and Rosa Birch, Director of Strategic Response

Last week, we were among the thousands who gathered at RightsCon, an international summit on human rights in the digital age, where we listened to and learned from advocates, activists, academics, and civil society. It also gave our teams an opportunity to talk about the work we’re doing to understand and address the way social media is used in countries experiencing conflict. Today, we’re sharing updates on: 1) the dedicated team we’ve set up to proactively prevent the abuse of our platform and protect vulnerable groups in future instances of conflict around the world; 2) fundamental product changes that attempt to limit virality; and 3) the principles that inform our engagement with stakeholders around the world.

About the Team

We care about these issues deeply and write today’s post not just as representatives of Facebook, but also as concerned citizens who are committed to protecting digital and human rights and promoting vibrant civic discourse. Both of us have dedicated our careers to working at the intersection of civics, policy and tech.

Last year, we set up a dedicated team spanning product, engineering, policy, research and operations to better understand and address the way social media is used in countries experiencing conflict. The people on this team have spent their careers studying issues like misinformation, hate speech and polarization. Many have lived or worked in the countries we’re focused on. Here are just a few of them:

Ravi, Research Manager
With a PhD in social psychology, Ravi has spent much of his career looking at how conflicts can drive division and polarization. At Facebook, Ravi analyzes user behavior data and surveys to understand how content that doesn’t violate our Community Standards — such as posts from gossip pages — can still sow division. This analysis informs how we reduce the reach and impact of polarizing posts and comments.

Sarah, Program Manager
Beginning as a student in Cameroon, Sarah has devoted nearly a decade to understanding the role of technology in countries experiencing political and social conflict. In 2014, she moved to Myanmar to research the challenges activists face online and to support community organizations using social media. Sarah helps Facebook respond to complex crises and develop long-term product solutions to prevent abuse — for example, how to render Burmese content in a machine-readable format so our AI tools can better detect hate speech.

Abhishek, Research Scientist
With a master’s in computer science and a doctorate in media theory, Abhishek focuses on issues including the technical challenges we face in different countries and how best to categorize different types of violent content. For example, research in Cameroon revealed that some images of violence being shared on Facebook helped people pinpoint and avoid conflict areas. Nuances like this help us consider the ethics of different product solutions, like removing or reducing the spread of certain content.

Emilar, Policy Manager
Prior to joining Facebook, Emilar spent more than a decade working on human rights and social justice issues in Africa, including as a member of the team that developed the African Declaration on Internet Rights and Freedoms. She joined the company to work on public policy issues in Southern Africa, including the promotion of affordable, widely available internet access and human rights both on and offline.

Ali, Product Manager
Born and raised in Iran in the 1980s and ’90s, Ali and his family experienced violence and conflict firsthand during the eight-year war between Iran and Iraq. Ali was an early adopter of blogging and wrote about much of what he saw around him in Iran. As an adult, he earned a PhD in computer science but remained interested in geopolitical issues. His work on Facebook’s product team has allowed him to bridge his interests in technology and social science, effecting change by identifying technical solutions to root out hate speech and misinformation in a way that accounts for local nuances and cultural sensitivities.

Focus Areas

In working on these issues, local groups have given us invaluable input on our products and programs. No one knows more about the challenges in a given community than the organizations and experts on the ground. We regularly solicit their input on our products, policies and programs, and last week we published the principles that guide our continued engagement with external stakeholders.

In the last year, we visited countries such as Lebanon, Cameroon, Nigeria, Myanmar and Sri Lanka to speak with affected communities, better understand how they use Facebook, and evaluate what types of content might reduce polarization in these environments. These findings have led us to focus on three key areas: removing content and accounts that violate our Community Standards, reducing the spread of borderline content that has the potential to amplify and exacerbate tensions, and informing people about our products and the internet at large. To address content that may lead to offline violence, our team is particularly focused on combating hate speech and misinformation.

Removing Bad Actors and Bad Content

Hate speech isn’t allowed under our Community Standards. As we shared last year, removing this content requires supplementing user reports with AI that can proactively flag potentially violating posts. We’re continuing to improve our detection in local languages such as Arabic, Burmese, Tagalog, Vietnamese, Bengali and Sinhalese. In the past few months, we’ve been able to detect and remove considerably more hate speech than before. Globally, we increased our proactive rate — the percent of the hate speech Facebook removed that we found before users reported it to us — from 51.5% in Q3 2018 to 65.4% in Q1 2019.

We’re also using new applications of AI to more effectively combat hate speech online. Memes and graphics that violate our policies, for example, get added to a photo bank so we can automatically delete similar posts. We’re also using AI to identify clusters of words that might be used in hateful and offensive ways, and tracking how those clusters vary over time and geography to stay ahead of local trends in hate speech. This allows us to remove viral text more quickly.
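
Facebook doesn’t describe the matching technique behind this photo bank. A common building block for finding near-duplicate images is a perceptual hash; the sketch below uses a simple average hash with Pillow purely as an illustration of the idea, with an arbitrary distance threshold rather than anything Facebook has published.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a simple 64-bit average hash: shrink, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical "photo bank" of hashes for images already found violating.
banned_hashes = {average_hash("known_violating_meme.png")}  # placeholder file path

def is_near_duplicate(path: str, threshold: int = 5) -> bool:
    """True if the image is within a small Hamming distance of a banked image."""
    h = average_hash(path)
    return any(hamming(h, b) <= threshold for b in banned_hashes)
```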

Still, we have a long way to go. Every time we want to use AI to proactively detect potentially violating content in a new country, we have to start from scratch and source a high volume of high quality, locally relevant examples to train the algorithms. Without this context-specific data, we risk losing language nuances that affect accuracy.

Globally, when it comes to misinformation, we reduce the spread of content that’s been deemed false by third-party fact-checkers. But in countries with fragile information ecosystems, false news can have more serious consequences, including violence. That’s why last year we updated our global violence and incitement policy such that we now remove misinformation that has the potential to contribute to imminent violence or physical harm. To enforce this policy, we partner with civil society organizations who can help us confirm whether content is false and has the potential to incite violence or harm.

Reducing Misinformation and Borderline Content

We’re also making fundamental changes to our products to address virality and reduce the spread of content that can amplify and exacerbate violence and conflict. In Sri Lanka, we have explored adding friction to message forwarding so that people can only share a message with a certain number of chat threads on Facebook Messenger. This is similar to a change we made to WhatsApp earlier this year to reduce forwarded messages around the world. It also delivers on user feedback that most people don’t want to receive chain messages.
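
The post doesn’t specify the cap or how it’s enforced. As a minimal sketch of the idea, a per-message forward counter could refuse shares beyond a fixed number of threads; the limit of 5 below echoes the WhatsApp change but is an assumption here.

```python
FORWARD_LIMIT = 5  # assumed cap; the actual number isn't stated in the post

# How many chat threads each message has already been forwarded to.
forward_counts: dict[str, int] = {}

def try_forward(message_id: str, thread_ids: list[str]) -> list[str]:
    """Forward to as many of the requested threads as the cap allows; return the threads used."""
    used = forward_counts.get(message_id, 0)
    allowed = thread_ids[: max(0, FORWARD_LIMIT - used)]
    forward_counts[message_id] = used + len(allowed)
    return allowed

# Example: once the cap is reached, further threads are refused.
print(try_forward("msg-1", ["t1", "t2", "t3"]))        # ['t1', 't2', 't3']
print(try_forward("msg-1", ["t4", "t5", "t6", "t7"]))  # ['t4', 't5']
```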

And, as our CEO Mark Zuckerberg detailed last year, we have started to explore how best to discourage borderline content, or content that toes the permissible line without crossing it. This is especially important in countries experiencing conflict, because borderline content, much of which is sensationalist and provocative, can have more serious consequences there.

We are, for example, taking a more aggressive approach against people and groups who regularly violate our policies. In Myanmar, we have started to reduce the distribution of all content shared by people who have demonstrated a pattern of posting content that violates our Community Standards, an approach we may roll out in other countries if it proves successful in mitigating harm. In cases where individuals or organizations more directly promote or engage in violence, we will ban them under our policy against dangerous individuals and organizations. Reducing the distribution of content is, however, another lever we can pull to combat the spread of hateful content and activity.

We have also extended the use of artificial intelligence to recognize posts that may contain graphic violence and comments that are potentially violent or dehumanizing, so we can reduce their distribution while they undergo review by our Community Operations team. If this content violates our policies, we will remove it. By limiting visibility in this way, we hope to mitigate against the risk of offline harm and violence.
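
As a rough illustration of that workflow (a classifier score in, a distribution decision out while human review is pending), the sketch below uses a hypothetical threshold and field names; the real pipeline isn’t public.

```python
from enum import Enum

class Action(Enum):
    DISTRIBUTE = "distribute normally"
    DEMOTE_PENDING_REVIEW = "reduce distribution until reviewed"
    REMOVE = "remove for violating policy"

# Hypothetical threshold; the real classifiers and review pipeline are not public.
FLAG_THRESHOLD = 0.8

def triage(violence_score: float, reviewed: bool, violates_policy: bool) -> Action:
    """Classifier-flagged posts get reduced distribution while awaiting human review."""
    if reviewed:
        return Action.REMOVE if violates_policy else Action.DISTRIBUTE
    if violence_score >= FLAG_THRESHOLD:
        return Action.DEMOTE_PENDING_REVIEW
    return Action.DISTRIBUTE
```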

Giving People Additional Tools and Information

Perhaps most importantly, we continue to meet with and learn from civil society groups who are intimately familiar with trends and tensions on the ground and are often on the front lines of complex crises. To improve communication and better identify potentially harmful posts, we have built a new tool for our partners to flag content to us directly. We appreciate the burden and risk that this places on civil society organizations, which is why we’ve worked hard to streamline the reporting process and make it secure and safe.

Our partnerships have also been instrumental in promoting digital literacy in countries where many people are new to the internet. Last week, we announced a new program with GSMA called Internet One-on-One (1O1). The program, which we first launched in Myanmar with the goal of reaching 500,000 people in three months, offers one-on-one training sessions that include a short video on the benefits of the internet and how to stay safe online. We plan to partner with other telecom companies and introduce similar programs in other countries. In Nigeria, we introduced a 12-week digital literacy program for secondary school students called Safe Online with Facebook. Developed in partnership with Re:Learn and Junior Achievement Nigeria, the program has worked with students at over 160 schools and covers a mix of online safety, news literacy, wellness tips and more, all facilitated by a team of trainers across Nigeria.

What’s Next

We know there’s more to do to better understand the role of social media in countries experiencing conflict. We want to be part of the solution so that as we mitigate abuse and harmful content, people can continue using our services to communicate. In the wake of the horrific terrorist attacks in Sri Lanka, more than a quarter million people used Facebook’s Safety Check to mark themselves as safe and reassure loved ones. In the same vein, thousands of people in Sri Lanka used our crisis response tools to make offers and requests for help. These use cases — the good, the meaningful, the consequential — are ones that we want to preserve.

This is some of the most important work being done at Facebook and we fully recognize the gravity of these challenges. By tackling hate speech and misinformation, investing in AI and changes to our products, and..
