Written by Kelly Matheson, Senior Attorney and Program Manager, WITNESS
In a time when climate change threatens our environment and lawmakers in the United States reject its severity, a group of passionate young people has stepped up to demand action. In 2010, we began working with Our Children’s Trust and iMatter, supported by climate scientists, constitutional law professors and faith-based communities, to bring attention to a new movement of young people calling on state and federal governments to protect the atmosphere for future generations.
For decades, the U.S. government has known about the connection between fossil fuels and climate change. Yet administration after administration took affirmative actions to permit and encourage fossil fuel development, even though government reports from as early as 1965 recognized climate change as a catastrophic threat. “Today, this known threat is violating the youngest generation’s constitutional rights to life, liberty, and property,” said WITNESS Senior Attorney and Program Manager Kelly Matheson.
On May 4, 2011, youth with the help of Our Children’s Trust brought legal action against all 50 states and the United States government. WITNESS collaborated with Our Children’s Trust, iMatter and a team of filmmakers from Montana State University to create “Stories of Trust: Calling for Climate Recovery”, ten videos featuring youth plaintiffs from across the U.S.
These videos tell the personal stories of each of the plaintiffs: how they and their communities have been harmed by climate change, how their rights have been infringed and why, ultimately, they were suing the government, not for financial compensation, but for action. The young people are demanding that the federal government enact a “National Climate Recovery Plan” that is in line with the best available science and honors their constitutional rights.
With our partners, we distributed the videos globally, and five of them were hand-delivered to President Obama. “Stories of Trust” helped the innovative legal strategy gain more attention and recognition in the U.S. and beyond.
As the youths’ landmark constitutional climate lawsuit, Juliana v. United States, makes its way through the legal system, the Trump administration has employed extreme tactics to get the case dismissed, including the unprecedented use of a rare legal maneuver called a petition for a writ of mandamus. To date, the Trump administration has used this “drastic and extraordinary measure” three times in combination with a series of other delay tactics to silence the voice of youth and keep science out of the courtroom.
Yet, time and time again, the Trump administration’s relentless efforts to close the doors of justice to these young people and stop “the trial of the century” have failed. Even the United States Supreme Court has twice ruled in favor of the youth.
WITNESS has believed in these brave youth and Our Children’s Trust since the beginning of their journey. We are excited that their work was recently featured on 60 Minutes. We add our voices of support to the 21 young plaintiffs who are demanding an end to reckless fossil fuel development and the drastic curbing of carbon emissions that contribute to detrimental climate change.
Environmental rights are human rights. We must take action now to ensure the rights of this youngest generation and of those to come.
On February 20th, Samir Flores Soberanes, community communicator and activist, was murdered outside his house in Amilcingo, a community in the state of Morelos in east-central Mexico. His murder took place the day after he publicly announced, once again, his opposition to the Morelos Integral Project (PIM in Spanish) at a federal government assembly, and three days before the government launched a public referendum on the project.
Samir Flores Soberanes woke up every day at six in the morning to speak at the local community radio station. Along with other community leaders, he founded the local radio station, Radio Amilzintko, in 2013, to create a channel to share news and information in the face of the Morelos Integral Project threat.
Since 2011, the Mexican government has tried to impose the Morelos Integral Project. Amilcingo lies near the Popocatepetl volcano and is one of several communities affected by the project, which crosses 24 municipalities in three Mexican states: Morelos, Puebla and Tlaxcala.
The PIM will result in:
The production of electrical energy for the Federal Electricity Commission through a thermoelectric plant that uses natural gas and steam (construction of the thermoelectric plants in Morelos has already begun);
An aqueduct that will cut people off from the water supply of the Cuautla River.
Seeking Justice for Indigenous Land Rights Activist Samir Flores Soberanes - YouTube
The thermoelectric project belongs to the Federal Electricity Commission. Since 2011, local communities, international human rights groups and academics have opposed the project because of the health risks it poses to indigenous communities, the damage it will cause to their lands and crops, and the monumental safety risks posed by its proximity to the Popocatepetl volcano.
Samir organized with the community through daily radio broadcasts. His voice represented the voice of a community standing together to defend themselves from a project that would affect their land, water and wellbeing.
WITNESS firmly believes in sharing and promoting the use of video tools to strengthen community movements to defend human rights. In August 2014, we started working in Amilcingo and met Samir. Alternative media outlets gathered in Amilcingo to learn from each other’s experiences using communication and media platforms to defend land, territory and natural resources.
As a result, independent and alternative media made a video called Raising the Eyes, amplifying the community’s voice to show what the PIM is about and how the people of Amilcingo are organizing to resist. One of these acts of resistance was building the community radio station.
Samir was brutally murdered in February 2019, just as the government began its federally mandated consultative process on the PIM with the Amilcingo community.
Now the government is pointing the finger at organized crime groups as possible perpetrators, going so far as to say it will investigate people from the community. It is the government’s responsibility to bring this atrocious crime to justice while restoring a climate in which human rights defenders can carry out their work. Criminalizing land defenders goes directly against this obligation. WITNESS believes this creates a hostile environment, and that until the impunity surrounding Samir’s murder is addressed and dialogue with the community is established, the PIM has no ground to move forward.
WITNESS strongly condemns the murder of Samir. It is urgent that the Mexican government thoroughly investigate the case. It cannot go unpunished as it is impunity that enables killings of leaders from indigenous communities to continue. These communities are fighting every day to defend their rights, their community, their children, their wellbeing. Sign the petition today.
Exactly a year ago, on 5th March 2018, we watched as flames engulfed Digana, in the heart of Kandy. We saw quiet, multi-ethnic suburban localities swept up by a wave of anti-Muslim wrath. On 6th March 2018, the Sri Lankan government responded by blocking access to Facebook, Whatsapp and Viber, on the grounds that they fueled communal violence. And yet, as navigators of virtual realms, we know that the distinction between online and offline spaces is swiftly disappearing.
Certainly, Facebook played a strong role in the violence meted out against minority communities in Digana, as Sri Lankan media and human rights activists have been saying for years. WITNESS has previously spoken out about how we, as human rights practitioners, cannot allow a few for-profit companies in Silicon Valley to make decisions about the public good.
Mr President, Our Children Are Watching - YouTube
A culture of violence and misogyny that is deeply embedded in political, social and legal structures has no easy solution. Sri Lanka’s legal system is woefully ill-equipped to deal with sexual and gender-based violence, online and offline. According to police statistics cited by the World Health Organization, 41,882 cases of violence against women were recorded between 2012 and 2016.
In 2017, police statistics noted 294 recorded cases of rape, 1,206 cases of statutory rape categorised as “with consent”, and 232 cases of statutory rape categorised as “without consent”. There were zero convictions recorded by the end of the year. The number of cases reported to the police is likely far lower than the number of incidents that actually occur.
We recently interviewed Raisa Wickrematunge, co-editor of the leading civic media initiative Groundviews, about some of their strategies to fight sexual and gender-based violence, including digital approaches. Their website was the first attempt by a Sri Lankan collective to provide a digital platform for citizens to express their views and document life post-conflict and during emerging crises, and it regularly showcases innovative uses of new media technology. Their sister sites, Vikalpa and Maatram, serve audiences in the Sinhala and Tamil languages. All three initiatives are anchored at the Centre for Policy Alternatives (CPA), Sri Lanka.
According to Raisa, her team is currently writing a research report on tech-based violence, looking particularly at the ways women and the LGBT community are discussed on Facebook. They have been archiving examples for almost a year. The report will be used for advocacy purposes and to provide recommendations for policymakers, social media platforms and civil society. Photo-essays and stories published on the site highlight different types of violence – see here and here.
With cases backlogged for years, Sri Lanka’s legal system is notoriously slow. Officials processing cases are often not sensitised to working with those who have experienced sexual and gender-based violence, and are often responsible for re-traumatising victims. The state has made some efforts to mitigate this through women’s and children’s police desks, and some police units have received training, but these are not considered a priority.
Incidents of sexual harassment and the non-consensual dissemination of private, intimate images are some of the key trends in technology-related violence that the research team has uncovered. They find that these are normalised on Facebook across pages in Sinhala, Tamil and English. The monitored pages indicate that images shared publicly without consent are often just a preview of larger databases, used to invite page followers to privately gain access to more images. In some instances the images used are personal rather than intimate, but the accompanying captions and comments are derogatory, abusive, violent and incite further violence.
Celebrating IWD2019: Interview with Raisa Wickrematunge - YouTube
Media literacy is one component of Groundviews’ work, through which the importance of fact-checking and verification is emphasized. Resources include coverage of Cambridge Analytica and Facebook, short videos on creating a secure password, mobile phone security and two-factor authentication, among many others. While few incidents recorded on video surfaced on Facebook during the research on tech-based violence against women, the team received several videos via WhatsApp and documented two videos on Facebook, downloaded using Keepvid. The WhatsApp videos were sent to the team by trusted sources and were later corroborated by other activists interviewed in the north and east, who confirmed that they were recent.
Raisa and her team contend that there is a strong culture of impunity enjoyed by the perpetrators of online violence on Facebook. The monitored pages do not indicate who administers them, and followers and commenters are often unidentifiable: they use a fake identity or profile, or do not share identifying markers such as a profile picture or full name (see this interesting study of hate speech on Facebook in Sri Lanka). While Facebook’s reporting guidelines might offer opportunities to report perpetrators and get content removed, for women facing violence on Facebook who want to report to law enforcement, the information needed to identify perpetrators remains elusive.
Very little research has been done on tech-based violence against women and girls in Sri Lanka. Trilingual, open-source resources are rare, but raising awareness and providing easy access to empowering tools can save lives. With the latest findings from their research on tech-based violence against women, we think this team of researcher-activists is already turning the tide.
Meghana Bahar is WITNESS’ Communications & Engagement Consultant for the Asia-Pacific region. She is a gender and media specialist, with 19 years of experience in transnational women’s and human rights movements.
Image: Brazilian politicians livestreaming while campaigning and breaking a Marielle Franco sign. The current Rio governor is on the right; a now-elected deputy is breaking the sign.
By Adriano Belisário
As mobile phone access and Internet bandwidth increased in this decade, livestreaming became an important part of political disputes: early examples include the Arab Spring, the Occupy movement in the USA, the indignados (15-M) movement in Spain and the 2013 protests in Brazil. There are many examples around the world of how video and amateur livestreams on social networks have transformed activist and democratic practices, protests and methodologies for documenting human rights violations. The potential in this area has driven much of WITNESS’ work around livestreaming and co-presence, including the Mobil-Eyes Us project.
However, these same tools are now being used politically to promote hate speech and attacks against human rights defenders and progressive activists. WITNESS has covered this topic since the start and has followed some of these experiences globally. In this essay, we will analyze these shifts, reviewing recent aspects of the political use of video and livestreaming in Brazil, more specifically in Rio de Janeiro.
Livestream and activism in Rio
Since the beginning of this decade, live streaming on the web has been used widely to denounce abuses and human rights violations in Rio de Janeiro, especially after the announcement of mega international events in the city, like the World Cup (2014) and the Olympic Games (2016). To host those events, a new apparatus of force was created, including the intensified use of the army for the “Guarantee of Law and Order” (GLO) in favelas; the creation of the Anti-Terrorism Law, which can now be applied to ban social movements; and the creation of the Pacifying Police Units (UPP), positioned along strategic city axes around facilities used during the 2014 World Cup and the 2016 Olympic Games. Since then, the city has also witnessed a sharp increase in military presence, which has led to more extrajudicial executions during police operations and paramilitary activities.
Simultaneously, activists and civil society in general organized themselves to resist, often using video to do so. In 2012, for instance, a protest in Vila Autódromo, a community evicted because of the Olympic Games, was streamed at the Rio+20 People’s Summit. But to understand this context fully we need to go back to the iconic year of 2013, when protests popped up in hundreds of Brazilian cities, engaging millions of people of different political persuasions.
At that time, Rio de Janeiro was one of the epicenters of this political earthquake. Livestreams played an important role not only in engaging people to reclaim the streets politically, creating a process of co-presence and support between streamers and their audiences, but also in producing evidence for the courts, used both to convict activists and to prove their innocence. Many activists went to jail or were targeted by surveillance schemes or lawsuits.
After 2013, on one side, activists and human rights defenders kept using livestreams widely during strikes and protests, such as the 2014 World Cup and the 2016 Olympic Games. WITNESS started the Mobil-Eyes Us pilot in Brazil in this context, during the Rio Olympic Games in 2016, aggregating livestreams and enriching them with further context and storytelling.
However, between 2013 and 2016 the far-right opposition also grew quickly, wielding an anti-corruption discourse against the Workers’ Party and left-wing or progressive groups in general. Simultaneously, conservative and anti-human rights groups flourished and created their own media environment, mainly pushing for the removal of then-president Dilma Rousseff of the Workers’ Party. Some left-wing activists also used streaming regularly to broadcast demonstrations, led mainly by political parties and networks linked to traditional social movements. In terms of audience, this period was marked by huge polarization between pro- and anti-impeachment camps, and Dilma’s deposition in 2016 represented a victory for right-wing groups in terms of successful mobilization.
The conservative wave
Instead of reversing that process, the governments that came in after 2016 and the end of the city’s “mega events cycle” amplified the military presence in Rio de Janeiro even further. The military presence is not new, as we mentioned before, but it, and the conservative power pushing it, has recently scaled up rapidly. Today, Rio’s video and livestream activism faces new challenges, especially the risks involved in denouncing abuses publicly.
Police force in Complexo do Alemão, in Rio de Janeiro. (Photo: Agência Brasil)
Paramilitary executions reached international attention with the murder of Rio councilwoman Marielle Franco in March 2018, the same month in which five young hip-hop activists were murdered in Maricá, also in the state of Rio de Janeiro. At the end of the year, far-right movements, the police and the military emerged as the main winners of the 2018 parliamentary and presidential elections, with the victory of former army captain Jair Bolsonaro, a long-standing Rio politician who became famous for his anti-human rights, pro-torture and pro-military dictatorship stances.
Bolsonaro and livestreams
Bolsonaro used short livestreams extensively during the campaign. He broadcast daily on social networks during the second round of the elections, often bypassing mainstream media and reinforcing his cultivated image of independence by speaking through his own Internet channels. These live broadcasts were scheduled at about 8pm, almost the same time as the main television news program in Brazil. Despite all the interactive features of web video streaming, his campaign streams on both YouTube and Facebook were more like casual public speeches than channels for interacting directly with his supporters.
In one of his last speeches as a candidate, a week before the elections, Bolsonaro did a livestream that was reproduced on a big screen at a demonstration of his supporters in São Paulo. While he talked about his plans to send the political opposition to jail or into exile, his audience could see the clothesline of his house in the background of the video. As his first public act after the final poll results were released, Bolsonaro turned on his camera to give his first speech as president-elect via livestream (link to Facebook and YouTube) and, less than two hours later, started another, five-minute broadcast. Standing next to his wife, Bolsonaro had a Brazilian flag stuck to the wall with duct tape as his backdrop. That simple style contrasted with the impressive viewer numbers: 3.1 million in the first 20 hours.
Unlike Trump, Bolsonaro as a candidate, and part of the right-wing movements in Brazil, didn’t use social media to cultivate a professional or high-profile image. Instead, Bolsonaro seemed proud of his amateurism during the elections. Yet his videos were amateur in a professional, calculated manner.
Bolsonaro using Facebook Live
Right-wing activism and politics in online video
He wasn’t the only right-wing politician to use recorded and livestreamed video effectively in this election. Some famous YouTubers from conservative groups were also elected to the National Congress this year. Recently, Avaaz researched the reach of the most famous of them, Joice Hasselmann. Their team analyzed around 10,000 Facebook posts that included false rumours alleging fraud in the electronic voting system, and found her channels to be one of the main sources of that content.
Joice has 1.9 million followers on Facebook and more than 1 million on YouTube. Avaaz found that 71% of the interactions on her Facebook page relate to the videos posted there. Her YouTube channel offers visitors a paid subscription for almost 3 dollars a month, which gives subscribers a weekly exclusive video chat with her and an exclusive membership badge on their username in the comments.
Throughout the year, Joice’s videos were used to engage her followers in her own campaign and to promote Jair Bolsonaro. Because her videos were cross-posted between two platforms (Facebook and YouTube), it was common for Joice to ask her audience to subscribe to both channels, since she offered different products and approaches on each.
A lot has been said about how disinformation via WhatsApp affected this year’s poll results in Brazil, or how Facebook influenced the Trump election, but YouTube also played an important role in Brazilian politics. While WhatsApp seems to be the place for guerrilla communication, YouTube provides more time to present ideas and build a discourse around an issue. Vice recently published an article (in Portuguese) about this phenomenon in Brazil, after Bolsonaro posted suggestions of YouTube channels for his Twitter followers to subscribe to. A researcher cited in the article pointed out that right-wing activists and conservative groups use YouTube to attack and trivialize LGBTQ+ people, feminism, progressive groups and even basic discussions about racism.
These groups often use misinformation as a tactic, and the 2018 Brazilian elections had many such episodes. Alarmist fake news about electoral fraud was spread by Bolsonaro and Joice Hasselmann to engage their supporters. One of the most famous videos, with almost 2 million views, carried the all-caps, sensationalist title “URGENT: USA detects actions from Hezbollah, Iran and Venezuela in Brazilian elections”; in it, Hasselmann showed a letter supposedly from US Congressman Dana Rohrabacher containing a few words about alleged interference, without providing further information. Since dissatisfaction with, and disbelief in, the status quo were the fuel of this far-right activism, tactics like this engaged followers with the idea that they had to build a movement so broad that it would be impossible to corrupt.
Recently in Brazil, WITNESS also followed two cases of small anonymous far-right groups creating copycat Facebook pages with nearly the same names as grassroots activist groups, in order to sow confusion and mimic their strategies. These right-wing groups usually engage people in an anti-human rights discourse, trying to legitimize abuses and violations such as extrajudicial executions. Although their audience is not as wide as that of the original groups, they try to create a local “counter-narrative” and intimidate human rights activists.
Screenshot of a post supporting Bolsonaro after the first round of the 2018 elections, on a copycat Facebook page that imitates a grassroots favela-based media group. “The second round fight starts NOW. Wear your armor and let’s fight”.
In this new scenario, many activists in Rio de Janeiro reorganized their actions to denounce police abuse while trying to reduce the risks involved. The themes and content of their live broadcasts have been changing: where in 2013 they streamed mainly demonstrations, in recent years grassroots media groups have diversified their streams to include public debates, cultural activities and other kinds of content.
Beyond that, Rio activists have also developed new strategies in favour of human rights and democracy. A worthwhile example is the work of Defezap, an organization based in Rio de Janeiro that works as a hub where people can send videos and denounce abuses using the most popular instant messenger in Brazil, WhatsApp. Currently a WITNESS partner in the Mobil-Eyes Us project, Defezap doesn’t reveal the phone numbers of its sources, but does provide support for legal cases when needed. From the many videos it receives, Defezap maps patterns of violence in the city while gathering crucial evidence. WITNESS also supports video-as-evidence initiatives in Rio de Janeiro, such as the partnership with Papo Reto, a grassroots media channel based in Complexo do Alemão, one of the city’s main favelas.
However, as we can see, livestreaming and the aesthetics and strategies of amateur video are now used regularly not only by human rights and progressive groups, but also by agents of the status quo, armed forces and conservative networks. Video streaming platforms such as Facebook and Google aren’t neutral technical spaces where discussion happens; they are private, for-profit companies with internal policies and political choices. Their private policies around fake news, user privacy, content-visibility algorithms and the membership requirements for monetizing channels now affect democracy itself.
This episode in Rio de Janeiro helps us visualize a global reality: streaming and video activism on the web are a field occupied by different forces, not a single or monolithic perspective. Conservative and progressive groups alike develop co-presence and audience-engagement strategies. Platforms like YouTube and Facebook have tried to present themselves as neutral, but they undeniably play an important political role that they should reflect upon.
So, beyond supporting alternative networks and open-source platforms, video activism also needs to discuss those private policies in public, since they affect millions of people and shape most of our discussions. Recognizing those platforms as important political players, we need to push them toward greater transparency and toward anti-authoritarian, pro-human rights initiatives, as well as asking for more open-source contributions from them. It is part of WITNESS’ mission to reflect on these topics. You can visit our website to learn more about our work on themes like censorship in China, content regulation and ‘terrorist’ content in Europe, and misinformation on WhatsApp.
Mobil-Eyes Us is a project of WITNESS and the WITNESS Media Lab to explore potential new approaches to livestream storytelling for action. We look at technologies, tactics and storytelling strategies that use live video to connect viewers to frontline experiences of human rights issues they care about, so they become ‘distant witnesses’ who will take meaningful actions to support frontline activists. We have developed a series of storytelling experiments, in collaboration with favela-based human rights activists in Rio de Janeiro, which has led to an app. This app, Mobil-Eyes Us (in its alpha stages), enables an activist group to curate a series of eyewitness Facebook livestreams and push them to the relevant people in their network to watch and take individual or collective action. Those people can then rapidly share a stream so more viewers are present to witness an incident, help translate, provide guidance or give context.
In the following series, our Project Lead in Rio, Clara Medeiros, will share practices learned from the live videos of 18 activists and favela collectives from Rio de Janeiro who have made an impact on their communities, both by denouncing human rights violations and by supporting initiatives to and from the favelas. By analyzing livestreams categorized as solidarity marches, planned actions, broadcasts of rights violations, call-outs and long-duration streams, we will unpack the strategies and techniques used successfully by frontline activists. These blogs are part of our exploration of effective new approaches to livestream storytelling, the technologies that can support it, and how both can be linked to effective ‘distant witness’ action, as part of the Mobil-Eyes Us Project at WITNESS and the WITNESS Media Lab. This was originally published on WITNESS Português.
By Clara Medeiros
During the two completed phases of the Mobil-Eyes Us pilot, we have learned that favela activists in Rio de Janeiro make regular use of live broadcasts to call on residents and activists to attend demonstrations for rights and community events. A great deal of this is due to the fact that livestreamed videos get more views than any other type of Facebook publication and generate 6 to 10 times more comments than a text or photo post. As a result, call-out broadcasts are excellent online engagement tools: they are usually produced in advance of the event, and can be publicized and shared as recorded video even after they are no longer live.
In this post, we will share practices learned for creating an effective call-out broadcast, with two distinct examples from prominent communicators in their communities. We lead off with Raul Santiago of Coletivo Papo Reto at Complexo do Alemão, whose exemplary storytelling was part of a successful campaign (link to article in Portuguese). This campaign was included in our curation of livestreams within the Mobil-Eyes Us platform, providing a complete narrative arc of the streams created during the campaign, each with context and live translation into English.
Example of one of the campaign live-streamings within Mobil-Eyes Us’ previous platform with its accompanying English translation.
Call-out – example #1 – Raul Santiago from Coletivo Papo Reto
One of the main highlights of this particular call-out, which kicked off the campaign, is the chosen livestream image that shows the members of Coletivo Papo Reto with their full protective suits and photography equipment. The framing emphasizes their credibility and the seriousness of their work while incorporating the favela landscape, with the iconic cable car, as a poignant backdrop.
We at Coletivo Papo Reto are here to call on residents, activists, independent communicators, media in general and lawyers to join us in a walkthrough of the most critical area of this war, the Alvorada area, here in Complexo do Alemão.
We will be starting at 4PM at the entrance of Grota, which is on Itararé road on the corner of Joaquim de Queiroz street.
Come and help us talk to residents and think together about ways to ease this chaotic situation here in the favela.
This impactful image, combined with the publication’s concise and compelling text description (translated into English in the box above), summarizes the situation efficiently, boosting the engagement and publicity strategy. It guarantees that even those who watch the broadcast when it is no longer live can share the short, 5-minute recorded video; short duration is another important feature of an impactful call-out livestream. And because the main facts, as well as the time and location, are included and highlighted in the publication, people can still follow along asynchronously. In this particular case, the language of the publication also helps guarantee that the key practical information in the broadcast is fully communicated even to viewers whose poor connection makes it hard to watch the video or hear the audio. A call-out generally does not have the same urgency as other types of live broadcast, so it is even more important to ensure good internet service and to choose in advance a visually appealing location. This way, the streamed video can also serve as a high-quality video without great production costs, whose information is not only understandable but shareable even when it is no longer live.
During the pilot process, we confirmed the importance of contextualization for greater engagement in the fight against rights violations. In this live stream, Raul gives full context for all the violations that were happening. Highlighting the importance of distant witness support is an effective tactic: it explains why participation matters even from far away, since there are actions distant supporters can take that move frontline activists further ahead in their struggle. This call-out was the first in a series of 12 live broadcasts produced by Coletivo Papo Reto, in a campaign that brought together independent journalists, journalists from major news organizations, public prosecutors, public defenders and human rights advocates to denounce the numerous violations taking place at Alemão.
Public hearing on house invasions in Complexo do Alemão, part of the campaign that started with Raul’s call-out, added to the Mobil-Eyes Us platform. Our platform featured live Portuguese-to-English translation and contextualization in both languages, as it did for all other streams from the campaign.
You can get a sense of the context of the situation and the level of organization of the campaign in this broadcast from Raul’s personal profile, made on February 4, 2017. In it, Raul, alongside other activists, public defender’s officers and residents, goes on a walk-through documenting the violations that took place in the Largo do Samba area in Complexo do Alemão.
Two screen grabs from Raull’s streaming denouncing human rights violations. On the right, he streams a picture taken by Bento Fabio, a photojournalist from Coletivo Papo Reto, so viewers can actually see the police officer in the invaded house.
Through our workshops and trainings, we support a culture of direct exchange with the audience, not only as a booster of engagement and reach but, more importantly, as a tool for collaboration and co-presence. We also encourage, for example, the practice of streamers asking their audience to write down contextual information mentioned live in the comment box, so users who watch the recorded version can have a more comprehensive experience. This not only helps the stream reach more people within Facebook – since videos with more comments and reactions are more likely to appear in users’ News Feeds – it also moves participants away from being a passive audience by motivating them to use their various abilities and capacities, and thus understand their value as collaborative resources.
Call-out – example #2 – MC Martina and MC Al-Neg from Poetas Favelados for Voz das Comunidades
In this call-out example, we look into the main strategies used by two poets from Complexo do Alemão, MC Martina and MC Al-Neg, who were invited to broadcast on the channel of Voz das Comunidades (a community-focused newspaper from Complexo do Alemão created in 2005 by Rene Silva, then 11 years old, to report on social issues in the favelas of Complexo do Alemão) and invite residents to a poetry and cultural resistance event at Complexo do Alemão. Early in their captivating opening remarks, the two poets disclose that they aren’t familiar with the mobile device used in this specific livestream, freeing themselves from any pressure to produce a professional broadcast. At the same time, laying out their vulnerability and being casual about it brings them closer to an audience that often enjoys artists using other platforms to connect with their fans in a candid manner. Martina makes this connection with the audience in a dexterous, charming and natural way.
Soon after introducing herself, while waiting for Facebook to build an audience for the broadcast, she starts saying hi directly, greeting by full name the people already present in the livestream. This audience-engagement tactic works great for call-out streams, just as it does at the beginning of any streaming category, or in any moment with little action or information to offer your audience – as we will see later in this series of publications when we go over specific tactics for long-duration broadcasts. One advantage of this tactic is that it encourages those who get mentioned to comment, which spreads the stream more easily through Facebook’s News Feed.
The approach is more dynamic in this case, but it still follows the logic of creating impact with what people see while fostering a bond with residents of the area. Even though they are in a location with low bandwidth, both MCs show adaptability as they grab the audience’s attention. They give background information about the collective they are a part of; explain the origin of Slam Laje; contextualize not only the event but also the participating poets; and improvise rhymes and dance while doing so. Connectivity issues don’t derail the stream; instead, they become a great opportunity to repeat key information. Repetition of key information is essential to communicate urgency in live broadcasts, especially when that information builds a narrative listing key impactful facts.
Since nobody from the audience collaborated by posting links to the mentioned pages in the comment box, as requested during the livestream, an hour after the broadcast was over MC Martina herself commented, tagging her Facebook Page, Poetas Favelados’ Page and Slam Laje’s Page, along with a link to the mentioned event. The advantage of adding this information after the broadcast is that such comments are more likely to appear in Facebook’s comment selection after the live stream ends, and therefore be readable by anyone who watches the video later.
During the test phases of Mobil-Eyes Us, we learned that it is ideal to have a support person at base commenting this key information while the stream is live, so it appears in real time and is easier for users to access. This also lets the streamer keep their focus on the content and manage interactions throughout the broadcast.
With the Mobil-Eyes Us app, this can be done by distant collaborators who receive notifications with specific requests for help and exchange. A previously registered user can receive specific asks according to their registered abilities. So if a series of streams needs international attention – for leverage and pressure, for instance – users registered as translators can help get the message across borders faster and more effectively, while users registered as contextualizers can add articles and information in real time, all depending on each collaborator’s availability to respond to the app’s notifications.
Collaborators managing translation of a broadcast added to the Mobil-Eyes Us platform that featured Portuguese to English translation and live contextualization in April 2017.
So, to summarize…
Whatever platform is used, we have observed that people are generally willing to participate more actively in live broadcasts, either by helping fill out a post and enrich it with relevant information and links, or by helping share and spread word of a situation quickly. The scope of possibilities for exchange and collaboration is vast, and it extends further when people realize that their skills are useful and more diverse than previously thought.
We have observed that call-out live streams can help create wider buzz for planned events and a more complex narrative around subjects. They are a fast, inexpensive and practical tool to draw an audience deeper into a subject.
So, let’s review some of our tips. First, in your call-out stream, try to have an impactful starting image, as well as a text covering the main information of the event you’re going to promote. If in doubt, just answer these simple questions in the text and repeat them in the stream: What will happen, and why should your audience care? Where will it happen, or what physical or virtual address should they know about? When does it start? Who’s promoting it?
Second, film vertically and keep the focus of your frame in the upper half of the video as comments and reactions are superimposed on the bottom of the video in Facebook and can clutter the image.
Finally, make sure your stream is concise. Try to keep its length to about 4-6 minutes. Provide further contextualization in the comments, which will make your post visible to more people – or, even better, ask your audience to do so. Always highlight the importance of distant witness support, and repeat the main information of your call-out enough times during the broadcast. When people are reminded that their personal abilities can help others, even through social media, they are inclined to participate more regularly as active distant witnesses.
That’s why WITNESS has come together with 25 other organizations and individuals to let Rapporteur Dan Dalton and the rest of the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs know that this proposal doesn’t just threaten freedom of expression in Europe. It is also likely to threaten freedom of expression globally, by both inspiring dangerous copycat laws and by encouraging increased use of opaque machine-learning algorithms to remove content—including content created by human rights defenders, alternative media, and journalists. Our letter is based on real-world experience of exactly these problems.
In particular, as we note in the letter, Syrian Archive and WITNESS, as well as many of our partners, “have seen firsthand that use of machine-learning algorithms to detect ‘extremist content’ has created hundreds of thousands of false positives and damaged [a huge] body of human rights content.” As Jillian York of the Electronic Frontier Foundation notes, “By placing the burden on companies to make rapid decisions in order to avoid penalties, this regulation will undoubtedly result in censorship of legitimate speech.” That includes content essential “for prosecutions or other accountability processes across borders or in conflict situations.” The proposal ignores the incredible work of human rights defenders and journalists who “risk their freedom, safety, and sometimes even their lives to upload videos for the purpose of exposing abuses to the world. Without this documentation, we would have little idea what is happening in Syria, Yemen, Myanmar, the favelas of Brazil, and elsewhere.”
We also note that the threats of misuse of the machinery created to comply with this proposal and copycat legislation are not hypothetical:
Germany’s NetzDG law is already inspiring replicas in authoritarian countries, including Russia. The impact of the European Union on global norms should be a net positive, especially in the face of a rising tide of political repression, violence and fascism. This regulation hinders efforts to fight that tide.
WITNESS is honored to bring together the diverse voices on this letter. In addition to leading global digital and human rights organizations like Amnesty International, our letter includes organizations doing vital work in their own countries, like 7amleh of Palestine. The letter also includes journalist organizations such as Reporters sans frontières and Southeast Asian Press Alliance and archives such as the Blinken Open Society Archives in Hungary. Lastly, the letter includes experts on ethical and effective open source investigations like Alexa Koenig of the Berkeley Human Rights Center and, of course, the Syrian Archive. We hope that the broad geographic and organizational range of signatories makes it clear—this regulation must not become law. Human rights demand better in any lawmaking body, and frankly, in the European Union, common decency demands better, too.
Follow us here on our blog, as well as on Twitter, to get updates on the progress of the legislation. The text of the letter is below, and you can download a pdf here.
Dear Members of the Committee on Civil Liberties, Justice and Home Affairs,
The undersigned organizations and individuals are dedicated to ensuring justice for human rights abuses around the world and to upholding human rights, including the right to freedom of expression. We rely on online platforms to both find and share evidence of these abuses and to counter official misinformation from repressive governments. We write to urge you to oppose the proposed “regulation on preventing the dissemination of terrorist content online.”
We echo the voices that have already explained the flaws in this regulation, including a December 7 letter from three Special Rapporteurs of the United Nations which notes, “with serious concern what we believe to be insufficient consideration given to human rights protections in the context of the proposed rules governing content moderation policies.” We agree that this regulation does not comply with human rights standards. It contains overbroad and unclear definitions. It also leaves essential matters such as issuance of removal orders and other roles in the content removal process unsettled. Worse, this regulation not only encourages but essentially forces companies to bypass due process and make rapid and unaccountable decisions on expression through automated means. In fact, this regulation does not reflect the realities of how violent groups recruit and share information online, and these groups will continue to engage in that behavior. Instead, the regulation is most likely to hamper the efforts of journalists and human rights defenders, who use these platforms to help document, expose and tell the stories of human rights abuses – including when committed by armed groups.
We are writing to you because we know from experience that the danger posed by this regulation is not hypothetical. Due to difficulty with getting evidence for prosecutions or other accountability processes across borders or in conflict situations, some of us rely on social media sites that would be affected by this regulation, such as Facebook and YouTube. In fact, in 2017 the International Criminal Court issued an arrest warrant in the Al-Werfalli case that was based largely on videos found on social media. Similarly, some of us archive evidence of human rights abuses committed by governments and violent groups, and some verify content through well-established processes. In order to prosecute war crimes and terrorism in Syria and elsewhere, these archives are and will be vital. Finally, some of us work directly with, or are ourselves, human rights defenders that risk their freedom, safety, and sometimes even their lives to upload videos for the purpose of exposing abuses to the world. Without this documentation, we would have little idea what is happening in Syria, Yemen, Myanmar, the favelas of Brazil, and elsewhere.
We have seen firsthand that use of machine-learning algorithms to detect “extremist content” has created hundreds of thousands of false positives and damaged this body of human rights content. One group, Syrian Archive, observed that after Google instituted a machine-learning algorithm to “more quickly identify and remove extremist and terrorism-related content” in June of 2017, hundreds of thousands of videos went missing. This included not only videos created by perpetrators of human rights abuses, but also documentation of shellings by victims, and even videos of demonstrations. Instead of working to address these problems, companies have refused to provide basic transparency around moderation processes or the machine-learning algorithms they are employing. This regulation would only worsen that problem.
In addition to devastating the processes being used to create and preserve human rights content, this regulation will harm some of the most vulnerable groups in the world by inspiring dangerous copycat regulation that will be used to silence essential voices of dissent. This is not hypothetical, as Germany’s NetzDG law is already inspiring replicas in authoritarian countries, including Russia. The impact of the European Union on global norms should be a net positive, especially in the face of a rising tide of political repression, violence and fascism. This regulation hinders efforts to fight that tide.
In conclusion, we urge you to oppose this regulation. Instead of passing new regulations, we urge Parliament to reconsider existing directives, make a thorough study on what is actually needed, and start again from scratch if a need for rapid content removal is real.
7amleh – Arab Center for Social Media Advancement (Palestine)
Alexa Koenig, Executive Director of the Human Rights Center at Boalt School of Law, University of California, Berkeley*
The Association for Freedom of Thought and Expression (Egypt)
Association for Progressive Communications (APC), Global
Bits of Freedom (Netherlands)
Blinken Open Society Archives (Hungary)
Dima Saber, Senior Research Fellow at the Birmingham Centre for Media and Cultural Research*
Electronic Frontier Foundation
epicenter.works – for digital rights (Austria)
Freedom Forum (Nepal)
Global Voices Advox
Gulf Centre for Human Rights (Middle East)
Kari Papadopoulos, Faculty Member, Stockholm University Department of Journalism, Media and Communication*
La Quadrature du Net (France)
Open Rights Group (United Kingdom)
Reporters sans frontières (RSF) / Reporters Without Borders
Southeast Asian Press Alliance
SMEX (Social Media Exchange) (Lebanon)
Stichting IISG (Foundation International Institute of Social History) (Netherlands)
Syrian Archive (Germany)
Tara Vassefi, Washington Director, Truepic, Inc*
Viet Tan (Vietnam)
*Affiliations listed for identification purposes only
Mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy; and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, Communication to the European Union on draft EU Regulation on preventing the dissemination of terrorist content, 7 December 2018, available at https://spcommreports.ohchr.org/TMResultsBase/DownLoadPublicCommunicationFile?gId=24234
The messaging platform WhatsApp has 1.5 billion users worldwide and is increasingly being used as a tool to organize and mobilize, as well as to share important human rights content. This tutorial, put together by WITNESS, helps you decide whether to use WhatsApp’s built-in backup options, and if so, how to back up your WhatsApp.
Backing up your WhatsApp is different from exporting content from your WhatsApp. Backups are designed for restoring your WhatsApp account, for example if you are changing phones. Backups are stored in an encrypted WhatsApp database format that cannot be opened or read outside of WhatsApp. If you are interested in learning how to export content (e.g. .txt and .mp4 files) for access and use beyond WhatsApp, then see this tutorial. Exported content cannot be restored to your WhatsApp account like a backup can, but it is useful if you want to save important content outside of WhatsApp.
Why would you want to back up WhatsApp?
In case you need to delete WhatsApp from your phone, and reinstall it later.
In cases where your phone might be searched or you are concerned about your physical and digital security, you might want to uninstall WhatsApp on your phone. By backing up your WhatsApp on your phone or on the cloud before you uninstall, you can reinstall and restore your content from this backup later.
In case your phone gets lost, stolen, broken or you need to replace it.
WhatsApp messages and media are stored on your phone, not on WhatsApp’s servers (WhatsApp deletes messages from their servers either after they are delivered or after 30 days, whichever is sooner). But phones can get lost, stolen, or destroyed. Backing up your WhatsApp to your computer or to the cloud is the only way to ensure you can restore your WhatsApp if something happens to your device.
In case you want to revert to a backed up copy.
There may be instances where you want to revert your WhatsApp to a backed up copy; for example, if you accidentally delete some important content. In most cases, such as when using Google Drive or iCloud backup, you can only revert to the most recent backup. However, Android devices store the last 7 days of backup, and it is possible for Android users to restore from an older local backup. iPhone users can revert to older (non-most-recent) backups only through iTunes backups of their entire device. It is important to note that when you restore from a backup, you will lose all the new content created after the backup date.
Why would you not want to back up WhatsApp?
WhatsApp backups are not as secure as WhatsApp chats.
WhatsApp provides built-in options to back up locally (Android) and to Google Drive (Android) or iCloud (iPhone). Android users can back up all of their WhatsApp data, with no file size restrictions, for free on Google Drive. These backups are encrypted by WhatsApp and, if uploaded, also by Google or Apple’s server-side encryption, but they aren’t protected by WhatsApp’s end-to-end encryption. End-to-end encryption refers to encryption where only the sender and the recipient can decrypt the message, but not anyone in between, including the service provider. Once you, the recipient, decrypt (i.e. receive and read) the message, the WhatsApp end-to-end encryption chain ends.
When you make a backup, WhatsApp encrypts it with a key generated with your phone number. When the backup is uploaded to Google Drive or iCloud, Google or Apple also encrypt it with their own server-side encryption. This means that they, or anyone with your Google Account or iCloud authorization, have the capability to decrypt the server-side encryption on your backup, and, say, provide the backup to law enforcement. Meanwhile, the key for decrypting the WhatsApp encryption is stored on your phone, or can be regenerated by anyone using a SIM card with your phone number, as it was designed to enable users to easily restore backups onto new phones.
It is also important to note that only text conversations, not media, are encrypted by the WhatsApp backup encryption. Media attachments are not encrypted by WhatsApp on your phone, nor when uploaded to Google Drive and iCloud.
Google and Apple may provide access to backups to law enforcement.
When you back up content to the cloud, you lose control of it and are at the mercy of the companies and their business decisions. Apple has provided iCloud data to law enforcement, and Google will respond to legal requests for data as well. By backing up via the cloud, you lose protection against court orders and subpoenas seeking access to WhatsApp data on third-party servers. Depending on the kind of order used, you might never even hear about it or be alerted to it.
As alternatives to using these cloud services, Android users can opt to only use local (encrypted) backup, while iPhone users can use iTunes (with encrypted option) to back up their entire phone to their computer. Exporting your WhatsApp content is another alternative to backing up to save important messages and media, although you cannot restore exported content to a WhatsApp account as you would a backup. Note that all of these options have their own security risks, for instance if your device is lost, stolen, or confiscated or if your cloud services or device is hacked or breached. WITNESS will shortly be releasing a tutorial on security and privacy while using WhatsApp.
Cloud backup makes content more vulnerable to breaches and hacks.
Storing data in the cloud is infamously insecure: cloud storage can introduce many vulnerabilities, and users often do not take the measures needed to secure their information. Individual accounts can be hacked or breached through weak passwords, poor password management, and phishing (combined with not using 2-factor authentication, which helps protect users). Cloud storage providers can also have poor security practices, such as weak protection of their databases, leaving them and their users vulnerable to breaches, hacks and leaks.
How to back up your WhatsApp
Click on the links below to be taken to the section relevant to you and your operating system.
I want to back up WhatsApp on my Android operating system
I want to do a local backup
You do not need to do anything. WhatsApp automatically creates a local backup on your device every day at 2:00am. This feature cannot be turned off. WhatsApp stores the last 7 days of backups on your device. If you want to back up manually at some point within the 24-hour backup cycle, go to WhatsApp > Menu > Settings > Chat > Chat backup > Back Up.
Manual backup button in WhatsApp
Local backups are stored as encrypted database files in your device storage under WhatsApp > Databases. They are decrypted by WhatsApp using a key stored on your phone (which is inaccessible unless you have root access to your phone).
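The dated copies that accumulate in the Databases folder typically follow a naming convention like msgstore-YYYY-MM-DD.1.db.cryptNN (the exact .cryptNN suffix varies by WhatsApp version), alongside msgstore.db.cryptNN for the current backup. As a minimal sketch, assuming that naming convention, here is how you could inventory a folder listing and spot the newest dated copy:

```python
import re
from datetime import date

# Assumed filename pattern for dated local backups; the
# .cryptNN suffix differs between WhatsApp versions.
BACKUP_RE = re.compile(r"msgstore-(\d{4})-(\d{2})-(\d{2})\.1\.db\.crypt\d+")

def newest_dated_backup(filenames):
    """Return the most recent dated backup filename, or None."""
    dated = []
    for name in filenames:
        m = BACKUP_RE.fullmatch(name)
        if m:
            y, mo, d = (int(g) for g in m.groups())
            dated.append((date(y, mo, d), name))
    return max(dated)[1] if dated else None

# Hypothetical folder listing as it might appear under WhatsApp > Databases
files = [
    "msgstore.db.crypt12",               # current backup
    "msgstore-2019-01-14.1.db.crypt12",  # dated copies (last 7 days)
    "msgstore-2019-01-15.1.db.crypt12",
]
print(newest_dated_backup(files))  # → msgstore-2019-01-15.1.db.crypt12
```

This is only an illustration of the rotation scheme; the files themselves remain encrypted and unreadable outside WhatsApp.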
A view of the Databases folder in Android’s WhatsApp storage.
I want to restore from a local backup
I want to restore from the most recent backup on my phone:
This process is mostly automated. When you reinstall WhatsApp, you will be prompted to restore after you verify your phone number. By default, WhatsApp chooses the most recent backup; it will use the local backup if it is the most recent one or if there is no Google Drive backup. To force WhatsApp to use the local backup, log out of your Google Account before reinstalling WhatsApp.
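To restore one of the older dated local backups instead, a common approach is to rename the dated file so that it takes the place of the current one before reinstalling. A hedged sketch of that renaming step, assuming the msgstore-YYYY-MM-DD.1.db.cryptNN naming convention described above (check your own Databases folder for the exact names before trying this):

```python
from pathlib import Path

def promote_backup(databases_dir, dated_name):
    """Rename a dated backup so WhatsApp restores it on reinstall.

    Assumes WhatsApp's local naming convention: the dated copy
    'msgstore-YYYY-MM-DD.1.db.cryptNN' must become 'msgstore.db.cryptNN'.
    The current backup is renamed aside first so nothing is lost.
    """
    db = Path(databases_dir)
    suffix = dated_name.split(".db.")[-1]  # e.g. 'crypt12'
    current = db / f"msgstore.db.{suffix}"
    if current.exists():
        # keep a copy of the current backup under a side name
        current.rename(db / f"msgstore-current-saved.db.{suffix}")
    (db / dated_name).rename(current)
    return current
```

After promoting the file, uninstalling and reinstalling WhatsApp should offer to restore from this backup. Remember that restoring discards any content created after that backup’s date.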
This tutorial, put together by WITNESS, walks you through how to download and export your chat messages, videos, photos, and other media from WhatsApp along with basic metadata. Since WhatsApp is a closed platform, we have to use its own built-in tools to access, download, and export the content.
Exporting content is different from backing up your WhatsApp. Backups are designed for restoring your WhatsApp account, for example if you are changing phones, and are stored in an encrypted WhatsApp database format that cannot be opened or read outside of WhatsApp. Instead, this tutorial walks you through exporting content (e.g. .txt and .mp4 files) for access and use beyond WhatsApp. This exported content cannot be restored to your WhatsApp account like a backup can, but is useful if you want to save important content outside of WhatsApp. In the coming month we will be publishing a tutorial on backing up on WhatsApp.
Why would you want to export content off WhatsApp?
To make use of important evidence and information beyond WhatsApp
Important evidence and information is being shared on WhatsApp, from videos of human rights abuses to images created to misinform. This information can be used outside of WhatsApp for research, journalistic purposes, fact-checking, mobilization, or in courts of law. For example, you can export a video from WhatsApp to use it in a report, share it with non-WhatsApp users, analyse it for veracity, or edit it into your own video.
To preserve evidence and information outside of a proprietary system
Exporting information from a closed system like WhatsApp allows you to have control over its preservation. Even if WhatsApp disappears, you will still have your information. You can decide how to organize it, where it is stored, how many copies you have, who has access, and how you want to preserve it in the long-term. A good organization and storage plan can enable you to reliably locate and retrieve your content whenever you need it.
In case your phone gets lost, stolen or broken
WhatsApp messages and media are stored on your phone, not on WhatsApp’s servers. WhatsApp deletes messages from their servers either after they are delivered or after 30 days. But phones can get lost, stolen, or destroyed. Besides cloud backup, which you may want to avoid for security reasons (explained below), exporting your WhatsApp content is one way to save important media and information from your phone.
Authors can delete important information using the Delete for Everyone feature
With WhatsApp’s new Delete for Everyone feature, a person who writes a message can delete it from WhatsApp for all recipients within an hour of sending it. Exporting the message is a way to save a record of a message before it disappears.
As an alternative to WhatsApp’s cloud backup options
WhatsApp provides built-in options to back up to Google Drive (Android) or iCloud (iPhone). These backups are encrypted by Google and Apple’s server-side encryption, and additionally encrypted by WhatsApp in the case of iCloud, but they aren’t protected by WhatsApp’s end-to-end encryption. End-to-end encryption means that only you and the person you are communicating with can access your messages, ensuring that nobody in between, not even the service provider, can access them.
Google has the capability to decrypt your backups on their end and provide them to law enforcement, and the additional encryption provided to iCloud users can be decrypted using a SIM card with your phone number – and potentially by WhatsApp. Google Drive backups will auto-delete if they aren’t updated in more than one year. To avoid using these cloud services, Android users can opt to use local (encrypted) backup, while iPhone users can use iTunes (with the encrypted option) to back up their entire phone to their computer, outside of WhatsApp. Exporting is another alternative to backing up for saving important messages and media, although you cannot restore exported content to a WhatsApp account as you would a backup.
Consent and permission
When exporting videos, images, voice notes and chats from WhatsApp that are sent to you privately or in a group, it is important to consider the permission you have or don’t have from those posting the content. WITNESS has put together a tipsheet on informed consent that outlines its four main elements: disclosure, voluntariness, comprehension, and competence.
Consider asking the group about the norms around content when you join. There may be an implicit understanding that content from the group may be made public, or group members may consider their content private.
You could also consider asking the individuals or the group who shared the content for their permission to safely export it, and inform them where you will be storing it and who will have access to it. This also lets them know where and how they can retrieve a copy of the content if they ever delete or lose their own copies.
There are situations, however, where asking for consent could put you or those you are asking in difficult or dangerous positions; for instance, if a phone is confiscated and messages indicating the existence of valuable information stored elsewhere are found. It is important to assess the specific context and threat models of those who you are receiving the information from.
As with many questions of permission and consent, these are context-specific. In situations of conflict, for example, where content that may soon be deleted needs to be preserved quickly, it might be appropriate to preserve the content first, keep it in a safe and restricted place, and address permissions at a later stage.
How to save content from WhatsApp
Click on the links below to be taken to the section relevant to you and your operating system.
I am using the Android operating system on my phone
Do you want to save individual videos or photos, or do you want to save your entire chat history? This tutorial walks you through both! Skip down if you want to save your entire chat history.
Tip: If saving a video or photo for evidentiary purposes, you may want to save the entire chat history to provide context of where, when, and by whom the video or photo was shared.
I want to save individual videos or photos
Saving individual videos or photos off of WhatsApp is a two-step process. First, you need to download the media from WhatsApp’s servers to WhatsApp on your phone. Then, you need to export the media from your WhatsApp to another location (like to your phone gallery or to your computer).
Download from WhatsApp servers
You can download media selectively by simply selecting it within WhatsApp.
You can also set WhatsApp up to auto-download using mobile data, using Wi-Fi, when roaming, or not at all; you can also select what kind of messages you want to auto-download (in WhatsApp Settings > Data and Storage Usage). Auto-download will save incoming media to your Android and keep it available within the app. By default, photos are set to auto-download unless you turn it off.
Tip: Once you download a message or media to your phone, it is no longer protected by WhatsApp’s end-to-end encryption.
Export from your WhatsApp
On Android, WhatsApp will automatically export your downloaded media to your Gallery, unless you create a .nomedia file in your WhatsApp images folder.
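If your phone has USB debugging enabled, one way to create that marker file is from a computer with Android's `adb` tool. This is only a sketch: the folder path below is an assumption that varies by WhatsApp version and device, and you can just as easily create the empty `.nomedia` file with a file manager app on the phone itself.

```shell
# Sketch: create a .nomedia marker so Android's Gallery skips the
# WhatsApp images folder. The path is an assumption and may differ
# on your device (check with a file manager first).
WA_IMAGES="/sdcard/WhatsApp/Media/WhatsApp Images"

if command -v adb >/dev/null 2>&1; then
  # Quote the path because it contains a space.
  adb shell "touch '$WA_IMAGES/.nomedia'"
else
  echo "adb not found; create the empty .nomedia file with a file manager instead."
fi
```

Deleting the `.nomedia` file later will make the images reappear in the Gallery after the media store rescans the folder.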
To export an individual piece of media elsewhere, like to your computer, select the media within the chat to view and download it, then select the Menu button and select “Share.” Select the export option that is convenient for you (options will depend on your device/services).
Unlike iPhones, Androids also allow users to access the stored WhatsApp media files directly using a file manager app. Your device may have a built-in file manager (this depends on the manufacturer), or you can install one from the Play Store. If installing from the Play Store, pay attention to who created the app and what kind of information they collect.
In the file manager, navigate to WhatsApp > Media > WhatsApp Video (path may differ slightly depending on device/file manager). In the Menu, tap “Share” and select the file(s) you want to export (this process may differ slightly depending on the file manager).
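If you prefer to export in bulk to a computer, the same folder can be copied over USB with `adb pull`. This is a minimal sketch, assuming USB debugging is enabled and that the media lives under `/sdcard/WhatsApp/Media` (the path varies by WhatsApp version and device, so verify it in your file manager first).

```shell
# Sketch: copy the WhatsApp video folder from a connected Android
# device to a local export folder. Paths are assumptions; adjust
# "WhatsApp Video" to "WhatsApp Images" etc. for other media types.
WA_MEDIA="/sdcard/WhatsApp/Media"
DEST="./whatsapp-export"

if command -v adb >/dev/null 2>&1; then
  mkdir -p "$DEST"
  # Quote the remote path because it contains a space.
  adb pull "$WA_MEDIA/WhatsApp Video" "$DEST"
else
  echo "adb not found: install Android platform-tools first."
fi
```

Copying this way preserves the original files, but remember that once the media leaves the phone it is no longer protected by WhatsApp's end-to-end encryption, so store the export folder somewhere safe and access-controlled.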
Ten years ago, when the UDHR turned 60, we invited you to share images that had opened your eyes to human rights. Responses poured in from around the world, each one a deeply moving testament to the power of images to mobilize, expose, catalyze change.
Fast forward to 2018, and the seemingly never-ending bombardment of Trump/Orban/Bolsonaro/Duterte news has left many of us human rights defenders T-I-R-E-D, dizzy, exhausted, exasperated really. Empty coffee cups decorate our desks, news notifications ping relentlessly on our phones, reports and new playbooks try to help us make sense of these times, and our walls generously allow themselves to be covered in post-it-notes as we restrategize yet… again…
If this feels familiar, stay with us! The UDHR is turning 70 on December 10th and we’d like to celebrate by changing the conversation. To hope. To celebrate by inspiring each other. So we ask: in these times, what image gives you hope for human rights?
On the seven-month anniversary of the killing of Marielle Franco in Brazil, activists distributed 1000 street signs named after Marielle in downtown Rio. Credit: Fernando Frazão/Agência Brasil
So join us, won’t you? Tell us about your image – the one you go to when you need to rekindle your faith in humanity, the one you look back at when you’ve had a hard day in the fight for human rights. Record a video describing it and tag us on December 10th (scroll down for how to tag us). We’ll be featuring your responses and inviting the world to reflect on which images we choose to focus on, and the true revolutionary power and promise of hope.
Here’s how to join the conversation from your corner of the world:
There’s a reason why, when courts function properly, they offer more due process than corporations when it comes to making decisions about free expression. Deciding what speech can take place in public forums in democratic societies is not an easy task. While standards range from the permissive jurisprudence of the First Amendment to the broader prohibitions against hate speech found in German law, at least these standards can be litigated, discussed, and understood. So why is the European Commission trying to push through a legislative proposal that would not only force private corporations to regulate broad swathes of expression, but would actually require the use of opaque filters and artificial-intelligence-driven algorithms to do so? The only answer is that this proposal, set to be considered today, is political—and it must not pass.
The European Commission has proposed a regulation on “dissemination of terrorist content online,” based on a process which started in September 2017. The Commission adopted a recommendation on “illegal content” more broadly in March of 2018, and proposed specific legislation on “terrorist content” in September of this year. A recent draft introduced slight revisions from the September version, but not in a way that addresses the grave concerns raised by civil society and business alike. The most recent version can be found here. The legislation is being discussed today in the Justice and Home Affairs Council, and supporters are trying to push a “general approach,” which means less opportunity for discussion.
The ostensible goal of this legislation is to force “hosting service providers” to remove “terrorist content” within one hour of receipt of a removal order from any EU member state’s “competent authority,” and to use proactive measures, “including automated means,” to detect such content and prevent reappearance. The quotation marks all signify places in the legislation where definitions are unclear or poorly worded. The proposal would create specific obligations for hosting service providers to remove, to report on removals, and to coordinate their required “proactive measures” against terrorist content with authorities on an ongoing basis. Member states designate their own competent authorities, and set their own penalties, which can reach “up to 4% of the hosting service provider’s global turnover of the last business year” if the state concludes that the hosting service provider has systematically failed to comply with the regulation. Furthermore, terrorist content is defined by each member state, and can vary from country to country. For example, in Spain a single law prohibits the glorification of terrorism alongside “humiliating victims of terrorism”—conflating two very different forms of expression.
Much has already been written about the dangers of this legislation. Unfortunately, however, WITNESS has first-hand experience with how this proposal could go very, very badly—both inside and outside of the European Union—through dangerous copycat legislation and overuse of automated content moderation.
We agree with our allies, including the signatories to a letter sent to the Ministers on December 4, that the legislation is poorly written, unnecessary, and overbroad–and that it appears to be getting pushed through as a ploy, coming before the 2019 parliamentary elections. For good general background information and legal analysis, check out these articles:
We have two concerns about how, based on our experience, this legislation will be harmful.
Automated content moderation
First and foremost, this proposal encourages increased use of machine-learning algorithms for identifying terrorist content, as well as increased automatic takedown through shared databases of material that has been deemed terrorist content. This architecture already exists, and it is shrouded in secrecy and error.
Through the Global Internet Forum to Counter Terrorism (GIFCT), many major companies already share a database of extremist content that violates their Terms of Service (which means it might not even violate the law). This database helps companies take down content before it is ever seen. Unfortunately, the GIFCT provides almost no information about this database publicly, including information about quality checks or reassessment. Errors in this database will be propagated throughout all of GIFCT’s members if not corrected.
We know almost nothing about those algorithms. We don’t have the most basic assurances of algorithmic accountability or transparency, such as accuracy, explainability, fairness, and auditability. Platforms use machine-learning algorithms that are proprietary and shielded from any review. Groups like WITNESS have to rely on patterns and hearsay to get any idea of how they’re working. Unless they’re specifically designed to be “interpretable,” these algorithms can’t be understood by humans, since they learn and grow over time—but we can’t even get access to the training data or basic assumptions driving the algorithms. There has never been any sort of third-party audit of such proprietary technology, although we would strongly support this if companies were open to it.
When it comes to accuracy, we have seen how even existing systems are already having disastrous effects on freedom of expression and documentation and research of human rights abuses. As noted, Syrian Archive has seen hundreds of thousands of videos go missing, and whole channels shut down. We have reviewed many of these videos personally. They are coming from groups recognized by journalists and human rights bodies like the United Nations. When they do depict extremist activities, they often indicate that they are trying to show the world the human rights abuses taking place in Syria. But many of these videos simply have no link to extremism—they are showing demonstrations, or the aftermath of a bombing.
We have no doubt that this legislation strongly incentivizes companies to open the floodgates on automated content moderation.
A bad precedent for the world
WITNESS works at a global scale, and we have seen how policies made in democratic societies can be used to repress human rights in the wrong hands, or even simply how policies in one setting don’t work elsewhere when they are copied. At its most basic level, this legislation takes decisions about what speech is legal or illegal away from courts and lawmakers, and places those decisions with corporations and poorly defined “competent authorities.” It encourages platforms to create machinery that easily could be used in undemocratic societies to silence critics. And it encourages the idea that fighting “terrorist content” can justify any legislative excess—an idea already embraced by Russia, Egypt and many other countries with terrible human rights records.
The role of the European Union in setting norms that can potentially affect the human rights of billions is only growing larger. What’s more, no legal system has caught up with the Internet, and every new piece of legislation regulating it matters. Legislative responses to problems raised by the Internet can and do spread globally, even when they pose a threat to free expression, like the many legal responses to misinformation popping up around the world. That’s why we’re deeply concerned about the bad precedent that would be set globally should this legislation pass.
Governments around the world already abuse platforms’ terms of service to silence critics. Imagine how expertly governments that already abuse Facebook’s terms of service to silence dissent, such as Cambodia, will abuse legislation like the terrorist content proposal in their own countries.
Although this legislation is being pushed hard, it’s not likely to pass before the European Parliament goes on its winter break on December 13. WITNESS will monitor the legislation, and will be sharing the above analysis of it with Ministers and Members of Parliament. We will continue to support efforts of our allies, such as this open letter from La Quadrature du Net. For the most up-to-date news, follow us on Twitter: @witnessorg