Tech companies are trying to redefine privacy - what's missing is real competition on privacy
Friday, May 17, 2019

CEOs of the big tech companies have all recently discovered the value of privacy. On Tuesday, 30 April 2019, Mark Zuckerberg announced his plans to make Facebook a "privacy-focused social platform". This was followed by Google's Sundar Pichai declaring that “privacy must be equally available to everyone in the world.” Meanwhile, Twitter's Jack Dorsey has described the General Data Protection Regulation (GDPR) as "net-positive", while Apple had already positioned itself as the champion of privacy.

These announcements are surprising, given that big tech companies have lobbied heavily to weaken data protection regulations such as the GDPR (and continue to do so). These very same companies are now proclaiming that their primary concern is to protect our privacy. What they are really protecting, though, is a notion of privacy that is remarkably compatible with their own business models. To put it bluntly: now that privacy and data protection have gained prominence, it seems as if big tech is reinventing privacy.

Here’s what’s striking: despite recent scandals and investigations, the latest financial reports of all the major tech companies remain rosy. They illustrate these companies' continued dominance and, as we argue below, a world in which consumers have little choice in the digital market.

 

Q1 2019 Financial reports

Google

Google’s parent company, Alphabet, announced $36.3 billion in total revenue for the first quarter of 2019. The company's net earnings for the quarter were $6.7 billion, down from $9.4 billion a year earlier, largely due to a $1.7 billion fine for violating antitrust rules in the European advertising market. Investors reacted strongly to the results, but the company’s overall revenue still grew by 17% year on year, and Google's ad sales – which account for more than 84% of the company's total revenue – grew by about 15%.

These staggering results must be understood in light of the company’s dominance of the advertising industry, its influence over the Android operating system (which runs on 80% of smart mobile devices globally), and its dominance of the global market for general internet search services.

Here’s just one example of how this plays out in practice: when Google recently updated its Terms of Service and Privacy Policy, the company made sure that users have no control over the collected data. If you think Google is big, wait for what’s coming next. Similar to the other tech giants, Google has been heavily investing in areas such as hardware and cloud computing.

Facebook

Facebook's Q1 2019 financial results show just how much the social network’s financial prospects have remained untouched by its privacy scandals. As TechCrunch concluded: “The fact that Facebook isn’t losing massive numbers of users after years of sustained scandals is a testament to how deeply it’s woven itself into people’s lives.” Facebook still earned $2.429 billion in profit, despite setting aside $3 billion for what is expected to be a record-breaking, multi-billion dollar settlement with the FTC over the Cambridge Analytica scandal.

Zuckerberg’s plans for a "privacy-focused vision for the future of social networking” have raised concerns about further dominance. We have already expressed our doubts regarding Mark Zuckerberg's newfound compassion for our privacy. As other commentators have noted, unifying its messaging apps and promoting ephemerality in content sharing and communication could help deter calls for regulation, make Facebook harder to break up, and help it stay ahead of competitors like Snapchat.

Amazon

Amazon revealed a record-high profit in the first quarter of 2019, more than double what investors had predicted. The increase in profits has continued despite (or due to) a history of risky investments which, if successful, will reinforce Amazon's dominance of its markets. For example, in 2016 Amazon spent $7 billion on shipping fees, accrued a net shipping loss of $5 billion, and yet posted an overall profit of $2.4 billion. Although this may sound absurd, it was part of Amazon's strategic plan to reduce the power of its competitors.

Amazon has not yet joined the privacy bandwagon – but there is no shortage of privacy scandals. Most recently, Bloomberg reported that Amazon workers have been listening to what you tell Alexa, Amazon's voice-activated assistant. Amazon had not previously disclosed that it employs thousands of people around the world to help improve Alexa's understanding of human speech and its responses to commands. In response, we wrote a letter to Jeff Bezos raising some of our concerns.

What is clear is that Amazon’s market dominance is coming under increased scrutiny. The company is currently under investigation in Germany, Austria and Italy, as well as Europe-wide for the way it uses sales data from its "Marketplace" platform to compete with independent retailers who sell through it.

Apple

Although Apple's revenue from the iPhone declined 15% from the previous year, total revenue from all other products and services grew 19% in the first quarter of 2019. Apple announced an all-time-high earnings per share of $4.18, up 7.5%. The moment sales of its most valuable asset – the iPhone – began to fall, the company decided to focus on capitalising on its other investments and products.

In many ways, Apple has legitimate reasons to promote itself as a more privacy-friendly company. Its primary source of revenue still comes from the products it sells to us – mainly hardware such as laptops and phones – which is a very different business model from that of Facebook or Google. However, this is not the entire story. First, iPhones generate and gather a lot of data, from our location data to the requests we make to Siri (Apple’s virtual assistant). In fact, Apple’s growing investment in the healthcare space relies to a large extent on the data it collects from our devices. Apple's CEO, Tim Cook, has reportedly highlighted that Apple is in a unique position to capitalise on technology that encourages users to monitor their health. Second, whilst Apple limits the data it directly shares with outside companies, it still uses, for example, information about a user’s account, browsing, purchases and downloads in the App Store to target advertising in the App Store and Apple News. Also, until recently, it allowed iPhone app developers to access users' phone contacts, which they could use for marketing purposes or sell to third parties.

 

The need for competition

These financial results paint a very clear picture: the digital economy is composed of a small number of big technology companies that have claimed dominant positions. As a result, they are now able to impose terms and conditions on their users, and they often do so in ways that exploit their data and infringe their privacy.

This is why, now more than ever, effective competition in the marketplace is necessary to safeguard privacy and promote innovation. Whilst recent developments in privacy and data protection regulation, such as the EU's GDPR, have offered new hope of better control over personal data, we still have a long way to go.

Even though the competition models differ in the US and the EU, battles are being waged on both sides of the Atlantic about tightening the rules. The American model tends to assess whether a company’s position is pushing up prices for consumers, whereas the EU model focuses on investigating companies that are deemed to have a dominant position in the market.

Significantly, both the US FTC and the EU antitrust authorities are grappling with ways to address the market power of big tech companies, power which is fuelled by the harvesting of people's data. The FTC held a series of hearings on competition and consumer protection in the digital age. The EU is a few steps ahead: in April 2019 the European Commissioner for Competition published a report on competition in the digital age, following extensive consultation with stakeholders, including Privacy International. The final report outlines the need to adapt competition law and its application to capture "new ways for those platforms to achieve old goals – like extending their dominance to new markets".

When assessing market power, some competition authorities are starting to recognise the need to consider privacy and data protection implications. For example, the German competition authority has noted that “where access to personal data of users is essential for the market position of a company [here, Facebook], the question of how that company handles the personal data of its users is no longer only relevant for data protection authorities. It becomes a relevant question for the competition authorities, too.” However, this is not enough. Competition authorities around the world need to take further steps to ensure that these companies do not reinvent privacy and data protection to suit their own interests.

Words are not enough. Privacy was never dead, but to achieve a world in which privacy is protected into the future, regulators and law-makers need to hold powerful companies to account today. And together with the media and civil society organisations, we must expose their duplicity. Civil society needs to remain vigilant and active. In the US there are already signs that big tech companies are fighting the adoption of strong data protection laws. In the EU, an important piece of legislation, the e-Privacy Regulation, is stuck because of strong opposition from tech companies.

 

What Is the Solution?
  • When assessing a company’s market power, we need to take personal data into account, focusing not only on price, but also on quality of service, innovation, and privacy.
  • Regulators need to work towards creating conditions for genuine competition on key features such as privacy, which would allow companies to compete to provide the most privacy friendly services.
  • Regulators should address the harm that derives from a lack of competition, including by adopting analyses of market power that take into account societal concerns and privacy harms, as well as economic aspects, including opportunity costs.
  • There should be coordinated national and international enforcement across antitrust authorities and other regulatory bodies, such as data protection or consumer protection authorities, to avoid loopholes, and a ‘race to the bottom’.
  • Ultimately, human rights and consumer organisations should be empowered, including through collective proceedings, to question market dominance which negatively affects individuals’ rights and obtain effective redress.

The first half of 2018 saw two major privacy moments: in March, the Facebook/Cambridge Analytica scandal broke, followed in May by the EU General Data Protection Regulation ("GDPR") taking effect. The Cambridge Analytica scandal, as it has become known, grabbed the attention and outrage of the media, the public, parliamentarians and regulators around the world - demonstrating that yes, people do care about violations of their privacy and abuse of power. This scandal has been one of many that illustrate that privacy is also about the autonomy, dignity, and self-determination of people — and a necessary precondition for democracy. At the same time, the GDPR, which was years in the making, finally took effect across the EU on 25 May 2018, bringing with it more stringent obligations for those using personal data and stronger rights for individuals, both within and outside the EU.

These two events have been a catalyst for debate regarding the lack of sufficient safeguards, oversight measures and enforcement to adequately protect our data from exploitation. 

A year on, the world - including regulators and legislatures - has begun to wake up to the nature and the scale of the problem and how to grapple with it.

To recap - The Facebook Cambridge Analytica scandal in March 2018

On March 17, 2018, the Guardian and New York Times simultaneously published stories exposing how the personal data of over 50 million Facebook users ended up in the hands of Cambridge Analytica, a company which then sought to increase support for the 2016 Trump presidential campaign. The company's work had been reported before (its use of Facebook data for US Senator Ted Cruz's campaign was reported in 2015, for example), but the March revelations propelled the company to worldwide attention, perhaps due to the scale of the data involved and the potential links with the 2016 Brexit referendum and the 2016 US presidential election.

Cambridge Analytica was a consulting and data analytics company that was funded by right-wing American billionaire Robert Mercer and headed by Steve Bannon, the former executive chairman of Breitbart News, before he left to serve as chief executive of the 2016 Trump campaign. Reporting covered how Cambridge Analytica used data to profile and target individual voters with the aim of predicting and influencing their voting decisions. Reporting further revealed that Cambridge Analytica also supported the Brexit campaign in the UK. According to the Guardian and the New York Times, by late 2015 Facebook was aware that Cambridge Analytica had exploited its users’ data, but Facebook failed to inform the people who were affected and engaged in limited and ineffective efforts to recover their data. Facebook later admitted that the number of people affected was much higher than the Guardian and New York Times had initially reported: it had actually shared the data of 87 million users. The scandal came to light, and had the impact it did, thanks to the persistence and dedication of a number of individuals, including investigative journalists such as Carole Cadwalladr, researchers, the whistleblower Christopher Wylie (a former employee of Cambridge Analytica), Shahmir Sanni (a volunteer with the Vote Leave campaign in the UK Brexit referendum), and Professor David Carroll, a New York-based academic who has engaged in a lengthy battle to obtain his data from Cambridge Analytica.

The story that broke in March 2018 was not the beginning or the end.  Over a year since, more information and questions have emerged.

Furthermore, Cambridge Analytica's role was by no means limited to the UK and US: it was involved in elections around the world. Privacy International had previously looked at the role of Cambridge Analytica in the Kenyan elections; as the revelations unfolded we provided an update in response to the renewed focus, and our Kenyan partner CIPIT further examined the role of the company.

How did PI respond?

The Facebook Cambridge Analytica scandal highlighted issues with data exploitation that Privacy International fights against. In the days following the Cambridge Analytica revelations, Privacy International highlighted that the companies involved are part of an industrial sector that exploits personal data and called on policy makers to move swiftly.

At the core of the scandal was a disregard for the protection of data and the concerns surrounding the profiling of individuals. Cambridge Analytica is by no means the only company that operates in the shadows. Tactical Tech has documented an entire industry in its project "Who's working for your vote?".

Over the past year we have used data protection law to investigate and seek to hold to account the linchpins of the advertising industry - data brokers and AdTech companies - submitting complaints to data protection authorities in the UK, France and Ireland. We are continuing to follow up with regulators and are pleased to see that our efforts, together with those of others, have helped put AdTech squarely on the agenda for data protection authorities.

We have continued to look at Facebook and the data it obtains, including revealing the large-scale transfer of data from apps to Facebook, whether or not an individual has a Facebook account, and the use of Facebook and other data in the fintech sector, for example by Lenddo.

What has happened since?

In the year since the Guardian and New York Times broke this story, much has been said and some has been done. Cambridge Analytica went into administration, but its parent company SCL is still around and has been succeeded by Emerdata. Here is a by no means exhaustive snapshot of some of the responses from regulators, parliamentarians and Facebook. These developments are taking place in the context of a wider debate regarding misinformation, disinformation, electoral interference and more, which is prompting regulation and action around the world, not covered here.

Data Protection authorities are taking action

In the UK, the Information Commissioner’s Office (ICO) was already conducting an investigation into data analytics for political purposes and, in response to the Cambridge Analytica scandal, eventually obtained a warrant to inspect Cambridge Analytica's premises. The challenges in responding immediately to the Cambridge Analytica reports led, in large part, to the ICO being granted new and stronger powers as the UK Data Protection Bill (now the Data Protection Act 2018) made its way through Parliament.

In July 2018, the ICO published its report Democracy Disrupted. In its update on the investigation into data analytics in political campaigns, the ICO announced its intention to fine Facebook for a lack of transparency and for security issues relating to the harvesting of data, in breach of the Data Protection Act 1998 (the UK data protection law in place at the time). The ICO also issued an enforcement notice against AggregateIQ, requiring it to cease processing the personal data of UK or EU citizens obtained from UK political organisations or otherwise for the purposes of data analytics, political campaigning or any other advertising purpose. In the same report, the ICO announced its intention to bring criminal proceedings against SCL Elections, Cambridge Analytica's parent company, for failing to comply with an enforcement notice requiring it to deal properly with Professor David Carroll's request to access his data.

In October 2018, the ICO fined Facebook £500,000 for breaching the UK’s prior data protection law. In discussing the numerous reasons for imposing the maximum fine, the ICO noted that “the personal information of at least one million UK users was among the harvested data and consequently put at risk of further misuse.” This fine was the maximum allowable under the previous law; had its replacement, the GDPR, been in place at the time of the Cambridge Analytica data breach, the ICO could have fined Facebook 4% of the company’s total worldwide annual turnover, which would have been over £1 billion. Facebook is currently appealing the fine.

In November 2018, the ICO published its report to Parliament on the use of data analytics in political campaigns. Among its findings, the ICO highlighted a disturbing disregard for voters’ personal privacy by players across the political campaigning eco-system — from data companies and data brokers to social media platforms, campaign groups and political parties. The report sets out that the ICO is continuing to investigate Cambridge Analytica and is analysing materials seized in the course of this investigation. In January 2019, the ICO fined Cambridge Analytica's parent company, SCL Elections, for failing to comply with an ICO enforcement notice, and in March 2019 it issued fines to Vote Leave.

The ICO is not the only DPA looking at this issue and others around the world have also investigated, for example:

Italy

Cambridge Analytica reportedly worked with “a resurgent Italian political party last successful in the 1980s”. It was reported that out of the 87 million Facebook users who had their data illicitly transferred and analyzed by Cambridge Analytica, some 214,000 were Italians. In April 2018, both the Italian Data Protection Authority and Antitrust Authority started an investigation into what exactly happened with the data, both in terms of individual privacy and “alleged improper commercial practices”. In February 2019, the Italian DPA announced that it was ready to impose sanctions.

Germany

The Facebook-Cambridge Analytica scandal also reportedly affected approximately 300,000 German Facebook users. The Hamburg Commissioner for the Protection of Data and Freedom of Information, Johannes Caspar, was reported to have initiated legal proceedings against Facebook in April 2018 on the basis of the "collection of data without a legal basis". However, the proceedings were later closed because the claims were time-barred.

Canada

In April 2019, the Office of the Privacy Commissioner of Canada and the Information and Privacy Commissioner for British Columbia found that Facebook had violated Canada's privacy laws, as part of their report on their investigation into the Cambridge Analytica revelations. The Commissioners also highlighted the pressing need for legislative change to protect the rights of Canadians, as well as Facebook's refusal to address the deficiencies identified. They announced that the Office of the Privacy Commissioner of Canada plans to take the matter to Federal Court to seek an order forcing the company to correct its privacy practices.

Electoral laws need reform and authorities need more powers

As well as data protection law, electoral law is relevant to data exploitation in the electoral context. Like with data protection law, there have been breaches and questions as to whether the current legal frameworks are sufficient. 

In the UK, the Electoral Commission investigated Vote Leave as well as campaign spending relating to Facebook and Cambridge Analytica services. In July 2018, the Electoral Commission determined that five payments various Leave campaign groups had made to a Canadian data analytics firm, AggregateIQ, violated campaign funding and spending laws. The Electoral Commission fined Vote Leave and referred them to the police for breaking electoral law. Vote Leave has since dropped its appeal against the fine. AggregateIQ has been linked to Cambridge Analytica: it had a contractual relationship with Cambridge Analytica’s parent company in its work for the Leave campaign groups. The Electoral Commission found AggregateIQ and Facebook had “used identical target lists for Vote Leave and BeLeave ads.”

The Electoral Commission has called for more powers to increase transparency and available sanctions in relation to digital campaigning. 

Politicians are angry

Politicians around the world have sought answers from Facebook, Cambridge Analytica and big tech more widely - and have been met largely with frustrating responses. The EU, in May 2018, finally pinned down Mark Zuckerberg. In November 2018, parliamentarians from across the world united to grill Facebook on its practices as part of an ‘international grand committee’ on disinformation and ‘fake news’. In Canada, the Standing Committee on Access to Information, Privacy and Ethics has looked into and reported on the Cambridge Analytica scandal. The UK Parliament has recognised that Facebook has resisted efforts to expose and regulate it. The UK Parliament, through the Digital, Culture, Media and Sport Committee, scrutinised Facebook and other platforms to examine how people’s privacy and political choices could be compromised by online disinformation and online interference in the democratic election cycle. In February 2019, the Committee concluded that while “Facebook seems willing neither to be regulated nor scrutinised,” “[c]ompanies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and above the law.”

There may be consequences in the US, too

Among other developments in the United States, the Department of Justice, Federal Bureau of Investigation, the Securities and Exchange Commission, and the Federal Trade Commission have been examining Facebook’s data sharing and protection practices. This includes a new investigation into the Cambridge Analytica revelations. On April 24, 2019 it was reported that Facebook expected to be fined up to $5 billion by the Federal Trade Commission for privacy violations, which would be a record fine imposed by the FTC against a technology company. Facebook disclosed this amount in its quarterly financial results, saying that it expected a one-time charge of $3 billion to $5 billion. Furthermore, federal prosecutors in California are still actively investigating the Facebook-Cambridge Analytica scandal. 

Facebook fails again and again

Throughout the aftermath of this scandal, Facebook has been criticised for denying wrongdoing and seeking to deflect blame. The day before the initial Guardian article was published, Facebook threatened to sue The Guardian, a move that Facebook has since expressed regret over. In November 2018, the New York Times revealed that Facebook hired a public relations firm to attack and discredit its critics by linking them to George Soros, cast criticism of the company as anti-Semitic, and shift attention to Facebook rivals such as Google. After such revelations were met with public outcry, Facebook terminated its relationship with the PR firm. The UK Parliament Digital, Culture, Media, and Sport Committee accused Facebook co-founder and CEO Mark Zuckerberg of showing contempt towards the UK Parliament by failing to appear before the Committee, and noted “Facebook used the strategy of sending witnesses who they said were the most appropriate representatives, yet had not been properly briefed on crucial issues, and could not or chose not to answer many of our questions. They then promised to follow up with letters, which—unsurprisingly—failed to address all of our questions.” 

Mark Zuckerberg announced in early March 2019 that Facebook would be building a “privacy-focussed messaging and social networking platform,” but he was criticised for failing to address whether the company would stop purchasing information from data brokers, collecting browsing data, collecting data from people who are not even on Facebook, and micro-targeting users. Facebook has yet to answer these questions. Calls by Zuckerberg for regulation to protect privacy and election integrity gave rise to some scepticism.

Next Steps

Privacy International is continuing to work to expose and challenge data exploitation, in political campaigning and advertising more generally. We will follow developments closely, push for the enforcement of existing legal protections and advocate for stronger protections where these are absent. We will demand that those involved are more transparent, implement their obligations and commitments, and are held accountable. 

In May 2019, a year on from the Cambridge Analytica Scandal and GDPR we have..


We look at the recently published report on forensic science in the UK, highlight concerns about police not understanding new tech used to extract data from mobile phones, the risk of making incorrect inferences and the general lack of understanding about the capabilities of these tools. 

The delivery of justice depends on the integrity and accuracy of evidence and trust that society has in it. So starts the damning report of the House of Lords Science and Technology Select Committee's inquiry into forensic science in the UK. Gillian Tully, the Forensic Science Regulator, describes the delivery of forensic science in England and Wales as inadequate and ‘lurching from crisis to crisis’. The report issues stark warnings in relation to digital evidence and the absence of ‘any discernible strategy’ to deal with the ‘rapid growth of digital forensic evidence’.

Use of mobile phone extraction [to obtain digital evidence] is arguably the most intrusive form of technology used by the police and relates not just to the data of the user of the phone but also those of the many other people who have communicated with the user. 

The police have had over a decade to ensure that their use of powerful technologies to extract, analyse and present data contained on mobile phones is done in a way which not only respects individual privacy but is attuned to security risks. The outrage over new victim consent forms, which allow police to access deeply personal data from the mobile phones of rape survivors, is arguably one of the police's own making, given the secrecy surrounding mobile phone extraction; the absence of local and national guidance; the lack of clarity on what basis it is actually lawful; and the failure to consult the public.

With threats to information security worsening each year, there are clearly risks associated with obtaining vast amounts of highly personal data which could be valuable in the wrong hands. As stated by the National Cyber Security Centre “bulk data stores make very tempting targets for attackers of all kinds. So, it’s essential to ensure they’re adequately protected.”

If we are to entrust the police with our data, not only should the lawfulness of this be clear and accessible, but we also want to know they have good security and detailed security policies and procedures in place, specifically for mobile phone extraction. We also need to know that they understand these technologies.

Physical Extraction from an Android device using Cellebrite UFED Touch 2

 

Data breaches happen due to poor implementation or the complete absence of security controls. The police are no strangers to appalling data breaches of their own making and have received a number of fines from the Information Commissioner. Gloucestershire Police were fined £80,000 in 2018 for revealing the identities of child abuse victims. As for security controls, the criticism of North Yorkshire Police for failing to encrypt data, despite having the capacity to do so, is deeply concerning.

Extract from Mobile Phone Examination Report, North Yorkshire PCC
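Encrypting extracted data at rest is exactly the kind of basic control at issue in that criticism. As a purely illustrative sketch (not any force's actual practice), the following assumes the third-party Python "cryptography" package and a hypothetical report file name, and shows how an extraction report could be encrypted before storage, with the key held separately from the data:

# Illustrative sketch only: encrypting an extraction report at rest.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

def encrypt_report(report_path: str, key: bytes) -> str:
    """Encrypt a report file and write the ciphertext alongside it as .enc."""
    fernet = Fernet(key)
    with open(report_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    encrypted_path = report_path + ".enc"
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)
    return encrypted_path

if __name__ == "__main__":
    # The key must be generated once and stored separately from the data,
    # for example in a dedicated key store, never alongside the ciphertext.
    key = Fernet.generate_key()
    encrypt_report("examination_report.pdf", key)  # hypothetical file name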

But whilst the volume and retention of data are fundamental, there are wider security issues. As noted by Bruce Schneier:

“Security is complicated and challenging. Easy answers won’t help, because the easy answers are invariably the wrong one. What you need is a new way to think.”

Do the police know what they’re doing?

Police officers who operate mobile phone extraction technologies often have little or no forensic training and are increasingly reliant on devices whose capabilities they do not understand, particularly as budgets are cut and the volume of data they have to cope with increases.

Forensics experts have highlighted that heavy reliance on easy to use tools which grab data off a phone “by pushing some buttons” dumbs down digital forensics examinations and could undermine the successful examination of mobile phones. On this front, the hearings of the Science and Technology Committee made for concerning reading. 

Dr Jan Collie, Managing Director and Senior Forensic Investigator at Discovery Forensics, highlighted to the Committee the lack of forensic skill by those using MPE technology:

“What I am seeing in the field is that regular police officers are trying to be digital forensic analysts because they are being given these rather whizzy magic tools that do everything, and a regular police officer, as good as he may be, is not a digital forensic analyst. They are pushing some buttons, getting some output and, quite frequently, it is being looked over by the officer in charge of the case, who has no more training in this, and probably less, than him. They will jump to conclusions about what that means because they are being pressured to do so, and they do not have the resources or the training to be able to make the right inferences from those results. That is going smack in front of the court.”

Dr Gillian Tully, the UK Forensic Science Regulator, commented that:

“There is a lot of digital evidence being analysed by the police at varying levels of quality. I have reports coming in in a fairly ad hoc manner about front-line officers not feeling properly trained or equipped to use the kiosk technology that they are having supplied to them.”

The Serious Fraud Office argued to the Committee that there is a need for more regulation, or a mechanism by which agreed criteria and standards are adhered to, as digital evidence becomes more ubiquitous in criminal trials. The ‘provenance and integrity of material obtained from digital devices is a key area’, they said.

What can the tech do?

In the world of digital forensics, write Angus Marshall and Richard Paige, we tend to rely on third-party tools which we trust have been produced in accordance with good engineering practices. If we consider mobile phone extraction, the examination starts with a phone whose contents are unknown; thus, “inputs to the whole forensic process are unknown”. This affects whether it is possible to identify when something has gone wrong with the so-called evidence. They state that:

“It is entirely possible for a tool to process inputs incorrectly and produce something which still appears to be consistent with correct operation. In the absence of objective verification evidence, assessment of the correctness, or otherwise, of any results produced by a tool relies solely on the experience of the operator.”

“It should also be borne in mind that updates to hardware and software may have no apparent effect on system behaviour as far as a typical user is concerned, but may dramatically change the way in which internal processing is carried out and data is stored.”

In the US, the National Institute of Standards and Technology has a Cyber Forensics project that provides forensic tool testing reports to the public. According to MOBILedit, these tests have revealed significant differences in results between individual data types across the competing tools tested. Each tool demonstrated certain strengths over the others, and no single tool demonstrated superiority in all testing categories.
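To make the idea of such testing concrete, here is a minimal, hypothetical sketch (not the NIST methodology itself) of how extraction results from two tools could be scored per data type against a reference device whose contents are known in advance. Tool names, item identifiers and numbers are invented for illustration:

# Hypothetical sketch: scoring extraction tools against a known reference device.
# Tool names and item identifiers are invented for illustration only.
reference = {
    "sms": {"msg1", "msg2", "msg3", "msg4"},
    "contacts": {"alice", "bob", "carol"},
    "call_log": {"call1", "call2"},
}

tool_results = {
    "tool_a": {"sms": {"msg1", "msg2", "msg3"},
               "contacts": {"alice", "bob", "carol"},
               "call_log": {"call1"}},
    "tool_b": {"sms": {"msg1", "msg2", "msg3", "msg4"},
               "contacts": {"alice", "bob"},
               "call_log": {"call1", "call2"}},
}

for tool, results in tool_results.items():
    for category, expected in reference.items():
        recovered = results.get(category, set())
        recall = len(recovered & expected) / len(expected)
        print(f"{tool} / {category}: recovered {recall:.0%} of known items")

Run against a real reference data set, a comparison like this would show each tool recovering a different proportion of each data type, which is precisely the pattern the published tests describe.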

In the UK there is no such testing of devices purchased by law enforcement. The Forensic Science Regulator, Dr Gillian Tully, commented that:

“As yet, those [mobile phone extraction tools] have not all been properly tested…”

Dr Jan Collie also noted:

“Most of the forensic tools we use are tested to within an inch of their lives by the companies who produce those tools. We are not allowed to reverse engineer them [digital forensics tools] because that would be illegal anyway.”

Ignorance of the functioning of these tools is dangerous to the proper functioning of the criminal justice system. It is essential “that there is disclosure so that the methods used are open to scrutiny and peer review of its accuracy and reliability”.

One explanation for the lack of testing relates to claims of commercial confidentiality:

"private providers are unwilling to disclose information about their own development and testing methods [which] means that the evidence base for the correctness of many digital methods is extremely weak or non-existent.” 

However, in criminal proceedings, where individuals risk losing their liberty, or the digital evidence of victims and witnesses is to be interrogated, it is a worry that disclosure of methodology is being withheld. 

Cellebrite UFED Touch 2

Academic and forensics practitioner Angus M. Marshall commented to the UK Select Committee that:

“...the commercial tool providers' unwillingness to disclose information about their own development and testing methods means that the evidence base for the correctness of many digital methods is extremely weak or non-existent.”

Sir Brian Leveson told the Committee about a case in which:

“The contents of a phone had been wiped … and there were great difficulties finding out what had been on the phone. However, a commercial provider managed to download or retrieve some of the messages. The defence wanted to know how they had done that and the scientist was not prepared to explain it, first, because it was commercially confidential and, secondly, if he explained how he had done it, the next time round they would find a way of avoiding that problem.”

A recent update from one extraction company, Magnet Forensics, exemplifies the problems that stem from a lack of awareness about the capabilities of mobile phone extraction tools, to the point that they can do things that are unlawful, but without the police officer realising this. 

“When we introduced Magnet.AI, it ran automatically when you opened your case in AXIOM Examine... this could be a problem in some cases: if it revealed evidence that was outside the scope of your warrant … risk the admissibility of all the evidence on the device.”

Remote access

One area the Committee did not touch upon was the ability of extraction companies to remotely access products which have been purchased by police forces. This access is not only for product updates but also for analytics. Cellebrite, a company whose UFED device is popular with police forces around the world, including in the UK, states in its Privacy Policy that:

“When you use our Products such as the UFED Cloud Analyzer, then subject to your separate and explicit consent, we will collect information about how you use the Product under the license Cellebrite gives you, such as the number of  extractions you perform with the software, which type of data source you extract, any errors you came across using the Product, the number of events you collect, how often you use the Web crawler and which views you use... All of the above data is referred to as Analytical Information.”

Cellebrite and presumably other companies can remotely access their devices located in UK police stations and thus access tools which are connected to the phones of victims, witnesses and suspects. This area needs further interrogation. At the very least we might want to know a bit more about the extent of this access and what safeguards are in place. 

Cyber attack

Thousands of phones are plugged directly into these tools, and the kiosks themselves are vulnerable to attack: for example, they could be hijacked to send out malicious updates, or a phone intentionally infected with malware could be provided for extraction, potentially enabling access to police infrastructure. This may not be an obvious or the simplest route for such an attack, but a determined adversary, even a nation state with an interest in a police investigation such as that into the Skripals, may try all means necessary.

A more critical analysis is needed of the security risks presented by mobile phone extraction because, as opined by Ross Anderson, Professor of Security Engineering at Cambridge University: 

“Security engineering is about ensuring that systems are predictably dependable in the face of all sorts of malice, from bombers to botnets. And as attacks shift from the hard technology to the people who operate it, system must also be resilient to error, mischance and even coercion. So a realistic understanding of human stakeholders - both staff and customers - is critical; human, institutional and economic factors are already as important as technical ones. The ways in which real systems provide dependability will become ever more diverse and tuning the security policy to the application will be as essential as avoiding technical exploits. In ten years’ time, protection goals will not just be closer to the application, they will be more subtle: examples include privacy, safety and accountability.”

Where next?

As more of our lives are lived online, often mediated via our phones, personal data has become increasingly valuable. The value of the data is exactly why the police want to collect, access and mine it, and criminals want to steal it. 

A search of a person’s phone is far more invasive than a search of their home, not only because of the quantity and detail of the information it holds but also because of its historical and intimate nature. The state should not have unfettered access to the totality of someone’s life, and the use of mobile phone extraction requires the strictest of protections. 

No data can be completely secure. Once we store data it becomes vulnerable to a breach due to accident, carelessness, insider threat, or a hostile opponent. Poor practices in handling the data can undermine the prosecution of serious crimes, as well as result in the loss of files containing intimate details of people who were never charged. 

Technical standards are desperately needed to ensure trust in the deployment of these powerful technologies. 

 

References

Marshall, Angus M. and Paige, R., "Requirements in digital forensics method definition: Observations from a UK study", Digital Investigation 27 (2018), pp. 23-29, 20 September 2018

Magnet Forensics, "New Features" (April 2017) [ONLINE] Available at: https://www.magnetforensics.com/blog/introducing-magnet-ai-putting-machine-learning-work-forensics/ [Accessed 23 March 2019]

Cellebrite, "Privacy Statement" (2019) [ONLINE] Available at: https://www.cellebrite.com/fr/privacy-statement/ [Accessed 12 April 2019]

Anderson, R., Security Engineering, Second Edition, Indiana, Wiley, 2008, pp. 889-890

National Cyber Security Centre (September 2018) [ONLINE] Available at: https://www.ncsc.gov.uk/collection/protecting-bulk-personal-data [Accessed 18 March 2019]

Pidd, H. (May 2017), The Guardian [ONLINE] Available at: https://www.theguardian.com/uk-news/2017/may/04/greater-manchester-police-fined-victim-interviews-lost-in-post [Accessed 21 April 2019]

BBC (June 2018) [ONLINE] Available at: https://www.bbc.co.uk/news/uk-england-gloucestershire-44486535 [Accessed 23 March 2019]

Schneier, Bruce, Beyond Fear: Thinking Sensibly About Security in an Uncertain World, United States, Copernicus Books, 2006, p. 11

Bowcott, O. (May 2018), The Guardian [ONLINE] Available at: https://www.theguardian.com/law/2018/may/15/police-mishandling-digital-evidence-forensic-experts-warn [Accessed 26 April 2019]

Reiber, L., Mobile Forensic Investigations: A Guide to Evidence Collection, Analysis, and Presentation, New York, McGraw Hill, 2019, p. 2

Collie, J. (27 November 2018), Oral evidence to the House of Lords Select Committee on Science and Technology [ONLINE] Available at: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee-lords/forensic-science/oral/93059.html [Accessed 5 March 2019]

Marshall, A. (September 2018), Written evidence to the House of Lords Select Committee on Science and Technology, UK Parliament [ONLINE] Available at: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee-lords/forensic-science/written/89341.html [Accessed 26 April 2019]

BBC News (October 2018) [ONLINE] Available at: https://www.bbc.co.uk/news/uk-43315636 [Accessed 6 May 2019]

MOBILedit News (October 2018), MOBILedit [ONLINE] Available at: https://www.mobiledit.com/news [Accessed 1 April 2019]


Photo by Mike MacKenzie (via www.vpnsrus.com)

 

Ever, a cloud storage app, is an example of how facial recognition technology can be developed in ways people do not expect and can risk amplifying discrimination.

Ever is a cloud storage app that brands itself as “helping you capture and rediscover your life’s memories,” including by uploading and storing personal photos; Ever does not advertise that it uses the millions of photos people upload to train its facial recognition software, which it then aims to sell to law enforcement, private companies, and the military.

It is of serious concern that Ever is using its platform, in a way that violates people's right to privacy, to create an inherently discriminatory, error-prone, forensically unreliable technology. Furthermore, Ever's process of creating that technology is opaque and unaccountable.

While many companies, such as Amazon and Microsoft, train facial recognition technology on publicly available datasets (which may nonetheless include photos obtained without people's knowledge or permission), Ever uses photos of its own customers and their friends and family – something it only alluded to in its privacy policy after a query from NBC News. Ever's updated policy provides that it "uses facial recognition technologies" and that people's files "may be used to help improve and train our products and these technologies."

Despite the privacy policy change, Ever uses people's photos in ways they likely do not anticipate, to train software that can ultimately be used for surveillance or to produce discriminatory outcomes that people would not necessarily condone. Furthermore, friends and family members who have not signed up to Ever, but who are included in photos uploaded by others, do not know that their images are being used in this way. Using people's photos to build software to sell to outside companies is not primarily about protecting customers' interests or improving their cloud-storage experience. Ever's practice violates the right to privacy of people who use its services and of people whose images appear in photos uploaded to Ever.

There is a lack of transparency and accountability in the technology Ever is developing. The dataset of photos that Ever uses could be unrepresentative of the broader population. As a result, the technology could have higher error rates for groups whose photos are absent from or underrepresented in the photos Ever uses to train its software. Because Ever’s dataset is derived from the photos people store, the database cannot be examined by independent outside groups to uncover such problems and demand change. In contrast, in the case of Amazon’s Rekognition software, researchers have exposed errors and biases in the software that have spurred demands for Amazon to stop selling it to law enforcement.
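As a simple illustration of why representativeness matters, the sketch below (using invented data, not Ever's) computes a false non-match rate separately for each demographic group from labelled verification attempts; a group that is absent from or underrepresented in the training data will typically show a higher error rate in tests like this:

# Hypothetical sketch: measuring face-verification error rates per group.
# Each record is (group, genuine_pair, predicted_match); all data is invented.
from collections import defaultdict

attempts = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, genuine, predicted_match in attempts:
    if genuine:  # only genuine pairs count towards the false non-match rate
        totals[group] += 1
        if not predicted_match:
            errors[group] += 1

for group in totals:
    fnmr = errors[group] / totals[group]
    print(f"{group}: false non-match rate {fnmr:.0%}")

The point of the example is that such an audit is only possible when the underlying dataset and results can be examined, which is exactly what Ever's closed, customer-derived dataset prevents.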

Furthermore, it is unclear how Ever is labelling the shape and composition of people's facial features in the photos it feeds to its facial recognition technology, which the technology depends on to learn to create detailed biometric maps of people's faces and to match photos. Mislabelling features, employing gender binaries, and failing to recognise physical differences such as disability or injury could lead to discriminatory outcomes and amplify discrimination.

Ever serves as an example of how companies are able to exploit people’s data for their own purposes, in part due to the absence of laws restricting the development and use of facial recognition technology. Around the world, such technology is being rolled out and used for a wide range of purposes, often in secret and with highly discriminatory effects.

San Francisco is taking a step in the right direction and serves as a blueprint for how jurisdictions could respond to the type of problem Ever embodies: on May 14, 2019, legislators voted in favour of the "Stop Secret Surveillance Ordinance", preventing local law enforcement from using facial recognition and making San Francisco the first city in the United States to do so. However, the San Francisco ordinance applies only to public entities, not to private companies.

San Francisco is one of a number of jurisdictions that have recognised the potentially dangerous application of such technology and sought to provide safeguards: Privacy International calls on others to do so and is actively campaigning for increased transparency and accountability over the use of such technology as part of our Neighbourhood Watched campaign.

Further reading:

How Privacy International is working to ensure new technologies, including facial recognition, are governed and used in ways that protect our privacy, preserve our civic spaces, and support democracy: https://privacyinternational.org/feature/2852/protecting-civic-spaces

Privacy International’s Response to the West Minster Hall Debate on Facial Recognition: https://privacyinternational.org/advocacy-briefing/2835/our-response-westminster-hall-debate-facial-recognition

Privacy International’s Neighbourhood Watched Campaign: https://privacyinternational.org/campaigns/neighbourhood-watched

Data Exploitation and Democratic Societies
Wednesday, May 1, 2019

Image Source: "Voting Key" by CreditDebitPro is licensed under CC BY 2.0

 

Democratic society is under threat from a range of players exploiting our data in ways which are often hidden and unaccountable. These actors are manifold: traditional political parties (from across the political spectrum), organisations or individuals pushing particular political agendas, foreign actors aiming to interfere with national democratic processes, and the industries that provide products which facilitate the actions of the others (from public-facing ones, such as social media platforms and internet search engines, to the less publicly known, such as data brokers, ad tech companies and what has been termed the 'influence industry').

Personal data plays a fundamental role in this emerging way of influencing democratic processes. Through the amassing and processing of vast amounts of data, individuals are profiled based on their stated or inferred political views, preferences, and characteristics. These profiles are then used to target individuals with news, disinformation, political messages, and many other forms of content aimed at influencing and potentially manipulating their views.

Data is also becoming integral to the ways in which we vote - from the creation of vast voter registration databases, sometimes including biometric data, to reliance on electronic voting. Such voting processes are often implemented without sufficient consideration for their considerable privacy and security implications.

In attempting to understand and control this increasingly prevalent data exploitation, other actors - including governments, regulators and civil society - are beginning to push for more transparency and accountability regarding how data is used in political processes. While generally positive, this push may have some drawbacks. Many of the efforts so far have focused on regulating content, e.g. demanding the taking down of political or issue-based content, requiring the introduction of fact checking, curbing anonymous posting. Relatively less attention has been paid to measures to prevent the exploitation of personal data to distribute such content in the digital space.

It is therefore more important than ever for us to consider the way in which data is used in the context of modern democratic societies. Left unchecked, such exploitation is highly privacy invasive, raises important security questions, and has the potential to undermine faith in the democratic process, including in relation to transparency, fairness and accountability.

WHAT IS THE PROBLEM?

Data Exploitation and Elections

Nowhere is the increasingly data-intensive nature of the electoral cycle more evident than in political campaigning. Around the world, political campaigns at all levels have become sophisticated data operations. As the amount of data and the ways in which it is used increase, so do the risks of data exploitation. This has consequences for the right to privacy and data protection, but also for other rights, including freedom of expression, association and political participation.

Data driven campaigning has been at the core of recent elections and referenda around the world. Campaigns rely on data to facilitate a number of decisions: where to hold rallies, on which States or constituencies to focus, which campaign messages to promote in each area or to each constituency, and how to target supporters (and people 'like' them), undecided voters, and non-supporters.

Back in 2017, Privacy International looked at the use of data in campaigning in the Kenyan elections, and the role of a US-based digital media company. Since then, the use (and exploitation) of data in campaigning has become ever more acute and pervasive. For example, the use of data for profiling and targeting in political campaigning has come under scrutiny in recent elections in France, Germany and Italy. In the UK, the Information Commissioner's Office has opened multiple investigations into data use during the Brexit referendum.

Whilst the use of data in political campaigning is not new, the scale and granularity of the data, the accessibility and speed of the profiling and targeting it facilitates, and the potential power to sway or suppress voters through that data are. The actors, tools and techniques involved - who is using data, where they are getting it, and what they are doing with it - vary depending on the context, from country to country, from campaign to campaign, and even within a campaign.

The sources and types of data used in political campaigning are multiple. Political parties and campaigns gain access to data from electoral roll/voter registration records. They also have data on members and supporters, as well as data from canvassing and from the use of social media, apps, online tracking, surveys, and competitions. Then there is commercial data that can be tapped into through data brokers, platforms, and the wider online advertising ecosystem. Tactical Tech has identified over 300 organisations around the world as working with political parties on data-driven campaigning.

Data can be exploited through a range of mediums to build profiles and to disseminate messages in a targeted manner, ranging from text messages (SMS), to calls, to messaging apps (e.g. WhatsApp), to search results (e.g. through AdWords), to campaign apps, to ad-supported platforms (e.g. Google, Facebook, Twitter, YouTube, Instagram) and websites, to television. A vast range of factors may play a role in the political content you see, including where you've been (e.g. geotargeting through geofencing and beacons), what you've been doing (online and offline), what this says about your personality (e.g. psychometric profiling), and what messages people (like you) with particular traits have been most susceptible to (e.g. A/B testing).
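To illustrate one of the techniques named above, the sketch below shows A/B testing in its simplest form: for each audience segment, the message with the highest observed response rate is selected for wider targeting. The segments, messages and counts are entirely invented and do not reflect any real campaign:

# Hypothetical sketch of A/B testing campaign messages per audience segment.
# Segment names, message labels and response counts are invented.
results = {
    "young_urban": {"message_a": (1000, 52), "message_b": (1000, 81)},
    "rural_over_50": {"message_a": (800, 64), "message_b": (800, 40)},
}

for segment, messages in results.items():
    # Pick the message with the highest response rate for this segment.
    best = max(messages, key=lambda m: messages[m][1] / messages[m][0])
    shown, responded = messages[best]
    print(f"{segment}: target more widely with {best} "
          f"({responded / shown:.1%} response rate)")

In practice, campaigns run many such tests continuously, which is part of what makes the resulting targeting so difficult for voters or regulators to observe.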

This collection, generation and use of data happens at all times, however, not just during the course of electoral campaigns.

While individuals are targeted as potential voters at key democratic moments, such as political elections and referenda, they are also increasingly targeted outside formal electoral campaigning periods. This targeting may, for example, seek to influence their political views more broadly, or demand they support or oppose a political issue, such as a draft law or a key policy vote in Parliament.

For example, the UAE and Saudi Arabia used online advertising and social media campaigns to seek to influence US policy on Qatar. ExxonMobil, a US oil company, has spent millions on ads promoting oil production and opposing regulation.

There might be no obvious link between the way data is used politically (for example, in an effort to influence or create division) and a political party's manifesto commitments or support for or opposition to a referendum.

And political parties are just one of many actors involved. There are many other actors that play a role (whether intentionally or unintentionally) in political campaigning, including through influencing and nudging, but have no direct relationship with, and are not affiliated to, a particular party or candidate, which often raises questions, including in relation to campaign finance. For example, during the UK's Brexit referendum, advertisements appeared from apparent 'grassroots' groups which actually had a large lobbying company behind them.

What is clear is that there is a serious lack of transparency about who is using this data, where they are getting it, and how they are using it. This exacerbates the problems of fairness and accountability. Too often, the laws meant to protect people's data and regulate the electoral process are unenforced, out of date, or simply non-existent in the digital, data-driven campaign environment, which creates inherent risks and threats to democracy.

Lack of Transparency

The data exploitation just described is often shrouded in secrecy. Certain actors, including online platforms and social media companies, have begun proposing voluntary ways to increase transparency, particularly with regard to "political advertisements."

At present, people using these platforms are not able to completely understand why they are targeted with any ad, much less a political ad. How ads are targeted at users is incredibly complex and can involve targeting parameters and tools provided by platforms, or data compiled by advertisers and their sources, such as data brokers. What is especially difficult for most users to understand is how data from disparate sources is linked, and how data can be used to profile them in ways that can be incredibly sensitive. These companies' business models necessitate the collection and exploitation of data in ways that are opaque, not only to profile and target people with advertisements, but also to keep their valuable attention on the platform.
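As a rough illustration of how data from disparate sources can be linked into a single profile, here is a minimal Python sketch that joins records from a platform, a data broker and an advertiser's own list on a shared identifier (a hashed email address). All sources, field names and values are hypothetical; real linkage pipelines are far more elaborate and use many more signals.

```python
# Hypothetical sketch of record linkage across sources on a hashed email.
# Every dataset and field below is invented for illustration only.
import hashlib

def key(email: str) -> str:
    """Normalise an email and hash it into a join key."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

platform_data = {
    key("jane@example.com"): {"pages_followed": ["LocalNews", "PartyX"]},
}
broker_data = {
    key("jane@example.com"): {"household_income": "band C", "homeowner": True},
}
advertiser_crm = {
    key("jane@example.com"): {"petition_signed": "stop_the_bypass"},
}

profiles = {}
for source in (platform_data, broker_data, advertiser_crm):
    for k, attrs in source.items():
        profiles.setdefault(k, {}).update(attrs)

# One merged profile combining behavioural, commercial and campaign data.
print(profiles[key("jane@example.com")])
```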

Because all advertisements have the potential to be political in nature, steps to increase transparency must be applied broadly and uniformly across advertising platforms and networks, and not limited to ads bought by often self-identified political actors or narrow definitions of "political issues". The steps that certain companies have so far taken to increase transparency are therefore insufficient, including because they often apply only to "political" or "issue" advertisements, as defined by the companies, and reveal too little about how those ads are targeted.

To improve transparency, the European Union developed a code of practice on disinformation aimed at online platforms, leading social networks, advertisers, and the advertising industry. It has been signed by Facebook, Google, Twitter, and Mozilla as well as by some national online advertisement trade associations. The code contains a range of commitments mostly focussed on improving transparency of political and issue-based ads, and on limiting techniques such as the malicious use of bots and fake accounts. However, the implementation of this code by the main companies has been patchy.

Companies are also not sufficiently addressing other ways in which data can be exploited to influence elections, including through the promotion of content that is not explicitly identified as an ad.

Data Abuses and Breaches in Electoral Processes

Democratic elections are complex processes that require sophisticated legal and institutional frameworks. Their functioning demands the collection and processing of personal data. Increasingly governments are creating databases which store a vast array of personal information about voters, sometimes including biometric data.

If not properly regulated, these databases may undermine the democratic processes they ostensibly support.  For instance, unrestrained sale of the data contained in these databases might exacerbate the profiling concerns articulated above. Insufficiently secure databases might also be subject to breaches or leaks of personal information, which might discourage voters from registering in the first place and could lead to other harms such as identity theft.

For example, in March 2016, the personal information of over 55 million registered Filipino voters was leaked following a breach of the Commission on Elections (COMELEC) database. The investigation by the national data protection authority concluded that a security breach provided access to a COMELEC database containing both personal and sensitive information, as well as other information such as passport details and tax identification numbers. The report identified the lack of a clear data governance policy, vulnerabilities in the website, and failure to monitor regularly for security breaches as the main causes of the breach. Similarly, in 2015, the personal information of over 93 million voters in Mexico, including home addresses, was openly published on the internet after being taken from a poorly secured government database.

As another example, in Kenya during the 2017 presidential election, there were reports that Kenyans received unsolicited text messages from political candidates asking the receiver to vote for them. These messages referenced individual voter registration information such as constituency and polling station, which had been collected for Kenya's biometric voter register. There are concerns that this database has been shared by Kenya's electoral commission (IEBC) with third parties, without the consent of the individual voters, and that telecoms companies may have shared subscriber information, also without consent, in order to allow this microtargeting to happen. It is not clear who the registration database was shared with and therefore which company, if any, was responsible for this microtargeting. Privacy International's partner, the Centre for Intellectual Property and Technology Law (CPIT) at Strathmore University, Kenya, researched whether the 2017 voter register was shared with third parties, and if so, with whom, finding more questions than answers.

Similarly, increased reliance on technical solutions, such as e-voting, raises the risks of abuse and poses specific challenges for the protection of voters' anonymity. For example, in Switzerland researchers found technical flaws in the electronic voting system that could enable outsiders to replace legitimate votes with fraudulent ones.

WHAT IS THE SOLUTION?

Privacy International would like to see stronger enforcement of existing laws and adoption of new regulations to limit the exploitation of data that affects democratic processes. These would need to balance the legitimate interest of political actors in communicating with the public against the right of individuals to be free from unauthorised, opaque targeting.

First, we need a review of existing laws.

Our research suggests that national laws and regulations are often not fit for purpose. Many laws and regulatory frameworks are relevant in this space: from electoral law to political campaign financing, and from data protection to audio-visual media rules on the broadcasting of political messages.

These laws often do not adequately regulate online, data-driven campaign practices. They do not always address the technical and privacy concerns of modern electoral systems that rely on electronic voting and voter databases. And where laws are relevant, they are often not enforced effectively.

Data protection and electoral laws need to be examined closely in order to address the use of data in electoral campaigns from a comprehensive perspective. For example, data protection law should regulate profiling and not include loopholes that can be exploited in political campaigning. Electoral laws should be reviewed to ensure they apply to digital campaigns in the same way they apply in the print and broadcasting context. There should also be full, timely, detailed transparency of digital campaign financing. Where these frameworks fall short, they should be amended and enhanced.

Second, we need an approach that fosters collaboration and interaction across the different actors involved in this field, from election officials and data protection authorities to election monitors and civil society.

As noted recently by European institutions, this is a complex, multifaceted area, with many actors and many interests. It will not be possible for one single regulator, no matter how well resourced, to address all of these aspects.

Third, we need to make sure the actors who can effect change receive the support they require, including: 

  • Empowering regulators to provide clear guidance, take action and enforce the law, with the ability to conduct their work free from external pressure and to request information from all involved parties: political parties, campaign groups, companies, other private actors, and other government actors involved in the electoral cycle. Regulators must be given the necessary resources (financial and capacity) to take such action. 
  • Building the capacity of all parties involved in regulating political campaigning, including national electoral commissions and electoral monitoring bodies, according to their roles: on the technologies and methods deployed for campaigning, on applicable privacy and electoral law, and on good practice in exercising their powers.
  • Supporting civil society and public interest actors seeking to scrutinise, monitor and expose data exploitation in the electoral context. 

Fourth, actors who are exploiting data must be held to account:

  • Political parties and campaign groups must fully comply with data protection and campaigning/electoral laws, be accountable for all the campaigning they do both directly and indirectly, and subject that work to close public supervision.
  • Companies in the campaigning ecosystem need to be transparent and accountable with regard to the services and products they offer to political parties around the world, and the methods used to obtain and process personal data. 
  • Companies should also implement best practices across all jurisdictions, not only in those that have legislated or enforced them.

Fifth, companies should expand the scope of their ads transparency efforts to include all advertisements bought on their platforms. Companies should provide users with a straightforward and simple way to understand why they are targeted with an ad, including information such as what the intended audience of the ad was and what the actual audience was. Transparency efforts should be rolled out globally, and must take into consideration regional and local contexts. Such efforts should not be applied in a mechanical or generalist way. Privacy International also believes that there may be a legitimate need to protect political anonymity - such as for civil society working on sensitive issues in certain countries. These nuances must be considered in companies' transparency efforts.

To fully address concerns about targeted advertising, more information should be made easily available to users. Companies should provide information related to microtargeting: whether multiple versions of an ad were created, how ad audiences were created, what segments or interests were selected, what data an advertiser used to reach the audience, and whether the audience was generated via a lookalike advertising feature. This sort of information should be understandable to users.
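One way to picture the kind of disclosure argued for above is as a machine-readable record attached to every ad. The Python sketch below is purely illustrative: the schema and field names are our assumptions, not any platform's actual transparency API.

```python
# Hypothetical schema for a per-ad transparency record; all fields assumed.
from dataclasses import dataclass, field

@dataclass
class AdTransparencyRecord:
    advertiser: str
    ad_variants: int                # how many versions of the ad ran
    targeting_segments: list        # interests/segments selected
    audience_source: str            # e.g. "platform tools", "uploaded list", "lookalike"
    intended_audience: str
    actual_reach: int
    data_sources: list = field(default_factory=list)

record = AdTransparencyRecord(
    advertiser="Example Campaign Group",
    ad_variants=12,
    targeting_segments=["age 35-44", "interested in: energy policy"],
    audience_source="lookalike of uploaded supporter list",
    intended_audience="undecided voters in marginal constituencies",
    actual_reach=48_200,
    data_sources=["advertiser CRM", "platform interest categories"],
)
print(record)
```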

Additionally, it is important for researchers to be able to monitor and study political and issue-based advertising.

Political parties and campaign groups likewise play a role in ads transparency. Political parties should be proactive and take steps to provide transparency to their constituents as to where they are advertising, who they are targeting, and on what topics they are advertising.

Finally, companies should comply with and support strong regulation to protect privacy. Companies may need to make significant changes in the ways in which they permit targeting of individuals with political ads, with many such practices likely to fall short of modern data protection laws, such as the EU's General Data Protection Regulation. While increased transparency is welcome, it cannot stop there.

WHAT PI IS DOING

In 2017, PI revealed how two highly problematic and inflammatory political campaigns were created by a US-based data analytics company on behalf of Kenyan President Kenyatta’s re-election campaign. Our investigation, which was released prior to the 2018 Cambridge Analytica scandal, showed how data companies are able and willing to insert themselves into national politics around the world.

Also in 2017, PI challenged an exemption in the UK's new data protection regime which may be used to facilitate data exploitation. We wrote to the UK's main political parties seeking assurances that they would not rely on this loophole.

In 2018, PI investigated seven data analytics and data broker companies to understand how the companies profile people and where the companies get the data to do so. A number of these companies have been linked to political parties and campaigns. Our investigation showed how it is currently impossible to understand the sources and use of such data, and resulted in PI filing regulatory complaints against all companies. We are in ongoing conversations with the relevant regulators regarding those complaints.

Also in 2018, the Guardian newspaper revealed that Cambridge Analytica - a company purporting to facilitate digital campaigning - was able to harvest data from Facebook. By Facebook's own design, companies like Cambridge Analytica were able to amass data on users' friends. In the case of Cambridge Analytica, they were able to amass data on 87 million people, the vast majority of whom had never interacted with Cambridge Analytica in the first place. PI was quick to develop policy objectives, key questions for Facebook to answer, and to engage our global network of partner organisations to respond. We worked closely with media organisations to ensure that this story received sufficient attention.

In 2019, PI is:

  • Taking stock and learning from our own past experience and that of others (for example, experiences in France,
Consultation Submission

In March 2019, Privacy International submitted a response to a consultation on Disinformation in Electoral Contexts, led by the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights together with the Department of Electoral Cooperation and Observation (DECO) and the Department of International Law (DIL) of the Organisation of American States (OAS).

In our submission we highlighted the importance of minimising data exploitation in the electoral context; the actions and actors involved; and the key legal frameworks to consider - namely data protection and electoral law.

Civil society organisations in Latin America, including some of Privacy International's partners, also submitted to this consultation. Among other matters, their submission highlighted the need to strengthen data protection and electoral laws in the region.

Expert Group on disinformation in electoral contexts

Following the consultation, Privacy International was invited by the Special Rapporteur to form part of a group of international experts. The group met in Mexico City on 23-24 April 2019 to analyse the causes of, impacts of, and responses to disinformation in elections in the region, with the objective of reaching conclusions and recommendations and contributing to the elaboration of a "practical guideline of recommendations for guaranteeing freedom of expression and access to information from a variety of Internet sources during electoral processes without improper interference" (as mandated by OAS General Assembly resolution AG/RES. 2928).

Privacy International is grateful for this opportunity, given the important data and privacy implications of this discussion. As we have highlighted, to consider the issue of ‘disinformation in electoral contexts’ it is essential to look at the use of data. If we think of disinformation as the ‘front end’, then we recognise that data is the ‘back end’ that feeds into and facilitates many of the practices that raise concerns.

Campaign name
When Your Data Becomes Political


Topic

Political campaigns around the world have turned into sophisticated data operations. They rely on data - your data - to inform a number of decisions: where to hold rallies, which states or constituencies to focus resources on, which campaign messages to emphasise in which area, and how to target supporters, undecided voters, and non-supporters.

 

Data Protection laws seek to protect people's data by providing individuals with rights over their data, imposing rules on the way in which companies and governments use data, and establishing regulators to enforce the laws.

 

Programme
Defending Democracy and Dissent

Our relationships with our governments are increasingly mediated by technology. Whether we are viewing election advertisements on Facebook, expressing our dissent on Twitter, chatting about our political views or sharing a video on WhatsApp, voting electronically, or attending a protest that is being monitored by cameras equipped with facial recognition software, technology is now infused into the political process.

The seamless way we communicate using some of these technologies has helped many to organise politically and to express dissent online and offline. But the hidden data harvesting on which many of these technologies rely also threatens our ability to challenge power, no matter the type of government.

At Privacy International, we seek to defend democracy and dissent by investigating the role technology plays in facilitating and/or hindering everyone's participation in civic society. As a privacy organisation, we focus on the ways in which governments, political parties, other political actors, and corporations are exploiting our data. We advocate for limits on data exploitation throughout the election cycle. [Link to data and elections policy paper] We challenge the ability of police forces and intelligence agencies to monitor us in increasingly intrusive ways. [Link to civic spaces paper] Ultimately, we fight to preserve the privacy, dignity, and autonomy of individuals so that they can exercise and defend their own rights and freedoms.

Issue
Modernise Data Protection Law

Advocating for strong data protection around the world.

 

FAQ: Privacy International UK Supreme Court Judgment staff Wednesday, May 15, 2019
Details of case:

R (on the application of Privacy International) (Appellant) v Investigatory Powers Tribunal and others (Respondents) 

[2019] UKSC 22

15 May 2019

The judgment

What two questions was the Supreme Court asked to answer?
  1. Whether section 67(8) of RIPA 2000 “ousts” the supervisory jurisdiction of the High Court to quash a judgment of the Investigatory Powers Tribunal for error of law?
     
  2. Whether, and, if so, in accordance with what principles, Parliament may by statute “oust” the supervisory jurisdiction of the High Court to quash the decision of an inferior court or tribunal of limited statutory jurisdiction?
What does the judgment say, in brief?

As to question 1, the language used in section 67(8) of RIPA is not sufficiently clear to oust the supervisory jurisdiction of the High Court to quash a judgment of the Investigatory Powers Tribunal (IPT) for error of law. Accordingly, Privacy International can resume its challenge at the High Court to a decision of the IPT regarding the UK government's hacking powers (described in more detail below).

As to question 2, the Court divided as to the answer. Three of the justices concluded that there was a "strong case for holding that, consistently with the rule of law, binding effect cannot be given to a clause which purports wholly to exclude the supervisory jurisdiction of the High Court to review a decision of an inferior court or tribunal, whether for excess or abuse of jurisdiction, or error of law." Several of the others adopted a narrower interpretation of the requirements of the rule of law, and one declined to address the question.

Who said what in the judgment?

The judgment is composed of four opinions. The opinion of Lord Carnwath (with whom Lady Hale and Lord Kerr agree) together with that of Lord Lloyd-Jones on the first question mean that Privacy International's judicial review may proceed.

Lord Sumption (with whom Lord Reed agrees) and Lord Wilson dissented and would have dismissed the appeal.

What is a judicial review?

A judicial review is a challenge to the lawfulness of a decision by a public body. It is not a direct appeal of the decision. A judicial review may only be brought if the public body has made an error of law, or acted unfairly or unreasonably.

The history of the case

What is this all about?

A growing number of governments around the world are embracing hacking to facilitate their surveillance activities. Yet hacking presents unique and grave threats to our privacy and security. It is extremely intrusive, capable of allowing the government hacker to access information sufficient to build a detailed profile of a person, as well as altering or deleting that information. At the same time, hacking not only undermines the security of targeted systems, but also has the potential to compromise the internet as a whole. You can read more about our concerns on government hacking here.

What is hacking?

The term "hacking" is difficult to define. For these safeguards, Privacy International posits the following definition: Hacking is an act or series of acts, which interfere with a system, causing it to act in a manner unintended or unforeseen by the manufacturer, user or owner of that system. System refers both to any combination of hardware and software or a component thereof. Privacy International recognises that there may be instances of government hacking that do not conform to this definition and should nonetheless be subject to scrutiny.

How did we find out about GCHQ's hacking capabilities?

The Snowden disclosures revealed sweeping hacking operations conducted by the UK signals intelligence agency, the Government Communications Headquarters (GCHQ). The disclosures indicated that GCHQ uses hacking techniques to gain access to potentially millions of devices, including computers and mobile phones. They documented how GCHQ could, among other things: activate a device's microphone or webcam, identify the location of a device with high precision, log keystrokes entered into a device, collect login details and passwords for websites, record Internet browsing histories on a device, and hide malware installed on a device.

What was our initial challenge about?

Our initial claim in the Investigatory Powers Tribunal (IPT) in 2014 was about GCHQ's computer hacking operations. We alleged that GCHQ hacking violated Articles 8 and 10 of the European Convention on Human Rights, which respectively protect the right to privacy and the right to freedom of expression, and was also unlawful under UK law. In February 2016, the IPT held that GCHQ hacking is lawful under UK law and the European Convention on Human Rights. The IPT further concluded that GCHQ may hack inside and outside of the UK using "thematic warrants." Thematic warrants are general warrants covering an entire class of property, persons or conduct, such as "all mobile phones in London."

What has happened since the IPT's decision?

Privacy International challenged the IPT's decision that GCHQ hacking is lawful via two avenues. 

First, in May 2016, we filed a claim for judicial review before the English High Court challenging the UK Government's use of general warrants to hack inside and outside the UK. We argued that thematic warrants undermine 250 years of English law, which requires that a warrant must target an identified individual or individuals. The High Court questioned whether it could hear a judicial review of the IPT. In February 2017, the High Court found in favour of the Government. In November 2017, the Court of Appeal upheld the decision of the High Court. In May 2019, the Supreme Court reversed the Court of Appeal's decision. The Supreme Court decided that the language used in section 67(8) of RIPA does not remove the supervisory jurisdiction of the High Court to quash a judgment of the Investigatory Powers Tribunal (IPT) for error of law.  Accordingly, Privacy International can resume its challenge at the High Court regarding the UK government's hacking powers.

Second, in August 2016, we, together with five internet and communications service providers, filed an application to the European Court of Human Rights (ECtHR) challenging GCHQ's hacking powers outside of the UK. That case was communicated in November 2018 and is still pending before the ECtHR. It has been stayed pending the Supreme Court’s decision.

Next steps

What are the ramifications of the Supreme Court's judgment?

The UK Supreme Court has agreed with Privacy International that the Tribunal tasked with overseeing the UK intelligence services cannot escape the oversight of the English High Court. The leading judgment of Lord Carnwath confirms the vital role of the courts in upholding the rule of law. The Government’s reliance on an ‘ouster clause’ to try to remove the IPT from judicial review failed. The judgment confirms hundreds of years of legal precedent condemning attempts to remove important decisions from the oversight of the ordinary courts.

What are the next steps after the Supreme Court's judgment?

Following the judgment of the Supreme Court, the High Court will examine the merits of our initial challenge to the UK Government's use of general warrants to hack inside and outside the UK.

Useful links

PI's 10 Necessary Hacking Safeguards: https://privacyinternational.org/feature/957/government-hacking-and-surveillance-10-necessary-safeguards

Case page: https://privacyinternational.org/legal-action/queen-application-privacy-international-v-investigatory-powers-tribunal-uk-general 

Related case: https://privacyinternational.org/legal-action/privacy-international-and-others-v-united-kingdom-uk-government-hacking

Topic page: https://privacyinternational.org/topics/government-hacking

Hacking explainer video: https://privacyinternational.org/video/2068/video-government-hacking-101

 

BREAKING: following PI investigation into exploitation of data, Quantcast is under investigation by Irish Data Protection Commission tech-admin Thursday, May 2, 2019

The Irish Data Protection Commission has today launched an inquiry into the data practices of ad-tech company Quantcast, a major player in the online tracking industry. PI's 2018 investigation and subsequent submission to the Irish DPC showed how the company is systematically collecting and exploiting people's data in ways people are unaware of. PI also investigated and complained about Acxiom, Criteo, Experian, Equifax, Oracle, and Tapad.

PI welcomes this announcement and its focus on Quantcast's use of data for profiling and the use of profiles generated for targeted advertising. PI is continuing to expose data exploitation by ad-tech companies and others. We believe this investigation provides a good opportunity to shed light on Quantcast's practices, which could lead to better protections for millions of people whose data is exploited through online targeted advertising.

"We are extremely pleased that as a result of our submission the Irish DPC are commencing an inquiry into Quantcast - a company that most of us have never heard of but that through our data builds intricate profiles of our lives. PI considers Quantcast's practices to be failing to meet the standards set by GDPR, especially with regards to profiling. The real test of GDPR will be its enforcement." Ailidh Callander - Privacy International Legal Officer

Press contact: press@privacyinternational.org

Background
  • In November 2018, Privacy International made submissions to the data protection authorities in France, Ireland, and the UK about seven companies: data brokers (Acxiom, Oracle), ad-tech companies (Criteo, Quantcast, Tapad), and credit referencing agencies (Equifax, Experian).

  • We focussed on companies that, despite exploiting the data of millions of people, are not household names and therefore rarely have their practices challenged.

  • Our submissions set out why we consider that the data practices of Quantcast and the other companies do not comply with GDPR.
  • Our submissions focussed on profiling and on failure to comply with the data protection principles (such as transparency, fairness, lawfulness, purpose limitation, data minimisation, and accuracy) and the requirement for a legal basis for the way they use people's data.
  • We called on regulators to investigate these companies’ compliance with the rights and safeguards in GDPR.
Further reading

We asked Quantcast for our data and here's what they said: https://privacyinternational.org/feature/2433/i-asked-online-tracking-company-all-my-data-and-heres-what-i-found

Why we’ve filed complaints against companies that most people have never heard of – and what needs to happen next: https://privacyinternational.org/advocacy-briefing/2434/why-weve-filed-complaints-against-companies-most-people-have-never-heard-and


A mobile device is a huge repository of sensitive data, which could provide a wealth of information about its owner and many others with whom the user interacts. 

Companies like Cellebrite, MSAB and Oxygen Forensics sell software and hardware to law enforcement. Once your phone is connected to one of these mobile phone extraction tools, the tool extracts, analyses and presents the data contained on the phone.

What data these tools can extract and what method is used will depend on the operating system, security features and phone model. 

Privacy International used Cellebrite's UFED to extract data from an Android phone (HTC Desire) using a Physical [ADB Rooted] extraction. The images below are real examples of what a police officer might be able to see if they can extract data from your phone. The numbers in red are deleted items.

 

 

Privacy International also extracted data from an iPhone SE using a logical extraction. Again, the numbers in red are deleted items that were retrieved. 
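For readers wondering what a "logical" acquisition involves at a basic level, the Python sketch below drives the standard Android Debug Bridge (adb) tool to copy user data off a connected handset. This is a rough illustration of the category of access only: it is not how Cellebrite's UFED or the other vendors' products work internally, it assumes a USB-debugging-enabled Android device with adb on the PATH, and recent Android versions restrict several of these paths.

```python
# Rough, hypothetical sketch of a basic logical acquisition over adb.
# Not representative of commercial extraction tools; for illustration only.
import os
import subprocess

def run(*cmd: str) -> str:
    """Run a command and return its standard output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

os.makedirs("evidence", exist_ok=True)

print(run("adb", "devices"))                         # confirm a handset is attached
run("adb", "pull", "/sdcard/DCIM", "evidence/DCIM")  # copy user photos and videos
# App-data backup (requires on-device confirmation; limited on recent Android):
run("adb", "backup", "-all", "-shared", "-f", "evidence/backup.ab")
```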

 

Read more:

Surveillance Company Cellebrite Finds a New Exploit: Spying on Asylum Seekers https://privacyinternational.org/feature/2776/surveillance-company-cellebrite-finds-new-exploit-spying-asylum-seekers

Campaign name
Targeted Adversary
Data Exploitation and Democratic Societies staff Wednesday, May 1, 2019

Image Source: "Voting Key" by CreditDebitPro is licensed under CC BY 2.0

 

Democratic society is under threat from a range of players exploiting our data in ways which are often hidden and unaccountable. These actors are manifold: traditional political parties (from across the political spectrum), organisations or individuals pushing particular political agendas, foreign actors aiming to interfere with national democratic processes, and the industries that provide products that facilitate the actions of the others (from public-facing ones, such as social media platforms and internet search engines, to the less publicly known, such as data brokers, ad tech companies and what has been termed the 'influence industry').

Personal data plays a fundamental role in this emerging way of influencing democratic processes. Through the amassing and processing of vast amounts of data, individuals are profiled based on their stated or inferred political views, preferences, and characteristics. These profiles are then used to target individuals with news, disinformation, political messages, and many other forms of content aimed at influencing and potentially manipulating their views.

Data is also becoming integral to the ways in which we vote - from the creation of vast voter registration databases, sometimes including biometric data, to reliance on electronic voting. Such voting processes are often implemented without sufficient consideration for their considerable privacy and security implications.

In attempting to understand and control this increasingly prevalent data exploitation, other actors - including governments, regulators and civil society - are beginning to push for more transparency and accountability regarding how data is used in political processes. While generally positive, this push may have some drawbacks. Many of the efforts so far have focused on regulating content, e.g. demanding the removal of political or issue-based content, requiring the introduction of fact-checking, or curbing anonymous posting. Relatively less attention has been paid to measures to prevent the exploitation of personal data to distribute such content in the digital space.

It is therefore more important than ever for us to consider the way in which data is used in the context of modern democratic societies. Left unchecked, such exploitation is highly privacy invasive, raises important security questions, and has the potential to undermine faith in the democratic process, including in relation to transparency, fairness and accountability.

WHAT IS THE PROBLEM?

Data Exploitation and Elections

Nowhere is the increasingly data-intensive nature of the electoral cycle more apparent than in political campaigning. Around the world, political campaigns at all levels have become sophisticated data operations. As the amount of data and the ways it is used increase, so do the risks of data exploitation. This has consequences for the right to privacy and data protection, but also for other rights including freedom of expression, association and political participation.

Data-driven campaigning has been at the core of recent elections and referenda around the world. Campaigns rely on data to inform a number of decisions: where to hold rallies, which states or constituencies to focus on, which campaign messages to promote in each area or to each constituency, and how to target supporters (and people 'like' them), undecided voters, and non-supporters.
