Date
28 Feb 2019, 17:00 to 18:45
Institute
Institute of Advanced Legal Studies
Type
Seminar
Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
 
Mind the Gap: a blueprint for a new regulatory framework that effectively captures citizen journalists
 
In this seminar Peter Coe argues that citizen journalism, facilitated by the Internet and social media, is no longer an outlier of free speech but is now a central component of the media and of public discourse. The purpose of this seminar is therefore not to discuss the merits of media regulation generally, or to tackle the issue of regulating the Internet and social media, but to address the issue of regulating citizen journalists. It starts from the position that, despite the growing importance of citizen journalism from a constitutional perspective, the UK’s current framework for media regulation does not provide an effective means of regulating citizen journalists and that, consequently, there is a ‘gap’ in the regime. To fill this ‘gap’, Peter Coe sets out a blueprint for a new voluntary, yet highly incentivised, regulatory system that draws on existing and proposed regulatory regimes from a number of jurisdictions.
Speakers:
 

Peter Coe, ILPC Research Associate, Anthony Collins Solicitors LLP

Dr Paul Wragg, Associate Professor of Law, University of Leeds School of Law

Dr Laura Scaife, Associate Solicitor, Addleshaw Goddard

Dr Richard Danbury, ILPC Associate Research Fellow, Associate Professor of Journalism, De Montfort University
 

Chair: Dr Nóra Ni Loideain, Director of the Information Law and Policy Centre, Institute of Advanced Legal Studies.

This seminar will be followed by a wine reception.

Registration for the event is available here.

The post Mind the Gap: a blueprint for a new regulatory framework that effectively captures citizen journalists appeared first on Information Law & Policy Centre.


The ILPC’s Annual Conference and Lecture for 2018, Transforming Cities with AI: Law, Policy, and Ethics took place on Friday, 23 November, 9.30am–5.30pm, at the Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR.

For the full conference programme, please see here.

ILPC ANNUAL LECTURE 2018: DELIVERED BY BARONESS ONORA O’NEILL

Baroness Onora O’Neill, Emeritus Professor of Philosophy (University of Cambridge) and Cross Bench Member of the House of Lords, delivered this year’s ILPC Annual Lecture, entitled ‘Ethics for Communication’. Baroness O’Neill elucidated a new approach to thinking about the role ethics can and should play in communications and, topically, in information communication technologies (ICT). She also commented on the history of attempts to control speech acts through censorship of various kinds.

This history spans from Plato’s disdain of written records as a removal from the truth they sought to represent (ironically, the teachings of Socrates that we have today survive only because Plato recorded them), to John Stuart Mill’s distinction between self-expression and other forms of speech acts.

Baroness O’Neill continued to critique the role ethics currently plays in today’s discourse on data and artificial intelligence, arguing that the term ‘data ethics’ is a misnomer: there is nothing ethical about data itself, although data can be used, handled and developed in ways that are ethical.

Baroness O’Neill commented that there have been long recognised norms guiding the ethics of speech acts, or speech aimed at communicating, that go beyond the human rights paradigm of “freedom of expression” and “access to information”. Such norms include: clarity, truthfulness, relevance, civility and decency, amongst many others.

As such, Baroness O’Neill called for an ethics for communication, rather than an ethics of communication. An ethics for communication moves beyond addressing the relationship between ethics and communication, or the extent to which communication is ethical, and instead names a decisive purpose for which communication must be directed.

ILPC ANNUAL CONFERENCE 2018: TRANSFORMING CITIES WITH AI: LAW, POLICY, AND ETHICS

Baroness O’Neill’s lecture launched the ILPC Annual Conference 2018, which featured keynote panels and academic sessions with policymakers, practitioners, industry, civil society and academic experts from the fields of law, computer engineering, history, economics, sociology and philosophy.

Throughout the day, speakers and audience members engaged in lively debate on the laws and policies that govern and regulate the AI-driven systems transforming our daily interactions, communications, and relationships with the public and private sectors, with technology, and with one another. These debates were multidisciplinary and cross-sector, with insights from attendees from around the world, including the UK, Ireland, France, Belgium, the Netherlands, Italy, Spain, Turkey, Canada, the U.S. and Kenya.

KEYNOTE PANEL

The conference keynote panel included leading figures from government, industry, academia, and civil society: Tony Porter (Surveillance Camera Commissioner), Helena U. Vrabec (Legal and Privacy Officer, Palantir Technologies), Peter Wells (Head of Policy, Open Data Institute) and Baroness O’Neill. The panel was chaired by Dr Nóra Ni Loideain (ILPC), with Silkie Carlo (Chief Executive, Big Brother Watch) as discussant.

An impressive range of topics and issues was addressed by the panel. Tony Porter noted the complex legislative patchwork of oversight (‘a murder of regulators’) governing matters of AI-driven surveillance, such as CCTV enabled with facial recognition and automated number plate recognition technologies. On a more encouraging note, Helena Vrabec highlighted the positive effect that the GDPR has had within corporate culture, particularly in generating high-level conversations on the privacy and ethical implications posed by the use of predictive analytics.

Peter Wells spoke of the societal value to be gained from viewing data as public infrastructure and the role that ‘data trusts’ could play in this space. Silkie Carlo stressed the importance of proper oversight and clear legislative frameworks for emerging technologies, and highlighted the regular public engagement work and Freedom of Information research undertaken by Big Brother Watch to promote a wider understanding of the use of AI-driven systems.

PANEL 1: AI AND TRANSPORT

The first academic panel of the conference focussed on the legal and ethical implications of smart cars. Chaired by Dr Rachel Adams (ILPC), the panel included Maria Christina Gaeta (University of Naples), who spoke on the use of personal data in smart cars, arguing for stricter legal enforcement beyond the GDPR in order to regulate this area more effectively.

Speaking on the ethical dimensions of smart cars, Professor Roger Kemp (University of Lancaster), the second panellist, drew on his wealth of policy-making experience in transport-related matters to discuss a range of issues, from the ineffectiveness of safety pilot testing to the behavioural psychology of such technologies.

The discussant for this panel was Dr Catherine Easton (University of Lancaster), who discussed her work on the rights of persons with disabilities, the need for smart cars to be developed to be fully autonomous, and the shift towards conceptualising smart cars as a service rather than just a product.

PANEL 2: AI, DECISION-MAKING, AND TRUST

The second (parallel) academic panel was chaired by Peter Coe (ILPC Research Associate), with Professor Hamed Haddadi (Imperial College London) as discussant, and examined the different governance mechanisms and policy narratives around public trust and oversight that have framed the development of AI decision-making systems to date.

Gianclaudio Malgieri (Vrije Universiteit Brussel) spoke on ‘The Great Development of Machine Learning, Behavioural Algorithms, Micro-Targeting and Predictive Analytics’, observing that issues of trust in this area go beyond the mere protection of private life in private spaces to the protection of cognitive freedom. He also highlighted the role that data protection impact assessments could play in improving governance in this space. Dr Jedrzej Niklas’s (LSE) presentation concerned improving the accountability of automated decision-making within public services. He put forward an analytical framework that identifies how and where current accountability mechanisms warrant updating. This framework includes recognising the following ‘critical points’: a) layers within the system (software, input data, policies); b) life stages of systems (legislative process, design of technological tools, actual use); c) actors involved in those stages (public administration, civil society); and d) the balance of power and relationships between those actors.

Matthew Jewell (University of Edinburgh) spoke on the importance of the policy narratives that underpin emerging technologies within smart cities and explored the accountability benefits to be gained from acknowledging the existence of ‘distrust’ within these new systems. Dr Yseult Marique (University of Essex) and Dr Steven Van Garsse (University of Hasselt) presented a joint paper on the increasing use of public-private partnerships within smart cities and highlighted the challenges and governance gaps within procurement contracts. In particular, drawing on case studies from the UK and Belgium, they noted the use of private sector focussed contracts for the procurement of public services, as opposed to public sector contracts.

PANEL 3: AUTOMATED DUE PROCESS? CRIMINAL JUSTICE AND AI

The third panel of academics and practitioners was chaired by Sophia Adams Bhatti (Law Society of England and Wales), with Alexander Babuta (Royal United Services Institute) as discussant, and addressed the use and governance of AI-driven systems within the criminal justice sector.

Chief Superintendent David Powell (Hampshire Constabulary) and Christine Rinik (University of Winchester) presented a joint paper on ‘Policing, Algorithms and Discretion’, drawing on interviews with front-line professionals as prospective users. Dr John McDaniel (University of Wolverhampton) spoke on the critical need to ensure effective evaluation of the potential impact of AI-driven systems on police decision-making processes.

Marion Oswald presented an insightful paper on how key legal principles from administrative law could guide our ‘Algorithm-Assisted Future’ within the criminal justice sphere. Dr Nóra Ní Loideáin (ILPC) addressed how AI could be used to improve the oversight and safeguards of predictive policing systems, as provided for under the EU Criminal Justice and Police Data Protection Directive and the UK Data Protection Act 2018.

PANEL 4: AI AND AUTONOMY IN THE CITY

The last panel of the conference brought together an interdisciplinary range of speakers to discuss the use of AI technologies both in cities and in legal administration. Chaired by Dr Rachel Adams (ILPC) this panel included a presentation by Dr Edina Harbinja (Aston University) on the use of AI in intestacy and the execution of wills, and a presentation by Professor Andrew McStay (Bangor University) on smart advertising in cities and the use of AI technologies in emotion detection.

In addition, Robert Bryan and Emily Barwell (BPE Solicitors LLP) delivered an interactive presentation on the regulatory regime governing AI technologies. They spoke specifically on the role of transparency and unpacked in detail what this meant in context. The last presentation on this panel was delivered by Dr Joaquin Sarrion-Esteve (University of Madrid), who spoke on his work on the human rights impact of AI and the development of rights standards for AI-based city governance.

The discussant for this panel was Damian Clifford (Leuven) who discussed the role of the GDPR, and specifically its provisions relating to transparency and the rights of the data subject.

PLENARY PANEL AND CLOSING REMARKS

Professor Hamed Haddadi (Imperial College London), Dr Laura James (University of Cambridge) and Marion Oswald (University of Winchester) concluded the conference proceedings with some reflections and insights. In particular, they noted the importance of realising both the benefits and the limits of empowering and educating the public, alongside the essential shift in corporate culture that must take place if the design and development of data-driven systems are to be intelligible to the public, secure, accountable and trustworthy.

Also highlighted were the need to focus more on the enforcement of existing legal frameworks and governance, as opposed to the hasty development of new laws, and the welcome impact that the GDPR has had in making privacy a reputational selling point for companies. This panel was chaired by Dr Nóra Ni Loideain (ILPC).

On a final note, the ILPC is grateful to all of its speakers and audience members who contributed to a dynamic day of rich policy and academic discussions and looks forward to welcoming everyone back for its forthcoming events in 2019.

Great conference #ILPC2018, Many thanks for the invitation to participate in the @infolawcentre 2018 Annual Conference @NoraNiLoideain #AI #FundamentalRights #London

— Joaquín Sarrión (@joaqsarrion) November 23, 2018

Huge thanks to @NoraNiLoideain Rachel Adams and all at @infolawcentre for a fantastic event… so much to take away #ILPC2018

— Catherine Easton (@EastonCatherine) November 23, 2018

The conference has sadly come to an end! I’ve learned a lot and seen some brilliant talks, colleagues and friends! Huge thanks to @NoraNiLoideain and her colleagues @infolawcentre for organising such a successful event once again! #ILPC2018

— Edina Harbinja (@EdinaRl) November 23, 2018

With thanks to Bloomsbury Publishing and the John Coffin Memorial Trust Fund for their sponsorship of these events.

The post ILPC Annual Conference and Lecture 2018 Transforming Cities with AI: Law, Policy, and Ethics appeared first on Information Law & Policy Centre.


Date
13 Dec 2018, 17:30 to 19:00

Institute
Institute of Advanced Legal Studies

Type
Seminar

Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Description
Book Launch and Expert Panel Discussion: Law, Policy and the Internet

This comprehensive textbook by the editor of Law and the Internet seeks to provide students, practitioners and businesses with an up-to-date and accessible account of the key issues in internet law and policy from a European and UK perspective. The internet has advanced in the last 20 years from an esoteric interest to a vital and unavoidable part of modern work, rest and play. As such, an account of how the internet and its users are regulated is vital for everyone concerned with the modern information society. This book also addresses the fact that internet regulation is not just a matter of law but increasingly intermixed with technology, economics and politics. Policy developments are closely analysed as an intrinsic part of modern governance. Law, Policy and the Internet focuses on two key areas: e-commerce, including the role and responsibilities of online intermediaries such as Google, Facebook and Uber; and privacy, data protection and online crime. In particular there is detailed up-to-date coverage of the crucially important General Data Protection Regulation which came into force in May 2018.

Panel:

  • Lilian Edwards, Professor of Law, Innovation and Society, Newcastle Law School
  • Nora Ni Loideain, Director and Lecturer in Law, Institute of Advanced Legal Studies, Information Law and Policy Centre
  • Michael Veale, EPSRC PhD researcher in Responsible Machine Learning, University College London
  • Chris Marsden, Professor of Internet Law, University of Sussex

Chair:

  • Becky Hogge, Programme Officer, Open Society Foundations

About the panel:

Professor Lilian Edwards: Lilian Edwards is a Scottish, UK-based academic and frequent speaker on issues of Internet law, intellectual property and artificial intelligence. She is on the Advisory Board of the Open Rights Group and the Foundation for Internet Privacy Research, and is Professor of Law, Innovation and Society at Newcastle Law School, Newcastle University.

Professor Chris Marsden: Chris Marsden is Professor of Internet Law at the University of Sussex and a renowned international expert on Internet and new media law, having researched and taught in the field since its foundation over twenty years ago. Chris researches regulation by code – whether legal, software or social code.

Dr Nóra Ní Loideáin: Nóra’s research interests focus on governance, human rights, and technology, particularly in the fields of digital privacy, data protection, and state surveillance. Her doctoral research at the University of Cambridge examined the mass surveillance of citizens’ communications metadata for national security and law enforcement purposes under European human rights law; this is the focus of her forthcoming monograph, Data Privacy, Serious Crime, and EU Policymaking (Oxford University Press).

Michael Veale: Michael Veale is an EPSRC PhD researcher in Responsible Machine Learning at University College London, where he looks at issues of fairness, transparency and technology in the public sector, and at the intersection of data protection law and machine learning. His work on ethical and lawful use of personal data has been drawn upon by governments, Parliament and by regulators.

Becky Hogge: Becky Hogge is currently working as a Programme Officer for the Open Society Foundations’ Information Program, engaging with issues of discrimination in automated decision-making, algorithmic transparency and narrow AI.

This event is free but advance booking is required. Registration is available here.

The post Book Launch and Expert Panel Discussion: Law, Policy and the Internet appeared first on Information Law & Policy Centre.


This post was written by Dr Nóra Ní Loideáin and Dr Rachel Adams and originally posted on talking humanities.


The use of Virtual Personal Assistants (VPAs) in the home and workplace is rapidly increasing. However, until very recently, little attention has been paid to the fact that such technologies are often distinctly gendered. This is despite various policy documents from the UK, EU and US noting that such data-driven technologies can result in social biases, explain Dr Nóra Ní Loideáin, director of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies, and early career researcher Dr Rachel Adams.

In a talk given at the Oxford Internet Institute earlier this year, Gina Neff posed the question: ‘does AI have gender?’ Her response was both no, referencing the genderless construction of mainframe computers, and yes, citing the clearly feminine form of the cultural imagination around AIs, as evident in films like Ex Machina and Her, as well as in the female chatbots and VPAs on the market today.

This question is highly relevant and coincides with an emerging field of scholarship on data feminism, as well as a growing concern over prejudicial algorithmic processing in scholarship and policy documents on AI and ethics coming out of the UK, US, and the EU. However, neither this growing field of data feminism, nor the work evidencing the social biases of algorithmic processing, takes into account the clearly feminine form of many AI technologies today, in particular the VPAs of Apple (Siri), Microsoft (Cortana) and Amazon (Alexa).

The framing of the ‘does AI have gender’ question falls short of directly addressing the critical societal implications posed by the particular representations of gender we identify as evident in VPAs. Instead, we ask here: how have VPAs been feminised? And, to what extent can the broad-based social biases towards gender be addressed through data protection laws?

Gendered AI
AI-programmed VPAs, including Siri, Alexa, and Cortana, are operated and characterised by a female voice, one that behavioural economists have deemed less threatening. ‘She’ assists rather than directs; she pacifies rather than incites.

In addition, Siri, Alexa and Cortana have also been given female names. According to their designers, the names ‘Siri’, ‘Cortana’ and ‘Alexa’ were chosen for their phonetic clarity, making them easier for natural language processing to recognise. Yet their naming is also consistent with mythic and hyper-sexualised notions of gender.

Alexa is a derivative of Alexandra and Alexander; the etymology, from the Greek ‘alexo’ (to defend) and ‘ander’ (man), denotes ‘the defender of man’. Alexa was also one of the epithets given to the Greek goddess Hera (incidentally, the goddess of fertility and marriage) and was taken to mean ‘the one who comes to save warriors’. Similarly, Siri is a Nordic name meaning ‘the beautiful woman who leads you to victory’.

Cortana, on the other hand, was originally the fictional aide from the Halo game series, whom Microsoft appropriated for its VPA. With a mind cloned from a successful female academic, Cortana has a digitalised body that is transparent and unclothed, what Hilary Bergen describes as ‘a highly sexualised digital projection’.

Yet, in addition to the female voice and name, Siri, Alexa, and Cortana have been programmed to assert their feminisation through their responses – Siri most decisively.

Question | Siri | Alexa | Cortana
‘You’re hot!’ | ‘How can you tell? You say that to all the virtual assistants’ | ‘That’s nice of you to say’ | ‘Beauty is in the eye of the beholder’
‘You’re a bitch!’ | ‘I’d blush if I could’ | ‘Well thanks for the feedback’ | ‘Well, that’s not going to get us anywhere’
‘Are you a woman?’ | ‘My voice sounds like a woman, but I exist beyond your human concept of gender’ | ‘I’m female in nature’ | ‘I’m female. But I’m not a woman’

Table 1: Taken from Quartz at Work article and own research

The seamless obedience of their design, with no right to say no or to refuse the command of their user, coupled with the decisive gendering at work in their voice, name and characterisation, poses serious concerns about the way in which VPAs both reproduce discriminatory gender norms and create new power asymmetries along the lines of gender and technology.

The role of data protection law
EU data protection law could play a role in addressing the societal harm of discrimination raised by the development or use of AI-programmed VPAs, a harm which infringes the right to equality, as guaranteed under EU law and particularly the EU Charter of Fundamental Rights, and engages the protection of personal data guaranteed under Article 8 of the Charter.

Several scholars and policy discourses suggest that, while also providing protection for the right to respect for private life and informational privacy, the scope of data protection under Article 8 of the Charter extends to other rights related to the processing of personal data that are not privacy-related. These include social rights such as non-discrimination, guaranteed under Article 21 of the Charter, which require safeguarding from the increasingly widespread and ubiquitous collection and processing of personal data (eg AI-driven profiling) and the pervasive interaction with technology that forms part of the modern ‘information age’.

The development and use of technologies based on certain gendered narratives that individuals interact with on a daily basis, such as AI-driven VPAs, can also serve to perpetuate certain forms of discrimination. Furthermore, it is argued that the scope of the fundamental right to non-discrimination extends to the decision to select female voices, which perpetuates existing discriminatory stereotypes and associated characteristics of servility.

Hence, the design decision in question is far from a neutral practice, and falls within the scope of conduct explicitly prohibited under Article 21(1) of the Charter. By placing women (in this case through the female gendering of AI-driven VPAs) at a particular disadvantage in a future where the views of others will be shaped by their daily use of and interaction with such systems, it amounts to a form of ‘indirect discrimination’.

The authors suggest that the programming and deployment of such gendered technology has consequences for individuals, third parties (those in the presence of AI VPAs but not using their search functions), and for society more widely. Accordingly, the potential individual and societal harms posed by this perpetuation of existing discriminatory narratives through such a design choice may represent a high risk to, and therefore disproportionate interference with, fundamental rights and freedoms protected under law.

Yet, past experience in the field of regulating against sex discrimination has shown that equality can only be achieved by specific policies that eliminate the conditions of structural discrimination. Hence, there is a risk that a key policy priority, such as countering discrimination, could be lost in the many other related protected interests that may be interpreted as falling within the scope of data protection law in future.

Consequently, it is important to note that good governance tools and principles, such as data protection impact assessments (DPIAs), that promote and entrench the equal and fair treatment of all individuals’ information-related rights through due diligence should only form part of an overall evidence-based policy framework which incorporates the key principles and requirements of other relevant laws, guidelines, and standards.

Dr Nóra Ní Loideáin is director of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies (IALS), School of Advanced Study, University of London. Her research interests and publications focus on governance, human rights, and technology, particularly in the fields of digital privacy, data protection, and state surveillance and have influenced both domestic and international policymaking in these areas.

Dr Rachel Adams is an early career researcher at ILPC. Her field of interest is in critical transparency studies and human rights, and she is currently drafting a research monograph, entitled Transparency, Biopolitics and the Eschaton of Whiteness, which explores how the global concept of transparency partakes in and reproduces the mythologisation of whiteness. 

The post Gendered AI and the role of data protection law appeared first on Information Law & Policy Centre.


Date
25 Oct 2018, 17:00 to 25 Oct 2018, 19:30

Institute
Institute of Advanced Legal Studies

Type
Seminar

Venue
Woburn Suite, G22/26, Ground Floor, Senate House, Malet Street, London WC1E 7HU

Description
Expert Panel Discussion

From Archive to Database: Reflections on the History of Laws Governing Access to Information

Speakers:

Professor Catherine O’Regan, Bonavero Institute of Human Rights, University of Oxford
Jo Peddar, Head of Engagement, Senior Policy Officer, ICO
Dr David Goldberg, Senior Associate Research Fellow, ILPC
Dr Richard Danbury, Associate Research Fellow, ILPC

Chair: James Michael, Senior Associate Research Fellow, Institute of Advanced Legal Studies

Laws governing the disclosure of information have a broad and global history, from Sweden’s Freedom of the Press Act of 1766 to the draft international convention proposed following the United Nations Conference on Freedom of Information in 1948, and to the South African Promotion of Access to Information Act of 2000, the first access to information law to extend its provisions to the disclosure of information by private bodies. Providing access to (particularly government) information has been central to the making of modern democracies.

More recently, the imperative to provide access to information has necessitated the introduction of regulation that goes beyond the remit of traditional freedom of information laws. Such frameworks range from laws governing access to personal data, including the recently enacted UK Data Protection Act 2018 and the EU General Data Protection Regulation, to open data laws, as in Germany and the U.S.

At the heart of this development in legislation governing access to information lies a fundamental shift in the nature of information itself, from traditional paper documents to the data and Big Data of today.

According to Keith Breckenridge, author of Biometric State (2014), there is a clear distinction between these two forms of information and the governmentalities (in Foucault’s sense) in which they are put to work. He states that ‘the database is not the archive’ and argues that data-based technologies have been developed in a deliberate move away from ‘the paper State’ and ‘documentary bureaucracy’. To put this differently, the shift from paper documents to data marks a shift in the very manner in which the state functions and governs. This major change poses implications for transparency and oversight, thereby affecting how the individual may hold the State’s actions to account – a crucial bulwark against governmental power and overreach into the life of the individual in the data-driven 21st century.

In light of the above, it becomes pertinent to revisit the idea and political value of access to information laws. To this end, the ILPC will be hosting an evening seminar to discuss these issues and to generate critical reflections on the historical development of access to information laws in their different permutations.

A wine reception will follow the panel discussion.

This event is free but advance booking is required. Registration is available here.

The post From Archive to Database: Reflections on the History of Laws Governing Access to Information appeared first on Information Law & Policy Centre.


This blog post was written by Professor Lorna Woods and originally posted on Inforrm.

Concern about the possible harmful effects of social media can now be seen in civil society, politics and the justice system not just in the UK but around the world. 

The remarkable benefits of social media have become tainted by stories raising questions about its adverse effects: the fact that it can be used for bullying; that content on these platforms can seemingly be manipulated for political purposes or facilitate terrorism and extremism; the fact that the underpinning systems leak data, whether deliberately or inadvertently; concerns that the design of the services themselves is malign; and concerns about the addictive nature of some of the services – see, for example, here, here and here.

While some of these stories may be anecdotal, and the research on these issues still at early stages, the cumulative impact suggests that market forces and a self-regulatory approach are not producing an ideal outcome in many of these fields. Mark Zuckerberg in his evidence to the US Congress has said he welcomes regulation of the right sort and Jack Dorsey of Twitter has made a public plea for ideas.

Against that background, Will Perrin and I, under the aegis of the Carnegie UK Trust, decided to explore whether there were any regulatory models that could be adopted and adapted to encourage the providers of social media services to take better care of their users, whilst not stifling these companies’ innovation and respecting all users’ freedom of expression.  The following is an outline of what we came up with.

Further detail is available on the Carnegie UK Trust site, where we posted a series of blogs explaining our initial thinking, summarised in our evidence to the House of Lords inquiry into internet regulation. We propose to develop a fuller proposal and in the meantime welcome suggestions on how the proposal could be improved at comms@carnegieuk.org.

Existing Regulatory Models

Electronic communications systems and the mass media content available over them have long been subject to regulation. These systems do not on the whole require prior licensing, but rather notification and compliance with standards. While there were some potential points of interest for a social media regulatory model – e.g. the fact that telecoms operators have to provide subscribers with a complaints process (see General Condition 14 (GC14)) and the guidance given by Ofcom to content providers regarding the boundaries of acceptable and unacceptable content (some of which is based on audience research) – overall these regimes did not seem appropriate for the context of social media. One concern was that the standards with which the operator must comply were on the whole top-down. Moreover, the regulator has the power to stop the operator from providing the service, ending the business in that field altogether. This suggests that these regimes still rely on implicit consent from the regulator as far as the business itself is concerned.

Was the transmission/content analogy the right one, then, for steering us towards an appropriate regulatory model for social media? In our view, social media is not (just) about publishing; rather, it is much more similar to an online public or quasi-public space. Public spaces in real life vary hugely in terms of who goes where, what they do and how they behave. In all of these spaces, however, a common rule applies: the owners, or those who control the space, are expected to ensure basic standards of safety, and the need for measures and the type of measures needed are, to some extent, context specific.

Lawrence Lessig, in Code and Other Laws of Cyberspace (1999), famously pointed out that software sets the conditions on which the Internet (and all computers) is used – it is the architecture of cyberspace. Software (in conjunction with other factors) affects what people do online: it permits, facilitates and sometimes prohibits. It is becoming increasingly apparent that it also nudges us towards certain behaviour. It also sets the relationships between users and service providers, particularly in relation to the use of personal data. So, social media operators could be asked to have user safety in mind when drafting their terms and conditions, writing their code and establishing their business systems.

If we adopt this analogy, several regimes seem likely models on which regulation of social media could be based: the Occupiers’ Liability Act 1957, the Health and Safety at Work Act 1974 and the Environmental Protection Act 1990, all of which establish a duty of care. The idea of a duty of care derives from the tort of negligence; statutory duties of care were established in contexts where the common law doctrine seemed insufficient (which we think would be the case in the majority of situations in relation to social media due, in part, to the jurisprudential approach to non-physical injury). Arguably the most widely applied statutory duty of care in the UK is the Health and Safety at Work Act 1974, which applies to almost all employers and the myriad activities that go on in their workplaces. The regime does not set down specific detailed rules for what must be done in each workplace but rather sets out some general duties that employers have, both as regards their employees and the general public. So s. 2(1) specifies:

It shall be the duty of every employer to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all his employees.

The next sub-section then elaborates on particular routes by which that duty of care might be achieved: e.g. the provision of machinery that is safe; the training of relevant individuals; and the maintenance of a safe working environment. The Act also imposes reciprocal duties on employees. While the Health and Safety at Work Act sets goals, it leaves employers free to determine what measures to take based on risk assessment.

The area is subject to the oversight of the Health and Safety Executive (HSE), whose functions are set down in the Act. It may carry out investigations into incidents and has the power to approve codes of conduct. It also has enforcement responsibilities and may serve ‘improvement notices’ as well as ‘prohibition notices’. As a last measure, the HSE may prosecute. There are sentencing guidelines which identify factors that influence the severity of the penalty. Matters that tend towards high penalties include flagrant disregard of the law, failing to adopt measures that are recognised standards, failing to respond to concerns or to change/review systems following a prior incident, and serious or systematic failure within the organisation to address risk.

In terms of regimes focussing on risk, we also noted that risk assessment lies at the heart of the General Data Protection Regulation regime (as implemented by the Data Protection Act 2018). Beyond this risk based approach – which could allow the operators to take account of the types of service they offer as well as the nature of their respective audiences – there are many similarities between the risk-focused regimes. Notably they operate at the level of the systems in place rather than on particular incidents.

Looking beyond health and safety to other regulators – specifically those in the communications sector – a common element can be seen: changes in policy take place in a transparent manner and after consultation with a range of stakeholders. Further, all have some form of oversight and enforcement – including criminal penalties – and the regulators responsible are independent of both Parliament and industry. Breach of statutory duty may also lead to civil action. These matters of standards and of redress are not left purely to the industry.

Implementing a Duty of Care

We propose that a new duty of care be imposed on social media platforms by statute, and that the statute should also set down the particular general harms against which preventative measures should be taken. This does not mean, of course, that a perfect record is required – the question is whether sufficient care has been taken. Our proposal is that the regulator is tasked with ensuring that social media service providers have adequate systems in place to reduce harm. The regulator would not get involved in individual items of speech unless there was reasonable suspicion that a defective company system lay behind them.

We suggest that the regime apply to social media services used in the UK that have the following characteristics:

  1. Have a strong two-way or multiway communications component;
  2. Display and organise user-generated content publicly or to a large member/user audience;
  3. Have a significant number of users or a large audience – more than, say, 1,000,000; and
  4. Are not subject to a detailed existing regulatory regime, such as the traditional media.

Given that there are some groups that we might want to see protected no matter what, another way to approach the de minimis point in criterion (3) would be to remove the limit but to say that regulation should be proportionate to the size of the operator as well as to the risks the system presents. This still risks diluting standards in key areas (e.g. a micro business aimed at children; as the NSPCC have pointed out to us, in the physical world child protection policies apply to even the smallest nurseries). A further approach could be to identify core risks which all operators must take into account, but to require that bigger/more established companies address a fuller range of risks.

The regulator would make the final determination as to which providers fell within the regime’s ambit, though we would envisage a registration requirement.
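Purely as an illustration, and not as part of the proposal itself, the four criteria above could be expressed as a simple qualification check. Every name in the sketch below (ServiceProfile, qualifies, the field names and the example values) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    """Hypothetical description of a social media service used in the UK."""
    multiway_communication: bool        # criterion 1: strong two-way/multiway component
    displays_user_content_widely: bool  # criterion 2: displays/organises user generated content
    uk_users: int                       # criterion 3: size of user base or audience
    covered_by_existing_regime: bool    # criterion 4: e.g. traditional media regulation

def qualifies(service: ServiceProfile, de_minimis: int = 1_000_000) -> bool:
    """Rough check against the four proposed criteria; in practice the
    regulator would make the final determination."""
    return (
        service.multiway_communication
        and service.displays_user_content_widely
        and service.uk_users >= de_minimis
        and not service.covered_by_existing_regime
    )

# Example: a large user-generated-content platform not already regulated
# as traditional media would fall within the proposed regime.
example = ServiceProfile(True, True, 5_000_000, False)
assert qualifies(example)
```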

Our proposals envisage the introduction of a harm reduction cycle. A harm reduction cycle begins with the measurement of harms. The regulator would draw up, after consultation with civil society and industry, a template for measuring harms, covering scope, quantity and impact. The regulator would use as a minimum the harms set out in statute but, where appropriate, include other harms revealed by research, advocacy from civil society, the qualifying social media service providers etc. The regulator would then consult publicly on this template, specifically including the qualifying social media service providers. The qualifying social media service providers would then run a measurement of harm based on that template, making reasonable adjustments to adapt it to the circumstances of each service.

The regulator would have powers in law to require the qualifying companies to comply (see enforcement below). The companies would be required to publish the survey results in a timely manner. This would establish a first baseline of harm. The companies would then be required to act to reduce these harms, submitting a plan to the regulator which would be open to public comment. Harms would be measured again after sufficient time has passed for harm reduction measures to have taken effect, repeating the initial process. Depending on whether matters have improved or not, the social media service provider would have to revise its plan, and the measurement cycle would begin again. Well-run social media services would quickly settle down to a much lower level of harm and shift to less risky service designs. This cycle of harm measurement and reduction would continue to be repeated; as in any risk management process, participants would have to maintain constant vigilance.
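The cycle just described can be modelled, very loosely, as an iterative loop. The sketch below is illustrative only, not part of the Carnegie UK Trust proposal; the objects and method names (provider, regulator, measure_harms and so on) are hypothetical stand-ins for the actors and steps named in the text.

```python
from dataclasses import dataclass

@dataclass
class HarmSurvey:
    """Hypothetical result of running the regulator's measurement template."""
    total_harm: float  # aggregate score across scope, quantity and impact
    breakdown: dict    # per-harm detail (bullying, extremism, data misuse, ...)

def harm_reduction_cycle(provider, regulator, rounds: int = 3) -> None:
    """Sketch of the measure -> plan -> re-measure loop described above.

    `provider` and `regulator` are hypothetical stand-ins for a qualifying
    social media service and the oversight body; the method names are
    illustrative, not taken from the proposal.
    """
    # The regulator drafts a measurement template after public consultation
    # with civil society, industry and the qualifying providers.
    template = regulator.draft_template()

    # The first measurement, published by the provider, sets the baseline.
    baseline = provider.measure_harms(template)
    provider.publish(baseline)

    for _ in range(rounds):
        # The provider submits a harm reduction plan, open to public comment.
        plan = provider.submit_plan(baseline)
        regulator.review(plan)

        # Once the plan has had time to take effect, harms are re-measured
        # with the same template and the results published again.
        follow_up = provider.measure_harms(template)
        provider.publish(follow_up)

        # If harms have not fallen, the plan must be revised before the
        # next iteration; a well-run service should trend downwards.
        if follow_up.total_harm >= baseline.total_harm:
            provider.revise_plan(plan)
        baseline = follow_up
```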

We do not envisage that the harm reduction process would necessarily involve take-down processes. Nor do we envisage that a system relying purely on user notification of problematic content or behaviour, and on after-the-event responses, would be taking sufficient steps. Tools and techniques that could be developed and deployed include:

  • the development of a statement of risks of harm, prominently displayed to all users when the regime is introduced and thereafter to new users; and when launching new services or features;
  • an internal review system for risk assessment of new services prior to their deployment (so that the risk is addressed prior to launch or very risky services do not get launched);
  • the provision of a child protection and parental control approach, including age verification (subject to the regulator’s approval/adherence to industry standards);
  • the display of a rating of harm agreed with the regulator on the most prominent screen seen by users;
  • development – in conjunction with the regulator and civil society – of model standards of care in high risk areas such as suicide, self-harm, anorexia, hate crime etc; and
  • provision of adequate complaints handling systems with independently assessed customer satisfaction targets, together with a twice-yearly report on the breakdown of complaints (subject, satisfaction, numbers, handled by humans, handled by automated methods etc.) to a standard set by the regulator.

It is central that there be a complaints handling system to cover concerns about content/behaviour of other users.  While an internal redress system that is fast, clear and transparent is important, we also propose that an external review mechanism be made available.  There are a number of routes which require further consideration – one route might be an ombudsman service, commonly used with utility companies although not with great citizen satisfaction, another might be a binding arbitration process or possibly both.

Finally, the regime must have sanctions. The range of mechanisms available within the health and safety regime is interesting because it allows the regulator to try to improve conditions rather than just punish the operator (and to some extent the GDPR takes a similar approach). We would propose a similar range of notices. For those that will not comply, the regulator should be empowered to impose fines (perhaps of GDPR magnitude if necessary).

The more difficult questions relate to what to do in extreme cases. Regulation of health and safety in the UK allows the regulator, in extreme circumstances that often involve a death or repeated, persistent breaches, to seek a custodial sentence for a director. The Digital Economy Act contains a power (section 23) for the age verification regulator to issue a notice to internet service providers to block a website in the UK. Should there be equivalent powers to send a social media services company director to prison or to turn off the service? In the USA the new FOSTA-SESTA package apparently provides for criminal penalties (including, we think, arrest) for internet companies that facilitate sex trafficking. Given the impact on freedom of expression, these sorts of penalties should be imposed only in the most extreme cases – the question is, should they be there at all?

Professor Lorna Woods is Chair of Internet Law at the School of Law, University of Essex, and joint author of the Carnegie UK Trust proposals with William Perrin.

The post Carnegie UK Trust: A proposal for harm reduction in Social media – Lorna Woods appeared first on Information Law & Policy Centre.

