Call for Papers – Digital Rights in Brexit: Changes and Challenges

We are pleased to announce this call for papers for the Information Law and Policy Centre’s Annual Conference on 22nd November 2019 at IALS in London, this year supported by Bloomsbury’s Communications Law journal. You can read about our previous annual events here.

We are looking for high quality and focused contributions that consider the changes and challenges facing the protection and enjoyment of digital rights in the UK and elsewhere as a result of Brexit.

Whether based on doctrinal analysis or empirical research, papers should offer an original perspective on the implications posed by Brexit. This scope covers both the impact of the impending Brexit on digital rights since the 2016 referendum, and the potential consequences for the digital rights of all individuals resident in the UK once it leaves the EU.

Topics of particular interest include:

  • Immigration
  • Border control
  • Online harms and the regulation of social media
  • Surveillance and data privacy
  • Data protection law and data transfers
  • Data ethics and innovation
  • Employment and labour
  • Public international law

The conference will take place on Friday, 22nd November 2019 and will include the Information Law and Policy Centre’s Annual Lecture and an evening reception.

We are delighted to announce that Dr Jeni Tennison OBE, CEO of the Open Data Institute, will deliver this year’s ILPC Annual Lecture.

Attendance will be free of charge thanks to the support of the IALS and our sponsors, although registration is required as places are limited.

The best papers will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by Friday, 1st November 2019 for consideration by the journal’s editorial team.

How to apply:

Please send an abstract of 250 to 300 words and brief biographical information to Eliza Boudier, Fellowships and Administrative Officer, IALS (eliza.boudier@sas.ac.uk) by Friday, 3rd May 2019 (5pm, BST).

Abstracts will be considered by the Information Law and Policy Centre’s academic staff and advisors, and the Communications Law journal editorial team.

About the Information Law and Policy Centre at the IALS

The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing and dissemination of different types of information.

The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.

About Communications Law (Journal of Computer, Media and Telecommunications Law)

Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.

The post Call for Papers – Digital Rights in Brexit: Changes and Challenges appeared first on Information Law & Policy Centre.


This guest post was written by Jamie Grace, Senior Lecturer in Law at Sheffield Hallam University. This post therefore reflects the views of the author, and not those of the ILPC.

The use of algorithmically informed decision-making in public protection contexts in the UK justice system appears to be proliferating, and this is problematic.

The UN Special Rapporteur on Privacy has commented that, in the context of surveillance, algorithmic processing of personal information is less intrusive than human processing of the same information. But this position overlooks the opacity of algorithms’ workings, the potential injustices arising from ‘trade-offs’ in algorithmic weightings driven by particular policy choices, and the risk that skewed data will exacerbate discrimination.

There are concerns that UK policing could soon be awash with ‘algorithmic impropriety’. Big(ger) data and machine learning-based algorithms combine to produce opportunities for better intelligence-led management of offenders, but they also create regulatory risks and threats to civil liberties, even if these can be mitigated. In constitutional and administrative law terms, the use of predictive intelligence analysis software to serve up ‘algorithmic justice’ presents varying human rights and data protection problems, depending on how the output of the tool concerned is deployed. But regardless of exact context, all uses of algorithmic justice in policing raise linked fears: the potential fettering of discretion, arguable biases, possible breaches of natural justice, and troubling failures to take relevant information into account. The potential for ‘data discrimination’ in the growth of algorithmic justice is a real and pressing problem.

Of course, there are growing efforts to model good practice for regulating algorithms, machine learning and the application of ‘big data’ technologies. A community of academics and data scientists working on ‘Fairness, Accountability and Transparency in Machine Learning’ (FAT/ML) has published five ‘Principles for Accountable Algorithms’ as well as a ‘Social Impact Statement for Algorithms’, for example. And the Data Protection Act 2018 in the UK requires the Home Office to publish annual ‘privacy impact assessments’ in the roll-out of any technology such as its new-generation, joined-up ‘Law Enforcement Data Service‘ (LEDS). However, the Council of Europe has astutely observed that, where machine learning algorithms affect human rights values, doctrinal law might be better regulation overall than any combination of non-binding ethical frameworks and self-regulation. The Council of Europe has also observed that ‘meta-norms’ for the deployment of machine learning may need more time to evolve in practice.

I’ve been involved in writing a piece of research (Oswald et al, 2018) that sets out a model of algorithmic accountability for UK police forces, known as ‘ALGO-CARE’, which is based around the following principles (a minimal sketch of how such a checklist might be captured in code follows the list):

  • Advisory
  • Lawful
  • Granularity
  • Ownership
  • Challengeable
  • Accuracy
  • Responsible
  • Explainable
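
As a rough illustration only, the ALGO-CARE principles could be captured as a structured review checklist that a force completes before deployment. This is a minimal sketch in Python; the field prompts paraphrase the framework’s themes, and all names and example answers are hypothetical rather than part of Oswald et al (2018):

```python
# Hypothetical sketch: the ALGO-CARE principles as a pre-deployment checklist.
from dataclasses import dataclass, fields

@dataclass
class AlgoCareReview:
    advisory: str       # Is the output advisory only, with a human making the final decision?
    lawful: str         # Is there a lawful basis for this use of the tool?
    granularity: str    # Is the data sufficiently granular for the question being asked?
    ownership: str      # Who owns the tool, the data and the intellectual property?
    challengeable: str  # Can the subject of a decision challenge the output?
    accuracy: str       # How is accuracy validated and monitored over time?
    responsible: str    # Who is accountable for the deployment and its effects?
    explainable: str    # Can the output be explained to decision-makers and the public?

    def unanswered(self):
        """Return the principles this review has not yet addressed."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

review = AlgoCareReview(
    advisory="Output informs, but does not determine, custody decisions.",
    lawful="", granularity="", ownership="", challengeable="",
    accuracy="", responsible="", explainable="",
)
print("Still to address:", review.unanswered())  # seven principles outstanding
```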

The National Police Chiefs’ Council have now recommended to UK police forces that they adopt the ALGO-CARE model as an interim safeguard in determining whether and how to deploy AI in operational or strategic ways.

But one particularly significant issue in the field of algorithmic justice has begun to emerge: the lack of transparency over the development and likely scale of future use of machine-learning technologies by UK police forces.

This lack of transparency applies across the board, from recidivism-prediction tools drawing on ‘big data’ and ‘hotspot’ patrolling software through to automated facial recognition technologies. The lack of meaningful public engagement by forces over the use of these tools is a troubling trend so far.

To that end, and with support and input from a number of researchers and academic colleagues at a range of institutions, I decided to host an event on public engagement with police use of technology that impacts privacy rights, on Wednesday 27 March 2019, at the IALS. Delegates represented half a dozen UK universities and as many UK police organisations.

The schedule of the event was as follows (and readers should feel free to contact the presenters directly for their slides/written papers in relation to their ongoing work):

  • Alexander Babuta (Research Fellow, Royal United Services Institute) – Machine Learning and Predictive Policing: Human Rights, Law and Ethics
  • Dr Nora Ni Loideain (Director of the Information Law and Policy Centre, Institute of Advanced Legal Studies, and Faculty of Law, University of Cambridge) – Predictive policing and legal and technical mechanisms for oversight
  • Tom McNeil (Solicitor and Strategic Adviser to the PCC & Board Member, Office of the West Midlands Police and Crime Commissioner) – Discussing independent data ethics committees
  • Dr. Joe Purshouse (Lecturer in Criminal Law, University of East Anglia) – Privacy, Crime Control and Police Use of Automated Facial Recognition Technology
  • Christine Rinik (Senior Lecturer in Law, University of Winchester) – Datafication in policing?  Concerns, opportunities and recommendations regarding use of data-driven tools
  • Jamie Grace (Senior Lecturer in Law, and Fellow of the Sheffield Institute for Policy Studies, Sheffield Hallam University) – Taking ALGO-CARE: Moving UK police forces away from potential ‘algorithmic impropriety’ and toward ‘data justice’ standards

A report that captures the literature on police engagement with the public over the use of technology, as well as the input of attendees at the 27 March event, will be forthcoming from the Helena Kennedy Centre for International Justice.

I’d like to thank my fellow contributors to the event. If you would like to discuss the ALGO-CARE framework for adopting algorithmic policing approaches, or the research that underpins it, please do contact me at: j.grace@shu.ac.uk

NB: This blog draws in part on the following academic paper:

  • J. Grace, ‘Human rights, regulation and the right to restrictions on algorithmic police intelligence analysis tools in the UK’, available online as a draft paper at: http://ssrn.com/abstract=3303313

The post Police use of technology impacting privacy rights appeared first on Information Law & Policy Centre.

It took nearly five years into the internet’s life before anyone made a concerted effort to archive it. Much of our earliest online activity has disappeared.

This post was originally written by Stephen Dowling for BBC Future.

In 2005, student Alex Tew had a million-dollar brainwave.

The 20-year-old was playing around with ideas to pay for a looming three-year business degree; Tew was already worrying that the overdraft he had would mushroom. So he scribbled on a pad: “How to become a millionaire.”

Twenty minutes later he had what he thought was the answer.

Tew set up a website called the Million Dollar Homepage. The site’s model was almost obscenely simple: it offered a million pixels of ad space, available to buy in blocks of 100 at $1 a pixel. Once you bought them they were yours forever. When the millionth pixel was sold, Tew would be a millionaire. At least, that was the plan.
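
The arithmetic of the model is easy to check. A minimal sketch, assuming the commonly described 1,000 by 1,000 grid with each block a 10 by 10 square:

```python
# The pricing model described above: a million pixels sold in blocks of 100
# at $1 per pixel. Grid and block dimensions are the commonly cited ones.
total_pixels = 1_000 * 1_000                        # 1,000,000 pixels of ad space
block_pixels = 10 * 10                              # minimum purchase: one block of 100
price_per_pixel = 1                                 # USD

blocks_for_sale = total_pixels // block_pixels      # 10,000 blocks
price_per_block = block_pixels * price_per_pixel    # $100 per block
gross_if_sold_out = total_pixels * price_per_pixel  # $1,000,000

print(blocks_for_sale, price_per_block, gross_if_sold_out)
```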

The Million Dollar Homepage launched on 26 August 2005, after Tew had spent the grand sum of 50 euros on registering the domain and setting up the hosting. Advertisers bought pixels and provided a link, tiny image and a short amount of text for when the cursor hovered over their image.

After little more than a month, thanks to word-of-mouth and ever-increasing media attention, Tew’s homepage had raised more than $250,000 (£140,000). In January 2006, the last 1,000 pixels were sold at auction for $38,100 (£21,500); Tew had indeed made his million.

The Million Dollar Homepage is still online, nearly a decade and a half after it was created. Many of the customers – which included the likes of the UK’s The Times newspaper, travel service Cheapflights.com, online portal Yahoo! and rock duo Tenacious D – have had 15 years of advertising off that one-off payment. The site still has several thousand viewers every day; it has probably been a very good investment.

The Million Dollar Homepage is now full of links to sites which no longer exist (Credit: Million Dollar Homepage)

Tew, who now runs the meditation and mindfulness app Calm, indeed became a millionaire. But the homepage he created has also become something else: a living museum of an earlier internet era. Fifteen years may not seem a long time, but in terms of the internet it is like a geological age. Some 40% of the links on the Million Dollar Homepage now point to dead sites. Many of the others now point to entirely new domains, their original URLs sold to new owners.

The Million Dollar Homepage shows that the decay of this early period of the internet is almost invisible. In the offline world, the closing of, say, a local newspaper is often widely reported. But online sites die, often without fanfare, and the first inkling you may have that they are no longer there is when you click on a link to be met with a blank page.
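
As a rough illustration of how a figure like that 40% might be measured, here is a minimal link-rot check in Python. The URLs are placeholders, and a serious survey would also need to treat parked domains and ‘soft 404’ pages (which return a success status) as dead:

```python
# Best-effort liveness check for a list of links, in the spirit of the
# link-rot figure quoted above. Placeholder URLs; needs the `requests` library.
import requests

def looks_alive(url: str, timeout: float = 10.0) -> bool:
    """Does the URL still answer with a non-error HTTP status?"""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        if resp.status_code == 405:  # some servers reject HEAD; fall back to GET
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:  # DNS failure, timeout, connection refused...
        return False

links = ["https://example.com", "https://no-such-site.example"]
dead = [u for u in links if not looks_alive(u)]
print(f"{len(dead)} of {len(links)} links appear dead")
```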

***

Around a decade ago, I spent two years working on a rock music blog and on the music section of AOL, the sprawling internet pioneer now owned by US phone company Verizon. I edited or wrote hundreds of live reviews, music news stories, artist interviews and listicles. Facebook and Twitter were already massive audience drivers, and smartphones were connecting us to the Web between work and home; surfing the Web had become a round-the-clock activity.

If Brewster Kahle hadn’t set up the Internet Archive and started saving things, without waiting for anyone’s permission, we’d have lost everything – Dame Wendy Hall

You could, quite reasonably, assume that if I ever needed to show proof of my time there it would only be a Google search away. But you’d be wrong. In April 2013, AOL abruptly closed down all its music sites – and with them the collective work of dozens of editors and hundreds of contributors over many years. Little of it remains, aside from a handful of articles saved by the Internet Archive, a San Francisco-based non-profit foundation set up in 1996 by computer engineer Brewster Kahle.

It is the most prominent of a clutch of organisations around the world trying to rescue some of the last vestiges of the first decade of humanity’s internet presence before it disappears completely.

Dame Wendy Hall, the executive director of the Web Science Institute at the University of Southampton, is unequivocal about the archive’s work: “If it wasn’t for them we wouldn’t have any” of the early material, she says. “If Brewster Kahle hadn’t set up the Internet Archive and started saving things – without waiting for anyone’s permission – we’d have lost everything.”

AOL shut its music sites in 2013, deleting years of music coverage from around the world (Credit: Getty Images)

Dame Wendy says archives and national libraries had experience saving books, newspapers and periodicals because print had been around so long. But the arrival of the internet – and how quickly it became a mass form of communication and expression – may have taken them by surprise. The attempts to archive the internet have, in many areas, been playing catch-up ever since. “The British Library had to have a copy of every local newspaper published,” she says. As the newspapers have gone from print to the Web, the archiving takes a different form. Are these websites as vital a resource as the papers which preceded them?

Newspaper archives are vulnerable, too, to being lost when publications are closed down or merged with other titles. “Most newspapers, I imagine, will have some sort of archive,” she says. “But that can be lost unless it is archived properly.”

Who’s going to pay for it? We produce so much more material than we used to – Dame Wendy Hall

One major problem with trying to archive the internet is that it never sits still. Every minute – every second – more photos, blog posts, videos, news stories and comments are added to the pile. While digital storage has fallen drastically in price, archiving all this material still costs money. “Who’s going to pay for it?” asks Dame Wendy. “We produce so much more material than we used to.”

In the UK, the role of digital conservation has partly fallen to the British Library. The library runs the UK Web Archive, which has been collecting websites by permission since 2004. The archive’s engagement manager Jason Webber says the problem is much bigger than most people realise.

Very little of the content from the earliest days of the Web – the era of messageboards and internet cafes – now remains (Credit: Getty Images)

“It’s not only the early material. Most of the internet is not being stored,” he says.

“The Internet Archive first started archiving pages in 1996. That’s five years after the first webpages were set up. There’s nothing from that era that was ever copied from the live web.” Even the first web page, set up in 1991, no longer exists; the page you can view on the World Wide Web Consortium’s site is a copy made a year later.
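
The Internet Archive exposes a public “availability” endpoint that reports the closest saved snapshot of a URL, which makes gaps like this easy to probe. A minimal sketch; the response shape is paraphrased from the public documentation and may change:

```python
# Query the Internet Archive's Wayback Machine availability endpoint for the
# closest archived snapshot of a URL. Standard library only.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str):
    """Return the URL of the closest archived snapshot, or None if there is none."""
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# The first web server's address; its earliest surviving capture postdates 1991.
print(closest_snapshot("http://info.cern.ch"))
```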

For much of the Web’s first five years, much of the material published in Britain carried the .ac.uk designation – academic articles written by academics. It was only in 1996, as commercial websites started to outnumber academic ones, that more general sites began to appear.

I think there’s been very low level of awareness that anything is missing – Jason Webber

The British Library does one “domain crawl” every year, saving anything that is published in the UK. “We try and get everything, but we do only do it once a year. But the cap for a lot of these sites is set at 500MB; that covers a lot of smaller sites, but you only have to have a few videos in there and that limit gets reached pretty quickly.” News websites like BBC News, however, do get crawled more often. The library, Webber says, has tried to build as complete a picture as possible of events such as Brexit, the London 2012 Olympics and the 100th anniversary of World War One.
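
A byte budget of that kind is straightforward to picture in code. This is an illustrative sketch only, with fetching and link discovery stubbed out; the 500MB figure comes from the quote above, and everything else is assumed:

```python
# Breadth-first crawl of one site until a per-site byte budget is spent,
# illustrating the 500MB cap described above. `fetch` and `discover_links`
# are caller-supplied stubs standing in for a real crawler.
SITE_CAP_BYTES = 500 * 1024 * 1024  # 500MB per site, per annual crawl

def crawl_site(start_urls, fetch, discover_links, cap=SITE_CAP_BYTES):
    queue, seen = list(start_urls), set(start_urls)
    spent, saved = 0, []
    while queue and spent < cap:
        url = queue.pop(0)
        body = fetch(url)   # bytes of the fetched resource
        if spent + len(body) > cap:
            break           # a few large videos can exhaust the budget quickly
        spent += len(body)
        saved.append(url)
        for link in discover_links(url, body):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return saved, spent

# Toy usage with an in-memory "site" of two pages.
pages = {"/": b"home " * 100, "/about": b"about " * 50}
saved, spent = crawl_site(["/"], lambda u: pages.get(u, b""),
                          lambda u, b: ["/about"] if u == "/" else [])
print(saved, spent)  # ['/', '/about'] 800
```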

“I think there’s been very low level of awareness that anything is missing,” Webber says. “The digital world is very ephemeral, we look at our phones, the stuff on it changes and we don’t really think about it. But now people are becoming more aware of how much we might be losing.”

But, Webber says, the only material organisations have the right to gather is what is publicly viewable; an even bigger amount of culturally or historically important data is sitting in people’s own archives, such as their hard drives. But few of us are keeping those for posterity.

“The British Library is full of letters between people. There are exchanges between politicians, or love letters, and these things are really important to some people.”

Archives knew the importance of saving newspapers but were slower to react to the rise in online material (Credit: Getty Images)

We consider the material we post onto social networks as something that will always be there, just a click of a keyboard away. But the recent loss of some 12 years of music and photos on the pioneering social site MySpace – once the most popular website in the US – shows that even material stored on the biggest of sites may not be safe.

And even Google’s services are not immune. Google+, the search giant’s attempt at a Facebook-rivalling social network, closed on 2 April. Did all its users back up the photos and memories they shared on it?

“Putting your photos on Facebook is not archiving them, because one day Facebook won’t exist,” says Webber. If you have any doubt about the temporary nature of the Web, take a few minutes to trawl through the Million Dollar Homepage. It is the testament to how quickly our online past is fading away.

There is another side to data loss. Dame Wendy points out that not archiving stories from news websites could lead to a selective view of history – new governments choosing not to save stories or archives which have cast them in a poor light, for instance.

The political is so often tied into the technical – Jane Winters

“As soon as there’s a change of government, or restructuring of quangos, sites are closed down,” says Jane Winters, a professor of digital humanities at the University of London. “Or look at election campaigning sites, which by their nature are set up to be temporary.”

Sometimes the sites that are lost echo even more seismic changes; the deaths and births of nations themselves. “It happened with Yugoslavia; .yu was the top-level domain for Yugoslavia, and that ended when it collapsed. There’s a researcher who is trying to rebuild what was there before the break-up,” she says.

“The political is so often tied into the technical.”

There is, perhaps, a slight silver lining. “I come from a history background, and we’ve always had to deal with gaps in the historical records, some of which we know about, and some we just have no idea about.”

Dame Wendy Hall also sees parallels with the physical. When she was 15, in the late 1960s, she appeared as part of the audience in a taping of the BBC’s music show Top of the Pops.

The show was broadcast on Christmas Day. “The TV was on, and my mother said ‘There you are!’ But I missed it. And I’ve since gone to the BBC and tried to get a copy of it – they taped over it. I never got to see it.”

The post Why there’s so little left of the early internet appeared first on Information Law & Policy Centre.


This guest post was written by Marion Oswald, Senior Fellow at the Faculty of Law, University of Winchester. This post therefore reflects the views of the author, and not those of the ILPC.

‘The machine decided.  It’s a wonderful thing.’

This quote is over 80 years old, and refers to technology that could be described as the original ‘black box’: the polygraph, or ‘lie detector’ as it is better known.

The person who was so enthused by the ‘machine’ was a juror in the case of People v Kenny, a New York State trial court decision in 1938.  The trial involved conflicting eyewitness testimony, and according to the juror, the lie detector was ‘the deciding factor.’  Or perhaps not so much the machine itself, but its results as presented by the scientific witness.

That witness was Father Walter Summers, a Jesuit and Head of the Department of Psychology of the Graduate School of Fordham University.  He must have cut an impressive figure as he outlined his confidence that the device was ‘100 per cent efficient and accurate in the detection of deception’.  Clearly, the judge was convinced, remarking, ‘It seems to me that this pathometer and the technique by which it is used indicate a new and more scientific approach to the ascertainment of truth in legal investigations.’

After the case, the jurors were polled by a member of the New York bar.  Although none admitted to basing their decision solely on the lie detector testimony, six jurors thought that the testimony was ‘conclusive proof’ of the defendant’s guilt or innocence, and five agreed that they had accepted it ‘without question’ (an early example of the ‘computer says no’ problem, perhaps).  The judge’s assertion that the jury would ‘evaluate’ the lie detector testimony now looks like wishful thinking.

The lie detector out of court

Summers’ success was short-lived.  The majority of early US cases, both before and after Kenny, rejected the use of lie detector evidence in court on the basis of the Frye standard.  According to this, lie detectors were not sufficiently established to have gained general acceptance by experts in the field, nor had their use moved out of the experimental ‘twilight zone’ to a ‘demonstrable’ stage, in the sense of something that can be proved logically.  The courts expressed nervousness not only about scientific validity, but also about the test’s potential impact on established legal norms and procedures, such as the Fifth Amendment privilege against self-incrimination and the jury’s role in determining credibility.

This did not, however, prevent use of the lie detector from forging ahead outside court: for assessing evidence, vetting potential employees, investigating fraud, and even testing the fidelity of a spouse.

Technology in the twilight zone

Laudable aims accompanied the early polygraph.  It was said to be more humane than the third-degree interrogation methods common at the time, and more ‘scientific’ than potentially unreliable witness testimony.  Polygraph inventors and practitioners contributed articles to legal journals to assist their cause.  Today’s machine learning is often claimed to be more consistent, accurate and even transparent than the human mind, thus providing evidence on issues such as discrimination.  Both technologies arguably support what I’d describe as ‘reformist legal realism’: an approach that aims to advance productivity, efficiency and the human condition (although often narrowly defined) through an emphasis on empirical evidence and scientific methods, and a distrust of reliance on ‘principles’.

Despite these real-world aims, lie detectors and artificial intelligence alike have become embedded in fiction, comics, TV and movies, distorting general understanding of what the technology can actually do.  The early lie detector’s ‘magic’ – its theatricality, opacity and intimidating character – benefited those who would promote its use.  It is perhaps telling that one of the most charismatic proponents of the early lie detector as evidence – psychologist and lawyer Dr. William Moulton Marston – also created the character ‘Wonder Woman’, whose lasso of truth shares the lie detector’s characteristic of benign coercion.  Yet the inventor of the first portable polygraph, Leonarde Keeler, said in 1934 that there was no such thing as a ‘lie detector’. 

Present-day artificial intelligence and machine learning can suffer from similar magical thinking.  Despite the common parlance ‘artificial intelligence’, there is no such thing as a machine that can act like a human at all times.  Neither can a machine learning tool independently ‘predict’ risk or a person’s future.  Rather, the real world or a person’s life is reduced to variables, and an algorithm is then trained to detect patterns or similarities based on probabilities.  The badging of the output as a prediction is a human act.  Considerable doubt exists as to the benefit, accuracy and relevance of such predictions at an individual level.
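
That point – that the ‘prediction’ is a human badge placed on a probability – can be made concrete with a toy model. A minimal sketch using synthetic data, where the threshold and every number are invented for illustration:

```python
# A model only emits a score; calling that score a "risk prediction" is a
# human choice of threshold. Entirely synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # a life reduced to three variables
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
score = model.predict_proba(X[:1])[0, 1]          # just a probability, e.g. 0.63

THRESHOLD = 0.5                                   # a policy choice, not a property of the model
label = "high risk" if score >= THRESHOLD else "low risk"
print(f"score={score:.2f} -> badged as '{label}' at threshold {THRESHOLD}")
```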

The science behind polygraphs rests on the assumption that deception correlates with physiological responses above a certain threshold.  Machine learning rests on the premise that all relevant factors can be represented as data, measured, and analysed with sufficient predictive accuracy.  But the deployment of both technologies in real contexts remains in the ‘twilight zone’ between the ‘experimental’ and ‘demonstrable’ stages.

Governing the lie detector & lessons for AI

The courts in the US, and in England, retain the authority to decide whether expert scientific testimony is based upon a scientifically valid foundation relevant to the facts at issue in a case.  However, it took 50 years from the Kenny case for legislation to be introduced in the US to prohibit most private sector employers from using lie detectors.  US Government job applicants were not so lucky, and the polygraph test is still administered widely to potential recruits.  Although use in court remains restricted, post-offence monitoring by lie detector has gained some acceptance.  Taking advantage of the popular (but highly contested) belief that lie detectors ‘work’, studies have claimed that sex offenders are more likely to make significant disclosures if they are made subject to, or threatened with, testing.

So what might we conclude from lie detector history regarding machine learning today?  That the use of machine learning, especially when backed by commercial interests, is likely to expand to fill whatever gap is available.  We already see private sector predictive tools and emotion detection marketed for use in hiring decisions, fraud detection, immigration and other screening: decisions that carry high stakes for individuals.  In terms of governance and regulation, the focus to date has been on data protection, individual consent, privacy and ethics.  Appropriate ‘scientific validity’ and relevance standards should also be applied, constructed for particular contexts, leading to red lines that cannot be crossed until the experimental has truly moved out of the twilight zone.

The post Technology in the twilight zone: What early use of the lie detector in court can tell us about machine learning now appeared first on Information Law & Policy Centre.


This post was originally written by the Surveillance Camera Commissioner, Tony Porter, and published on the Surveillance Camera Commissioner blog.


Once again the issue of police use of Automatic Facial Recognition (AFR) technology has come into the public spotlight. This week it was part of an inquiry at the House of Commons Science and Technology Select Committee – to whom I submitted written evidence.

Many of the themes emerging from that Committee reflect those that have been around for some time. We do of course await the outcome of the legal proceedings raised by Liberty and Big Brother Watch against South Wales Police and Metropolitan Police respectively for their use of AFR. Those deliberations will provide focus regarding the legitimacy and lawfulness of the use of such technology.

In the context of regulation it appears to me that broader and more creative thought needs to be applied to the prospect of society’s increasing use of artificial intelligence linked to surveillance cameras.

The first question I ask is: is it reasonably foreseeable that the use of such integrated technology will continue to be an ever-growing phenomenon? The answer is surely yes. The scope for the private sector to provide enhanced services to consumers utilising facial recognition (shopper identity) or other predictive algorithms is immense. The momentum of the pound pushes this debate forward in the absence, arguably, of a robust regulatory or legislative regime wrapped around it.

Next: is it likely that police/public partnerships will emerge that combine this utility with a security application? Yes – it is already happening. Many will recall my intervention at a large shopping centre in the North West of England and a local police force in October 2018. There is a world of difference between an enhanced shopping experience and focused state surveillance, particularly when conducted on a mass scale. This is where society needs to be careful, and government needs to ensure robust legislation that is fit for purpose.

I have often stated that the public do not expect an analogue police force in a digital age (but they have every right to expect clear, transparent and common-sense laws and rules to govern police conduct and use of those technologies – as indeed do the police themselves). The recommendations I made to the Committee will, if appropriately considered, go a long way to assuaging public concerns about the use of this technology by indicating a pragmatic way forward: a clear legislative framework; recognisable operating and technical standards; a recognition that the actual conduct of surveillance, whether overt or covert, reaches far beyond simple data acquisition; and appropriately clear and intrusive oversight.

The establishment of the Home Office Law Enforcement Facial Images and New Biometric Modalities Oversight and Advisory Board has been a step in the right direction but more is required. The Home Secretary’s Surveillance Camera Code of Practice issued by virtue of the Protection of Freedoms Act 2012 clearly sets out within its contents that it regulates the public place use of overt surveillance camera technologies in England and Wales now and in the future.

The police and the public will understandably seek recourse to it as appropriate and indeed have every right therefore to expect it to provide clear and relevant regulatory guidance to those who need it. It was published in 2013. Things were different then. I have advised Government consistently since 2016 that it needs to change and evolve. Government (in June 2018) committed, within its Biometric Strategy, to review the Surveillance Camera Code of Practice.

I’ve had conversations with the Home Office on the review but progress has been glacial. If public reassurance is to be gained this needs to progress quickly. The Code sets standards and principles as well as signposting broader legislative considerations which apply. It is after all operated under the Home Secretary’s responsibilities and is precisely where more effort is required.

The post The debate on automatic facial recognition continues appeared first on Information Law & Policy Centre.


This post was originally written by Graham Smith and published on Cyberleagle.

All the signs are that the government will shortly propose a duty of care on social media platforms aimed at reducing the risk of harm to users. DCMS Secretary of State Jeremy Wright wrote recently:

“A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…”. 

The House of Lords Communications Committee invoked a similar ‘parity principle’:

“The same level of protection must be provided online as offline.”

Notwithstanding that the duty of care concept is framed as a transposition of offline duties of care to online, proposals for a social media duty of care will almost certainly go significantly beyond any comparable offline duty of care.

When we examine safety-related duties of care owed by operators of offline public spaces to their visitors, we find that they:
(a) are restricted to objectively ascertainable injury,
(b) rarely impose liability for what visitors do to each other,
(c) do not impose liability for what visitors say to each other. 

The social media duties of care that have been publicly discussed so far breach all three of these barriers. They relate to subjective harms and are about what users do and say to each other. Nor are they restricted to activities that are unlawful as between the users themselves.

The substantive merits and demerits of any proposed social media duty of care will no doubt be hotly debated. But the likely scope of a duty of care raises a prior rule of law issue. The more broadly a duty of care is framed, the greater the risk that it will stray into impermissible vagueness.

The rule of law objection to vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:

“Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application.”  

Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):

“The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it.”

Certainty is a particular concern with a law that has consequences for individuals’ speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

With all this in mind, I propose a ten point rule of law test by which the government’s proposals, when they appear, may be evaluated. These tests are not about the merits or demerits of the content of any proposed duty of care as such, although of course how the scope and substance of any duty of care is defined will be central to the core rule of law questions of certainty and precision.

These tests are in the nature of a precondition: is the duty of care framed with sufficient certainty and precision to be acceptable as law, particularly bearing in mind potential consequences for individual speech?

It is, for instance, possible for scope to be both broad and clear. That would pass the rule of law test, but might still be objectionable on its merits. But if the scope does not surmount the rule of law threshold of certainty and precision it ought to fall at that first hurdle.

My proposed tests are whether there is sufficient certainty and precision as to:

  1. Which operators are and are not subject to the duty of care.
  2. To whom the duty of care is owed.
  3. What kinds of effect on a recipient will and will not be regarded as harmful.
  4. What speech or conduct by a user will and will not be taken to cause such harm.
  5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient.
  6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform.
  7. What circumstances would trigger an operator’s duty to take preventive or mitigating steps.
  8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm).
  9. How far steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question.
  10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage), would negate the duty of care.

These tests are framed in terms of harms to individuals. Some may object that ‘harm’ should be viewed collectively. From a rule of law perspective it should hardly need saying that constructs such as (for example) harm to society or harm to culture are hopelessly vague.

One likely riposte to objections of vagueness is that a regulator will be empowered to decide on the detailed rules. Indeed, it will no doubt be argued that flexibility on the part of a regulator, given a set of high-level principles to work with, is beneficial. There are at least two objections to that.

First, the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d’être is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake.

Second, if the vice of vagueness is potential for arbitrariness, then it is unclear how Parliament delegating policy matters to an independent regulator is any more acceptable than delegating them to a policeman, judge or jury. It compounds, rather than cures, the vice.

Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.

The post A Ten Point Rule of Law Test for a Social Media Duty of Care appeared first on Information Law & Policy Centre.

Is inequality holding back innovation in STEM?

In the domain of STEM, where the glass ceiling is still a reality, how can women become the drivers of success in A.I., natural sciences and medicine via gender-inclusive research?

Gender and STEM: An Overview

In recent years, the issue of gender bias and discrimination has been making headlines across front pages and TV screens globally. While societal and political movements have managed to shed light on glaring controversies and problems in the worlds of cinema, business and politics, the science sector has somehow remained quietly off the radar.

“Women in science, especially from the older generation, want to talk about their research, not about themselves per se. So this makes it in science more difficult to talk about the [gender] issue, because women are not very outspoken about it,” says Alexandra Palt, chief corporate responsibility officer and executive vice president of the Fondation L’Oréal, whose initiative For Women in Science, in partnership with Unesco, has been supporting women’s achievements in the field and advocating for closing the gender gap in STEM since 1998.

Academia in general is becoming a less male-dominated field, but it continues to work on a hierarchical system which can often reproduce or mimic gender biases in implicit and unconscious ways.

— Dr. Rachel Adams, researcher on gender and A.I. at the Institute of Advanced Legal Studies, University of London

While the proportion of women in science has been growing in recent decades, progress has happened at a snail’s pace, leaving the gender gap in STEM as real as ever, with women still facing obstacles in accessing appropriate education programs, getting funding for new research and finding career growth opportunities. The figures speak for themselves and, interestingly, the gender gap grows with seniority: 49 percent of high school students are girls, yet less than 30 percent of senior researchers are women, and in the E.U. only 11 percent of senior academic roles in science are held by women. Ultimately, to date, just 3 percent of Nobel Prizes in science have been awarded to women.

Key Challenges for Gender Analysis

The main hurdles faced by women working in science echo those seen in other sectors: progressing to leadership roles, equal pay and inclusive working environments. But the gender issue in STEM goes even further, as the current gender gap might indeed be depriving the world of possible science advancements from the skills and intellectual perspectives of women, putting a brake on both the speed and quality of innovation.

As Palt explains, when it comes to science we need to address not just the moral and social issue of gender equality. It is also about achieving the best possible research and ensuring beneficial outcomes for everyone — man or woman — throughout the world.

“It’s very important that there’s diversity in science, but we have to go beyond that,” says Dr. Londa Schiebinger, echoing Palt. “We need to teach scientists the true effects of gender analysis so that they can create research that works for everyone,” adds the professor of history of science and director of Gendered Innovations at Stanford.

This is especially true in the fields of medicine and modern technology. Doing medical research based solely on men — unless it’s regarding a gender-specific condition — is misleading and could cost lives and money. As Schiebinger points out, in the late 1990s, 10 drugs were withdrawn from the U.S. market because of life-threatening health effects. Eight of them posed greater health risks for women than for men, which raises the controversial question: Do all drugs work equally on men and women, and do men get better medical treatment?

“Drugs metabolize differently in men and women,” says Schiebinger. “If you don’t realize that, you miss how drugs may be harmful to women because male has generally been our model for drug development.”

A notorious example is cardiovascular disease (CVD), which historically was considered primarily a men’s condition, with key clinical trials conducted exclusively on men. As a result, many women were misdiagnosed or undiagnosed, and therefore less likely to receive bypass surgery or other standard treatments. Only during the last two decades has awareness grown of how CVD affects women differently from men. Today, CVD remains the major cause of death in women, accounting for 43 percent of women’s deaths in the E.U.

We need to teach scientists the true effects of gender analysis so that they can create research that works for everyone.

— Dr. Londa Schiebinger, professor of history of science and director of Gendered Innovations, Stanford

Another sector where unbalanced gender research might have a direct impact on scientific outcomes – and the way we function as a society going forward – is artificial intelligence. The tech sector, into which A.I. research falls, has been historically male-dominated when it comes to senior, decision-making positions, and also “in terms of the dominance of what can be understood to be a male way of thinking,” according to Dr. Rachel Adams, an expert in the interaction between gender and artificial intelligence at the Institute of Advanced Legal Studies at the University of London.

“One of my particular areas of research is the way in which A.I. virtual personal assistants are gendered female and the concerns this raises in terms of the way in which we see women’s roles in society,” she explains.

These gender stereotypes can be found in other mundane daily interactions. “For example, Google Translate defaults to the masculine pronoun because ‘he said’ is more commonly found on the web than ‘she said.’ This is where gender analysis kicks in,” says Schiebinger.

And the examples don’t stop there: in voice recognition and machine learning, male-centered R&D has led to A.I. applications that systematically discriminate or make biased decisions in everything from image database recognition and the preselection of university candidates to bank loan applications. Those algorithms cannot fight prejudice on their own, and with A.I. pervading our lives, it’s vital that A.I. research and innovation are programmed by both men and women, argues Palt.
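
The Google Translate example above illustrates the general mechanism: a system that simply picks the most frequent form in its training text reproduces the corpus’s skew as its default. A toy sketch, with a synthetic and deliberately skewed “corpus”:

```python
# A frequency-based default: whatever form dominates the training text wins.
# The corpus below is synthetic; real web text shows a similar skew.
from collections import Counter

corpus = ["he said"] * 700 + ["she said"] * 300   # web text skews masculine
counts = Counter(corpus)

def render_genderless_pronoun() -> str:
    """E.g. rendering Turkish 'o dedi' (a gender-neutral 'they said') into
    English: pick the most frequent form seen in training."""
    return counts.most_common(1)[0][0]

print(render_genderless_pronoun())  # 'he said' -- the corpus skew becomes the default
```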

Looking Forward: How to Bridge the Gender Gap

Palt and her peers are unanimous that visibility, advocacy and tight women’s networks could help bridge the gender gap in science globally.

“The way of thinking is changing gradually — people understand that equal opportunity is very important,” says Dr. Maki Kawai, director general of the Institute for Molecular Science in Tokyo and 2019 L’Oréal-UNESCO For Women in Science laureate. However, she admits that her field is still overwhelmingly male-dominated: “In my institute, less than 10 percent of the principal researchers are women. In physics and engineering, female grad students in Japan are less than 20 percent.”

“We need to encourage girls to consider science as a career through better structured educational programs,” says Schiebinger, referencing the U.S.’s Harvey Mudd College, which reinvented its curriculum under its president Maria Klawe to be more directly focused and attractive to female students. The result? Today, more than half of its computer-science majors are women.

Such an approach — albeit still a rarity — can indeed boost the number of women entering science and academia. But what are the viable ways of enabling more women to progress to leadership roles and advancement in science to ultimately deliver meaningful change in the field?

The way of thinking is changing gradually — people understand that equal opportunity is very important.

— Dr. Maki Kawai, director general of the Institute for Molecular Science, Tokyo and 2019 L’Oréal-UNESCO For Women in Science laureate

Schiebinger’s three fixes have been widely explored by industry leaders over the past three decades. “Fix the numbers of women” by having more organizations and government bodies increase the funding of women’s research, set up mentor networks and teach women leadership skills. “Fix the institutions” by promoting gender equality in careers and implementing reforms that overcome gender bias in hiring and promotion. And, finally, “fix the knowledge” by integrating sex and gender analysis into research. According to Schiebinger, this newest area of policy intervention is the most important for the future of science and innovation. The European Commission called for sex and gender analysis in publicly funded research back in 2014, and the Deutsche Forschungsgemeinschaft in Germany will announce similar requirements later in 2019.

However, another omnipresent soft-power approach remains: strengthened networks of women in science that would encourage more women to pursue their scientific aspirations by sharing stories and expertise with younger generations. And this shouldn’t exclude men — quite the opposite. In 2018, the Fondation L’Oréal launched its Men for Women in Science initiative, which included the celebrated mathematician Cédric Villani and the geneticist Axel Kahn among the first 50 male scientists from France, Spain, Morocco and Japan to have pledged their support.

“Women who succeed are often criticized as being too much,” says Palt. “Something I want to teach these young women is to tell the difference between people who give you feedback to make you grow and people who give you feedback to drag you down, because of unconscious gender bias. Sharing this through mentorship, through networking, is extremely important. If I had known all this when I was 28, it would have saved me years of self-doubt and questioning.”

This article was originally written by 4 Women In Science and published by the New York Times.

The post Gender in the World of Science appeared first on Information Law & Policy Centre.


On 22nd January the Forum on Geopolitics (POLIS) at the University of Cambridge hosted ‘A Nightmare Scenario: Technology and Democracy’, a lecture that addressed the effects technology could have on the functioning of contemporary democracy. Each of the speakers shared with the public their own nightmares: dystopian scenarios that democratic societies may face as technologies play an ever more central role in every aspect of our lives.

The lecture was chaired by Charles Arthur, a freelance tech journalist and former technology editor at The Guardian. The panel’s speakers included: Silkie Carlo, Director of Big Brother Watch; John Naughton, Emeritus Professor of the Public Understanding of Technology at the Open University, Director of the Press Fellowship Programme at Wolfson College, and technology columnist of The Observer; David Runciman, Professor of Politics at the University of Cambridge’s Department of Politics and International Studies; and Dr Nóra Ni Loideain, the ILPC’s Director.

Dr Nóra Ni Loideain’s speech concentrated on discussions around the value of data in improving society’s wellbeing and individuals’ living conditions. It started with a response to a recent article on Data as Property (DaP) written by musician and entrepreneur will.i.am in The Economist. According to the artist, who is part of the Global AI Committee at the World Economic Forum, data protection should be regarded as an inalienable human right, and each individual should hold the fundamental liberty to retain and share data upon giving informed consent. In will.i.am’s view, the value of data lies in its economic potential, and individuals should therefore be fairly compensated for it. Data as Property would help to strike a balance between the multi-billion businesses of tech giants – ‘data monarchs’ – and the free labour that we perform online, which generates boundless amounts of data.

Dr Nóra Ni Loideain expressed her agreement with the concept of Data Protection as an inalienable human right, but her views departed from the artist’s endorsement of Data as Property owned solely by an individual. If personal data were treated as such, and considered only in terms of exchange for financial compensation, without regard for the implications for wider society or for other individuals affected by the sharing or revealing of that data (e.g. genomic data), significant harms would result to both individuals and wider society. On this path, society could regress to a stage in which human activity, and ultimately human beings, would be considered solely as commodities, subject to exploitation and valuable only insofar as they generate financial revenue.

The concept of Data as Property, moreover, ignores the long legal history that underpins Data Protection: the concept goes well beyond Data Privacy and is grounded in the fundamental principles of dignity and liberty. Data Protection also guarantees the protection of other inalienable human rights key to the functioning of a democratic society, such as freedom of expression, the right to equality, and the right to non-discrimination measures.

As seductive as it sounds, the idea of DaP as outlined by will.i.am constitutes a significant threat to civil liberties: the right to access information, the freedom of information and the freedom of expression. Under today’s conditions, the so-called ‘data monarchs’ exert crucial influence over these civil liberties by gathering and mining the data that creates the invisible environment in which our lives unfold. Considering the ever-increasing convergence of the public and private sectors, personal data should instead be combined and aggregated in ways that create common-interest data infrastructures, which can significantly improve the efficiency and sustainability of communities and societies.

Dr Nóra Ni Loideain called for closer scrutiny and enforcement of Data Privacy and Data Protection safeguards that consider the intersection of the public and private sectors (such as the private sector’s role in predictive policing), and pointed out that enhanced accountability and access to all data processing stages are needed to ensure that regulatory frameworks are developed fairly and transparently, bearing in mind the impact that data has on many levels of human life.

Another critique that Dr Nóra Ni Loideain made of will.i.am’s statements is that the vision of Data as Property does not challenge the status quo in terms of the challenges facing how Data Protection and Privacy are currently regulated and how they should be regulated in future. The artist’s account fails to recognize that governments, communities, academia and civil society alike should all play a central role in designing and evaluating fair and transparent regulatory frameworks, so as to set an agenda that is attentive to the critical value of data in today’s society. In other words, this is not a conversation to be had solely between the individual and industry. Such reform must also take into account the existing imbalance in power dynamics between individuals and organisations that arises in many current cases of gathering and sharing personal data.

Dr Nóra Ni Loideain concluded her lecture by delineating her own nightmare scenario: a context in which sterile measures sweep aside any use of personal data for public benefit, along with inalienable human rights and civil liberties, reinforced by a culture of profit-driven Data Protection and Data Privacy policies. In this scenario, compliance, enforcement and oversight of the relevant legal frameworks would be paper-based: far from meaningful or legible, and amounting to no more than a rubber stamp.

If you attended the event and are in possession of pictures of the panel, please get in touch with maddalena.esposito@sas.ac.uk for inclusion in this blog post.

The post Event: ‘A Nightmare Scenario: Technology and Democracy’ at POLIS, University of Cambridge appeared first on Information Law & Policy Centre.


Dr Rachel Adams, Early Career Researcher at the Information Law and Policy Centre (ILPC), has co-authored a chapter in the new NUPRI publication ‘Internet Governance in the Global South – History, Theory and Contemporary Debates.’ The chapter is entitled Deconstructing the Paradoxes of South Africa’s Emerging Discourse and Framework on ICTs and Internet Governance. The publication is free to download here.
This blog post was originally written for and posted on NUPRI.

The new NUPRI publication, Internet Governance in the Global South – History, Theory and Contemporary Debates, is now available from NUPRI. The book was organized by NUPRI research coordinator Daniel Oppermann and presents contributions from professors and researchers around the world. It addresses a variety of topics concerning Internet Governance in the Global South, including theoretical and conceptual aspects, experiences with the Domain Name System in Southern countries, decolonization, access, privacy, cybersecurity, and more. Below is a list of the book’s chapters and an extract from the preface.

Content:

From Bandung to the DNS (Daniel Oppermann)

The Difficult Path to the Insertion of the Global South in Internet Governance (Jean-Marie Chenou, Juan Sebastián Rojas Fuerte)

Governança Global da Internet: Aspectos Conceituais, Questões da Agenda Contemporânea e Prospectos para o Estudo do Tema [Global Internet Governance: Conceptual Aspects, Contemporary Agenda Issues and Prospects for the Study of the Topic] (Diego Canabarro)

Prolegomenon to the Decolonization of Internet Governance (Syed Mustafa Ali)

Do Século XX para o Século XXI: da Revolução Mundial do Cidadão Comum para a Revolução Informacional do Capital Humano [From the 20th to the 21st Century: From the World Revolution of the Common Citizen to the Informational Revolution of Human Capital] (Alexandre Arns Gonzales)

ICANN, New gTLDs and the Global South (Paul White)

Los ccTLDs y los Dilemas del Desarrollo Comercial del DNS en América Latina. Reflexiones para el Sur Global [ccTLDs and the Dilemmas of the Commercial Development of the DNS in Latin America: Reflections for the Global South] (Carolina Aguerre)

Deconstructing the Paradoxes of South Africa’s Emerging Discourse and Framework on ICTs and Internet Governance (Colin Darch, Rachel Adams, Ke Yu)

Governança da Internet a partir da Periferia: Integrando a Amazônia Brasileira aos Debates sobre a Governança da Internet [Internet Governance from the Periphery: Integrating the Brazilian Amazon into the Internet Governance Debates] (Luisa Lobato)

Examining the Intersections of Counter-Terrorism Laws and Internet Governance in Ethiopia (Tewodros Workneh)

A Política Externa Brasileira na Governança da Internet: do Direito à Privacidade ao Direito à Participação [Brazilian Foreign Policy in Internet Governance: From the Right to Privacy to the Right to Participation] (Thaíse Kemer)

Extract from the Preface:

The publication is divided into two parts. The first part concentrates on a number of historical and theoretical or conceptual approaches to Internet Governance. The second part has a strong focus on contemporary debates concerning selected issues of the field.

The historical and theoretical contributions open with a discussion of the Global South as a region: its historical formation in the context of decolonization, the debates on the New International Economic Order (NIEO) and the New International Information and Communication Order (NIICO), and the political turn to the neoliberal agenda within which Internet Governance developed (Oppermann). The challenges Southern countries face in participating politically and economically in this environment are then addressed and contextualized through different theoretical frameworks, including International Political Economy and global International Relations, combined with a discussion of the strategies and ambitions of countries in the Global South to advance their own insertion into Internet Governance (Chenou, Rojas Fuerte). Internet Governance itself as a concept is then addressed from a historical perspective, including its processes of institutionalization, by Canabarro, together with a discussion of the NetMundial meeting in Brazil as a consequence of the Snowden revelations and the NSA affair. The extensive global surveillance of Internet users, governments, and other organizations by the USA and some of its allies intensified the debates on online privacy, as well as on power, influence, and global constellations, putting questions about new forms of colonialism on the agenda. Colonization in the digital age is a topic of growing importance, especially but not only in the South, and so is the discussion of decolonization. Drawing on the debate on decolonial computing, Ali addresses Internet Governance and the need for its decolonization, critically analyzing the North-centric discourse of Internet Governance and thus bringing a new perspective to the debates. He is followed by Gonzales, who develops a theoretical debate on ideology, the information revolution, and its impacts on and correlations with the protest movements that occurred in several countries in 2011, including Egypt and Tunisia.

The debates on historical and theoretical approaches are followed by contributions on contemporary Internet Governance issues in the Global South. This part opens with two chapters discussing economic and political challenges related to the Domain Name System in Southern countries: White discusses generic top-level domains and ICANN’s new gTLD program in the Global South, while Aguerre focuses on the ccTLD environment in the South, in particular in Latin America. They are followed by a chapter on South Africa’s policy framework on ICTs and Internet Governance, represented chiefly by the 2016 ICT White Paper, which, in combination with the 2015 draft cybersecurity bill, forms the current foundation for many Internet Governance debates in the country. In this context, the three authors (Darch, Adams, and Yu) also reflect on questions of governmental control, multilateralism, and multistakeholderism as forms of governance and participation. The following chapter picks up the topic of participation in Internet Governance processes, albeit from a different perspective: Lobato addresses the problem of regional inequality within countries, highlighting the situation of less connected rural areas in Amazonia, in the North of Brazil. She discusses central aspects such as infrastructure, access costs, and digital illiteracy, and presents possible solutions for regional integration, such as access programs and major national events in the regions, like the Brazilian Internet Forum held in the North of the country in 2013. That lower national access rates are no obstacle to putting the Internet on the national political and security agenda is then shown by Workneh through the case of Ethiopia. With a national Internet access rate of about 15% and facing infrastructural challenges to increasing this number, the country is engaged in an Internet securitization debate in the context of a dispute with political opposition that often falls under the label of “terrorism”. In this chapter, Workneh discusses how the Northern discourse on a so-called “global war on terrorism” impacts the right to freedom of speech online in Ethiopia and increases concerns over online participation and privacy rights. Privacy and participation are also addressed in the concluding chapter of the publication, by Kemer. She discusses privacy rights in the context of International Law and the Universal Declaration of Human Rights, followed by an analysis of the positions of the Brazilian government under Dilma Rousseff on privacy and online participation after the NSA surveillance activities were revealed.

The post New Publication: Deconstructing the Paradoxes of South Africa’s Emerging Discourse and Framework on ICTs and Internet Governance appeared first on Information Law & Policy Centre.

Date: 7th March 2019, 17:00 to 19:00

Institute: Institute of Advanced Legal Studies

Type: Seminar

Venue: Conference Room, Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Description:

Our daily interactions with AI-driven technologies – whether seen or unseen – are becoming increasingly normalised. The use of AI virtual personal assistants (VPAs) in the home is one such feature. Yet, while recent policy documents on AI are quick to note the potential ethical impact of such technologies, little thorough critique has examined how these technologies create and reproduce asymmetries of power that fall, in particular, along lines of gender.

The female characterisation of Siri (Apple), Alexa (Amazon), and Cortana (Microsoft) – AI domestic assistants designed to make home life more efficient; the unprecedented walkout in November 2018 in protest at Google’s treatment of women; and the biases against women ingrained in Amazon’s AI-driven recruitment system all point to the critical need to undertake this kind of inquiry and to consider critically how the development and use of AI technologies intersect with issues of gender.

In response, the ILPC will be hosting an evening seminar on the 7th March to discuss these and other issues relating to the intersection of women, AI and the law. We seek to canvass issues including biases in algorithmic processing; the invisibility of women’s labour in the production and even design of technology; representation of women in ICTs; and the gendered design of AI technologies, such as VPAs.

Panellists:

Dr Nina Power, Senior Lecturer in Philosophy, University of Roehampton

Dr Sarah Dillon, Director of AI: Narratives and Justice, Leverhulme Centre for the Future of Intelligence

Dr Reuben Binns, Lecturer, Department of Computer Science, University of Oxford, Research Fellow, Information Commissioner’s Office

Discussant: Dr Rachel Adams, Early Career Researcher, Information Law and Policy Centre, Institute of Advanced Legal Studies

Chair: Dr Nóra Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, Institute of Advanced Legal Studies

The seminar will be followed by a wine reception.

Registration for the event is available here.

The post Women and AI: Harms, Impacts and Remedies appeared first on Information Law & Policy Centre.
