Nearly twenty-five years after the advent of the Web, and longer since the birth of the internet, we still hear demands that the internet should be regulated - for all the world as if people who use the internet were not already subject to the law. The May 2017 Conservative manifesto erected a towering straw man: “Some people say that it is not for government to regulate when it comes to technology and the internet. We disagree.” The straw man even found its way into the title of the current House of Lords Communications Committee inquiry: "The Internet: to regulate or not to regulate?".

The choice is not between regulating and not regulating. If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; between laws that expose particular activities, such as search or hosting, to greater or lesser liability, or that visit them with more or less onerous obligations; between regimes that pay more or less regard to fundamental rights; and between prioritising perpetrators and prioritising intermediaries.

Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet.

A recent article by David Anderson Q.C. asked the question 'Who governs the Internet?' and spoke of 'subjecting the tech colossi to the rule of law'. The only acceptable answer to the ‘who governs?’ question is certainly 'the law'. We would confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator at our peril. But as to the rule of law, we should not confuse the existence of laws with disagreement about what, substantively, those laws should consist of. Bookshops and magazine distributors operate, for defamation, under a liability system with some similarities to the hosting regime under the Electronic Commerce Directive. No-one has suggested, or one hopes would suggest, that as a consequence they are not subject to the rule of law.

It is one thing to identify how not to regulate, but it would be foolish to deny that there are real concerns about some of the behaviour that is to be found online. The government is currently working towards a White Paper setting out proposals for legislation to tackle “a range of both legal and illegal harms, from cyberbullying to online child sexual exploitation”. What is to be done about harassment, bullying and other abusive behaviour that is such a significant contributor to the current furore?

Putting aside the debate about intermediary liability and obligations, we could ask whether we are making good enough use of the existing statute book to target perpetrators. The criminal law exists, but can be seen as a blunt instrument. It was for good reason that the Director of Public Prosecutions issued lengthy prosecutorial guidelines for social media offences.

Occasionally the idea of an ‘Internet ASBO’ has been floated. Three years ago a report of the All-Party Parliamentary Inquiry into Antisemitism recommended, adopting an analogy with sexual offences prevention orders, that the Crown Prosecution Service should undertake a “review to examine the applicability of prevention orders to hate crime offences and if appropriate, take steps to implement them.” 

A possible alternative, however, may lie elsewhere on the statute book. The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes the kind of online behaviour complained of.

Nothing in the legislation restricts an ASBI to offline activities. Indeed, over ten years ago The Daily Telegraph reported an 'internet ASBO' made under predecessor legislation against a 17-year-old who had been posting material on the social media platform Bebo, banning him from publishing material that was threatening or abusive and promoted criminal activity.

ASBIs raise difficult questions of how they should be framed and of proportionality, and there may be legitimate concerns about the broad terms in which anti-social behaviour is defined. Nevertheless the courts to which applications are made have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.

The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017) makes no mention of their use in relation to online behaviour.  That could perhaps usefully be revisited. Another possibility might be to explore extending the ability to apply for an ASBI beyond the authorities, for instance to some voluntary organisations. 

Whilst the debate about how to regulate internet activities and the role of intermediaries is not about to go away, we should not let that detract from the importance of focusing on remedies against the perpetrators themselves.

Right now the ECommerce Directive – or at any rate the parts that shield hosting intermediaries from liability for users’ content – is under siege. The guns are blazing from all directions: the Prime Minister’s speech in Davos, Culture Secretary Matt Hancock’s speech at the Oxford Media Convention on 12 March 2018 and the European Commission’s Recommendation on Tackling Illegal Content Online all take aim at the shield, or at its linked bar on imposing general monitoring obligations on conduits, caches and hosts. The proposed EU Copyright Directive is attacking from the flanks.

The ECommerce Directive is, of course, part of EU law. As such the UK could, depending on what form Brexit takes, diverge from it post-Brexit. The UK government has identified the Directive as a possible divergence area and Matt Hancock's Department for Digital, Culture, Media and Sport (DCMS) is looking at hosting liability.
The status quo

Against this background it is worth looking behind the polarised rhetoric that characterises this topic and, before we decide whether to take a wrecking ball to the Directive's liability provisions, take a moment to understand how they work. As so often with internet law, the devil revealed by the detail is a somewhat different beast from that portrayed in the sermons.

We can already sense something of that disparity. In her Davos speech Theresa May said:
“As governments, it is also right that we look at the legal liability that social media companies have for the content shared on their sites. The status quo is increasingly unsustainable as it becomes clear these platforms are no longer just passive hosts.”
If this was intended to question existing platform liability protections, it was a curious remark. Following the CJEU decisions in LVMH v Google France and L’Oreal v eBay, if a hosting platform treats user content non-neutrally it will not have liability protection for that content. By non-neutrally the CJEU means that the operator "plays an active role of such a kind as to give it knowledge of, or control over, those data".

So the status quo is that if a platform does not act neutrally as a passive host it is potentially exposed to legal liability.
By questioning the status quo did the Prime Minister mean to advocate greater protection for platforms that act non-neutrally than currently exists? In the febrile atmosphere that currently surrounds social media platforms that seems unlikely, but it could be the literal reading of her remarks. If not, is it possible that the government is taking aim at a phantom?
Matt Hancock's speech on 12 March added some detail:

"We are looking at the legal liability that social media companies have for the content shared on their sites. Because it’s a fact on the web that online platforms are no longer just passive hosts.
But this is not simply about applying publisher or broadcaster standards of liability to online platforms.
There are those who argue that every word on every platform should be the full legal responsibility of the platform. But then how could anyone ever let me post anything, even though I’m an extremely responsible adult?
This is new ground and we are exploring a range of ideas… including where we can tighten current rules to tackle illegal content online… and where platforms should still qualify for ‘host’ category protections."
Sectors, platforms and activities

The activities of platforms are often approached as if they constitute a homogeneous whole: the platform overall is either a passive host or it is not. Baroness Kidron, opening the House of Lords social media debate on 11 January 2018, went further, drawing an industry sector contrast between media companies and tech businesses:
“Amazon has set up a movie studio. Facebook has earmarked $1 billion to commission original content this year. YouTube has fully equipped studios in eight countries."
She went on:  

"The Twitter Moments strand exists to “organize and present compelling content”. Apple reviews every app submitted to its store, “based on a set of technical, content, and design criteria”. By any other frame of reference, this commissioning, editing and curating is for broadcasting or publishing.”
However the ECommerce Directive does not operate at a business sector level, nor at the level of a platform treated as a whole. It operates at the level of specific activities and items of content. If an online host starts to produce its own content like a media company, then it will not have the protection of the Directive for that activity. Nor will it have protection for user content that it selects and promotes so as to have control over it.  Conversely if a media or creative company starts to host user-generated content and treats it neutrally, it will have hosting protection for that activity.  

In this way the Directive adapts to changes in behaviour and operates across business models. It is technology-neutral and business sector-agnostic. A creative company that develops an online game or virtual world will have hosting protection for what users communicate to each other in-world and for what they make using the tools provided to them.
The line that the Directive draws is not between media and tech businesses, nor between simple and complex platforms, but at the fine-grained level of individual items of content. The question is always whether the host has intervened at the level of a particular item of content to the extent that, in the words of one academic,[1] it might be understood to be its own. If it has, then the platform will not have hosting protection for that item of content. It will still have protection for other items of user-generated content in relation to which it has remained neutral.
This analysis can be illustrated by an app such as one that an MP might provide for the use of constituents. Videos made by the MP would be his or her own content, not protected by the hosting provisions. If the app allows constituents to post comments to a forum, those would attract hosting protection. If the MP selected and promoted a comment as Constituent Comment of the Day, he or she would have intervened sufficiently to lose hosting protection for that comment.
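To make the item-level analysis concrete, here is a minimal sketch in Python of the MP app example. It is an illustration only, not a statement of the legal test: the categories, names and rules are invented for the purpose.

```python
# Toy model of the item-by-item analysis described above. The rules are
# simplifications invented for illustration, not the legal test.

from dataclasses import dataclass

@dataclass
class ContentItem:
    description: str
    author: str               # "operator" (the MP) or "user" (a constituent)
    operator_promoted: bool   # e.g. selected as Constituent Comment of the Day

def hosting_protection_available(item: ContentItem) -> bool:
    """Protection is assessed per item of content, not per platform."""
    if item.author == "operator":
        return False   # the MP's own videos: own content, no hosting protection
    if item.operator_promoted:
        return False   # selected and promoted comment: non-neutral intervention
    return True        # neutrally hosted constituent comments: protected

items = [
    ContentItem("MP's video", "operator", False),
    ContentItem("constituent forum comment", "user", False),
    ContentItem("Comment of the Day", "user", True),
]
for item in items:
    print(f"{item.description}: protected={hosting_protection_available(item)}")
```

The point the sketch captures is that losing protection for one promoted comment leaves protection intact for every other item the platform hosts neutrally.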

This activity-based drawing of the line is not an accident. It was the declared intention of the promoters of the Directive. The European Commission said in its Proposal for the Directive back in 1998:
"The distinction as regards liability is not based on different categories of operators but on the specific types of activities undertaken by operators. The fact that a provider qualifies for an exemption from liability as regards a particular act does not provide him with an exemption for all his other activities." 
Courts in Ireland (Mulvaney v Betfair), the UK (Kaschke v Gray, England and Wales Cricket Board v Tixdaq) and France (TF1 v Dailymotion) have reached similar conclusions (albeit in Tixdaq only a provisional conclusion). Most authoritatively, the CJEU in L'Oreal v eBay states that a host that has acted non-neutrally in relation to certain data cannot rely on the hosting protection in the case of those data (judgment, para [116]).

The report of the Committee on Standards in Public Life on "Intimidation in Public Life" also discussed hosting liability.  It said:
“Parliament should reconsider the balance of liability for social media content. This does not mean that the social media companies should be considered fully to be the publishers of the content on their sites. Nor should they be merely platforms, as social media companies use algorithms that analyse and select content on a number of unknown and commercially confidential factors.”
Analysing and selecting user content so as to give the operator control over the selected content would exclude that content from hosting protection under the ECommerce Directive. The Committee's suggestion that such activities should have a degree of protection short of full primary publisher liability would seem to involve increasing, not decreasing, existing liability protection. That is the opposite of what, earlier in the Report, the Committee seemed to envisage would be required: “The government should seek to legislate to shift the balance of liability for illegal content to the social media companies away from them being passive ‘platforms’ for illegal content.”

Simple and complex platforms
The question of whether a hosting platform has behaved non-neutrally in relation to any particular content is also unrelated to the simplicity or complexity of the platform. The Directive has been applied to vanilla web hosting and structured, indexed platforms alike.  That is consistent with the contextual background to the Directive, which included court decisions on bulletin boards (in some ways the forerunners of today’s social media sites) and the Swedish Bulletin Boards Act 1998.

The fact that the ECD encompasses simple and complex platforms alike leads to a final point: the perhaps underappreciated variety of activities that benefit from hosting protection. They include, as we have seen, online games and virtual worlds. They would include collaborative software development environments such as GitHub. Cloud-based word processor applications, any kind of app with a user-generated content element, and website discussion forums would all be within scope. By focusing on activities defined in a technology-neutral way the Directive has transcended and adapted to many different evolving industries and kinds of business.
The voluntary sector

Nor should we forget the voluntary world. Community discussion forums are (subject to one possible reservation) protected by the hosting shield.  The reservation is that the ECD covers services of a kind ‘normally provided for remuneration’. The reason for this is that the ECD was an EU internal market Directive, based on the Services title of the TFEU. As such it had to be restricted to services with an economic element. 
In line with EU law on the topic the courts have interpreted this requirement generously. Nevertheless there remains a nagging doubt about the applicability of the protection to purely voluntary activities.  The government could do worse than consider removing the "normally provided for remuneration" requirement so that the Mumsnets, the sports fan forums, the community forums of every kind can clearly be brought within the hosting protection.



[1] C. Angelopoulos, 'On Online Platforms and the Commission’s New Proposal for a Directive on Copyright in the Digital Single Market' (January 2017).

The High Court gave judgment this morning on Liberty’s challenge to the mandatory communications data retention provisions of the Investigatory Powers Act (IPAct). 

The big questions in the Liberty case were:
  • What does the government have to do to make the IPAct comply with EU law following the Tele2/Watson decision of the CJEU?
  • Has the government done enough in its proposed amendments to the IPAct, designed to address two admitted grounds of non-compliance with EU law?
  • When does it have to make changes?


In brief, the court has made a finding of non-compliance with EU law limited to the two grounds admitted by the government.  The court declared that Part 4 of the Investigatory Powers Act 2016 is incompatible with fundamental rights in EU law in that in the area of criminal justice:
(1) access to retained data is not limited to the purpose of combating “serious crime”; and
(2) access to retained data is not subject to prior review by a court or an independent administrative body.

As to timing to make changes, Liberty argued for no later than 31 July 2018 and the government for no earlier than 1 April 2019. The court decided that 1 November 2018 would be a reasonable time in which to amend the legal framework (albeit with a suggestion that practical implementation might take longer). In the meantime the existing IPAct data retention regime remains in effect, although lacking the two limitations and safeguards that have led to the admitted non-compliance with EU law.

The court observed, having noted that the question of appropriate remedy took the court into ‘deep constitutional waters’:
“… we are not prepared to contemplate the grant of any remedy which would have the effect, whether expressly or implicitly, of causing chaos and which would damage the public interest.
Nor do we consider that any coercive remedy is either necessary or appropriate. This is particularly so in a delicate constitutional context, where what is under challenge is primary legislation and where the Government proposes to introduce amending legislation which, although it will be in the form of secondary legislation rather than primary, will be placed before Parliament for the affirmative resolution procedure to be adopted.
On the other hand it would not be just or appropriate for the Court simply to give the Executive a carte blanche to take as long as it likes in order to secure compliance with EU law. The continuing incompatibility with EU law is something which needs to be remedied within a reasonable time. As long ago as July 2017 the Defendants conceded that the existing Act is incompatible with EU law in two respects.”

Turning to the main remaining grounds relied upon by Liberty:

1. Perhaps of greatest significance, the court rejected Liberty’s argument that the question of whether the legislation fell foul of the Tele2/Watson prohibition on general and indiscriminate retention of communications data should be referred to the CJEU. It noted a number of differences from the Swedish legislation considered in Tele2/Watson and concluded:

“In the light of this analysis of the structure and content of Part 4 of the 2016 Act, we do not think it could possibly be said that the legislation requires, or even permits, a general and indiscriminate retention of communications data. The legislation requires a range of factors to be taken into account and imposes controls to ensure that a decision to serve a retention notice satisfies (inter alia) the tests of necessity in relation to one of the statutory purposes, proportionality and public law principles.” The court declined to refer the point to the CJEU.

2. The question of whether national security is within the scope of the CJEU Watson decision would be stayed pending the CJEU’s decision in the reference from the Investigatory Powers Tribunal in the Privacy International case. The court declined to make a reference to the CJEU in these proceedings.

3. Liberty argued that a ‘seriousness’ threshold should apply to all other objectives permitted under Article 15(1) of the EU ePrivacy Directive, not just to crime. The court held that other than for criminal offences the fact that national legislation does not impose a “seriousness” threshold on a permissible objective for requiring the retention of data (or access thereto) does not render that legislation incompatible with EU law and that necessity and proportionality were adequate safeguards. It declined to refer the point to the CJEU.

4. A highly technical point about whether the CJEU Watson decision applied to ‘entity data’ as defined in the IPAct, or only to ‘events data’, was resolved in favour of the government.

5. Liberty argued that retention purposes concerned with protecting public health, tax matters, and regulation of financial services/markets and financial stability should be declared incompatible. The court declined to grant a remedy since the government intends to remove those purposes anyway.

6. As to whether mandatorily retained data has to be held within the EU, the court stayed that part of the claim pending the CJEU’s decision in the IPT reference in the Privacy International case.

7. The part of the claim regarding notification of those whose data has been accessed was also stayed pending the CJEU’s decision in the IPT reference in the Privacy International case.

By way of background to the decision, the IPAct was the government’s replacement for DRIPA, the legislation that notoriously was rushed through Parliament in 4 days in July 2014 following the CJEU’s nullification of the EU Data Retention Directive in Digital Rights Ireland.

DRIPA expired on 31 December 2016. But even as the replacement IPAct provisions were being brought into force it was obvious that they would have to be amended to comply with EU law, following the CJEU decision in Tele2/Watson issued on 21 December 2016.

A year then passed before the government published a consultation on proposals to amend the IPAct, admitting that the IPAct was non-compliant with EU law on the two grounds of lack of limitation to serious crime and lack of independent prior review of access requests. 

That consultation closed on 18 January 2018. Today’s judgment noted the government’s confirmation that legislation is due to be considered by Parliament before the summer recess in July 2018.

In the consultation the government set out various proposals designed to comply with Tele2/Watson:

-         A new body (the Office of Communications Data Authorisations) would be set up to give prior independent approval of communications data requests. These have been running at over 500,000 a year.

-         Crime-related purposes for retaining or acquiring events data would be restricted to serious crime, albeit broadly defined.

-         Removal of retention and acquisition powers for public health, tax collection and regulation of financial markets or financial stability.

The government's proposals were underpinned by some key interpretations of Tele2/Watson. The government contended in the consultation that:

-         Tele2/Watson does not apply to national security, so that requests by MI5, MI6 and GCHQ would still be authorised internally. That remains an outstanding issue pending the Privacy International reference to the CJEU from the IPT.

-         The current notice-based data retention regime is not 'general and indiscriminate'. It considered that Tele2/Watson's requirement for objective targeted retention criteria could be met by requiring the Secretary of State to consider, when giving a retention notice to a telecommunications operator, factors such as whether restriction by geography or by excluding a group of customers is appropriate. Today’s Liberty decision has found in the government’s favour on that point. Exclusion of national security apart, this is probably the most fundamental point of disagreement between the government and its critics.

-         Tele2/Watson applies to traffic data but not subscriber data (events data but not entity data, in the language of the Act). Today’s decision upholds the government’s position on that.

-         Tele2/Watson does not preclude access by the authorities to mandatorily retained data for some non-crime related purposes (such as public safety or preventing death, injury, or damage to someone's mental health). That was not an issue in today’s judgment.

As to notification, the government considered that the existing possibilities under the Act are sufficient. It also considered that Tele2/Watson did not intend to preclude transfers of mandatorily retained data outside the EU where an adequate level of protection exists. These remain outstanding issues pending the Privacy International reference to the CJEU from the IPT.



The fallout from the Count Dankula ‘Nazi pug’ video prosecution shows no sign of abating.  While many have condemned the conviction as an assault on freedom of speech, others are saying that the law does not go far enough.  They argue that the criminal law only catches these incidents after the event when the harm has already been done. How can we prevent the harm being done in the first place?

“It is like pollution”, said one commentator. “We apply the precautionary principle to environmental harm, and we should do the same to prevent the toxic effects of tasteless, offensive and unfunny jokes on the internet. Freedom of speech is paramount, but we must not let that get in the way of doing what is right for society.”

The internet has only exacerbated the problem, say government sources. “So-called jokes going viral on social media are a scourge of society. Social media platforms have the resources to weed this out. They must do more, but so must society. Of course we have no quarrel with occasional levity, but serious humour such as satire is too dangerous to be left to the unregulated private sector. We would like to see this addressed by a self-regulatory code of conduct, but we are ready to step in with legislation if necessary.”

One professional comedian said: “This reaches a crisis point on 1 April each year, when tens of thousands of self-styled humourists try their hand at a bit of amateur prankstering. Who do they think they are fooling? An unthinking quip can have devastating consequences for the poor, the vulnerable, and for society at large. This is no joke. Controversial humour should be in the hands of properly qualified and trained responsible professionals.”

An academic added: “Humour is a public good. You only have to look at the standard of jokes on the internet to realise that the market is, predictably, failing to supply quality humour. We are in a race to the bottom. Since humour can also have significant negative externalities, the case for regulation is overwhelming.”

So there appears to be a growing consensus. Will we see a professional corps of licensed comedians?  Will amateur jokers find themselves in jail? Has this blogger succeeded only in proving that parody should be left to those who know what they are doing? Only time will tell.

A preview of some of the UK internet legal developments that we can expect in 2018. Any future EU legislation will be subject to Brexit considerations and may or may not apply in the UK.

EU copyright reform In 2016 the European Commission published proposals for

-         a Directive on Copyright in the Digital Single Market. As it navigates the EU legislative process the proposal continues to excite controversy, mainly over the proposed publishers’ ancillary right and the clash between Article 13 and the ECommerce Directive's intermediary liability provisions.  

-         a Regulation extending the country of origin provisions of the Satellite and Cable Broadcasting Directive to broadcasters' ancillary online transmissions. Most of the Commission’s proposal was recently rejected by the European Parliament.

-         legislation to mandate a degree of online content portability within the EU. The Regulation on cross-border portability of online content services in the internal market was adopted on 14 June 2017 and will apply from 1 April 2018.
EU online business As part of its Digital Single Market proposals the European Commission published a proposal for a Regulation on "Geo-blocking and other forms of discrimination". It aims to prevent online retailers from discriminating, technically or commercially, on the basis of nationality, residence or location of a customer. Political agreement was reached in November 2017 [and the Regulation was adopted on 28 February 2018. The Regulation will apply from 3 December 2018]. 

Telecoms privacy The proposed EU ePrivacy Regulation continues to make a choppy voyage through the EU legislative process.

Intermediary liability On 28 September 2017 the European Commission published a Communication on Tackling Illegal Content Online.  This is a set of nominally voluntary guidelines under which online platforms would adopt institutionalised notice and takedown/staydown procedures and proactive content filtering processes, based in part on a system of 'trusted flaggers'. The scheme would cover every kind of illegality from terrorist content, through copyright to defamation. The Commission aims to determine by May 2018 whether additional legislative measures are needed. [The Commission followed up on 1 March 2018 with a Recommendation on Measures to Effectively Tackle Illegal Content Online.]
Politicians have increasingly questioned the continued appropriateness of intermediary liability protections under the Electronic Commerce Directive. The UK Committee on Standards in Public Life has suggested that Brexit presents an opportunity to depart from the Directive. The government has published its Internet Safety Strategy Green Paper. More to come in 2018.

The appeal to the UK Supreme Court in Cartier, on who should bear the cost of complying with site blocking injunctions, [was] heard [at the end of February] 2018.
TV-like regulation of the internet The review of the EU Audiovisual Media Services Directive continues. The Commission proposal adopted on 25 May 2016 would further extend the Directive's applicability to on-demand providers and internet platforms.

Pending CJEU copyright cases More copyright references are pending in the EU Court of Justice. Issues under consideration include whether the EU Charter of Fundamental Rights can be relied upon to justify exceptions or limitations beyond those in the Copyright Directive; and whether a link to a PDF amounts to publication for the purposes of the quotation exception (Spiegel Online GmbH v Volker Beck, C-516/17). Another case on the making available right (Renckhoff, C-161/17) is pending. It is also reported that the Dutch Tom Kabinet case on secondhand e-book trading has been referred to the CJEU.
ECommerce Directive Two cases involving Uber are before the CJEU, addressing in different contexts whether Uber’s service is an information society service within the Electronic Commerce Directive. Advocate General Szpunar gave an Opinion in Asociación Profesional Élite Taxi v Uber Systems Spain, C-434/15 on 11 May 2017 and in Uber France SAS, Case C‑320/16 on 4 July 2017. [The CJEU gave judgment in Uber Spain on 20 December 2017, holding that the service was a transport service and not an information society service.] [The Austrian Supreme Court has referred to the CJEU questions on whether a hosting intermediary can be required to prevent access to similar content and on extraterritoriality (C-18/18 - Glawischnig-Piesczek).]
Online pornography The Digital Economy Act 2017 grants powers to a regulator (recently formally proposed to be the British Board of Film Classification) to determine age control mechanisms for internet sites that make ‘R18’ pornography available; and to direct ISPs to block such sites that either do not comply with age verification or contain material that would not be granted an R18 certificate. The DCMS has published documents including draft guidance to the Age Verification Regulator.

Cross-border liability and jurisdiction Ilsjan (Case C-194/16) is another CJEU reference on the Article 7(2) (ex-Art 5(3)) tort jurisdiction provisions of the EU Jurisdiction Regulation. The case concerns a claim [by a legal person] for correction and removal of harmful comments. It asks questions around mere accessibility as a threshold for jurisdiction (as found in Pez Hejduk) and the eDate/Martinez ‘centre of interests’ criterion for recovery in respect of the entire harm suffered throughout the EU. The AG Opinion in Ilsjan was delivered on 13 July 2017. [The CJEU gave judgment on 17 October 2017. It held that a claim in relation to rectification, removal and the whole of the damage could be brought in the Member State in which the legal person had its centre of interests. Since an action for rectification and removal is indivisible it cannot be brought in each Member State in which the information is or was accessible.]
The French CNIL/Google case on search engine de-indexing has raised significant issues on extraterritoriality, including whether Google can be required to de-index on a global basis. The Conseil d'Etat has referred various questions about this to the CJEU. [See also C-18/18 Glawischnig-Piesczek.]

Online state surveillance The UK’s Investigatory Powers Act 2016 (IP Act), partially implemented in 2016 and 2017, is expected to come fully into force in 2018. However the government has acknowledged that the mandatory communications data retention provisions of the Act are unlawful in the light of the Watson/Tele2 decision of the CJEU. It has launched a consultation on proposed amendments to the Act, including a new Office for Communications Data Authorisation to approve requests for communications data. Meanwhile a reference to the CJEU from the Investigatory Powers Tribunal questions whether the Watson decision applies to national security, and if so how.
The IP Act (in particular the bulk powers provisions) may also be indirectly affected by cases in the CJEU (challenges to the EU-US Privacy Shield), in the European Court of Human Rights (various NGOs challenging the existing RIPA bulk interception regime) and by a judicial review by Privacy International of an Investigatory Powers Tribunal decision on equipment interference powers. However in that case the Court of Appeal has held that the Tribunal decision is not susceptible of judicial review. One of the CJEU challenges to the EU-US Privacy Shield was held by the General Court on 22 November 2017 to be inadmissible for lack of standing.
Liberty's challenge by way of judicial review to the IP Act bulk powers and data retention powers is pending. [A hearing in relation to data retention powers took place on 27 and 28 February 2018.]
Compliance of the UK’s surveillance laws with EU Charter fundamental rights will be a factor in any data protection adequacy decision that is sought once the UK becomes a non-EU third country post-Brexit.

[Here is an updated mindmap of challenges to the UK surveillance regime.]



[Update 18 Dec. Replaced 'EU law' in last para with 'EU Charter fundamental rights'.] [Updated 5 March 2018, including addition of mindmap; and 6 March 2018 to add CJEU referral in C-18/18 Glawischnig-Piesczek.]
[Updated 28 March 2018 to correct starting date of Portability Regulation to reflect corrigendum to the Regulation.]

I am typing this on the transatlantic flight to Canada, destination the Global Internet and Jurisdiction Conference in Ottawa (#OttawaGIJC).  This seems like a good time to set down some propositions that, even if they do not command universal agreement, I believe are central to the debate about internet and jurisdiction.

First, something about what we mean by jurisdiction. We shouldn’t get too hung up on this, because jurisdiction is mostly used just as shorthand for cross border legal liability. But lawyers use the single word jurisdiction to mean several different things and, if we are not careful, we can end up talking at cross-purposes.

In a nutshell, there is:
  • Prescriptive jurisdiction. This is simply the assertion, however theoretical, as to the reach of a local law. If the UK had passed a law making it illegal for UK citizens to read the banned book Spycatcher when visiting the USA, that would be an assertion of prescriptive jurisdiction. Another example is when a court makes an order requiring something to be done (or not done) in a foreign country.
  • Adjudicative jurisdiction. This is when a court determines that it has the ability (or not) to hear and decide a case. It also describes a court determining which country’s law it should apply to the case before it. In civil litigation between private parties the court may end up applying its own law or a foreign law. In a criminal prosecution the court will either apply its own country’s law or refuse jurisdiction.
  • Enforcement jurisdiction. This engages territoriality issues most strongly, since taking enforcement measures in another country is rather like sending troops across the border – an invasion of territorial sovereignty unless done with the consent of the other state. 
  • Investigatory jurisdiction. Sometimes regarded as a subset of enforcement jurisdiction, investigatory jurisdiction has become such an important topic for the internet (think of the Microsoft Warrant case pending before the US Supreme Court, or the Yahoo case that went up to the Belgian Supreme Court and back three times) that it deserves its place as a separate category. The issue here is when and how it is legitimate for law enforcement or intelligence agencies, with or without court orders, to demand that communications (or their associated data) held overseas be handed over. As with enforcement jurisdiction this involves direct assertion of state power within the territory of another country – either where the data is held, where a foreign company is established, or both.

It gets more complicated than that. International law has developed principles around when it is legitimate for a state to act extraterritorially.  They form the background to a discussion of how jurisdiction should apply to the internet. But we should not necessarily be hamstrung by the existing order of things when debating the question of what rules should look like in the internet age.

Yes, the internet is different. Time was when people would say that the internet was just another new medium. We have coped with cross-border issues as a result of international communications and satellite broadcasting, so why do we need new rules for the internet? 

The internet is not wholly different, but some of its features are sufficiently different to demand reconsideration of the old rules. 

Individuals by the million are authors and publishers as well as readers. Their content is instantly accessible worldwide. Conversely, the effect of a national law or court injunction is amplified by the internet. An order can have effects in other countries that the same instrument would not have in the offline world. Cloud computing means that data is volatile. It may not stay in the same country from one second to the next, and fragments of one item of content may be split between data centres in different countries. All these things render the internet not only different, but materially so.

It’s not about whose law is better. If you arrive at a jurisdiction conference determined to demonstrate the superiority of your own local, national or regional law over that of every other country, then you are at the wrong conference. Jurisdiction rules are about resolving friction between different legal systems in as agnostic a way as possible, not about ensuring that the best (in someone’s view) law wins.

It’s not about global harmonisation. Perhaps you harbour an ambition of achieving global consensus about the substantive laws that should apply to the internet. That may be a noble goal (albeit there is also merit in diversity of laws) but it is a different project.  Jurisdiction rules presuppose different laws in different countries, albeit admittedly it is easier to reach agreement on jurisdictional rules when the underlying laws are more closely aligned. Nevertheless, while a jurisdiction project can aim to create international norms at the level of metalaw (rules about rules), creating uniform substantive law is not its goal.

Comity is not enough. Resolving jurisdictional frictions is often seen through the prism of comity. Comity has two aspects: the need as far as possible to recognise a legitimate assertion of state power by foreign countries, even if that may have some cross-border spillover effects; and conversely, the need to avoid treading on the toes of other foreign states when making orders that may have effects in other countries (but in both cases bearing in mind that spillover effects are likely to be greater on the internet than offline).

However, comity is a state-centric concept. It treats states as proxies for the rights and interests of their citizens. Extraterritorial legislation and court orders not only engage the sensitivities of other states, but directly affect individuals in other countries, engaging their universally recognised fundamental rights of, for instance, privacy or freedom of expression. 

Those individuals’ interests stand to be considered separately from the question of whether the sensitivities of another state are likely to be engaged by a particular legal instrument. Failure to engage in separate consideration can lead to the kind of reasoning adopted in the Equustek case, where the Supreme Court of Canada concluded that since protection of intellectual property was the kind of interest that another state would be expected to protect, its sensitivities would not be engaged and there was no issue of comity.

The SCC did not go on to ask whether, and if so how, the freedom of expression interests (the right to receive and seek out information) of citizens of other countries might be engaged by the particular assertion of rights in that case. That is of particular relevance in intellectual property cases since intellectual property rights are themselves generally territorial, so that a person may own rights only in some countries and not others; or may own rights of different scope in different countries.

We need brakes as well as accelerators. The jurisdictional problems of the internet manifest themselves in both underreach and overreach. There are situations where arrangements between states are no longer providing adequate means to obtain evidence to support criminal investigations. We can no longer assume that the evidence relating to a domestic crime will be held domestically. It could as easily be in a data centre abroad.  That would suggest a need to improve procedures for obtaining cross-border evidence.

Conversely, we have situations in which domestic legislatures, agencies and courts are at risk of overreaching in the cause of giving maximum effect to their local laws. That can result in the de facto imposition of those laws in countries with different laws. The concern here is the need for jurisdictional self-restraint.

The challenge is to forge rules that enable cross-border reach when appropriate, yet prevent the exercise of jurisdiction when not appropriate. The same kinds of rules are unlikely to achieve both. An approach that enables a court to weigh up and balance a series of factors in deciding whether or not to make an extraterritorial order may have desirable flexibility for the first case. But where risk of jurisdictional overreach is concerned a multi-factorial approach may be more enabling than restraining. Hard stops are likely to be required.

Peaceful co-existence requires compromise. The premise of jurisdiction rules is that nation states have different laws.  The objective where the internet is concerned should be to achieve peaceful co-existence between conflicting national regimes while protecting to the greatest possible extent universal values such as freedom of expression and privacy.

Peaceful co-existence cannot be achieved without compromise. That means taking a broader view than simply a laser-like focus on securing the effectiveness of one country or region’s most cherished laws. It may mean accepting that your country’s citizens can, if they try hard enough, find somewhere on the internet content that complies with another country’s laws and not your own.

(For more on this final topic see Cyberborders and the Right to Travel in Cyberspace, my chapter in The Net and the Nation State (2017 CUP, ed Uta Kohl).)




As full implementation of the Investigatory Powers Act (IPAct) draws closer we can usefully ponder some of its more ticklish points of interpretation. These will serve to delineate the IPAct's powers, crystallise the legislation's procedural requirements and determine who can be compelled to do what.

Unlike its predecessor, the Regulation of Investigatory Powers Act 2000 (RIPA), the IPAct comes with expectations of openness and transparency.  The Act itself exposes a panoply of powers to the public gaze.  But despite its 300 pages of detail, decisions will still have to be made about the meaning of some provisions and how they are to be applied.

Previously such legal interpretations have tended to come to light, if at all, as a consequence of the Snowden revelations or during litigation brought by civil liberties organisations. Examples include the meaning of ‘external’ communications under RIPA, the legal basis for thematic interception warrants under RIPA, and the use of S.94 Telecommunications Act 1984 powers to acquire bulk communications data from telecommunications companies.

In the field of surveillance, hidden legal interpretations influencing how powers are wielded are in substance as much part of the law as the statute that grants the powers.  This can be problematic when a cornerstone of the rule of law is that laws should be publicly promulgated. People should be able to know in advance the kind of circumstances in which the powers are liable to be used and understand the manner of their exercise. According to jurisprudential taste, secret law is either bad law or not law at all.

The new Investigatory Powers Commissioner has an opportunity to bring to public view legal interpretations that will mould the use of the IPAct's surveillance powers. 

Most IPAct powers require approval by a Judicial Commissioner or, as now proposed for communications data acquisition, a new Office for Communications Data Authorisations. The Judicial Commissioner or other reviewer may have to form a view about some provision of the Act when approving a warrant or notice.  Some interpretations may have significance that goes wider than a single approval.

Under the IPAct there is scope for an adopted interpretation to be published if that can be done without breaching the Commissioner's responsibilities not to act contrary to the public interest, nor prejudice national security or the prevention or detection of serious crime or the economic well-being of the UK.

What interpretations of the IPAct will have to be considered? The most heavily debated has been the level of scrutiny that Judicial Commissioners are required to apply to Ministerial decisions to issue warrants and technical capability notices. Gratefully donning my techlaw hat, I shall leave that problem to the public and administrative law experts who have been mulling over it since the draft Bill was published in November 2015.

Approval decisions will typically involve assessments of necessity and proportionality. These will by their nature be fact-sensitive and so more difficult to make public without revealing operational matters that ought to remain secret. Nevertheless some general approaches may be capable of being made public.

Among the most likely candidates for publication will be points of statutory construction: aspects of the IPAct's language that require a view to be taken of their correct interpretation.  

I have drawn up a list of provisions that present interpretative challenges of varying degrees of significance. Some of the points are old hobbyhorses, dating back to my comments on the original draft Bill. Others are new. No doubt more will emerge as the IPAct is put into practice.

BULK INTERCEPTION

Selection for examination

What is the issue?

Under a bulk interception warrant what kinds of activities count as selection for examination of intercepted content or secondary data? While the question can be simply put, the answer is not so easy.

Why is it significant?

Selection for examination underpins three provisions of the IPAct.

First, a separate targeted examination warrant must be obtained before selecting intercepted content for examination by use of criteria (such as an e-mail address) referable to an individual known to be in the British Islands, if the purpose is to identify the content of communications sent by or intended for that individual. (S.152(4)) (However, a targeted examination warrant is not required for secondary data. As to what is meant by secondary data, see below.)

Second, it is an offence (subject to applicable knowledge and intent thresholds) to select intercepted content or secondary data for examination in breach of the Act's safeguards. (S.155)

Third, a bulk interception warrant authorising selection for examination must describe the manner in which intercepted content or secondary data will be selected for examination and the conduct by which that activity will be secured (S.136(4)(c)).

The S.136(4)(c) requirement is new compared with the equivalent provisions of RIPA. Curiously, it is not referred to in the draft Interception Code of Practice.

It is important to know what activities amount to selection for examination.  This is a particular issue with automated processing.
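Before turning to interpretation, the first of those safeguards (S.152(4)) can be restated as simple logic. The sketch below is a paraphrase as an aid to reading, not the statutory text, and the parameter names are invented.

```python
# Paraphrase of the S.152(4) trigger for a targeted examination warrant,
# as described above. An aid to reading only, not a statement of the law.

def targeted_examination_warrant_needed(
    data_kind: str,                          # "content" or "secondary_data"
    criteria_referable_to_individual: bool,  # e.g. an email address
    individual_known_in_british_islands: bool,
    purpose_is_that_individuals_content: bool,
) -> bool:
    if data_kind != "content":
        return False  # no targeted examination warrant needed for secondary data
    return (
        criteria_referable_to_individual
        and individual_known_in_british_islands
        and purpose_is_that_individuals_content
    )

# Selecting content by the email address of someone known to be in the UK:
print(targeted_examination_warrant_needed("content", True, True, True))         # True
# The same selection applied only to secondary data:
print(targeted_examination_warrant_needed("secondary_data", True, True, True))  # False
```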

Possible interpretations?

Examination means being read, looked at or listened to (S.263). But what activities are caught by selection for examination? How close a nexus does there have to be between the selection and any subsequent examination? Does there have to be a specific intention to examine the selected item (for instance when an analyst makes a search request on a database)? Does selection for possible examination suffice? (It is perhaps of interest that David Anderson Q.C.'s Bulk Powers Review at para 2.17 discusses under the heading of ‘Selection for Examination’ the use of strong and weak selectors to select material for “possible examination” by analysts.)

The Draft Interception Code of Practice describes a sequence of steps from obtaining the data through to examination by an analyst. It uses the term 'selection for examination' in ways that could refer to both selection by the analyst and intermediate processing steps:
"In practice, several different processing systems may be used to effect the interception and/or the obtaining of secondary data, and the selection for examination of the data so obtained. 
These processing systems process data from the communications links or signals that the intercepting authority has chosen to intercept. A degree of filtering is then applied to the traffic on those links and signals, designed to select types of communications of potential intelligence value whilst discarding those least likely to be of intelligence value. As a result of this filtering, which will vary between processing systems, a significant proportion of the communications on these links and signals will be automatically discarded. Further complex searches may then take place to draw out further communications most likely to be of greatest intelligence value, which relate to the agency’s statutory functions. These communications may then be selected for examination for one or more of the operational purposes specified in the warrant where the conditions of necessity and proportionality are met. Only items which have not been filtered out can potentially be selected for examination by authorised persons." (emphasis added)
If selection for examination encompasses only the action of an analyst querying a database then S.136(4)(c) would still require the warrant to describe the manner in which an analyst could select content or secondary data for examination. That could include describing how analysts can go about searching databases. It might also cover the operation of Query Focused Datasets (databases in which the data is organised so as to optimise particular kinds of queries by analysts).

But does selection for examination exclude all the automated processing that takes place between bulk capture and storage? There appears to be no reason in principle why automated selection should be excluded, if the selection is 'for examination'.  

Details of the kinds of automated processing applied between capture and storage are mainly kept secret.  However some clues beyond the draft Code of Practice can be obtained from the Intelligence and Security Committee Report of March 2015 and from the Bulk Powers Review.  The Bulk Powers Review describes a process that uses ‘strong selectors’ (telephone number or email address) to select items in near real time as they are intercepted:

“As the internet traffic flows along those chosen bearers, the system compares the communications against a list of strong selectors in near real-time. Any communications which match the selectors are automatically collected and all other communications are automatically discarded.”

Such selection against a list of e-mail addresses or telephone numbers of interest is not made for any purpose other than examination, or at least possible examination. But does it count as selection for examination if (as described in the Bulk Powers Review) a further triage process may be applied?

“Even where communications are known to relate to specific targets, GCHQ does not have the resources to examine them all. Analysts use their experience and judgement to decide which of the results returned by their queries are most likely to be of intelligence value and will examine only these.”

Weaker selectors may relate to subject-matter and be combined to create complex non-real time queries which determine what material is retained for possible examination after triage. Pattern matching algorithms could perhaps be used to flag up persons exhibiting suspicious behavioural traits as candidates for further investigation.
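What a strong-selector process of the kind quoted above might look like can be sketched in a few lines. This is purely illustrative: the data structures and field names are invented and imply nothing about how any real system is built.

```python
# Minimal sketch of strong-selector matching as described in the Bulk
# Powers Review: communications matching a selector are collected in
# near real time; everything else is discarded. All names are invented.

STRONG_SELECTORS = {"target@example.com", "+441234567890"}

def filter_intercepted(stream):
    """Yield only communications matching a strong selector."""
    for comm in stream:
        identifiers = {comm.get("sender"), comm.get("recipient")}
        if identifiers & STRONG_SELECTORS:
            yield comm  # retained for possible examination after triage
        # non-matching communications are not retained

sample = [
    {"sender": "alice@example.org", "recipient": "bob@example.org"},
    {"sender": "target@example.com", "recipient": "carol@example.org"},
]
print(list(filter_intercepted(sample)))  # only the second item survives
```

On one view every pass through a filter of this kind is 'selection', made for no purpose other than possible examination; on another view selection for examination happens only when an analyst later queries what was retained. That is exactly the interpretive question posed above.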

The question of which, if any, of these processes amount to selection for examination is of considerable significance to the operation of the processes mandated by the IPAct.

Secondary data

What is the issue?

'Secondary data' under the IP Act has been extended, compared with RIPA's equivalent ‘related communications data’, so as to include some elements of the content of a communication. However the definition is difficult to apply and in some respects verges on the metaphysical.  

Why is it significant?

Secondary data, despite its name, is perhaps the most important category of data within the IP Act. It is, roughly speaking, metadata acquired under a targeted, thematic or bulk interception warrant. As such it is not subject to all the usage restrictions that apply to intercepted content.

In particular, unlike for content, there is no requirement to obtain a targeted examination warrant in order to select metadata for examination by use of a selector (such as an e-mail address) referable to someone known to be in the British Islands.

The broader the scope of secondary data, therefore, the more data can be accessed without a targeted examination warrant and the more of what would normally be regarded as content will be included.

Possible interpretations?

Under S.137 of the IPAct secondary data includes:

“identifying data which -

(a) is comprised in, included as part of, attached to or logically associated with the communication (whether by the sender or otherwise),
(b) is capable of being logically separated from the remainder of the communication, and
(c) if it were so separated, would not reveal anything of what might reasonably be considered to be the meaning (if any) of the communication, disregarding any meaning arising from the fact of the communication or from any data relating to the transmission of the communication.”

Identifying data is data which may be used to identify, or assist in identifying, any person, apparatus, system or service, any event, or the location of any person, event or thing.

Identifying data is thus itself broadly defined. It includes offline as well as online events, such as the date or location data embedded in a photograph. However the real challenge is in understanding limb (c). How does one evaluate the ‘meaning’ of the communication for these purposes? If a name, or a location, or an e-mail address, or a time is extracted from the communication, does that on its own reveal anything of its meaning? Is each extracted item to be considered on its own, or are the extracted items of data to be considered together? How is the ‘meaning’ of a machine to machine communication to be evaluated? Is the test what the communication might mean to a computer or to a human being?
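The flavour of the problem can be seen from even a trivial extraction exercise, sketched below with an invented message and naive patterns. Whether the extracted items ‘reveal anything of the meaning’ is exactly the question the statute leaves open.

```python
# A sketch of the s.137 'logical separation' exercise: pulling identifying
# data out of an invented message. The message and patterns are illustrative.
import re

message = "From: alice@example.com\nSent: 2018-03-01 09:30\nSee you at the usual place."

emails = re.findall(r"[\w.+-]+@[\w.-]+", message)               # ['alice@example.com']
times = re.findall(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}", message)   # ['2018-03-01 09:30']

# Each item taken alone arguably reveals nothing of the message's 'meaning';
# several items taken together may well do -- which is the interpretive problem.
```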

A list of the specific types of data that fall on either side of the line can be a useful aid to understanding abstract data-related definitions such as this. Among the Snowden documents was a GCHQ internal reference list distinguishing between content and related communications data under RIPA.

TECHNICAL CAPABILITY NOTICES

Applied by or on behalf of

What is the issue?

A technical capability notice (TCN) can require a telecommunications operator to install a specified capability to assist with any interception, equipment interference or bulk acquisition warrant, or communications data acquisition notice, that it might receive in the future.

In particular a TCN can require a telecommunications operator to have the capability to remove electronic protection applied by or on behalf of that operator to any communications or data. This includes encryption. But when is encryption applied "by or on behalf of" that operator?

Why is it significant?

During the passage of the Bill through Parliament there was considerable debate about whether a TCN could be used to stop a telecommunications operator providing end to end encryption facilities to its users. The question was never fully resolved. One issue that would arise, if an attempt were made to use TCNs in that way, is whether the E2E encryption was applied by or on behalf of the operator. If not, then there would be no jurisdiction to issue a TCN in relation to that encryption facility.

Possible interpretations?

In principle, encryption could be applied by the operator, by the user, or by both. An operator would no doubt argue that under the E2E model it is providing the user only with the facility to apply encryption and that any encryption is applied by the user, not the operator.  The strength of that argument could vary depending on the precise technical arrangements in a particular case.
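The shape of the argument can be illustrated with a sketch of the E2E model, here using the PyNaCl library purely for brevity (the names and the service are hypothetical). The operator's server relays only ciphertext and never holds a private key; that is the factual footing for the contention that the encryption is applied by the user.

```python
# A sketch of end-to-end encryption: keys live on the users' devices and the
# operator only relays ciphertext. Illustrative only; requires PyNaCl.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # generated on Alice's device
bob_key = PrivateKey.generate()     # generated on Bob's device

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The operator stores and forwards ciphertext it cannot read.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```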

MANDATORY DATA RETENTION

Obtaining data by generation

What is the issue?

The IP Act empowers the Secretary of State, with the approval of a Judicial Commissioner, to give a communications data retention notice to a telecommunications operator. A notice can require the operator to retain specified communications data for up to 12 months.

A data retention notice may, in particular, include:

“requirements or restrictions in relation to the obtaining (whether by collection, generation or otherwise), generation or processing of (i) data for retention, or (ii) retained data.”

This provision makes clear that a requirement to retain data can include obtaining or generating data for retention. But what exactly does that mean? In particular, why does ‘obtaining’ data for retention include ‘generation’?

Why is it significant?

Mandatory communications data retention is one of the most controversial aspects of the IP Act. It is under challenge in the courts and, as a result of previous legal challenges, the government is already having to consult on amendments to the Act.

The powers to require data retention are broader in every respect than those in the predecessor legislation, the Data Retention and Investigatory Powers Act 2014. They can be used against private, not just public, telecommunications operators. They cover a far wider range of data. And they can require data be obtained and generated, not just retained.

So the width of these new powers is significant, especially as telecommunications operators are required not to disclose the existence of data retention notices to which they are subject.

Possible interpretations?

What does it mean to ‘obtain’ data by ‘generation’? It apparently means something different from just generating data for retention, since that is spelt out separately. The most far reaching interpretation would be if the notice could require the operator to require a third party to generate and hand over communications data to the operator. Could that be used to compel, say, a wi-fi operator to obtain and retain a user's identity details?

There was no suggestion during the Parliamentary debates that it could be used in that way, but then the curious drafting of this provision received no attention at all.

INTERNET CONNECTION RECORDS

‘Internet service’ and ‘internet communications service’

What is the issue?

The IPAct uses both ‘internet service’ and ‘internet communications service’ in its provisions that set out the limits on public authority access to internet connection records (ICRs). However it provides no definitions. Nor are these well understood industry or technical terms.

Why is it significant?

ICRs are logs of visited internet destinations such as websites. ICRs are particularly sensitive since they can be a rich source of information about someone’s lifestyle, health, politics, reading habits and so on. The IP Act therefore places more stringent limits, compared with ordinary communications data, on the authorities that may access ICRs and for what purposes.
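For illustration only, an ICR might be imagined as a record along these lines (the Act prescribes no format, and the field names are hypothetical):

```python
# A hypothetical sketch of what an internet connection record might contain:
# the destination service and time of use, but not the full page visited.
icr_entry = {
    "subscriber": "account-123",            # hypothetical field names throughout
    "source_ip": "203.0.113.7",
    "destination": "news.example.com",      # the service, not the page
    "first_seen": "2018-01-15T09:30:00Z",
}
# A full browsing history would add the path -- e.g.
# "https://news.example.com/politics/some-article" -- detail that shades
# into content and that an ICR is not supposed to capture.
```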

The Act stipulates several purposes for which, in various different circumstances, a public authority can access ICRs. They include:
  • to identify which person or apparatus is using an internet service where the service and time of use are already known. (S.62(3))
  • to identify which internet communications service is being used, and when and how it is being used, by a person or apparatus whose identity is already known. (S.62(4)(b)(i) and S.62(5)(c)(i))
  • to identify which internet service is being used, and when and how it is being used, by a person or apparatus whose identity is already known. (S.62(4)(b)(iii) and S.62(5)(c)(iii))

The second and third purposes apply identically to internet services and internet communications services. The first purpose applies only to internet services.

The purposes for which the powers can be used may therefore differ, depending on whether we are dealing with an internet service or an internet communications service. But as already noted, the Act does not tell us what either of these terms means.

Possible interpretations?

We can find clues to interpretation in the footnotes to the draft Communications Data Code of Practice.

Footnote 49 says that an ‘internet service’ is a service provided over the internet. On the face of it this would seem to exclude a service consisting of providing access to the internet. However the example illustrating S.62(3) in paragraph 9.6 of the draft Code suggests otherwise.

Footnote 49 goes on to say that 'internet service' includes ‘internet communication services, websites and applications.’ It also suggests examples of online travel booking or mapping services.

This explanation presents some problems.

First is the suggestion that internet communication services are a subset of internet services. If that is right then subsections 62(4)(b)(i) and 62(5)(c)(i) of the Act (above, internet communication services) are redundant, since the respective subsections (iii) already cover internet services in identical terms.

If ‘internet communication service’ is redundant, then the uncertainties with its definition may not signify since S.62 can simply be applied to any 'internet service'.

Elsewhere the draft Code suggests that the subsections (iii) relate to ‘other’ internet services (i.e. additional to internet communications services covered by subsections (i)). However that language does not appear in the Act.

Second is the suggestion that websites and applications are different from internet communications services.  On the face of it an internet communication service could mean just e-mail or a messaging service. But if so, what are we to make of ‘applications’ as something different, since many messaging services are app-based?

Last, to add to the confusion, footnote 48 of the Draft Code of Practice says that an internet communication service is a service which provides for the communication between one or more persons over the internet and ‘may include’ email services, instant messaging services, internet telephony services, social networking and web forums.

This goes wider than just e-mail and messaging services. Does it, for instance, include online games with the ability to chat to other players?  In context does ‘person’ refer only to a human being, or does it..

The European Commission recently published a Communication on Tackling Illegal Content Online.  Its subtitle, Towards an enhanced responsibility of online platforms, summarises the theme: persuading online intermediaries, specifically social media platforms, to take on the job of policing illegal content posted by their users.

The Commission wants the platforms to perform eight main functions (my selection and emphasis):

  1. Online platforms should be able to take swift decisions on action about illegal content without a court order or administrative decision, especially where notified by a law enforcement authority. (Communication, para 3.1)
  2. Platforms should prioritise removal in response to notices received from law enforcement bodies and other public or private sector 'trusted flaggers'. (Communication, para 4.1)
  3. Fully automated removal should be applied where the circumstances leave little doubt about the illegality of the material (such as where the removal is notified by law enforcement authorities). (Communication, para 4.1)
  4. In a limited number of cases platforms may remove content notified by trusted flaggers without verifying legality themselves. (Communication, para 3.2.1)
  5. Platforms should not limit themselves to reacting to notices but adopt effective proactive measures to detect and remove illegal content. (Communication, para 3.3.1)
  6. Platforms should take measures (such as account suspension or termination) which dissuade users from repeatedly uploading illegal content of the same nature. (Communication, para 5.1)
  7. Platforms are strongly encouraged to use fingerprinting tools to filter out content that has already been identified and assessed as illegal (see the sketch following this list). (Communication, para 5.2)
  8. Platforms should report to law enforcement authorities whenever they are made aware of or encounter evidence of criminal or other offences. (Communication, para 4.1)
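As point 7 above anticipates, fingerprinting in its simplest form is hash matching against a database of previously assessed content. The sketch below is deliberately minimal (the digest value is hypothetical); deployed systems typically use perceptual hashes that survive cropping and re-encoding.

```python
# A minimal sketch of fingerprint filtering by exact hash matching.
# The database entry is hypothetical; real systems use perceptual hashing.
import hashlib

KNOWN_ILLEGAL_HASHES = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def should_filter(upload: bytes) -> bool:
    """True if the upload exactly matches previously flagged content."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_ILLEGAL_HASHES
```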

It can be seen that the Communication does not stop with policing content. The Commission wants the platforms to act as detective, informant, arresting officer, prosecution, defence, judge, jury and prison warder: everything from sniffing out content and deciding whether it is illegal to locking the impugned material away from public view and making sure the cell door stays shut. When platforms aren’t doing the detective work themselves they are expected to remove users’ posts in response to a corps of ‘trusted flaggers’, sometimes without reviewing the alleged illegality themselves. None of this with a real judge or jury in sight. 

In May 2014 the EU Council adopted its Human Rights Guidelines on Freedom of Expression Online and Offline.  The Guidelines say something about the role of online intermediaries.  Paragraph 34 states:
“The EU will … c) Raise awareness among judges, law enforcement officials, staff of human rights commissions and policymakers around the world of the need to promote international standards, including standards protecting intermediaries from the obligation of blocking Internet content without prior due process.” (emphasis added)
A leaked earlier draft of the Communication referenced the Guidelines. The reference was removed from the final version.  It would certainly have been embarrassing for the Communication to refer to that document. Far from “protecting intermediaries from the obligation of blocking Internet content without prior due process”, the premise of the Communication is that intermediaries should remove and filter content without prior due process.  The Commission has embraced the theory that platforms ought to act as gatekeepers rather than gateways, filtering the content that their users upload and read.

Article 15, where are you?
Not only is the Communication's approach inconsistent with the EU’s Freedom of Expression Guidelines, it challenges a longstanding and deeply embedded piece of EU law. Article 15 of the ECommerce Directive has been on the statute book for nearly 20 years. It prohibits Member States from imposing general monitoring obligations on online intermediaries. Yet the Communication says that online platforms should “adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive”.  It “strongly encourages” online platforms to step up cooperation and investment in, and use of, automatic detection technologies.

A similar controversy around conflict with Article 15 has been stirred up by Article 13 of the Commission’s proposed Digital Single Market Copyright Directive, also in the context of filtering.

Although the measures that the Communication urges on platforms are voluntary (thus avoiding a direct clash with Article 15) that is more a matter of form than of substance. The velvet glove openly brandishes a knuckleduster: the explicit threat of legislation if the platforms do not co-operate.
“The Commission will continue exchanges and dialogues with online platforms and other relevant stakeholders. It will monitor progress and assess whether additional measures are needed, in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework. This work will be completed by May 2018.”
The platforms have been set their task and the big stick will be wielded if they don't fall into line. It is reminiscent of the UK government’s version of co-regulation: 
“Government defines the public policy objectives that need to be secured, but tasks industry to design and operate self-regulatory solutions and stands behind industry ready to take statutory action if necessary.” (e-commerce@its.best.uk, Cabinet Office 1999.)
So long as those harnessed to the task of implementing policy don’t kick over the traces, the uncertain business of persuading a democratically elected legislature to enact a law is avoided.

The Communication displays no enthusiasm for Article 15.  It devotes nearly two pages of close legal analysis to explaining why, in its view, adopting proactive measures would not deprive a platform of hosting protection under Article 14 of the Directive.  Article 15, in contrast, is mentioned in passing but not discussed.  

Such tepidity is the more noteworthy when the balance between fundamental rights reflected in Article 15 finds support in, for instance, the recent European Court of Human Rights judgment in Tamiz v Google ([84] and [85]). Article 15 is not something that can or should be lightly ignored or manoeuvred around.

For all its considerable superstructure - trusted flaggers, certificated standards, reversibility safeguards, transparency and the rest - the Communication lacks solid foundations. It has the air of a castle built on a chain of quicksands: presumed illegality, lack of prior due process at source, reversal of the presumption against prior restraint, assumptions that illegality is capable of precise computation, failure to grapple with differences in Member States' laws, and others.

Whatever may be the appropriate response to illegal content on the internet – and no one should pretend that this is an easy issue – it is hard to avoid the conclusion that the Communication is not it.

The Communication has already come in for criticism (here, here, here, here and here). At risk of repeating points already well made, this post will take an issue by issue dive into its foundational weaknesses.

To aid this analysis I have listed 30 identifiable prescriptive elements of the Communication. They are annexed as the Communication's Action Points (my label). Citations in the post are to that list and to paragraph numbers in the Communication.

Presumed illegal
Underlying much of the Communication’s approach to tackling illegal content is a presumption that, once accused, content is illegal until proven innocent. That can be found in:
  • its suggestion that content should be removed automatically on the say-so of certain trusted flaggers (Action Points 8 and 15, paras 3.2.1 and 4.1);
  • its emphasis on automated detection technologies (Action Point 14, para 3.3.2);
  • its greater reliance on corrective safeguards after removal than preventative safeguards before (paras 4.1, 4.3);
  • the suggestion that platforms’ performance should be judged by removal rates (the higher the better) (see More is better, faster is best below);
  • the suggestion that in difficult cases the platform should seek third party advice instead of giving the benefit of the doubt to the content (Action Point 17, para 4.1);
  • its encouragement of quasi-official databases of ‘known’ illegal content, but without a legally competent determination of illegality (Action Point 29, para 5.2). 

Taken together these add up to a presumption of illegality, implemented by prior restraint.

In one well known case a tweet provoked a criminal prosecution, resulting in a magistrates’ court conviction. Two years later the author was acquitted on appeal. Under the trusted flagger system the police could notify such a tweet to the platform at the outset with every expectation that it would be removed, perhaps automatically (Action Points 8 and 15, paras 3.2.1, 4.1).  A tweet ultimately found to be legal would most probably have been removed from public view, without any judicial order. 

Multiply that thousands of times, factor in that speedy removal will often be the end of the matter if no prosecution takes place, and we have prior permanent restraint on a grand scale.

Against that criticism it could be said that the proposed counter notice arrangements provide the opportunity to reverse errors and doubtful decisions.  However by that time the damage is done.  The default has shifted to presumed illegality, inertia takes over and many authors will simply shrug their shoulders and move on for the sake of a quiet life.  

If the author does object, the material would not automatically be reinstated. The Communication suggests that reinstatement should take place if the counter-notice provides reasonable grounds to consider that removed content is not illegal. The burden has shifted to the author to establish innocence.

If an author takes an autonomous decision to remove something that they have previously posted, that is their prerogative. No question of interference with freedom of speech arises.  But if the suppression results from a state-fostered system that institutionalises removal by default and requires the author to justify its reinstatement, there is interference with freedom of speech of both the author and those who would otherwise have been able to read the post. It is a classic chilling effect.

Presumed illegality does not feature in any set of freedom of speech principles, offline or online.  Quite the opposite. The traditional presumption against prior restraint, forged in the offline world, embodies the principle that accused speech should have the benefit of the doubt. It should be allowed to stay up until proved illegal. Even then, the appropriate remedy may only be damages or criminal sanction, not necessarily removal of the material from public view.  Only exceptionally should speech be withheld from public access pending an independent, fully considered, determination of legality with all due process.

Even in these days of interim privacy injunctions routinely outweighing freedom of expression, presumption against prior restraint remains the underlying principle. The European Court of Human Rights observed in Spycatcher that “the dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court”.

In Mosley v UK the ECtHR added a gloss that prior restraints may be "more readily justified in cases which demonstrate no pressing need for immediate publication and in which there is no obvious contribution to a debate of general public interest". Nevertheless the starting point remains that the prior restraint requires case by case justification. All the more so for automated prior restraint on an industrial scale with no independent consideration of the merits.

Due process at source
A keystone of the Communication is the proposed system of 'trusted flaggers' who offer particular expertise in notifying the presence of potentially illegal content: “specialised entities with specific expertise in identifying illegal content, and dedicated structures for detecting and identifying such content online” (para 3.2.1).

Trusted flaggers “can be expected to bring their expertise and work with high quality standards, which should result in higher quality notices and faster take-downs” (para 3.2.1).  Platforms would be expected to fast-track notices from trusted flaggers. The Commission proposes to explore with industry the potential of standardised notification procedures.

Trusted flaggers would range from law enforcement bodies to copyright owners. The Communication names the Europol Internet Referral Unit for terrorist content and the INHOPE network of reporting hotlines for child sexual abuse material as trusted flaggers. It also suggests that civil society organisations or semi-public bodies are specialised in the reporting of illegal online racist and xenophobic material.

The emphasis is notably on practical expertise rather than on legal competence to determine illegality. This is especially significant for the proposed role of law enforcement authorities as trusted flaggers. The police detect crime, apply to court for arrest or search warrants, execute them, mostly hand over cases to prosecutors and give evidence in court. They may have practical competence in combating illegal activities, but they do not have legal competence to rule on legality or illegality (see below Legal competence v practical competence).

The system under which the Europol IRU sends takedown notices to platforms is illustrative. Thousands - in the case of the UK's similar Counter Terrorism Internet Referral Unit (CTIRU) hundreds of thousands - of items of content are taken down on the say-so of the police, with safeguards against overreaching dependent on the willingness and resources of the platforms to push back. 

It is impossible to know whether such systems are ‘working’ or not, since there is (and is meant to be) no public visibility and evaluation of what has been removed. 

As at July 2017 the CTIRU had removed some 270,000 items since 2010.  A recent freedom of information request by the Open Rights Group for a list of the kind of “statistical records, impact assessments and evaluations created and kept by the Counter Terrorism Internet Referrals Unit in relation to their operations” was rejected on grounds that it would compromise law enforcement by undermining the operational effectiveness of the CTIRU and have a negative effect on national security.

In the UK PIPCU (the Police Intellectual Property Crime Unit) is the specialist intellectual property enforcement unit of the City of London Police. One of PIPCU’s activities is to write letters to domain registrars asking them to suspend domains used for infringing activities. Its registrar campaign led to this reminder from a US arbitrator of the distinction between the police and the courts:
“To permit a registrar of record to withhold the transfer of a domain based on the suspicion of a law enforcement agency, without the intervention of a judicial body, opens the possibility for abuse by agencies far less reputable than the City of London Police. Presumably, the provision in the Transfer Policy requiring a court order is based on the reasonable assumption that the intervention of a court and judicial decree ensures that the restriction on the transfer of a domain name has some basis of “due process” associated with it.”
A law enforcement body not subject to independent due process (such as applying for a court order) is at risk of overreach, whether through over-enthusiasm for the cause of crime prevention, succumbing to groupthink or some other reason. Due process at source is designed to prevent that. Safeguards at the receiving end do not perform the same role of keeping official agencies in check.

The Communication suggests (Action Point 8, 3.2.1) that in ‘a limited number of cases’ platforms could remove content notified by certain trusted flaggers without verifying legality themselves. 

What might these 'limited cases' be?  Could it apply to both state and private trusted flaggers? Would it apply to any kind of content and any kind of illegality, or only to some? Would it apply only where automated systems are in place? Would it apply only where a court has authoritatively determined that the content is illegal? Would it apply only to repeat violations? The Communication does not tell us. Where it would apply, absence of due process at source takes on greater significance.

Would it perhaps cover the same ground as Action Point 15, under which fully automated deletion should be applied where "circumstances leave little doubt about the illegality of the material", for example (according to the Communication) when notified by law enforcement authorities?

When we join the dots of the various parts of the Communication the impression is of significant expectation that instead of considering the illegality of content notified by law enforcement, platforms may assume that it is illegal and automatically remove it.

The Communication contains little to ensure that trusted flaggers make good decisions. Most of the safeguards are post-notice and post-removal and consist of procedures to be implemented by the platforms. As to specific due process obligations on the notice giver, the Communication is silent.

The contrast between this and the Freedom of Expression Guidelines noted earlier is evident. The Guidelines emphasise prior due process. The Communication emphasises ex post facto remedial safeguards to be put in place by the platforms. Those are expected to compensate for absence of due process on the part of the authority giving notice in the first place.

Legal competence v practical competence
The Communication opens its section on notices from state authorities by referring to courts and competent authorities able to issue binding orders or administrative decisions requiring online platforms to remove or block illegal content. Such bodies, it may reasonably be assumed, would incorporate some element of due process in their decision-making prior to the issue of a legally binding order.

However we have seen that the Communication abandons that limitation, referring to ‘law enforcement and other competent authorities’. A ‘competent authority’ is evidently not limited to bodies embodying due process and legally competent to determine illegality.

It includes bodies such as the police, who are taken to have practical competence through familiarity with the subject matter. Thus in the Europol EU Internet Referral Unit “security experts assess and refer terrorist content to online platforms”.

It is notable that this section in the earlier leaked draft did not survive the final edit:
“In the EU, courts and national competent authorities, including law enforcement authorities, have the competence to establish the illegality of a given activity or information online.” (emphasis added)
Courts are legally competent to establish legality or illegality, but law enforcement bodies are not. 

In the final version the Commission retreats from the overt assertion that law enforcement authorities are..
[Updated with answers, 1 January 2018]

15 questions to illuminate the festive season. Answers in the New Year. (Remember that this is an English law blog). 

Tech teasers 

1. How many data definitions does the Investigatory Powers Act 2016 (IP Act) contain?

Twenty-one: Communications data, Relevant communications data, Entity data, Events data, Internet connection record, Postal data, Private information, Secondary data, Systems data, Related systems data, Equipment data, Overseas-related equipment data, Identifying data, Target data, Authorisation data, Protected data, Personal data, Sensitive personal data, Targeted data, Content, and Data. 

2. A technical capability notice (TCN) under the IP Act could prevent a message service from providing end to end encryption to its users. True, False or Maybe?

Maybe. A TCN could require the provider to have a capability to remove electronic protection applied by it if, among other things, that is technically feasible. The most significant question is whether the message service provider is regarded as itself applying the E2E encryption. If so, then a TCN could possibly be used to require such a provider to adopt a different model. If the user is regarded as applying the encryption then a TCN could not be used. 

3. Under the IP Act a TCN requiring installation of a permanent equipment interference capability could be served on a telecommunications operator but not a device manufacturer. True, False or Maybe?

True. Device manufacturers are outside the scope of TCNs. If a device manufacturer provides a telecommunications service (for instance where a phone manufacturer also provides its own messaging service) then it could be within scope, but only for its telecommunications service activities. 

4. Who made a hash of a hashtag?

In an interview in March 2017 Home Secretary Amber Rudd famously referred to the need for assistance from those who ‘understand the necessary hashtags’.  A week later a Home Office Minister explained that she had intended to refer to image hashing, not hashtags. So strictly speaking she made a hashtag of a hash.
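The two are quite different things: a hashtag is a textual label chosen by a person, whereas an image hash is a digest computed from the file itself. A sketch (the filename is hypothetical):

```python
# A hashtag is metadata typed by a user; an image hash is derived from the bytes.
import hashlib

hashtag = "#ThinkBeforeYouPost"             # a label attached to a post
with open("photo.jpg", "rb") as f:          # hypothetical file
    image_hash = hashlib.sha256(f.read()).hexdigest()
# image_hash can be matched against a database of known images;
# the hashtag tells you nothing reliable about the image at all.
```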

Brave new world

5. Who marked the new era of post-Snowden transparency by holding a private stakeholder-only consultation on a potentially contentious IP Act draft Statutory Instrument?

As required by the IP Act the Home Secretary consulted various specified stakeholders on draft technical capability regulations (see 2 and 3 above) prior to laying them before Parliament for approval. The consultation was conducted privately, excluding the general public and civil society groups. However the Open Rights Group obtained and published a copy of the draft regulations.

6. Who received an early lesson in the independence of the new Investigatory Powers Commissioner?

GCHQ. Its November 2017 approach to the Investigatory Powers Commissioner to discuss the possibility of a protocol for reducing evidential issues in Investigatory Powers Tribunal or other cases was politely but firmly rebuffed.

The penumbra of ECJ jurisdiction
  
7. The EU Court of Justice (CJEU) judgment in Watson/Tele2 was issued 22 days after the IP Act received Royal Assent. How long elapsed before the Home Office published proposals to amend the Act to take account of the decision?

344 days. The Consultation was published on 30 November 2017.

8. The Investigatory Powers Tribunal has recently made a referral to the CJEU. What is the main question that the CJEU will have to answer about the scope of its Watson decision?  

Paraphrased, the main question is whether national security is excluded from the Watson decision as being outside the scope of EU law.

9. What change was made in the IP Act’s bulk powers, compared with S.8(4) RIPA, that would render the CJEU’s Q.8 answer especially significant?

In the IP Act the purposes for which the bulk powers may be exercised are all framed by reference to national security. In RIPA (as amended by DRIPA 2014) the serious crime purpose does not have to be related to national security. 

10. After Brexit we won't need to worry about CJEU surveillance judgments, even if we exit the EU with no deal. True, False or Maybe? 

False, at least if the UK wishes to have a data protection adequacy determination that would enable EU countries to transfer personal data to the UK. As the USA discovered in Schrems, a third country’s surveillance regime can be a significant factor in an adequacy determination.

Copyright offline and online

11. Tweeting a link to infringing material is itself an infringement of copyright. True, False or Maybe?  

Maybe, depending on whether (a) you know that the material is infringing; or (b) you are linking for financial gain, in which case you would be rebuttably presumed to know. This is the result of the CJEU’s decision in GS Media.

12. Reading an infringing copy of a paper book is not copyright infringement. Viewing an infringing copy online is. True, False or Maybe?

True, at least if what you do online is sufficiently deliberate and knowing.  EU copyright law treats screen and buffer copies as engaging the reproduction right. The CJEU in Filmspeler held that the user of a multimedia player add-on containing links to infringing movies infringed the reproduction right by viewing an infringing copy accessed via the link.  This was because, as a rule, the purchaser of such a player deliberately and in full knowledge of the circumstances accessed a free and unauthorised offer of protected works. This took the activity outside the Copyright Directive’s exception for transient and temporary copies. The same reasoning can be applied to an online book.

13. Whereas selling a set-top box equipped with PVR facilities is legal, providing a cloud-based remote PVR service infringes copyright. True, False or Maybe?

True. Established by the CJEU in VCAST, 29 November 2017.

14. Format-shifting infringes copyright. True, False or Maybe?

True.  Seven years after the Hargreaves Review identified this as an aspect of copyright that puts the law into confusion and disrepute, format shifting remains an infringement.

15. Illegal downloading is a crime. True, False or Maybe?

False. A user who downloads without the permission of the copyright owner commits a civil infringement of copyright, but without more that is not a crime.  In 2014 PIPCU (the Police Intellectual Property Crime Unit) deployed replacement website ads proclaiming that ‘Illegal downloading is a crime’. PIPCU later explained this on the basis that “Downloading falls within s.45 of the Serious Crime Act 2007 if it encourages s.107 CDPA 1988 offences”. 



Over the last four months the Law Commission of England and Wales has been consulting on the topic of Making a Will, focusing on testamentary capacity and formalities.  Chapter 6 of the Consultation is about Electronic Wills. This is my submission on that topic, from the perspective of a tech lawyer who knows little of the law of wills but has grappled many times with the interaction of electronic transactions and formalities requirements.

Introductory Remarks

Overview
1. The question at the core of Chapter 6 of the Consultation is how to give effect to testamentary intentions in an increasingly electronic environment. This has at least five aspects, which inevitably conflict with each other to an extent:
  • Providing a reasonable degree of certainty that the testator intended the document in question to have the significant legal effects of a will. This is achieved by requiring a degree of formality and solemnity.
  • Ensuring that formalities do not act as a deterrent to putative testators whether through complexity, cost, consumption of time or uncertainty as to how to achieve compliance.
  • Minimising the risk of a testator’s intentions being defeated by an actual failure to comply with formalities or an inability to demonstrate that formalities were in fact complied with.
  • Providing protection against fraud, tampering and forgery, either of the body of the document or of the signature(s) appended to it.
  • Providing for all the above over the potentially long period of time between execution of the will and its being admitted to probate.
2. The tensions between these requirements mean that a balance must be struck which will not perfectly satisfy any of them, as is the case with the current regime designed for an offline environment.

Signatures versus other formalities

3. Although the focus of electronic transactions regimes tends to be on signatures, signatures should not be addressed in isolation from other relevant formalities[1]. As the Consultation Paper recognises, there is interaction and dependency between signature, form, medium and process. Although the Consultation Paper does not categorise them as such, for wills formalities of all four kinds exist:
  • Signature: the need for signatures and the (possible) requirement that the signature be handwritten (Consultation Paper 6.20 to 6.30)
  • Form: the caselaw requirement for an attestation clause if a strong presumption of due execution is to arise (Consultation Paper 5.11 to 5.12; confusion around the witness attestation requirement is addressed elsewhere in the Consultation paper.)
  • Medium: the requirement that the will be in writing (Consultation Paper 6.15 to 6.19)
  • Process: the presence and simultaneity requirements for witnessing (Consultation Paper 6.32); and the practical filing requirements for admission to probate (6.97).
4. However the Consultation Paper is not always convincing about the relative importance of these formalities.  Thus, in bringing home to the testator the seriousness of the transaction, the ceremony of gathering two witnesses in the same room simultaneously to witness the testator’s signature would seem likely to be more significant than whether or not the signature is handwritten (cf Consultation Paper 6.48, 6.64). If it had to be done in the presence of two witnesses, appending a signature to an electronic document using (for instance) a tablet would surely be no less a matter of consequence than applying a handwritten signature to a paper document.

5. The overall purpose of giving effect to the testator’s intention where electronic methods are involved may be achievable by an appropriate combination of all four kinds of formality. Not all (or even most of) the heavy lifting has necessarily to be done by the signature itself, any more than with a traditional paper will.

The function of a signature

6. The function of a signature is generally threefold: (1) to indicate assent to or an intention to be bound by the contents of the document, (2) to identify the person so doing and (3) to authenticate the document. (There are variations on these functions. For instance the signature of a witness does not indicate assent or an intention to be bound, but instead is intended to verify the signature of the party to the document.)

7. The difference between the identification and authentication functions can be seen if we consider the different kinds of repudiation that may occur. Identification protects against the claim: ‘That is not my (or X’s) signature’.  Authentication protects against the claim: ‘That is my (or X’s) signature, but that is not the document that I (or X) signed’.

Strengths and weaknesses of electronic signatures

8. As the Consultation Paper notes, ordinary electronic signatures (typed names, copied scans) are poor identifiers and authenticators. Nevertheless English law, in keeping with its historically liberal attitude to formalities requirements generally, rightly regards such signatures as adequate in most cases in which a signature is required by statute. Manuscript signatures are better, but not perfect, identifiers and authenticators. A properly formed manuscript signature is better than a mark, but both are valid.

9. At the other end of the scale of sophistication, certificate-based digital signatures are very good (far better than manuscript signatures) at authenticating the signed document.  However they remain relatively poor at assuring the identity of the person who applied the digital signature. This is because however sophisticated may be the signature technology, access to the signature creation device will (in the absence of a biometric link) be secured by a password, a PIN, or something similar. As the Consultation Paper rightly points out these are weak forms of assurance (Consultation Paper 6.60 to 6.68). This aspect can be improved by adopting methods such as two factor authentication of the user. It may or may not be apparent after the event whether such a technique was used.
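The distinction between the functions can be made concrete with a sketch using Ed25519 signatures via the PyNaCl library (an illustrative assumption; nothing here is specific to wills). Verification establishes authentication, i.e. that the document is unaltered since signing; identification still turns on whatever password or PIN guarded access to the signing key.

```python
# A sketch of digital-signature authentication; illustrative, requires PyNaCl.
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

signing_key = SigningKey.generate()   # in practice guarded by a PIN or password
verify_key = signing_key.verify_key

signed = signing_key.sign(b"I leave my estate to ...")
verify_key.verify(signed)             # succeeds: the document is unaltered

try:
    verify_key.verify(signed.message + b"!", signed.signature)
except BadSignatureError:
    pass  # any alteration is detected -- strong authentication

# What the signature cannot prove is *who* used the key: identification
# depends on how access to the signing device was controlled.
```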

Common traps in legislating for electronic transactions

Over-engineering and over-estimating the reliability of non-electronic systems

10. The Consultation Paper refers to the apparently stillborn attempt to legislate for electronic wills in Nevada. I am not familiar with the particular legislation in question, but will offer some general comments about the temptation for legislation to impose over-engineered technical solutions.

11. Over-engineering is a natural consequence of over-estimating the reliability of non-electronic systems and thus, in the name of equivalence, attempting to design in a level of assurance for the electronic system that does not exist in the non-electronic sphere.  As the Australian Electronic Commerce Expert Group stated in its 1998 Report to the Attorney-General[2]:
“There is always the temptation, in dealing with the law as it relates to unfamiliar and new technologies to set the standards required of a new technology higher than those which currently apply to paper and to overlook the weaknesses that we know to inhere in the familiar.”
12. Over-engineering occurred in the early days of digital signatures, when complex statutes were passed in some jurisdictions (the Utah Digital Signatures Act being the earliest and best known example) in effect prescribing the use of PKI digital signatures in an attempt to achieve a guarantee of non-repudiation far beyond that provided by manuscript signatures. These kinds of rules were found to be unnecessary for everyday purposes and have tended to be superseded by facilitative legislation such as the US ESign Act.

Over-technical formalities requirements

13. Over-technical formalities requirements are a potential source of concern. This is for two reasons. 

14. First, they increase the chance that a putative testator or a witness will make an error in trying to comply with them. As the Sixth Interim Report of the Law Revision Committee said in 1937 in relation to the Statute of Frauds:
" 'The Act', in the words of Lord Campbell . . . 'promotes more frauds than it prevents'. True it shuts out perjury; but it also and more frequently shuts out the truth. It strikes impartially at the perjurer and at the honest man who has omitted a precaution, sealing the lips of both. Mr Justice FitzJames Stephen ... went so far as to assert that 'in the vast majority of cases its operation is simply to enable a man to break a promise with impunity, because he did not write it down with sufficient formality.’ " 
15. Second, a person attempting to satisfy the formalities requirements must be able to understand how to comply with them without resort to expert technical assistance, and to be confident that they have in fact complied. A formalities specification that requires the assistance of an IT expert to understand it will deter people from using the procedure and increase the incidence of disputes for those who do so. Injustice will be caused if the courts are filled with disputes about whether the right kind of electronic signature has been used and where there is no real doubt about the identity of the testator and the authenticity of the will.

Over-technology-specific

16. As a general rule technology-neutral legislation is preferable to technology-specific legislation.

17. This is for two reasons. First, technology-specific legislation can be overtaken by technological developments, with the result either that it is uncertain whether a new technology complies with the requirements, or that the legislation may clearly exclude the new technology even though functionally it performs as well or better than the old technology. Second, technology-specific legislation tends to lock in particular technology vendors rather than opening the market to all whose offerings are able to provide the required functionality (cf Consultation paper 6.36 and 6.37).

18. Against that, however, is the concern that if legislation is drafted at a very high level of abstraction in order to accommodate possible future technologies, it carries the price of uncertainty as to whether any given technology does or does not comply with the formalities requirements. That is most undesirable, for the reasons set out above.

19. Reconciling these opposing considerations is no easy task. Indeed it may be impossible to achieve a wholly satisfactory resolution. Nevertheless the competing considerations should be recognised and addressed.

Validity versus evidence

20. Validity and evidence have to be considered separately. Validity is not a matter of evidential value. Whilst the overall purpose of a formality requirement may be to maximise evidential value and to deter fraud (cf Lim v Thompson), the formality requirement itself stands separate as a rule of validity. 

Commentary on Chapter 6 of Consultation Paper

21. In the light of the introductory discussion above I offer the following comments on some aspects of Chapter 6. I will start with Enabling Electronic Wills (6.33 to 6.43), since that contains some of the most fundamental discussion.

Enabling Electronic Wills (6.33 to 6.43)

6.34 ‘It is highly likely that their use will become commonplace in the future’.

22. Since ‘the future’ is an indeterminate period this is probably a reasonably safe prediction. However, with apologies to Victor Hugo, there is nothing as feeble as an idea whose time has yet to come.

23. Science fiction films from the 1950s and 1960s routinely showed video communicators – an idea that stubbornly refused to take off for another 50 years. Even now video tends to be used for the occasions when seeing the other person is an actual benefit rather than a hindrance – special family occasions, business conferencing, intimate private exchanges for example.

24. Electronic wills have something of that flavour: possible in principle, but why do it when paper has so many advantages: 
  • (Reasonably) Permanent
  • Cheap
  • (Reasonably) secure
  • (Reasonably) private
  • Serious (ceremonial)
  • (Relatively) simple to comply with
25. By contrast electronic wills, as technology currently stands, would be inherently:
  • Impermanent
  • Costly
  • Insecure
  • Less private
  • Casual
  • Complicated to comply with
26. We cannot exclude the possibility that the effort and expense required to overcome, or at least mitigate, these disadvantages may at the present time be out of proportion to the likely benefit. It is perhaps no surprise that stakeholders report little appetite for electronic wills. We should beware the temptation to force the premature take-up of electronic wills simply because of a perception that everything should be capable of being done electronically.    

27. Whilst predictions in this field are foolish, one way in which technology might enable electronic wills in the future is the development (perhaps from existing nascent e-paper technologies) of cheap durable single-use tablets on which an electronic document and accompanying testator and witness signature details could be permanently inscribed and viewed electronically.

28. This is not to say that legislation should not be re-framed now to facilitate the development of appropriate forms of electronic will. Ideally such legislation should capture the essential characteristics of the desired will-making formalities in a technology-neutral but understandable way, rather than prescribe or enable the prescription of detailed systems. In theory it would not even matter if currently there is no technology that can comply with those characteristics electronically.  Such legislation would allow for the possible future development of as yet unknown compliant technologies.

29. However as already discussed, achieving that aim while at the same time leaving a putative testator with no room for doubt about whether a particular technology does or does not satisfy the requirements of the law is not an easy task. It is also pertinent to consider how the presumption of due execution might apply in an electronic context. With paper the presumption arises from matters apparent on the face of the will (Consultation Paper, 5.11). The more technical and complex the formalities requirements for an electronic will, the less will it be possible for compliance with those formalities to be apparent on the face of the document.

6.34 ‘We have focused on electronic signatures’

30. As already indicated, to focus on electronic signatures to the exclusion of the other relevant formalities is, I would suggest, an invitation to error. In reality the Consultation Paper does, of necessity, refer to the other formalities. However it would be preferable explicitly to recognise the interdependence of the four categories of formality and to consider them as a coherent whole.

6.35 ‘First and most importantly, electronic signatures must be secure’

31. This, it seems to me, risks falling into the related traps of over-engineering and of over-estimating the reliability of non-electronic systems (see [10] above).

32. Nor am I sure that the paragraph adequately separates the three functions of a signature discussed above: assent to terms/intention to be bound, identification and authentication.

33. The statement that an electronic signature must provide “strong evidence that a testator meant formally to endorse the relevant document” elides all three functions. The next sentence “electronic signatures must reliably link a signed will to the person who is purported to have signed it” elides the second and third functions. We then have the statement “Handwritten signatures perform this function well”. It is unclear which function or functions are being referred to. Handwritten signatures do not perform each function equally well.

34. It is true that a (genuine) handwritten signature, buttressed by the surrounding formality of double witnessing, is strong evidence of intention to be bound.

35. A well-formed handwritten signature (a ‘distinctive mark’, in the words of the Consultation Paper) provides reasonably strong evidence of identity, assuming that comparison handwriting can be found (something not required by the Wills Act and so more in the nature of a working assumption - cf para 6.53 of the Consultation paper). A mark (which is permissible under the Wills Act) does not do so. The witnesses (if available) are also relevant to proof of identity.

36. Parenthetically, one wonders whether the evidential weight assumed to be provided by signatures may have changed over the period since the enactment of the Wills Act 1837. The use of marks may have been more widespread than today and forensic techniques must have been less advanced. Do we now attribute greater reassurance to the use of a handwritten signature than was originally the case?  In any event, given the wide degree of latitude allowed to the form of a handwritten signature, the degree of assurance cannot be regarded as uniform across all handwritten signatures.

37. A handwritten signature is weak evidence of linkage to the document. The signature is present only on the page on which it appears. Proof of the integrity of the whole document (if required) would depend on factors that have nothing to do with the signature (e.g. analysis of the paper and typescript ink).

38. Manuscript signatures provide a degree of evidential value for some relevant facts, but they are by no means perfect. It is of course true that a typed signature is of less evidential value than most manuscript signatures. Conversely, as discussed above ([9]) even the most sophisticated electronic signature is only as secure as its weakest link: the password or PIN (or combinations of such), or other mechanisms, that the testator has used to protect the signature key.

39. Notwithstanding its common usage I would tend to avoid the use of the word ‘secure’ in relation to electronic signatures without making clear which function or functions of a signature are being referred to and what precisely is meant, in that context, by ‘secure’.

40. Eliding the related roles of signatures and other formalities is apt to cause unnecessary confusion and, I would suggest, risks unintentionally placing too much of the formalities burden on the electronic signature.

6.35 ‘We have worked on the basis that electronic signatures should be no less secure than handwritten signatures’

41. On the face of it this is unexceptional. However, on closer inspection it suffers in two respects.

42. The first, already mentioned, comes from considering the signature in isolation from other formalities. In principle an electronic signature could permissibly be less secure than a manuscript signature if other formalities were sufficiently strong to compensate. For instance (without necessarily recommending this) the view could be taken that a notarised typewritten electronic signature would be acceptable (if a satisfactory way of notarising electronic documents had been found). The electronic signature itself would be less secure than the manuscript signature, but the combination of formalities could be adequate. Use of a notary instead of witnesses would avoid the authorisation problem identified at Consultation Paper 6.84.

43. The second is that when we break down the functions of the signature, as above [6], then factor in the variations in ‘security’ provided by the range of permissible handwritten signatures, it is quite unclear what is meant by the level of ‘security’ of a handwritten signature.  The temptation (see [11] above) is to over-estimate the security of a handwritten signature when making a comparison of this kind.

6.35 ‘It is essential that a legal mechanism exists for determining which electronic signatures are sufficiently secure, and which are not.’

44. Security (whatever may be meant by that in context) is one aspect of an electronic signature. Given what I have said above about the respective merits of technology-neutral and technology-specific legislation, it is probably inevitable that if the electronic signature itself is to bear any of the formalities burden, there will have to be some definition of which kinds of signature qualify and which do not. This is, however, a potential minefield.  It is almost impossible to define different kinds of signatures at any level of generality in a way that enables a lay person to understand, or that enables an IT expert to say with certainty, what qualifies and what doesn’t. One only has to look at eIDAS and the Electronic Signatures Directive before it to appreciate that.  The ability to be certain that one has complied with the necessary formalities of making a will is surely a sine qua non.

45. At risk of over-repetition it is the whole bundle of formalities, not just the signature, that requires a clear set of legal rules for the electronic environment.

6.36 'There is a risk that narrowly specifying types of valid electronic will could be counterproductive.'

46. Agreed. However see comments above ([16] to [19]) regarding the difficulties of drawing a viable balance between technology-specific and technology-neutral. Also, it is possible (although I have not investigated the matter) that the problem with the existing attempts mentioned in the Consultation might have been over-engineering rather than technology-specificity. Although the two often go hand in hand and over-engineering is always technology-specific, the converse is not necessarily true. A requirement of paper is technology-specific, but not over-engineered.

6.38

47. If the principles of clear and understandable requirements for all relevant formalities are adhered to, it ought to follow that any technical method that complies with those formalities is permissible. If all that is being said here is that the requirements must not be so abstract as to create uncertainty as to what does and does not comply, that must be correct (see above [18]).

48. If perhaps this paragraph is recognising that formalities other than the signature itself are relevant, then I would endorse that (see above [3]). Even so this paragraph appears to treat the other formalities as something of an afterthought. This is in my view not a good approach. The better approach is to treat all the formalities as a coherent, interdependent whole.

49. If the last sentence is saying that the law should set out a clear set of formalities for electronic wills, that is one thing. If it is suggesting the establishment of some kind of regulatory body to oversee will-making, that is another matter. Similarly it is unclear what is intended by the reference in 6.39 to ‘regulating’ electronic wills.

6.39 and 6.40

50. See comments on 6.45 below.

6.41

51. Witnessing requirements are one of the related formalities discussed above ([3]). Again, however, I believe it is an error to view witnessing requirements as a secondary issue, to be considered consequentially upon the introduction of..