Before the publication of the Online Harms White Paper on 8 April 2019 I proposed a Ten Point Rule of Law test to which it might usefully be subjected.

The idea of the test is not so much to evaluate the substantive merits of the government’s proposal – you can find an analysis of those here – as to determine whether it would satisfy fundamental rule of law requirements of certainty and precision, without which something that purports to be law descends into ad hoc command by a state official.

Here is an analysis of the White Paper from that perspective. The questions posed are whether the White Paper demonstrates sufficient certainty and precision in respect of each of the following matters.
1. Which operators are and are not subject to the duty of care
The White Paper says that the regulatory framework should apply to “companies that allow users to share or discover user-generated content, or interact with each other online.”
This is undoubtedly broad, but on the face of it is reasonably clear.  The White Paper goes on to provide examples of the main types of relevant service:
- Hosting, sharing and discovery of user-generated content (e.g. a post on a public forum or the sharing of a video).
- Facilitation of public and private online interaction between service users (e.g. instant messaging or comments on posts).
However, these examples introduce a significant element of uncertainty. How broad is ‘facilitation’? The White Paper gives a clue when it mentions ancillary services such as caching. Yet it is difficult to understand the opening definition as including caching.
The White Paper says that the scope will include “social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.” In the Executive Summary it adds messaging services and search engines into the mix. Although the White Paper does not mention them, online games would clearly be in scope, as would an app with social or discussion features.
Applicability to the press is an area of significant uncertainty. Comments sections on newspaper websites, or a separate discussion forum run by a newspaper (as in the Karim v Newsquest case), would on the face of it be in scope. However, in a letter to the Society of Editors the Secretary of State has said:
“… as I made clear at the White Paper launch and in the House of Commons, where these services are already well regulated, as IPSO and IMPRESS do regarding their members' moderated comment sections, we will not duplicate those efforts. Journalistic or editorial content will not be affected by the regulatory framework.”
This exclusion is nowhere stated in the White Paper. Further, it does not address the fact that newspapers are themselves users of social media. They have Facebook pages and Twitter accounts, with links to their own websites. As such, their own content is liable to be affected by a social media platform taking action to suppress user content in performance of its duty of care.
The verdict on this section might have been ‘extremely broad but clearly so’. However the uncertainty introduced by ‘facilitation’, and by the lack of clarity about newspapers, results in a FAIL.
2. To whom the duty of care is owed
The answer to this appears to be ‘no-one’. That may seem odd, especially when Secretary of State Jeremy Wright referred in a recent letter to the Society of Editors to “a duty of care between companies and their users”, but what is described in the White Paper is not in fact a duty of care at all.
The proposed duty would not provide users with a basis on which to make a damages claim against the companies for breach, as is the case with a common law duty of care or a statutory duty of care under, say, the Occupiers’ Liability Act 1957.
Nor, sensibly, could the proposed duty do so, since its conception of harm strays beyond the established duty of care territory of risk of physical injury to individuals, into the highly contestable region of speech harms and then on into the unmappable wilderness of harm to society.
Thus in its introduction to the harms in scope the White Paper starts by referring to online content or activity that ‘harms individual users’, but then goes on: “or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
In the context of disinformation it refers to “undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
Whatever (if anything) these abstractions may mean, they are not the kind of thing that can properly be made the subject of a legal duty of care in the offline world sense of the phrase.
The proposed duty of care is something quite different: a statutory framework giving a regulator discretion to decide what should count as harmful, what kinds of behaviour by users should be regarded as causing harm, what rules should be put in place to counter it, and which operators to prioritise.
From a rule of law perspective the answer to the question posed is that it does seem clear that the duty would be owed to no one. In that limited sense it probably rates a PASS, but only by resisting the temptation to change that to FAIL for the misdescription of the scheme as creating a duty of care.
Nevertheless, the fact that the duty is of a kind that is owed to no-one paves the way for a multitude of FAILs for other questions.
3. What kinds of effect on a recipient will and will not be regarded as harmful
This is an obvious FAIL. The White Paper has its origins in the Internet Safety Strategy Green Paper, yet does not restrict itself to what in the offline world would be regarded as safety issues.  It makes no attempt to define harm, apparently leaving it up to the proposed Ofweb to decide what should and should not be regarded as harmful. Some examples given in the White Paper suggest that effect on the recipient is not limited to psychological harms, or even distress.
This lack of precision is exacerbated by the fact that the kinds of harm contemplated by the White Paper are not restricted to those that have an identifiable effect on a recipient of the information, but appear to encompass nebulous notions of harm to society.
4. What speech or conduct by a user will and will not be taken to cause such harm
The answer appears to be, potentially, “any”. The White Paper goes beyond defined unlawfulness into undefined harm, but places no limitation on the kind of behaviour that could in principle be regarded as causing harm. From a rule of law perspective of clarity this may be a PASS, but only in the sense that the kind of behaviour in scope is clearly unlimited.
5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient
FAIL. There is no discussion of either of these points, beyond emphasising many times that children as well as adults should be regarded as potential recipients (although whether the duty of care should mean taking steps to exclude children, or to tailor all content to be suitable for children, or a choice of either, or something else, is unclear). The White Paper makes specific reference to children and vulnerable users, but does not limit itself to those.
6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform
FAIL. The White Paper mentions, specifically in the context of disinformation, the much discussed amplification, filter bubble and echo chamber effects that are associated with social media. More broadly it refers to ‘safety by design’ principles, but does not identify any design features that are said to give rise to a particular risk of harm.
The safety by design principles appear to be not about identifying and excluding features that could be said to give rise to a risk of harm, but more focused on designing in features that the regulator would be likely to require of an operator in order to satisfy its duty of care.
Examples given include clarity for users about what forms of content are acceptable; effective systems for detecting and responding to illegal or harmful content, including the use of AI-based technology and trained moderators; making it easy for users to report problem content; and an efficient triage system to deal with reports.
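Purely by way of illustration – the White Paper prescribes no particular design, and the categories, rankings and names below are all invented – a report triage system of the kind contemplated might, at its simplest, order incoming user reports by severity:

    from dataclasses import dataclass, field
    from queue import PriorityQueue

    # Hypothetical severity ranking; nothing in the White Paper defines one.
    SEVERITY = {"terrorism": 0, "csea": 0, "self-harm": 1, "harassment": 2, "other": 3}

    @dataclass(order=True)
    class Report:
        priority: int
        content_id: str = field(compare=False)
        category: str = field(compare=False)

    def triage(reports):
        """Yield user reports most-serious-first for moderator review."""
        queue = PriorityQueue()
        for content_id, category in reports:
            queue.put(Report(SEVERITY.get(category, SEVERITY["other"]), content_id, category))
        while not queue.empty():
            yield queue.get()

    # The terrorism report is surfaced before the harassment report.
    for report in triage([("post/123", "harassment"), ("video/456", "terrorism")]):
        print(report.category, report.content_id)

Ordering reports is, of course, the easy part; the hard questions in the White Paper are about what counts as reportable harm in the first place.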
7. What circumstances would trigger an operator's duty to take preventive or mitigating steps
FAIL. The specification of such circumstances would be left up to the discretion of Ofweb, in its envisaged Codes of Practice or, in the case of terrorism or child sexual exploitation and abuse, the discretion of the Home Secretary via approval of Ofweb’s Codes of Practice.
The only concession made in this direction is that the government is consulting on whether Codes of Practice should be approved by Parliament. However it is difficult to conclude that laying the detailed results of a regulator’s ad hoc consideration before Parliament for approval, almost certainly on a take-it-or-leave-it basis, has anything like the same democratic or constitutional force as requiring Parliament to specify the harms and the nature of the duty of care with adequate precision in the first place.
8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm)
The White Paper says that legislation will make clear that companies must do what is reasonably practicable. However that is not enough to prevent a FAIL, for the same reasons as 7. Moreover, it is implicit in the White Paper section on Fulfilling the Duty of Care that the government has its own views on the kinds of steps that operators should be taking to fulfil the duty of care in various areas. This falls uneasily between a statutorily defined duty, the role of an independent regulator in deciding what is required, and the possible desire of government to influence an independent regulator.
9. How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question
FAIL. The White Paper does not discuss this, beyond the general discussion of freedom of expression in the next question.
10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage), would negate the duty of care
The question of collateral damage is not addressed, other than implicitly in the various statements that the government’s vision includes freedom of expression online and that the regulatory framework will “set clear standards to help companies ensure safety of users while protecting freedom of expression”.
Further, “the regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, taking particular care not to infringe privacy or freedom of expression.” It will “ensure that the new regulatory requirements do not lead to a disproportionately risk averse response from companies that unduly limits freedom of expression, including by limiting participation in public debate.”
Thus the consequence of a risk of collateral damage to lawful speech is left up to the decision of a regulator, rather than to the law or a court. The regulator will presumably, by the nature of the proposal, be able to give less weight to the risk of suppressing lawful speech that it considers to be harmful. FAIL.
Postscript It may be said against much of this analysis that precedents exist for appointing a discretionary regulator with power to decide what does and does not constitute harmful speech.
Thus, for broadcast, the Communications Act 2003 does not define “offensive or harmful” and Ofcom is largely left to decide what those mean, in the light of generally accepted standards.
Whatever the view of the appropriateness of such a regime for broadcast, the White Paper proposals would regulate individual speech. Individual speech is different. What is a permissible regulatory model for broadcast is not necessarily justifiable for individuals, as was recognised in the US Communications Decency Act case (Reno v ACLU) in 1997. The US Supreme Court found that:
“This dynamic, multi-faceted category of communication includes not only traditional print and news services, but also audio, video and still images, as well as interactive, real-time dialogue. Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. As the District Court found, ‘the content on the internet is as diverse as human thought’ ... We agree with its conclusion that our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”
In these times it is hardly fashionable, outside the USA, to cite First Amendment jurisprudence. Nevertheless, the proposition that individual speech is not broadcast should carry weight in a constitutional or human rights court in any jurisdiction.

Last Monday, having spent the best part of a day reading the UK government's Online Harms White Paper, I concluded that if the road to hell was paved with good intentions, this was a motorway.

Nearly two weeks on, after full and further consideration, I have found nothing to alter that view. This is why.

The White Paper

First, a reminder of what the White Paper proposes. The government intends to legislate for a statutory ‘duty of care’ on social media platforms and a wide range of other internet companies that "allow users to share or discover user-generated content, or interact with each other online". This could range from public discussion forums to sites carrying user reviews, to search engines, messaging providers, file sharing sites, cloud hosting providers and many others. 

The duty of care would require them to “take more responsibility for the safety of their users and tackle harm caused by content or activity on their services”. This would apply not only to illegal content and activities, but also to lawful material regarded as harmful.

The duty of care would be overseen and enforced by a new regulator, Ofweb, armed with power to fine companies for non-compliance.
Ofweb would set out rules in Codes of Practice that the intermediary companies should follow to comply with their duty of care. For terrorism and child sexual abuse material the Home Secretary would have direct control over the relevant Codes of Practice.

Users would get a guaranteed complaints mechanism to the intermediary companies. The government is consulting on the possibility of appointing designated organisations who would be able to make ‘super-complaints’ to the regulator.

Whilst framed as regulation of tech companies, the White Paper’s target is the activities and communications of online users. Ofweb would regulate social media and internet users at one remove. It would be an online sheriff armed with the power to decide and police, via its online intermediary deputies, what users can and cannot say online.

Which lawful content would count as harmful is not defined. The White Paper provides an ‘initial’ list of content and behaviour that would be in scope: cyberbullying and trolling; extremist content and activity; coercive behaviour; intimidation; disinformation; violent content; advocacy of self-harm; promotion of Female Genital Mutilation (FGM).

This is not a list that could readily be transposed into legislation, even if that were the government’s intention. Some of the topics – FGM, for instance – are more specific than others. But most are almost as unclear as ‘harmful’ itself. For instance the White Paper gives no indication as to what would amount to trolling. It says only that ‘cyberbullying, including trolling, is unacceptable’. It could as well have said ‘behaving badly is unacceptable’.

In any event the White Paper leaves the strong impression that the legislation would eschew even that level of specificity and build the regulatory structure simply on the concept of ‘harmful’.

The White Paper does not say in terms how the ‘initial’ list of content and behaviour in scope would be extended. It seems that the regulator would decide:
“This list is, by design, neither exhaustive nor fixed. A static list could prevent swift regulatory action to address new forms of online harm, new technologies, content and new online activities.” [2.2]
In that event Ofweb would effectively have the power to decide what should and should not be regarded as harmful.

The White Paper proposes some exclusions: harms suffered by companies as opposed to individuals, data protection breaches, harms suffered by individuals resulting directly from a breach of cyber security or hacking, and all harms suffered by individuals on the dark web rather than the open internet.

Good intentions

The White Paper is suffused with good intentions. It sets out to forge a single sword of truth and righteousness with which to assail all manner of online content from terrorist propaganda to offensive material.

However, flying a virtuous banner is no guarantee that the army is marching in the right direction. Nor does it preclude the possibility that specialised units would be more effective.

The government presents this all-encompassing approach as a virtue, contrasted with:
“a range of UK regulations aimed at specific online harms or services in scope of the White Paper, but [which] creates a fragmented regulatory environment which is insufficient to meet the full breadth of the challenges we face” [2.5].
An aversion to fragmentation is like saying that, instead of the framework of criminal offences and civil liability focused on specific kinds of conduct that makes up our mosaic of offline laws, we should have a single offence of Behaving Badly.

We could not contemplate such a universal offence with equanimity. A Law against Behaving Badly would be so open to subjective and arbitrary interpretation as to be the opposite of law: rule by ad hoc command. Assuredly it would fail to satisfy the rule of law requirement of reasonable certainty. By the same token we should treat with suspicion anything that smacks of a universal Law against Behaving Badly Online.

In placing an undefined and unbounded notion of harm at the centre of its proposals for a universal duty of care, the government has set off down that path.

Three degrees of undefined harm

Harm is an amorphous concept. It changes shape according to the opinion of whoever is empowered to apply it: in the government’s proposal, Ofweb.

Even when limited to harm suffered by an individual, harm is an ambiguous term. It will certainly include objectively ascertainable physical injury – the kind of harm to which comparable offline duties of care are addressed.

But it may also include subjective harms, dependent on someone’s own opinion that they have suffered what they regard as harm. When applied to speech, this is highly problematic. One person may enjoy reading a piece of searing prose. Another may be distressed. How is harm, or the risk of harm, to be determined when different people react in different ways to what they are reading or hearing? Is distress enough to render something harmful? What about mild upset, or moderate annoyance? Does offensiveness inflict harm? At its most fundamental, is speech violence? 

‘Harm’ as such has no identifiable boundaries, at least none that would pass a legislative certainty test.

This is particularly evident in the White Paper’s discussion of Disinformation. In the context of anti-vaccination the White Paper notes that “Inaccurate information, regardless of intent, can be harmful”.

Having equated inaccuracy with harm, the White Paper contradictorily claims that the regulator and its online intermediary proxies can protect users from harm without policing truth or accuracy:
“We are clear that the regulator will not be responsible for policing truth and accuracy online.” [36] 
“Importantly, the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not.” [7.31]
The White Paper acknowledges that:
“There will be difficult judgement calls associated with this. The government and the future regulator will engage extensively with civil society, industry and other groups to ensure action is as effective as possible, and does not detract from freedom of speech online” [7.31]
The contradiction is not something that can be cured by getting some interested parties around a table. It is the cleft stick into which a proposal of this kind inevitably wedges itself, and from which there is no escape.

A third variety of harm, yet more nebulous, can be put under the heading of ‘harm to society’. This kind of harm does not depend on identifying an individual who might be directly harmed. It tends towards pure abstraction, malleable at the will of the interpreting authority.

Harms to society feature heavily in the White Paper: for example, content or activity that:
“threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration.”
Similarly:
“undermine our democratic values and debate”;

“encouraging us to make decisions that could damage our health, undermining our respect and tolerance for each other and confusing our understanding of what is happening in the wider world.”
This kind of prose may befit the soapbox or an election manifesto, but has no place in or near legislation.

Democratic deficit

One particular concern is the potential for a duty of care supervised by a regulator and based on a malleable notion of harm to be used as a mechanism to give effect to some Ministerial policy of the day, without the need to obtain legislation.

Thus, two weeks before the release of the White Paper, Health Secretary Matt Hancock suggested that anti-vaxxers could be targeted via the forthcoming duty of care.

The White Paper duly recorded, under “Threats to our way of life”, that “Inaccurate information, regardless of intent, can be harmful – for example the spread of inaccurate anti-vaccination messaging online poses a risk to public health.” [1.23]

If a Secretary of State decides that he wants to silence anti-vaxxers, the right way to go about it is to present a Bill to Parliament, have it debated and, if Parliament agrees, pass it into law. The structure envisaged by the White Paper would create a channel whereby an ad hoc Ministerial policy to silence a particular group or kind of speech could be framed as combating an online harm, pushed to the regulator then implemented by its online intermediary proxies. Such a scheme has democratic deficit hard baked into it.

Perhaps in recognition of this, the government is consulting on whether Parliament should play a role in developing or approving Ofweb’s Codes of Practice. That, however, smacks more of sticking plaster than cure.

Impermissible vagueness

Building a regulatory structure on a non-specific notion of harm is not a matter of mere ambiguity, where some word in an otherwise unimpeachable statute might mean one thing or another and the court has to decide which it is. It strays beyond ambiguity into vagueness and gives rise to rule of law issues.

The problem with vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:
"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."
Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."
Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

If the duty of care is based on an impermissibly vague concept such as ‘harm’, then the legislation has a rule of law problem. It is not necessarily cured by empowering the regulator to clothe the skeleton with codes of practice and interpretations, for three reasons: 

First, impermissibly vague legislation does not provide a skeleton at all – more of a canvas on to which the regulator can paint at will; 

Second, if it is objectionable for the legislature to delegate basic policy matters to policemen, judges and juries it is unclear why it is any less objectionable to do so to a regulator; 

Third, regulator-made law is a moveable feast.

All power to the sheriff

From a rule of law perspective undefined harm ought not to take centre stage in legislation.

However if the very idea is to maximise the power and discretion of a regulator, then inherent vagueness in the legislation serves the purpose very well. The vaguer the remit, the more power is handed to the regulator to devise policy and make law.

John Humphrys, perhaps unwittingly, put his finger on it during the Today programme on 8 April 2019 (4:00 onwards). Joy Hyvarinen of Index on Censorship pointed out how broadly Ofcom had interpreted harm in its 2018 survey, to which John Humphrys retorted: “You deal with that by defining [harm] more specifically, surely". 

That would indeed be an improvement. But what interest would a government intent on creating a powerful regulator, not restricted to a static list of in-scope content and behaviour, have in cramping the regulator’s style with strict rules and carefully limited definitions of harm? In this scheme of things breadth and vagueness are not faults but a means to an end.

There is a precedent for this kind of approach in broadcast regulation. The Communications Act 2003 refers to 'offensive and harmful', makes no attempt to define them and leaves it to Ofcom to decide what they mean. Ofcom is charged with achieving the objective: 
“that generally accepted standards are applied to the contents of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.
William Perrin and Professor Lorna Woods, whose work on duties of care has influenced the White Paper, say of the 2003 Act that: 
"competent regulators have had little difficulty in working out what harm means" [37]. 
They endorse Baroness Grender’s contribution to a House of Lords debate in November 2018, in which she asked: 
"Why did we understand what we meant by "harm" in 2003 but appear to ask what it is today?"
The answer is that in 2003 the legislators did not have to understand what the vague term 'harm' meant because they gave Ofcom the power to decide. It is no surprise if Ofcom has had little difficulty, since it is in reality not 'working out what harm means' but deciding on its own meanings. It is, in effect, performing a delegated legislative function.

Ofweb would be in the same position, effectively exercising a delegated power to decide what is and is not harmful.

Broadcast regulation is an exception from the norm that speech is governed only by the general law. Because of its origins in spectrum scarcity and the perceived power of the medium, it has been considered acceptable to impose stricter content rules and a discretionary style of regulation on broadcast, in addition to the general laws (defamation, obscenity and so on) that apply to all speech.

That does not, however, mean that a similar approach is appropriate for individual speech. Vagueness goes hand in hand with arbitrary exercise of power. If this government had set out to build a scaffold from which to hang individual online speech, it could hardly have done better.

The duty of care that isn’t

Lastly, it is notable that as far as can be discerned from the White Paper the proposed duty of care is not really a duty of care at all.

A duty of care properly so called is a legal duty owed to identifiable persons. They can claim damages if they suffer injury caused by a breach of the duty. Common law negligence and liability under the Occupiers’ Liability Act 1957 are examples. These are typically limited to personal injury and damage to physical property; and only rarely impose a duty on, say, an occupier, to prevent visitors injuring each other. An occupier owes no duty in respect of what visitors to the property say to each other.

The absence in the White Paper of any nexus between the duty of care and individual persons would allow Ofweb’s remit to be extended beyond injury to individuals and into the nebulous realm of harms to society. That, as discussed above, is what the White Paper proposes.

Occasionally a statute creates something that it calls a duty of care, but which in reality describes a duty owed to no-one in particular, breach of which is (for instance) a criminal offence.

An example is s.34 of the Environmental Protection Act 1990, which creates a statutory duty in respect of waste disposal. As would be expected of such a statute, s.34 is precise about the conduct that is in scope of the duty. In contrast, the White Paper proposes what is in effect a universal online ‘Behaving Badly’ law.

Even though the Secretary of State referred in a recent letter to the Society of Editors to “A duty of care between companies and their users”, the ‘duty of care’ described in the White Paper is something quite different from a duty of care properly so called.

The White Paper’s duty of care is a label applied to a regulatory framework that would give Ofweb discretion to decide what user communications and activities on the internet should be deemed harmful, the power to enlist proxies such as social media companies to sniff and snuff them out, and the power to take action against an in-scope company if it does not comply.

This is a mechanism for control of individual speech such as would not be contemplated offline and is fundamentally unsuited to what individuals do and say online.

All the signs are that the government will shortly propose a duty of care on social media platforms aimed at reducing the risk of harm to users.
DCMS Secretary of State Jeremy Wright wrote recently:
"A world in which harms offline are controlled but the same harms online aren’t is not sustainable now…". 
The House of Lords Communications Committee invoked a similar 'parity principle':
"The same level of protection must be provided online as offline."
Notwithstanding that the duty of care concept is framed as a transposition of offline duties of care to online, proposals for a social media duty of care will almost certainly go significantly beyond any comparable offline duty of care.

When we examine safety-related duties of care owed by operators of offline public spaces to their visitors, we find that they:
(a) are restricted to objectively ascertainable injury,
(b) rarely impose liability for what visitors do to each other,
(c) do not impose liability for what visitors say to each other. 

The social media duties of care that have been publicly discussed so far breach all three of these barriers. They relate to subjective harms and are about what users do and say to each other. Nor are they restricted to activities that are unlawful as between the users themselves.

The substantive merits and demerits of any proposed social media duty of care will no doubt be hotly debated. But the likely scope of a duty of care raises a prior rule of law issue. The more broadly a duty of care is framed, the greater the risk that it will stray into impermissible vagueness.

The rule of law objection to vagueness was spelt out by the House of Lords in R v Rimmington, citing the US case of Grayned:
"Vagueness offends several important values … A vague law impermissibly delegates basic policy matters to policemen, judges and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application."  

Whilst most often applied to criminal liability, the objection to vagueness is more fundamental than that. It is a constitutional principle that applies to the law generally. Lord Diplock referred to it in a 1975 civil case (Black-Clawson):
"The acceptance of the rule of law as a constitutional principle requires that a citizen, before committing himself to any course of action, should be able to know in advance what are the legal consequences that will flow from it."

Certainty is a particular concern with a law that has consequences for individuals' speech. In the context of a social media duty of care the rule of law requires that users must be able to know with reasonable certainty in advance what of their speech is liable to be the subject of preventive or mitigating action by a platform operator subject to the duty of care.

With all this in mind, I propose a ten point rule of law test by which the government’s proposals, when they appear, may be evaluated. These tests are not about the merits or demerits of the content of any proposed duty of care as such, although of course how the scope and substance of any duty of care is defined will be central to the core rule of law questions of certainty and precision.

These tests are in the nature of a precondition: is the duty of care framed with sufficient certainty and precision to be acceptable as law, particularly bearing in mind potential consequences for individual speech?

It is, for instance, possible for scope to be both broad and clear. That would pass the rule of law test, but might still be objectionable on its merits. But if the scope does not surmount the rule of law threshold of certainty and precision it ought to fall at that first hurdle.

My proposed tests are whether there is sufficient certainty and precision as to:

1. Which operators are and are not subject to the duty of care.
2. To whom the duty of care is owed.
3. What kinds of effect on a recipient will and will not be regarded as harmful.
4. What speech or conduct by a user will and will not be taken to cause such harm.
5. If risk to a hypothetical recipient of the speech or conduct in question is sufficient, how much risk suffices and what are the assumed characteristics of the notional recipient.
6. Whether the risk of any particular harm has to be causally connected (and if so how closely) to the presence of some particular feature of the platform.
7. What circumstances would trigger an operator's duty to take preventive or mitigating steps.
8. What steps the duty of care would require the operator to take to prevent or mitigate harm (or a perceived risk of harm).
9. How any steps required by the duty of care would affect users who would not be harmed by the speech or conduct in question.
10. Whether a risk of collateral damage to lawful speech or conduct (and if so how great a risk of how extensive damage), would negate the duty of care.

These tests are framed in terms of harms to individuals. Some may object that ‘harm’ should be viewed collectively. From a rule of law perspective it should hardly need saying that constructs such as (for example) harm to society or harm to culture are hopelessly vague.

One likely riposte to objections of vagueness is that a regulator will be empowered to decide on the detailed rules. Indeed it will no doubt be argued that flexibility on the part of a regulator, given a set of high level principles to work with, is beneficial. There are at least two objections to that.

First, the regulator is not an alchemist. It may be able to produce ad hoc and subjective applications of vague precepts, and even to frame them as rules, but the moving hand of the regulator cannot transmute base metal into gold. Its very raison d'etre is flexibility, discretionary power and nimbleness. Those are a vice, not a virtue, where the rule of law is concerned, particularly when freedom of individual speech is at stake.

Second, if the vice of vagueness is potential for arbitrariness, then it is unclear how Parliament delegating policy matters to an independent regulator is any more acceptable than delegating them to a policeman, judge or jury. It compounds, rather than cures, the vice.

Close scrutiny of any proposed social media duty of care from a rule of law perspective can help ensure that we make good law for bad people rather than bad law for good people.



A bumper crop of pending litigation and legislative initiatives for the coming year (without even thinking about Brexit).

EU copyright reform

- The proposed Directive on Copyright in the Digital Single Market is currently embroiled in trilogue discussions between Commission, Council and Parliament. It continues to excite controversy over the publishers’ ancillary right and the clash between Article 13 and the ECommerce Directive's intermediary liability provisions.
- Political agreement was reached on 13 December 2018 on a Regulation extending the country of origin provisions of the Satellite and Cable Broadcasting Directive to online radio and news broadcasts. Formal approval of a definitive text should follow in due course.
EU online business The European Commission has proposed a Regulation on promoting fairness and transparency for business users of online intermediation services. It would lay down transparency and redress rules for the benefit of business users of online intermediation services and of corporate website users of online search engines. The legislation would cover online marketplaces, online software application stores, online social media and search engines. The Council of the EU reached a common position on the draft Regulation on 29 November 2018.
Telecoms privacy The proposed EU ePrivacy Regulation continues to make a choppy voyage through the EU legislative process.
Intermediary liability The UK government has published its Internet Safety Strategy Green Paper, the precursor to a White Paper to be published in winter 2018-2019 which will include intermediary liability, duties and responsibilities. In parallel the House of Lords Communications Committee is conducting an inquiry on internet regulation, including intermediary liability. A House of Commons Committee examining Disinformation and Fake News has also touched on the topic. Before that the UK Committee on Standards in Public Life suggested that Brexit presents an opportunity to depart from the intermediary liability protections of the ECommerce Directive.
On 12 September 2018 the European Commission published a Proposal for a Regulation on preventing the dissemination of terrorist content online. This followed its September 2017 Communication on Tackling Illegal Content Online and March 2018 Recommendation on Measures to Effectively Tackle Illegal Content Online. It is notable for one hour takedown response times and the ability for Member States to derogate from the ECommerce Directive Article 15 prohibition on imposing general monitoring obligations on conduits, caches and hosts.
The Austrian Supreme Court has referred to the CJEU questions on whether a hosting intermediary can be required to prevent access to similar content and on extraterritoriality (C-18/18 - Glawischnig-Piesczek). The German Federal Supreme Court has referred two cases (YouTube and Uploaded) to the CJEU asking questions about (among other things) the applicability of the ECommerce Directive intermediary protections to UGC sharing sites.
Pending CJEU copyright cases Several copyright references are pending in the EU Court of Justice. Issues under consideration include whether the EU Charter of Fundamental Rights can be relied upon to justify exceptions or limitations beyond those in the Copyright Directive (Spiegel Online GmbH v Volker Beck, C-516/17; Funke Medien, C-469/17 (Advocate General Opinion 25 October 2018 here); and Pelham, C-476/17); and whether a link to a PDF amounts to publication for the purposes of the quotation exception (Spiegel Online GmbH v Volker Beck, C-516/17). The Dutch Tom Kabinet case on secondhand e-book trading has been referred to the CJEU (Case C-263/18). The YouTube and Uploaded cases pending from the German Federal Supreme Court include questions around the communication to the public right.
Online pornography The Digital Economy Act 2017 grants powers to a regulator (subsequently designated to be the British Board of Film Classification) to determine age control mechanisms for internet sites that make ‘R18’ pornography available; and to direct ISPs to block such sites that either do not comply with age verification or contain material that would not be granted an R18 certificate. The process of putting in place the administrative arrangements is continuing.
Cross-border liability and jurisdiction The French CNIL/Google case on search engine de-indexing has raised significant issues on extraterritoriality, including whether Google can be required to de-index on a global basis. The Conseil d'Etat has referred various questions about this to the CJEU. C-18/18 Glawischnig-Piesczek, a reference from the Austrian Supreme Court, also raises territoriality questions in the context of Article 15 of the ECommerce Directive.
In the law enforcement field the EU has proposed a Regulation on EU Production and Preservation Orders (the ‘e-Evidence Regulation’) and an associated Directive that would set up a regime for some cross-border requests direct to service providers. The UK has said that it will not opt in to the Regulation. US-UK bilateral negotiations on direct cross-border access to data are continuing. The Crime (Overseas Production Orders) Bill, which would put in place a mechanism enabling UK authorities to make cross-border requests under such a bilateral agreement, is progressing through Parliament.
Online state surveillance The UK’s Investigatory Powers Act 2016 (IP Act) has come almost completely into force, including amendments following the Watson/Tele2 decision of the CJEU. However the arrangements for a new Office for Communications Data Authorisation to approve requests for communications data have yet to be put in place.
Meanwhile a pending reference to the CJEU from the Investigatory Powers Tribunal raises questions as to whether the Watson decision applies to national security, and if so how; whether mandatorily retained data have to be held within the EU; and whether those whose data have been accessed have to be notified.
Liberty has a pending judicial review of the IP Act bulk powers and data retention powers. It has been granted permission to appeal to the Court of Appeal on the question whether the data retention powers constitute illegitimate generalised and indiscriminate retention.
The IP Act (in particular the bulk powers provisions) may be indirectly affected by cases in the CJEU (challenges to the EU-US Privacy Shield), in the European Court of Human Rights (in which Big Brother Watch and various other NGOs challenge the existing RIPA bulk interception regime) and by a judicial review by Privacy International of an Investigatory Powers Tribunal decision on equipment interference powers.
The ECtHR gave a Chamber judgment in the BBW case on 13 September 2018. If the judgment becomes final it could affect the IP Act in as many as three separate ways. The NGOs have lodged an application for the judgment to be referred to the ECtHR Grand Chamber.
In the Privacy International equipment interference case, the Court of Appeal has held that the Investigatory Powers Tribunal decision is not susceptible of judicial review. A further appeal has been heard by the Supreme Court. Judgment is awaited.
Compliance of the UK’s surveillance laws with EU Charter fundamental rights will be a factor in any data protection adequacy decision that is sought once the UK becomes a non-EU third country post-Brexit.


Never trust version 1.0 of any software. Wait until the bugs have been ironed out, only then open your wallet.

The same is becoming true of the UK’s surveillance legislation. No sooner was the ink dry on the Investigatory Powers Act 2016 (IP Act) than the first bugs, located in the communications data retention module, were exposed by the judgment of the EU Court of Justice (CJEU) in Tele2/Watson.

After considerable delay in issuing required fixes, Version 1.1 is currently making its way through Parliament. The pending amendments to the Act make two main changes. They restrict to serious crime the crime-related purposes for which the authorities may demand access to mandatorily retained data, and they introduce prior independent authorisation for non-national security demands.

It remains uncertain whether more changes to the data retention regime will be required in order to comply with the Tele2/Watson judgment.  That should become clearer after the outcome of Liberty’s appeal to the Court of Appeal in its judicial review of the Act and various pending references to the CJEU.

Meanwhile the recent Strasbourg judgment in Big Brother Watch v UK has exposed a separate set of flaws in the IP Act’s predecessor legislation, the Regulation of Investigatory Powers Act 2000 (RIPA). These were in the bulk interception and communications data acquisition modules. To the extent that the flaws have been carried through into the new legislation, fixing them may require the IP Act to be patched with a new Version 1.2.

The BBW judgment does not read directly on to the IP Act. The new legislation is much more detailed than RIPA and introduces the significant improvement that warrants have to be approved by an independent Judicial Commissioner.  Nevertheless, the BBW judgment contains significant implications for the IP Act. 

The Court found that three specific aspects of RIPA violated the European Convention on Human Rights:
  • Lack of robust end to end oversight of bulk interception acquisition, selection and searching processes
  • Lack of controls on use of communications data acquired from bulk interception
  • Insufficient safeguards on access to journalistically privileged material, under both the bulk interception regime and the ordinary communications data acquisition regime

End to end oversight

The bulk interception process starts with selection of the bearers (cables or channels within cables) that will be tapped.  It culminates in various data stores that can be queried by analysts or used as raw material for computer analytics. In between are automated processes for filtering, selecting and analysing the material acquired from the bearers. Some of these processes operate in real time or near real time, others are applied to stored material and take longer. Computerised processes will evolve as available technology develops.

The Court was concerned about lack of robust oversight under RIPA throughout all the stages, but especially selection and search criteria used for filtering. Post factum audit by the Interception of Communications Commissioner was judged insufficient.

For its understanding of the processes the Court relied upon a combination of sources: the Interception Code of Practice under RIPA, the Intelligence and Security Committee Report of March 2015, the Investigatory Powers Tribunal judgment of 5 December 2014 in proceedings brought by Liberty and others, and the Government’s submissions in the Strasbourg proceedings. The Court described the processes thus:

“…there are four distinct stages to the section 8(4) regime:

1.  The interception of a small percentage of Internet bearers, selected as being those most likely to carry external communications of intelligence value.
2.  The filtering and automatic discarding (in near real-time) of a significant percentage of intercepted communications, being the traffic least likely to be of intelligence value.
3.  The application of simple and complex search criteria (by computer) to the remaining communications, with those that match the relevant selectors being retained and those that do not being discarded.
4.  The examination of some (if not all) of the retained material by an analyst.”

The reference to a ‘small percentage’ of internet bearers derives from the March 2015 ISC Report. Earlier in the judgment the Court said:

“… GCHQ’s bulk interception systems operated on a very small percentage of the bearers that made up the Internet and the ISC was satisfied that GCHQ applied levels of filtering and selection such that only a certain amount of the material on those bearers was collected.”

Two points about this passage are worthy of comment. First, while the selected bearers may make up a very small percentage of the estimated 100,000 bearers that make up the global internet (judgment, [9]), that is not the same thing as the percentage of bearers that land in the UK.
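A toy calculation makes the distinction concrete. The figures below are invented for illustration; only the estimate of roughly 100,000 global bearers comes from the judgment:

    # Invented figures: only the ~100,000 global bearer estimate is from the judgment.
    global_bearers = 100_000
    uk_landing_bearers = 2_000   # assumption, for illustration only
    tapped_bearers = 1_500       # assumption, for illustration only

    print(f"Share of global bearers tapped:     {tapped_bearers / global_bearers:.1%}")      # 1.5%
    print(f"Share of UK-landing bearers tapped: {tapped_bearers / uk_landing_bearers:.0%}")  # 75%

On numbers like these, a ‘very small percentage’ of the global internet could nonetheless amount to a very large proportion of the bearers that land in the UK.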

Second, the ISC report is unclear about how far, if at all, filtering and selection processes are applied not just to content but also to communications data (metadata) extracted from intercepted material. Whilst the report describes filtering, automated searches on communications using complex criteria and analysts performing additional bespoke searches, it also says:

Related CD (RCD) from interception: GCHQ’s principal source of CD is as a by-product of their interception activities, i.e. when GCHQ intercept a bearer, they extract all CD from that bearer. This is known as ‘Related CD’. GCHQ extract all the RCD from all the bearers they access through their bulk interception capabilities.” (emphasis added)

The impression that collection of related communications data may not be filtered is reinforced by the Snowden documents, which referred to several databases derived from bulk interception and which contained very large volumes of non-content events data. The prototype KARMA POLICE, a dataset focused on website browsing histories, was said to comprise 17.8 billion rows of data, representing 3 months’ collection. (The existence or otherwise of KARMA POLICE and similar databases has not been officially acknowledged, although the then Interception of Communications Commissioner in his 2014 Annual Report reported that he had made recommendations to interception agencies about retention periods for related communications data.)

The ISC was also “surprised to discover that the primary value to GCHQ of bulk interception was not in reading the actual content of communications, but in the information associated with those communications.”

If it is right that little or no filtering is applied to collection of related communications data (or secondary data as it is known in the IP Act), then the overall end to end process would look something like this (the diagram draws on Snowden documents published by The Intercept as well as the sources already mentioned):
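In code, that process might be modelled as follows. This is an illustrative sketch only: the stages follow the Court’s four-stage description quoted above, the unfiltered extraction of related communications data (RCD) follows the ISC report, and every name is invented.

    # Illustrative model only; no real system is described.
    def bulk_intercept(bearers, selectors):
        content_store, rcd_store = [], []
        # Stage 1: tap only bearers assessed as likely to carry communications
        # of intelligence value.
        tapped = [b for b in bearers if b["likely_intel_value"]]
        for bearer in tapped:
            for comm in bearer["traffic"]:
                # Per the ISC report: all RCD is extracted from every tapped
                # bearer, with no filtering.
                rcd_store.append(comm["metadata"])
                # Stage 2: filter and discard low-value traffic in near real time.
                if comm["low_intel_value"]:
                    continue
                # Stage 3: apply simple and complex selectors; non-matches discarded.
                if any(s in comm["content"] for s in selectors):
                    # Stage 4: retained material, examinable by analysts.
                    content_store.append(comm)
        return content_store, rcd_store

    bearers = [{"likely_intel_value": True,
                "traffic": [
                    {"metadata": "m1", "low_intel_value": False, "content": "attack plan"},
                    {"metadata": "m2", "low_intel_value": True, "content": "cat video"},
                ]}]
    content, rcd = bulk_intercept(bearers, selectors=["attack"])
    print(len(content), "communications retained;", len(rcd), "RCD records kept")  # 1 and 2

Even in this crude model the asymmetry is visible: content passes through two layers of discard, while the metadata tap has none.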


Returning to the BBW judgment, the Court’s concerns related to intercepted ‘communications’ and ‘material’:

“the lack of oversight of the entire selection process, including the selection of bearers for interception, the selectors and search criteria for filtering intercepted communications, and the selection of material for examination by an analyst…”

There is no obvious reason to limit those observations to content. Elsewhere in the judgment the Court was “not persuaded that the acquisition of related communications data is necessarily less intrusive than the acquisition of content” and went on:

“The related communications data … could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with…”.

The Court went on to make specific criticisms of RIPA’s lack of restrictions on the use of related communications data, as discussed below.

What does the Court’s finding on end to end oversight mean for the IP Act? The Act introduces independent approval of warrants by Judicial Commissioners, but does it create the robust oversight of the end to end process, particularly of selectors and search criteria, that the Strasbourg Court requires?

The March 2015 ISC Report recommended that the oversight body be given express authority to review the selection of bearers, the application of simple selectors and initial search criteria, and the complex searches which determine which communications are read. David Anderson Q.C.'s (now Lord Anderson) Bulk Powers Review records (para 2.26(g)) an assurance given by the Home Office that that authority is inherent in clauses 205 and 211 of the Bill (now sections 229 and 235 of the IP Act).

Beyond that, under the IP Act the Judicial Commissioners have to consider at the warrant approval stage the necessity and proportionality of conduct authorised by a bulk warrant. Arguably that includes all four stages identified by the Strasbourg Court (see my submission to IPCO earlier this year). If that is right, the RIPA gap may have been partially filled.

However, the IP Act does not specify in terms that selectors and search criteria have to be reviewed. Moreover, focusing on those particular techniques already seems faintly old-fashioned. The Bulk Powers Review reveals the extent to which more sophisticated analytical techniques such as anomaly detection and pattern analysis are brought to bear on intercepted material, particularly communications data. Robust end to end oversight ought to cover these techniques as well as use of selectors and automated queries.  

The remainder of the gap could perhaps be filled by an explanation of how closely the Judicial Commissioners oversee the various selection, searching and other analytical processes.

Filling this gap may not necessarily require amendment of the IP Act, although it would be preferable if it were set out in black and white. It could perhaps be filled by an IPCO advisory notice: first as to its understanding of the relevant requirements of the Act; and second explaining how that translates into practical oversight, as part of bulk warrant approval or otherwise, of the end to end stages involved in bulk interception (and indeed the other bulk powers).

Related Communications Data/Secondary Data

The diagram above shows how communications data can be obtained from bulk interception. Under RIPA this was known as Related Communications Data. In the IP Act it is known as Secondary Data. Unlike RIPA, the IP Act specifies a category of bulk warrant that extracts secondary data alone (without content) from bearers.  However, the IP Act definition of secondary data also permits some items of content to be extracted from communications and treated as communications data.

Like RIPA, the IP Act contains few specific restrictions on the use to which secondary data can be put. It may be examined for a reason falling within the overall statutory purposes and subject to necessity and proportionality. The IP Act adds the requirement that the reason be within the operational purposes (which can be broad) specified in the bulk warrant. As with RIPA, the restriction that the purpose of the bulk interception must be overseas-related does not apply at the examination stage. Like RIPA, there is a requirement to obtain specific authority (a targeted examination warrant, in the case of the IP Act) to select for examination the communications of someone known to be within the British Islands. But like RIPA this applies only to content, not to secondary data.

RIPA’s lack of restriction on examining related communications data was challenged in the Investigatory Powers Tribunal. The government argued (and did so again in the Strasbourg proceedings) that this was necessary in order to be able to determine whether a target was within the British Islands, and hence whether it was necessary to apply for specific authority from the Secretary of State to examine the content of the target’s communications.

The IPT accepted this argument, holding that the difference in the restrictions was justified and proportionate by virtue of the need to be able to determine whether a target was within the British Islands. It rejected as “an impossibly complicated or convoluted course” the suggestion that RIPA could have provided a specific exception to provide for the use of metadata for that purpose.

That, however, left open the question of all the other uses to which metadata could be put. If the Snowden documents referred to above are any guide, those uses are manifold.  Bulk intercepted metadata would hardly be of primary value to GCHQ, as described by the ISC, if its use were restricted to ascertaining whether a target was within or outside the British Islands.

The Strasbourg Court identified this gap in RIPA and held that the absence of restrictions on examining related communications data was a ground on which RIPA violated the ECHR.

The Court accepted that related communications data should be capable of being used in order to ascertain whether a target was within or outside the British Islands. It also accepted that that should not be the only use to which it could be put, since that would impose a stricter regime than for content.

But it found that there should nevertheless be “sufficient safeguards in place to ensure that the exemption of related communications data from the requirements of section 16 of RIPA is limited to the extent necessary to determine whether an individual is, for the time being, in the British Islands.”

Transposed to the IP Act, this could require a structure for selecting secondary data for examination along the following lines:
  • Selection permitted in order to determine whether an individual is, for the time being, in the British Islands.
  • Targeted examination warrant required if (a) any criteria used for the selection of the secondary data for examination are referable to an individual known to be in the British Islands, and (b) the purpose of using those criteria is to identify secondary data or content relating to communications sent by, or intended for, that individual.
  • Otherwise: selection of secondary data permitted (but subject to the robust end to end oversight requirements discussed above).
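In substance that is a three-limb decision procedure. A minimal sketch in Python may make it concrete; the class and function names are invented for this illustration, and nothing below is drawn from the Act or the judgment:

```python
from dataclasses import dataclass

# Illustrative only: these names do not appear in the IP Act or the judgment.
@dataclass
class SelectionRequest:
    # Purpose is to determine whether an individual is, for the time
    # being, in the British Islands.
    location_check: bool
    # Selection criteria are referable to an individual known to be in
    # the British Islands.
    criteria_referable_to_bi_individual: bool
    # Purpose is to identify secondary data or content relating to that
    # individual's communications.
    purpose_is_their_communications: bool

def examination_route(req: SelectionRequest) -> str:
    """Apply the three-limb structure suggested above (illustrative only)."""
    if req.location_check:
        return "selection permitted: British Islands location check"
    if req.criteria_referable_to_bi_individual and req.purpose_is_their_communications:
        return "targeted examination warrant required"
    return "selection permitted, subject to robust end to end oversight"

# Example: criteria referable to someone known to be in the UK, used to
# find that person's communications.
print(examination_route(SelectionRequest(False, True, True)))
# -> targeted examination warrant required
```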

Although the Court speaks only of sufficient safeguards, it is difficult to see how this could be implemented without amendment of the IP Act.

Journalistic privilege

The Court found RIPA lacking in two areas: bulk interception (for both content and related communications data) and ordinary communications data acquisition. The task of determining to what extent the IP Act remedies the deficiencies is complex. However, in the light of the comparisons below it seems likely that at least some amendments to the legislation will be necessary.

Bulk interception
For bulk interception, the Court was particularly concerned that there were no requirements either:
  • circumscribing the intelligence services’ power to search for confidential journalistic or other material (for example, by using a journalist’s email address as a selector),
  • requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved.

Consequently, the Court said, it would appear that analysts could search and examine without restriction both the content and the related communications data of those intercepted communications.

For targeted examination warrants the IP Act itself contains some safeguards relating to retention and disclosure of material where the purpose, or one of the purposes, of the warrant is to authorise the selection for examination of journalistic material which the intercepting authority believes is confidential journalistic material. Similar provisions apply if the purpose, or one of the purposes, of the warrant is to identify or confirm a source of journalistic information.

Where a targeted examination warrant is unnecessary the Interception Code of Practice provides for corresponding authorisations and safeguards by a senior official outside the intercepting agency.

Where a communication intercepted under a bulk warrant is retained following examination and it contains confidential journalistic material, the Investigatory Powers Commissioner must be informed as soon as reasonably practicable.

Unlike RIPA, S.2 of the IP Act contains a general provision requiring public authorities to have regard to the particular sensitivity of any information, including confidential journalistic material and the identity of a journalist’s source.

Whilst these provisions are an improvement on RIPA, it will be open to debate whether they are sufficient, particularly since the specific safeguards relate to arrangements for handling, retention, use and destruction of the communications rather than to search and selection.

Bulk communications data acquisition
The IP Act introduces a new bulk communications data acquisition warrant to replace directions under S.94 of the Telecommunications Act 1984. S.94 was not considered in the BBW case. The IP Act bulk power contains no provisions specifically protecting journalistic privilege. The Code of Practice expands on the general provisions in S.2 of the Act.

Ordinary communications data acquisition
The RIPA Code of Practice required an application to a judge under PACE 1984 where the purpose of the application was to determine a source. The Strasbourg court criticised this on the basis that it did not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely.

The IP Act contains a specific provision requiring a public authority to seek the approval of the Investigatory Powers Commissioner to obtain communications data for the purpose of identifying or confirming a source of journalistic information. This provision appears to suffer the same narrowness of scope criticised by the Strasbourg Court.


Should social media platforms be subject to a statutory duty of care, akin to occupiers’ liability or health and safety, with the aim of protecting against online harms? In a series of blogposts and evidence to the House of Lords Communications Committee William Perrin and Professor Lorna Woods suggest that the answer should be yes. They say in their evidence:

“A common comparison is that social media services are “like a publisher”. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as a public place – like an office, bar, or theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things. In our view, they should be protected from harm when they do so. [25]
The law has proven very good at this type of protection in the physical realm. Workspaces, public spaces, even houses, in the UK owned or supplied by companies have to be safe for the people who use them. The law imposes a “duty of care” on the owners of those spaces. The company must take reasonable measures to prevent harm.” [26]
The aim of this post is to explore the comparability of offline duties of care, focusing on the duties of care owed by occupiers of physical public spaces to their visitors.
From the earliest days of the internet people have looked to offline analogies in the search for legal regimes suitable for the online world. Book and print distributors, with their intermediary role in disseminating information, were an obvious model for discussion forums and bulletin boards, the forerunners of today’s social media platforms.  The liability of distributors for the content of the materials they carried was limited. The EU Electronic Commerce Directive applied a broadly similar liability model to a wide range of online hosting activities including on social media platforms.
The principle of offline and online equivalence still holds sway: whilst no offline analogies are precise, as far as possible the same legal regime should apply to comparable online and offline activities.
A print distributor is a good analogy for a social media platform because they both involve dissemination of information. However, the analogy is not perfect. Distribution lacks the element of direct personal interaction between two principals who may come into conflict, a feature that is common to both social media and a physical public place. The relationship between a social media platform and its users has some parallels with that between the occupier of a physical space and its visitors.
A physical public place is not, however, a perfect analogy. Duties of care owed by physical occupiers relate to what is done, not said, on their premises. They concern personal injury and damage to property. Such safety-related duties of care are thus about those aspects of physical public spaces that are less like online platforms.
That is not to say that there is no overlap. Some harms that result from online interaction can be fairly described as safety-related. Grooming is an obvious example. However that is not the case for all kinds of harm. It may be tempting to label a broad spectrum of online behaviour as raising issues of online safety, as the government has tended to do in its Internet Safety Strategy Green Paper. However, that conceals rather than addresses the question of what constitutes a safety-related harm.
As a historical note, when a statutory duty of care for occupiers' liability was introduced in 1957 the objective was to abolish the fine distinctions that the common law had drawn between different kinds of visitor. The legislation did not expand the kinds of harm to which the duty applied. Those remained, as they do today, limited to safety-related harms: personal injury and damage to property.
Other closer kinds of relationship, such as employer and employee, may give rise to a duty of care in respect of broader kinds of harm. So under the Health and Safety at Work etc. Act 1974 an employer’s duty in respect of employees is in relation to their health, safety and welfare, whereas its duty in respect of other persons is limited to their health and safety. The employer-employee relationship does not correspond to the occupier-visitor relationship that characterises the analogy between physical world public spaces and online platforms.
Non-safety related harms are generally addressed by subject-specific legislation which takes account of the nature of the wrongdoing and the harm in question.
To the extent that common law duties of care do apply to non-safety related harms, they arise out of relationships that are not analogous to a site and visitor. Thus if a person assumes responsibility to someone who relies on their incorrect statement, they may owe a duty of care in respect of financial loss suffered as a result. That is a duty owed by the maker of the statement to the person who relies upon it. There is no duty on the occupier of a physical space to prevent visitors to the site making incorrect statements to each other.
Many harms that may be encountered online (putting aside the question of whether some are properly described as harms at all) are of a different nature from the safety-related dangers in respect of which occupier-related duties of care are imposed in a physical public space.
We shall also see that unlike dangers commonly encountered in a physical place, such as tripping on a dangerous path, the kind of online harms that it is suggested should be within the ambit of a duty of care typically arise out of how users behave to each other rather than from interaction between a visitor and the occupier itself.
Duties of care arising out of occupation of a physical public place
The “operator” of a physical world place such as an office, bar, or theme park is subject to legal duties of care. In its capacity as occupier, by statute it automatically owes a duty of care to visitors in relation to the safety of the premises. It may also owe visitors a common law duty of care in some situations not covered by the statutory duty of care. In either case the duty of care relates to danger, in the sense of risk of personal injury or damage to property.
The Perrin/Woods evidence describes the principle of a duty of care:
“The idea of a “duty of care” is straightforward in principle. A person (including companies) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result then there are legal consequences. [24] …
In our view the generality and simplicity of a duty of care works well for the breadth, complexity and rapid development of social media services, where writing detailed rules in law is impossible. By taking a similar approach to corporate owned public spaces, workplaces, products etc in the physical world, harm can be reduced in social networks.” [28]
The general idea of a duty of care can be articulated relatively simply. However that does not mean that a duty of care always exists, or that any given duty of care is general in substance.
In many situations a duty of care will not exist. It may exist in relation to some kinds of harm but not others, in relation to some people but not others, or in relation to some kinds of conduct but not others.
Occupiers’ liability is a duty of care defined by statute. As such the initial common law step of deciding whether a duty of care exists is removed. The statute lays down that a duty of care is owed to visitors in respect of dangers due to the state of the premises or to things done or omitted to be done on them.
“Things done or omitted to be done” on the premises refers to kinds of activities that relate to occupancy and create a risk of personal injury or damage to property – for instance allowing speedboats on a lake used by swimmers, or operating a car park. The statutory duty does not extend to every kind of activity that people engage in on the premises.
The content of the statutory duty is to take reasonable care to see that the visitor will be reasonably safe in using the premises for the purposes for which he is invited or permitted by the occupier to be there. For some kinds of danger the duty of care may not require the occupier to take any steps at all. For instance, there is no duty to warn of obvious risks.
As to the common law, the courts some time ago abandoned the search for a universal touchstone by which to determine whether a duty of care exists. When the courts extend categories of duty of care they do so incrementally, with close regard to situations in which duties of care already exist. They take into account proximity of relationship between the persons by whom and to whom the duty is said to be owed, foreseeability of harm and whether it is fair, just and reasonable to impose a duty of care.
That approach brings into play the scope and content of the obligation said to be imposed: a duty of care to do what, and in respect of what kinds of harm? In Caparo v Dickman Lord Bridge cautioned against discussing duties of care in abstract terms divorced from factual context:
"It is never sufficient to ask simply whether A owes B a duty of care. It always necessary to determine the scope of the duty by reference to the kind of damage from which A must take care to save B harmless."
That is an especially pertinent consideration if the kinds of harm for which an online duty of care is advocated differ from those in respect of which offline duties of care exist. As with the statutory duty, common law duties of care arising from occupation of physical premises concern safety-related harms: personal injury and damage to property.
Outside the field of occupiers’ liability, a particularly close relationship with the potential victim, for instance employer and employee or school and pupil, may give rise to a more extensive duty of care.
A duty of care may sometimes be owed because of a particular relationship between the defendant and the perpetrator (as opposed to the victim). That was the basis on which a Borstal school was held to owe a duty of care to a member of the public whose property was damaged by an escaped inmate.
Vicarious liability and non-delegable duties of care can in some circumstances render a person liable for someone else's breach of duty.
However, none of these situations corresponds to the relationship between occupiers of public spaces and their visitors.
A duty of care to prevent one visitor harming another
An occupier’s duty of care may be described in broad terms as a duty to provide a reasonably safe environment for visitors.  However that bears closer examination.
The paradigm case of a visitor tripping over a dangerous paving stone or injured when using a badly maintained theme park ride does not translate well into the online environment.  The kind of duty of care that would be most relevant to a social media platform is different: a duty to take steps to prevent, or reduce the risk of, one site visitor harming another.
While that kind of duty is not unheard of in respect of physical public places, it has been applied in very specific circumstances: for instance a bar serving alcohol, a football club in respect of behaviour of rival fans or a golf club in respect of mishit balls. These related to specific activities that created the danger in question. The duties apply to safety properly so called – risk of personal injury inflicted by one visitor on another – but not to what visitors say to each other.
This limited kind of duty of care may be compared with the proposal in the Perrin/Woods evidence. It suggests that what is, in substance, a universal duty of care should apply to large social media platforms (over 1,000,000 users/members/viewers in the UK) in relation to:
"a)       Harmful threats – statement of an intention to cause pain, injury, damage or other hostile action such as intimidation. Psychological harassment, threats of a sexual nature, threats to kill, racial or religious threats known as hate crime. Hostility or prejudice based on a person’s race, religion, sexual orientation, disability or transgender identity. We would extend the understanding of “hate” to include misogyny.
b)      Economic harm – financial misconduct, intellectual property abuse,
c)       Harms to national security – violent extremism, terrorism, state sponsored cyber warfare
d)      Emotional harm – preventing emotional harm suffered by users such that it does not build up to the criminal threshold of a recognised psychiatric injury.  For instance through aggregated abuse of one person by many others in a way that would not happen in the physical world ([…] on emotional harm below a criminal threshold). This includes harm to vulnerable people – in respect of suicide, anorexia, mental illness etc.
e)       Harm to young people – bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse ([…])
f)         Harms to justice and democracy – prevent intimidation of people taking part in the political process beyond robust debate, protecting the criminal and trial process ([…])"
These go far wider than the safety-related harms that underpin the duties of care to which the occupants of physical world public spaces are subject.
Perrin and Woods have recognised this elsewhere, suggesting that the common law duty of care would be "insufficient" in "the majority of cases in relation to social media due, in part, to the jurisprudential approach to non-physical injury”.  However, this assumes the conclusion that an online duty of care ought to apply to broader kinds of harm. Whether a particular kind of harm is appropriate for a duty of care-based approach would be a significant question.
Offline duties of care applicable to the proprietors of physical world public spaces do not correspond to a universal duty of care to prevent broadly defined notions of harm resulting from the behaviour of visitors to each other.
It may be said that the kind of harm that is foreseeable on a social media platform is different from that which is foreseeable in a bar, a football ground or a theme park. On that basis it may be argued that a duty of care should apply in respect of a wider range of harms. However, that is an argument from difference, not similarity. The duties of care applicable to an occupier’s liability to visitors in a physical world space, both statutory and common law, are limited to safety-related harms. That is a long standing and deliberate policy.
The purpose of a duty of care
The Perrin/Woods evidence describes the purpose of duties of care in terms that they internalise external costs ([14], [18]) and make companies invest in safety by taking reasonable measures to prevent harm ([26]). Harms represent “external costs generated by production of the social media providers’ products” ([14]).
However, articulating the purpose of duties of care does not provide an answer to how we should determine what should be regarded as harmful external costs in the first place, which kind of harms should and should not be the subject of a duty of care and the extent (if any) to which a duty of care should oblige an operator to take steps to prevent actions of third party users.
There is also an assumption that consequences of user actions are external costs generated by the platform's products, rather than costs generated by users themselves. That is something like equating a locomotive emitting sparks with what passengers say to each other in the carriages.
Offline duties of care do not attempt to internalise all external costs.  Some might say that the offline regime should go further. However, an analogy with the offline duty of care regime has to start from what is, rather than from what is not.
Examples of physical world duties of care
It can be seen from the above that for the purpose of analogy the two most relevant aspects of duties of care in physical public spaces are: (1) the extent of any duty owed by the occupier in respect of behaviour by visitors towards each other and (2) the types of harm in respect of which such a duty of care applies.
Duties owed to visitors in respect of behaviour to each other
One physical world example mentioned in the Perrin/Woods paper is the bar. The common law duty of care owed by a members' bar to its visitors was considered by the Court of Appeal in Everett v Comojo.  This was a case of personal injury: a guest stabbing two other guests several times, leading to a claim that the owners of the club should have taken steps to prevent the perpetrator committing the assault.  On the facts the club was held not to have breached any duty of care that it owed. The court held that it did owe a duty of care analogous to statutory occupiers' liability. The content of the duty of care was limited. The bar was under no obligation to search guests on entry for offensive weapons. There had been no prior indication that the guest was about to turn violent. While a waitress had become concerned, and went to talk to the manager, she could not have been criticised if she had done nothing.
The judge suggested that a club with a history of people bringing in offensive weapons might have a duty to search guests at the door. In a club with a history of outbreaks of violence the duty might be to have staff on hand to control the outbreak. Some clubs might have to have security personnel permanently present.   In a club with no history the duty might only be to train staff to look out for trouble and to alert security personnel.
This variable duty of care existed in respect of personal injury in the specific situation where the serving of alcohol created a particular risk of loss of control and violence by patrons.
We can also consider the sports ground. In Cunningham v Reading Football Club Ltd the football club was found to have breached its statutory duty of care to a policeman who was injured when visiting fans broke pieces of concrete off the “appallingly dilapidated” terraces and used them as missiles. The club was found to have been well aware that the visiting crowd was very likely indeed to contain a violent element. Similar incidents involving lumps of concrete broken off from the terracing had occurred at a match played at the same ground less than four months earlier and no steps had been taken in the meantime to make that more difficult.
In a Scottish case a golf club was held liable for injuries suffered by a golfer struck by a golf ball played by a fellow golfer, on the basis of lack of warning signs in an area at risk from a mishit ball.
The Perrin/Woods evidence cites the example of a theme park. The occupier of a park owes a duty to its visitors to take reasonable care to provide reasonably safe premises – safe in the sense of danger of personal injury or damage to property. It owes no duty to check what visitors are saying to each other while strolling in the grounds.
It can be seen that what is required by a duty of care may vary with the factual circumstances. The Perrin/Woods evidence emphasises the flexibility of a duty of care according to the degree of risk, although it advocates putting that assessment in the hands of a regulator (that is another debate).
However, we should not lose sight of the fact that in the offline world the variable content of duties of care is contained within boundaries that determine whether a duty of care exists at all and in respect of what kinds of harm.

The law does not impose a universally applicable duty of care to take steps to prevent or reduce any kind of foreseeable harm that visitors may cause to each other; certainly not when the harm is said to have been inflicted by words rather than by a knife, a flying lump of concrete or an errant golf ball.
Types of harm
That brings us to the kind of harm that an online duty of care might seek to prevent.
A significant difference from offline physical spaces is that internet platforms are based on speech. That is why distribution of print information has served well as an analogy.
Where activities like grooming, harassment and intimidation are concerned, it is true that the fact that words may be the means by which they are carried out is of no greater significance online than it is offline. Saying may cross the line into doing. And an online conversation can lead to a real world encounter or take place in the context of a real world relationship outside the platform.
Nevertheless, offensive words are not akin to a knife in the ribs or a lump of concrete. The objectively ascertainable personal injury caused by an assault bears no relation to a human evaluating and reacting to what people say and write.
Words and images may cause distress. It may be said that they can cause psychiatric harm. But even in the two-way scenario of one person injuring another, there is argument over the proper boundaries of recoverable psychiatric damage by those affected, directly or indirectly. Only in the case of intentional infliction of severe distress can pure psychiatric damage be recovered.
The difficulties are compounded in the three-way scenario: a duty of care on a platform to prevent or reduce the risk of one visitor using words that cause psychiatric damage or emotional harm to another visitor. Such a duty involves predicting the potential psychological effect of words on unknown persons. The obligation would be of a quite different kind from the duty on the occupier of a football ground to take care to repair dilapidated terracing, with a known risk of personal injury by fans prising up lumps of concrete and using them as missiles.
It might be countered that the platform would have only to consider whether the risk of psychological or emotional harm exceeded a threshold. But the lower the threshold, the greater the likelihood of collateral damage by suppression of legitimate speech. A regime intended to internalise a negative externality then propagates a different negative externality created by the duty of care regime itself. This is an inevitable risk of extrapolating safety-related duties of care to speech-related harms.
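The threshold trade-off can be made concrete with a toy moderation model. This is a sketch with entirely invented scores and thresholds, not a description of any real system:

```python
# Toy model: each message carries a hypothetical risk-of-harm score in [0, 1].
# A platform under a duty of care suppresses anything scoring above a
# threshold. Lowering the threshold catches more harmful messages but also
# suppresses more legitimate ones. All values here are invented.
messages = [
    ("abusive threat", 0.92, True),
    ("heated political argument", 0.55, False),
    ("sarcastic joke", 0.40, False),
    ("restaurant review", 0.05, False),
]

for threshold in (0.9, 0.5, 0.3):
    suppressed = [(text, harmful) for text, score, harmful in messages
                  if score > threshold]
    collateral = sum(1 for _, harmful in suppressed if not harmful)
    print(f"threshold {threshold}: {len(suppressed)} suppressed, "
          f"{collateral} of them legitimate")
# threshold 0.9: 1 suppressed, 0 of them legitimate
# threshold 0.5: 2 suppressed, 1 of them legitimate
# threshold 0.3: 3 suppressed, 2 of them legitimate
```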
Some of the difficulties in relation to psychiatric harm and freedom of speech are illustrated by the UK Supreme Court case of Rhodes v OPO. This claim was brought under the rule in Wilkinson v Downton, which by way of exception from the general rules of negligence permits recovery for deliberately inflicted severe distress resulting in psychiatric illness. The case was about whether the author of an autobiography should be prevented from publishing by an interlocutory injunction. The claim was that, if his child were to read it, the author would be intentionally causing distress to the child as a result of the blunt and graphic descriptions of the abuse that the author had himself suffered as a child.  The Supreme Court allowed the publication to proceed.
The Court of Appeal had held..
This summer marked the fiftieth anniversary of the Theatres Act 1968, the legislation that freed the theatres from the censorious hand of the Lord Chamberlain of Her Majesty’s Household. Thereafter theatres needed to concern themselves only with the general laws governing speech. In addition they were granted a public good defence to obscenity and immunity from common law offences against public morality.

The Theatres Act is celebrated as a landmark of enlightenment. Yet today we are on the verge of creating a Lord Chamberlain of the Internet. We won't call it that, of course. The Times, in its leader of 5 July 2018, came up with the faintly Orwellian "Ofnet". Speculation has recently renewed that the UK government is laying plans to create a social media regulator to tackle online harm. What form that might take, should it happen, we do not know. We will find out when the government produces a promised white paper.

When governments talk about regulating online platforms to prevent harm it takes no great leap to realise that we, the users, are the harm that they have in mind.

The statute book is full of legislation that restrains speech. Most, if not all, of this legislation applies online as well as offline. Some of it applies more strictly online than offline. These laws set boundaries: defamation, obscenity, intellectual property rights, terrorist content, revenge porn, harassment, incitement to racial and religious hatred and many others. Those boundaries represent a balance between freedom of speech and harm to others. It is for each of us to stay inside the boundaries, wherever they may be set. Within those boundaries we are free to say what we like, whatever someone in authority may think. Independent courts, applying principles, processes and presumptions designed to protect freedom of speech, adjudge alleged infractions according to clear, certain laws enacted by Parliament.

But much of the current discussion centres on something quite different: regulation by regulator. This model concentrates discretionary power in a state agency. In the UK the model is to a large extent the legacy of the 1980s Thatcher government, which started the OF trend by creating OFTEL (as it then was) to regulate the newly liberalised telecommunications market. A powerful regulator, operating flexibly within broadly stated policy goals, can be rule-maker, judge and enforcer all rolled into one.

That may be a long-established model for economic regulation of telecommunications competition, energy markets and the like. But when regulation by regulator trespasses into the territory of speech it takes on a different cast. Discretion, flexibility and nimbleness are vices, not virtues, where rules governing speech are concerned. The rule of law demands that a law governing speech be general in the sense that it applies to all, but precise about what it prohibits. Regulation by regulator is the converse: targeted at a specific group, but laying down only broadly stated goals that the regulator should seek to achieve.
As OFCOM puts it in its recent discussion paper ‘Addressing Harmful Online Content’: “What has worked in a broadcasting context is having a set of objectives laid down by Parliament in statute, underpinned by detailed regulatory guidance designed to evolve over time. Changes to the regulatory requirements are informed by public consultation.”

Where exactly the limits on freedom of speech should lie is a matter of intense, perpetual, debate. It is for Parliament to decide, after due consideration, whether to move the boundaries. It is anathema to both freedom of speech and the rule of law for Parliament to delegate to a regulator the power to set limits on individual speech.

It becomes worse when a document like the government’s Internet Safety Strategy Green Paper takes aim at subjective notions of social harm and unacceptability rather than strict legality and illegality according to the law. ‘Safety’ readily becomes an all-purpose banner under which to proceed against nebulous categories of speech which the government dislikes but cannot adequately define.

Also troubling is the frequently erected straw man that the internet is unregulated. This blurs the vital distinction between the general law and regulation by regulator. Participants are prone to discuss regulation as if the general law did not exist.

Occasionally the difference is acknowledged, but not necessarily as a virtue. The OFCOM discussion paper observes that by contrast with broadcast services subject to long established regulation, some newer online services are ‘subject to little or no regulation beyond the general law’, as if the general law were a mere jumping-off point for further regulation rather than the democratically established standard for individual speech.

OFCOM goes on to say that this state of affairs was “not by design, but the outcome of an evolving system”. However, a deliberate decision was taken with the Communications Act 2003 to exclude OFCOM’s jurisdiction over internet content in favour of the general law alone.

Moving away from individual speech, the OFCOM paper characterises the fact that online newspapers are not subject to the impartiality requirements that apply to broadcasters as an inconsistency. Different, yes. Inconsistent, no.

Periodically since the 1990s the idea has surfaced that as a result of communications convergence broadcast regulation should, for consistency, apply to the internet. With the advent of video over broadband aspects of the internet started to bear a superficial resemblance to television. The pictures were moving, send for the TV regulator.

EU legislators have been especially prone to this non-sequitur. They are currently enacting a revision of the Audiovisual Media Services Directive that will require a regulator to exercise some supervisory powers over video sharing platforms.

However broadcast regulation, not the rule of general law, is the exception to the norm. It is one thing for a body like OFCOM to act as broadcast regulator, reflecting television’s historic roots in spectrum scarcity and Reithian paternalism. Even that regime is looking more and more anachronistic as TV becomes less and less TV-like. It is quite another to set up a regulator with power to affect individual speech. And it is no improvement if the task of the regulator is framed as setting rules about the platforms’ rules. The result is the same: discretionary control exercised by a state entity (however independent of the government it may be) over users’ speech, via rules that Parliament has not specifically legislated.

It is true, as the OFCOM discussion paper notes, that the line between broadcast and non-broadcast regulation means that the same content can be subject to different rules depending on how it is accessed. If that is thought to be anomalous, it is a small price to pay for keeping regulation by regulator out of areas in which it should not tread.

The House of Commons Digital, Culture, Media and Sport Committee, in its July 2018 interim report on fake news, recommended that the government should use OFCOM’s broadcast regulation powers, “including rules relating to accuracy and impartiality”, as “a basis for setting standards for online content”. It is perhaps testament to the loss of perspective that the internet routinely engenders that a Parliamentary Committee could, in all seriousness, suggest that accuracy and impartiality rules should be applied to the posts and tweets of individual social media users.

Setting regulatory standards for content means imposing more restrictive rules than the general law. That is the regulator’s raison d’etre. But the notion that a stricter standard is a higher standard is problematic when applied to what we say. Consider the frequency with which environmental metaphors – toxic speech, polluted discourse – are now applied to online speech. For an environmental regulator, cleaner may well be better. The same is not true of speech. Offensive or controversial words are not akin to oil washed up on the seashore or chemicals discharged into a river. Objectively ascertainable physical damage caused by an oil spill bears no relation to a human being evaluating and reacting to the merits and demerits of what people say and write.

If we go further and transpose the environmental precautionary principle to speech we then have prior restraint – the opposite of the presumption against prior restraint that has long been regarded as a bulwark of freedom of expression. All the more surprising then that The Times, in its July Ofnet editorial, should complain of the internet that “by the time police and prosecutors are involved the damage has already been done”. That is an invitation to step in and exercise prior restraint.

As an aside, do the press really think that Ofnet would not before long be knocking on their doors to discuss their online editions? That is what happened when ATVOD tried to apply the Audiovisual Media Services Directive to online newspapers that incorporated video. Ironically it was The Times' sister paper, the Sun, that successfully challenged that attempt.

The OFCOM discussion paper observes that there are “reasons to be cautious over whether [the broadcast regime] could be exported wholesale to the internet”. Those reasons include that “expectations of protection or [sic] freedom of expression relating to conversations between individuals may be very different from those relating to content published by organisations”.

US district judge Dalzell said in 1996: “As the most participatory form of mass speech yet developed, the internet deserves the highest protection from governmental intrusion”. The opposite view now seems to be gaining ground: that we individuals are not to be trusted with the power of public speech, that it was a mistake ever to allow anyone ever to speak or write online without the moderating influence of an editor, and that by hook or by crook the internet genie must be stuffed back in its bottle.

Regulation by regulator, applied to speech, harks back to the bad old days of the Lord Chamberlain and theatres. In a free and open society we do not appoint a Lord Chamberlain of the Internet – even one appointed by Parliament rather than by the Queen – to tell us what we can and cannot say online, whether directly or via the proxy of online intermediaries. The boundaries are rightly set by general laws.

We can of course debate what those laws should be. We can argue about whether intermediary liability laws are appropriately set. We can consider what tortious duties of care apply to online intermediaries and whether those are correctly scoped. We can debate the dividing line between words and conduct. We can discuss the vexed question of an internet that is both reasonably safe for children and fit for grown-ups. We can think about better ways of enforcing laws and providing victims of unlawful behaviour with remedies. These are matters for public debate and for Parliament and the general law within the framework of fundamental rights. None of this requires regulation by regulator. Quite the opposite.

Nor is it appropriate to frame these matters of debate as (in the words of The Times) “an opportunity to impose the rule of law on a legal wilderness where civic instincts have been suspended in favour of unthinking libertarianism for too long”. People who use the internet, like people everywhere, are subject to the rule of law. The many UK internet users who have ended up before the courts, both civil and criminal, are testament to that. Disagreement with the substantive content of the law does not mean that there is a legal vacuum.

What we should do is take a hard look at what laws do and don’t apply online (the Law Commission is already looking at social media offences), revise those laws if need be and then look at how they can most appropriately be enforced.

This would involve looking at areas that it is tempting for a government to avoid, such as access to justice. How can we give people quick and easy access to independent tribunals with legitimacy to make decisions about online illegality? The current court system cannot provide that service at scale, and it is quintessentially a job for government rather than private actors. More controversially, is there room for greater use of powers such as ‘internet ASBOs’ to target the worst perpetrators of online illegality? The existing law contains these powers, but they seem to be little used.

It is hard not to think that an internet regulator would be a politically expedient means of avoiding hard questions about how the law should apply to people’s behaviour on the internet. Shifting the problem on to the desk of an Ofnet might look like a convenient solution. It would certainly enable a government to proclaim to the electorate that it had done something about the internet. But that would cast aside many years of principled recognition that individual speech should be governed by the rule of law, not the hand of a regulator.

If we want safety, we should look to the general law to keep us safe. Safe from the unlawful things that people do offline and online. And safe from a Lord Chamberlain of the Internet.



Today I have been transported back in time, to that surreal period following the Snowden revelations in 2013 when anyone who knew anything about the previously obscure RIPA (Regulation of Investigatory Powers Act 2000) was in demand to explain how it was that GCHQ was empowered to conduct bulk interception on a previously unimagined scale.

The answer (explained here) lay in the ‘certificated warrants’ regime under S.8(4) RIPA for intercepting external communications. ‘External’ communications were those sent or received outside the British Islands, thus including communications with one end in the British Islands.

Initially we knew about GCHQ’s TEMPORA programme and, as the months stretched into years, we learned from the Intelligence and Security Committee of the importance to GCHQ of bulk intercepted metadata (related communications data, in RIPA jargon):

“We were surprised to discover that the primary value to GCHQ of bulk interception was not in the actual content of communications, but in the information associated with those communications.” [80] (Report, March 2015)
According to a September 2015 Snowden disclosure, bulk intercepted communications data was processed and extracted into query-focused datasets such as KARMA POLICE, containing billions of rows of data. David (now Lord) Anderson QC’s August 2016 Bulk Powers Review gave an indication of some techniques that might be used to analyse metadata, including unseeded pattern analysis.

Once the Investigatory Powers Bill started its journey into legislation the RIPA terminology started to fade. But today it came back to life, with the European Court of Human Rights judgment in Big Brother Watch and others v UK.

The fact that the judgment concerns a largely superseded piece of legislation does not necessarily mean it is of historic interest only. The Court held that both the RIPA bulk interception regime and its provisions for acquiring communications data from telecommunications operators violated Articles 8 (privacy) and 10 (freedom of expression) of the European Convention on Human Rights. The interesting question for the future is whether the specific aspects that resulted in the violation have implications for the current Investigatory Powers Act 2016.

The Court expressly did not hold that bulk interception per se was impermissible. But it said that a bulk interception regime, where an agency has broad discretion to intercept communications, does have to be surrounded with more rigorous safeguards around selection and examination of intercepted material. [338]

It is difficult to be categoric about when the absence of a particular feature or safeguard will or will not result in a violation, since the Court endorsed its approach in Zakharov whereby in assessing whether a regime is ‘in accordance with the law’ the Court can have regard to certain factors which are not minimum requirements, such as arrangements for supervising the implementation of secret surveillance measures, any notification mechanisms and the remedies provided for by national law. [320]

That said, the Court identified three failings in RIPA that were causative of the violations. These concerned selection and examination of intercepted material, related communications data, and journalistic privilege.

Selection and examination of intercepted material

The Court held that lack of oversight of the entire selection process, including the selection of bearers for interception, the selectors and search criteria for filtering intercepted communications, and the selection of material for examination by an analyst, meant that the RIPA S. 8(4) bulk interception regime did not meet the “quality of law” requirement under Article 8 and was incapable of keeping the “interference” with Article 8 to what is “necessary in a democratic society”.

As to whether the IP Act suffers from the same failing, a careful study of the Act may lead to the conclusion that when considering whether to approve a bulk interception warrant the independent Judicial Commissioner should indeed look at the entire selection process. I argued exactly that in a submission to the Investigatory Powers Commissioner. Whether it is clear that that is the case and, even if it is, whether the legislation and supporting public documents are sufficiently clear as to the level of granularity at which such oversight should be conducted, is another matter.

As regards selectors (the Court’s greatest concern), the Court observed that while it is not necessary that selectors be listed in the warrant, mere after-the-event audit and the possibility of an application to the IPT were not sufficient. The search criteria and selectors used to filter intercepted communications should be subject to independent oversight. [340]

Related communications data

The RIPA safeguards for examining bulk interception product (notably the certificate to select a communication for examination by reference to someone known to be within the British Islands) did not apply to ‘related communications data’ (RCD). RCD is communications data (in practice traffic data) acquired by means of the interception.

The significance of the difference in treatment is increased when it is appreciated that it includes RCD obtained from incidentally acquired internal communications and that there is no requirement under RIPA to discard such material. As the Court noted: “The related communications data of all intercepted communications – even internal communications incidentally intercepted as a “by-catch” of a section 8(4) warrant – can therefore be searched and selected for examination without restriction.” [348]

The RCD regime under RIPA can be illustrated graphically:

[Diagram omitted: related communications data extracted, alongside content, under a RIPA S.8(4) warrant]
In this regard the IP Act is virtually identical. We now have tweaked definitions of ‘overseas-related communications’ and ‘secondary data’ instead of external communications and RCD, but the structure is the same:

[Diagram omitted: the corresponding secondary data structure under the IP Act]
The only substantive additional safeguard is that examination of secondary data has to be for stated operational purposes (which can be broad).

The Court accepted that under RIPA, as the government argued (and had argued in the original IPT proceedings):
“the effectiveness of the [British Islands] safeguard [for examination of content] depends on the intelligence services having a means of determining whether a person is in the British Islands, and access to related communications data would provide them with that means.” [354]
But it went on:

“Nevertheless, it is a matter of some concern that the intelligence services can search and examine “related communications data” apparently without restriction. While such data is not to be confused with the much broader category of “communications data”, it still represents a significant quantity of data. The Government confirmed at the hearing that “related communications data” obtained under the section 8(4) regime will only ever be traffic data.  
However, … traffic data includes information identifying the location of equipment when a communication is, has been or may be made or received (such as the location of a mobile phone); information identifying the sender or recipient (including copy recipients) of a communication from data comprised in or attached to the communication; routing information identifying equipment through which a communication is or has been transmitted (for example, dynamic IP address allocation, file transfer logs and e-mail headers (other than the subject line of an e-mail, which is classified as content)); web browsing information to the extent that only a host machine, server, domain name or IP address is disclosed (in other words, website addresses and Uniform Resource Locators (“URLs”) up to the first slash are communications data, but after the first slash content); records of correspondence checks comprising details of traffic data from postal items in transmission to a specific address, and online tracking of communications (including postal items and parcels). [355] 

In addition, the Court is not persuaded that the acquisition of related communications data is necessarily less intrusive than the acquisition of content. For example, the content of an electronic communication might be encrypted and, even if it were decrypted, might not reveal anything of note about the sender or recipient. The related communications data, on the other hand, could reveal the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with. [356]

Consequently, while the Court does not doubt that related communications data is an essential tool for the intelligence services in the fight against terrorism and serious crime, it does not consider that the authorities have struck a fair balance between the competing public and private interests by exempting it in its entirety from the safeguards applicable to the searching and examining of content. While the Court does not suggest that related communications data should only be accessible for the purposes of determining whether or not an individual is in the British Islands, since to do so would be to require the application of stricter standards to related communications data than apply to content, there should nevertheless be sufficient safeguards in place to ensure that the exemption of related communications data from the requirements of section 16 of RIPA is limited to the extent necessary to determine whether an individual is, for the time being, in the British Islands.” [357]

This is a potentially significant holding. In IP Act terms this would appear to require that selection for examination of secondary data for any purpose other than determining whether an individual is, for the time being, in the British Islands should be subject to different and more stringent limitations and procedures.

It is also noteworthy that, unlike RIPA, the IP Act contains provisions enabling some categories of content to be extracted from intercepted communications and treated as secondary data.
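The “first slash” rule for web browsing information, recited in paragraph [355] above, shows how fine (and how mechanical) the boundary between communications data and content can be. A minimal Python sketch, purely illustrative – the function name is mine, and real systems would have to handle far more edge cases than this:

```python
def classify_url(url):
    """Split a URL on the approach described at [355]: everything up to
    the first slash after the host is communications data; everything
    after it is content."""
    scheme, _, remainder = url.partition("://")
    host, slash, after = remainder.partition("/")
    communications_data = f"{scheme}://{host}{slash}"
    content = after  # paths and query strings fall on the content side
    return communications_data, content

print(classify_url("https://example.com/news/article?id=42"))
# -> ('https://example.com/', 'news/article?id=42')
```

On this approach the query string, often the most revealing part of a URL, falls on the content side of the line.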

Journalistic privilege

The Court found violations of Article 10 under both the bulk interception regime and the regime for acquisition of communications data from telecommunications service providers.

For bulk interception, the court focused on lack of protections at the selection and examination stage: “In the Article 10 context, it is of particular concern that there are no requirements – at least, no “above the waterline” requirements – either circumscribing the intelligence services’ power to search for confidential journalistic or other material (for example, by using a journalist’s email address as a selector), or requiring analysts, in selecting material for examination, to give any particular consideration to whether such material is or may be involved. Consequently, it would appear that analysts could search and examine without restriction both the content and the related communications data of these intercepted communications.” [493]

For communications data acquisition, the court observed that the protections for journalistic privilege only applied where the purpose of the application was to determine a source; they did not apply in every case where there was a request for the communications data of a journalist, or where such collateral intrusion was likely. [499]

This may have implications for those IP Act journalistic safeguards that are limited to applications made ‘for the purpose of’ intercepting or examining journalistic material or sources.



Nearly twenty-five years after the advent of the Web, and longer since the birth of the internet, we still hear demands that the internet should be regulated – for all the world as if people who use the internet were not already subject to the law. The May 2017 Conservative manifesto erected a towering straw man: “Some people say that it is not for government to regulate when it comes to technology and the internet. We disagree.”  The straw man even found its way into the title of the current House of Lords Communications Committee inquiry: "The Internet: to regulate or not to regulate?".

The choice is not between regulating and not regulating.  If there is a binary choice (and there are often many shades in between) it is between settled laws of general application and fluctuating rules devised and applied by administrative agencies or regulatory bodies; it is between laws that expose particular activities, such as search or hosting, to greater or less liability, or laws that visit them with more or less onerous obligations; it is between regimes that pay more or less regard to fundamental rights; and it is between prioritising perpetrators or intermediaries.

Such niceties can be trampled underfoot in the rush to do something about the internet. Existing generally applicable laws are readily overlooked amid the clamour to tame the internet Wild West, purge illegal, harmful and unacceptable content, leave no safe spaces for malefactors and bring order to the lawless internet.

A recent article by David Anderson Q.C. asked the question 'Who governs the Internet?' and spoke of 'subjecting the tech colossi to the rule of law'. The only acceptable answer to the ‘who governs?’ question is certainly 'the law'. We would at our peril confer the title and powers of Governor of the Internet on a politician, civil servant, government agency or regulator. But as to the rule of law, we should not confuse the existence of laws with disagreement about what, substantively, those laws should consist of. Bookshops and magazine distributors operate, for defamation, under a liability system with some similarities to the hosting regime under the Electronic Commerce Directive. No-one has suggested, or one hopes would suggest, that as a consequence they are not subject to the rule of law.

It is one thing to identify how not to regulate, but it would be foolish to deny that there are real concerns about some of the behaviour that is to be found online. The government is currently working towards a White Paper setting out proposals for legislation to tackle “a range of both legal and illegal harms, from cyberbullying to online child sexual exploitation”. What is to be done about harassment, bullying and other abusive behaviour that is such a significant contributor to the current furore?

Putting aside the debate about intermediary liability and obligations, we could ask whether we are making good enough use of the existing statute book to target perpetrators. The criminal law exists, but can be seen as a blunt instrument. It was for good reason that the Director of Public Prosecutions issued lengthy prosecutorial guidelines for social media offences.

Occasionally the idea of an ‘Internet ASBO’ has been floated. Three years ago a report of the All-Party Parliamentary Inquiry into Antisemitism recommended, adopting an analogy with sexual offences prevention orders, that the Crown Prosecution Service should undertake a “review to examine the applicability of prevention orders to hate crime offences and if appropriate, take steps to implement them.” 

A possible alternative, however, may lie elsewhere on the statute book. The Anti-Social Behaviour, Crime and Policing Act 2014 contains a procedure for some authorities to obtain a civil anti-social behaviour injunction (ASBI) against someone who has engaged or threatens to engage in anti-social behaviour, meaning “conduct that has caused, or is likely to cause, harassment, alarm or distress to any person”. That succinctly describes the kind of online behaviour complained of.

Nothing in the legislation restricts an ASBI to offline activities. Indeed over 10 years ago The Daily Telegraph reported an 'internet ASBO' made under predecessor legislation against a 17-year-old who had been posting material on the social media platform Bebo, banning him from publishing material that was threatening or abusive and promoted criminal activity.

ASBIs raise difficult questions of how they should be framed and of proportionality, and there may be legitimate concerns about the broad terms in which anti-social behaviour is defined. Nevertheless the courts to which applications are made have the societal and institutional legitimacy, as well as the experience and capability, to weigh such factors.

The Home Office Statutory Guidance on the use of the 2014 Act powers (revised in December 2017) makes no mention of their use in relation to online behaviour.  That could perhaps usefully be revisited. Another possibility might be to explore extending the ability to apply for an ASBI beyond the authorities, for instance to some voluntary organisations. 

Whilst the debate about how to regulate internet activities and the role of intermediaries is not about to go away, we should not let that detract from the importance of focusing on remedies against the perpetrators themselves.
