Copybuzz by Glyn Moody - 5d ago

It is quite clear now: Article 13 of the proposed copyright directive represents the most serious attack on the Internet in the EU for years. Even leaving aside the fact that Article 13 is incompatible with EU law, the main idea that top sites hosting material uploaded by members of the public should filter everything will have profoundly harmful consequences.

The problem of “false positives” – marking material as infringing on someone else’s copyright when it is not – will be a huge issue, not least because of two factors. One is that it is impossible to codify the subtle and complex rules of EU copyright law into simple algorithms that can be used to filter automatically. The other is that companies will naturally choose to err on the side of caution, preferring to block legitimate material rather than risk lawsuits for failing to stop unauthorised copies. Put these together, and sites will inevitably implement upload filtering using conservative rules that over-block.
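The over-blocking dynamic is easy to demonstrate. Below is a deliberately naive sketch in Python, assuming a toy catalogue of protected texts and a fuzzy-matching filter; none of this resembles any real system such as ContentID, and the texts and thresholds are invented purely for illustration.

```python
from difflib import SequenceMatcher

# Toy catalogue of "protected" works (invented for illustration).
CATALOGUE = [
    "we all live in a yellow submarine",
    "imagine all the people living life in peace",
]

def is_blocked(upload: str, threshold: float) -> bool:
    """Block an upload whose similarity to any catalogue entry exceeds
    `threshold` (0..1). A *lower* threshold is the 'conservative' choice:
    it blocks more, and therefore produces more false positives."""
    return any(
        SequenceMatcher(None, upload.lower(), ref).ratio() > threshold
        for ref in CATALOGUE
    )

# A short quotation inside original commentary -- typically lawful:
quote = "critics note we all live in a yellow submarine era"

print(is_blocked(quote, 0.9))  # lenient filter: False, the commentary passes
print(is_blocked(quote, 0.5))  # conservative filter: True, lawful speech blocked
```

A platform facing direct liability has every incentive to pick the lower threshold, which is exactly the over-blocking described above.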

This, in its turn, will lead EU citizens to self-censor. Once their uploads are routinely blocked because of false positives marking their original creations as infringing, many will simply give up. A related serious loss concerns the public domain, which will be badly affected, in part because there is no organisation that can help to defend the rights of people to use such material as they wish, and without being blocked for doing so.

The need to filter uploads will act as a serious brake on digital startups in the EU. Even if they are not required to implement an online censorship system immediately, new companies will have the threat of mandatory upload filters hanging over them as they grow. If they flourish, they will effectively be punished for their success by being obliged to put a filtering system in place, or perhaps be required to try to negotiate impossible licences. Why would startups choose to operate under these terms in the EU when they can avoid the problem by setting up a company in jurisdictions with laws better-suited to the digital age? Similarly, why would venture capitalists risk investing in new EU companies, which will be hamstrung by a requirement to filter everything once they grow beyond a certain size?

Another deep flaw with the approach is that the filtering systems will be costly to create. The ContentID filtering system built by Google, and often cited in this context as “proof” it can be done, is the product of over 50,000 hours of coding and some $60 million. There is no way that local EU software houses could afford to risk that kind of money on writing upload filter software. That means that only the largest – and probably US-based – companies will be able and willing to produce them. Even then, it is likely that few such systems will be produced, leading to a monopoly or oligopoly situation for each sector, with high prices that will make it even harder for EU companies to compete globally.

Moreover, the fact that upload filter systems will probably be written by US companies means that effectively the entire online culture in the EU will be controlled by foreign entities. As we know, the use of algorithmic black boxes makes it very hard for anyone – whether members of the public or government watchdogs – to monitor what is happening. In particular, it is almost impossible to spot when bias – whether intentional or not – is being introduced to the filtering. Again, foreign companies will probably choose to apply a maximalist approach to blocking material, rather than risk lawsuits alleging that they did not do enough to stop unauthorised uploads. Without any transparency, it will be very hard to spot and remedy this.

Finally, it is worth noting that allowing foreign upload filters to dominate the EU online world represents a risk to national security. Since everything that is uploaded to major sites will have to pass through the filter, that software will be carrying out algorithmic surveillance on all such material. As well as checking for unauthorised copies, it could also surreptitiously be looking for keywords that trigger secret actions, like covertly sending selected files to other locations. Since upload filters are unlikely to be open source, and thus transparent, it will be very hard to spot this kind of spying.

The above issues aren’t just my opinions. Many experts have weighed in with similar concerns. For example, last September the Max Planck Institute for Innovation and Competition wrote: “Some requirements contained in Article 13 can enable abusive behaviour, thereby threatening freedom of expression and information”. In October, over 50 NGOs representing human rights and media freedom said: “Article 13 of the proposal on Copyright in the Digital Single Market include obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens’ fundamental rights.” The same month, 56 leading academics warned: “Article 13 [of the Digital Single Market Directive] is incompatible with the guarantee of fundamental rights and freedoms and the obligation to strike a fair balance between all rights and freedoms involved.” On a different note, the European Copyright Society expressed the following concern: “that proposed art. 13 will distort competition in the emerging European information market.”

If it is evident that Article 13 would be disastrous for the Internet in the EU, it is by no means clear what should be done to counter it. Despite a stream of analyses showing how damaging the upload filter will be, the latest proposals from both the Bulgarian Presidency, and from the file’s Rapporteur, display scant appreciation of just how serious the problems are. Similarly, the European Commission shows no willingness to listen to the concerns of experts or the public on the topic.

We have been here before. Six years ago, the Commission was pushing a similarly terrible idea called ACTA – the Anti-Counterfeiting Trade Agreement. It started out as a reasonable attempt to tackle fake medicines and counterfeit aircraft parts. Then, the copyright industry hijacked it, and turned it into one of its perennial attacks on the Internet. Here’s a key sentence from the final ACTA text: “Each Party shall endeavour to promote cooperative efforts within the business community to effectively address trademark and copyright or related rights infringement.” This is essentially a more vaguely-worded version of Article 13. The intent behind both is the same: to turn Internet companies into copyright police, and to force them to censor material with no real safeguards.

As with Article 13, the European Commission refused to listen to repeated explanations from all quarters as to why that clause would be devastating for the online world. Ultimately, people were forced to take extreme measures. In the run-up to the ACTA vote in the European Parliament, tens of thousands of people took to the streets in Europe’s cities, in an unprecedented mass demonstration in support of basic digital rights and against copyright maximalism. MEPs finally realised the seriousness of the situation, and that ACTA would rip the heart out of the online world. In a stunning rebuke to the European Commission and copyright lobbyists, who thought everything had been settled behind closed doors, in July 2012 MEPs threw out ACTA, with 478 voting against, 39 in favour, and 165 abstentions.

If the European Commission continues to ignore the evidence that Article 13 will cause deep and long-lasting damage to the Internet in the EU, it would do well to remember what happened with ACTA. As the Trade Commissioner responsible for the ACTA negotiations, Karel de Gucht, said after his humiliating defeat in the European Parliament, ACTA was “one of the nails in my coffin.” Does the Commission need tens of thousands of citizens to take to the streets of Europe to hammer home the point that they really don’t want Article 13?

Original version of featured image by Tomasz Sienicki.


In an interview related to the press publisher’s right (Article 11 of the proposed Directive on Copyright in the Digital Single Market) given to Golem.de, MEP Axel Voss, the Rapporteur on the file in the lead European Parliament Committee, stated that this new right was not ‘die beste Idee’ (‘the best idea’) but that he couldn’t think of another one.

With the leaked proposal of his ‘compromise’ amendment on Article 13, aka the ‘censorship machine’ or ‘value gap’ proposal, he seems to be replicating a pattern: faced with a bad idea, he makes no effort to improve it and merely copy-pastes the worst bits of all worlds to give the impression of listening to everyone. Definitely ‘nicht die beste Idee’ (‘not the best idea’)! Especially considering two European Parliament committees (including one that co-leads on this article) previously proposed alternatives that struck a much better balance than the initial European Commission text (see our analysis of the IMCO and LIBE Committee Opinions).

The result: a ‘mille-feuille’ monstrosity. For those not privy to this culinary delight: “The mille-feuille, vanilla slice or custard slice, also known as the Napoleon, is a French pastry whose exact origin is unknown”, according to Wikipedia (one of the platforms that falls under the Article 13 provisions). Wikipedia adds: “Traditionally, a mille-feuille is made up of three layers of puff pastry (pâte feuilletée), alternating with two layers of pastry cream (crème pâtissière), but sometimes whipped cream or jam are substituted; when substituted with jam this is then called a Bavarian slice.”

So what’s on offer in MEP Voss’ Bavarian slice?

Layer 1: Thou shalt licence all content – whatever a platform does, it will be directly liable for ‘communicating to the public’ the content uploaded by its users (Art 13, 1a)

Under the newly added paragraph 1a of Article 13, websites and apps that allow users to upload content must acquire copyright licenses for that content.

Concretely, that means these platforms are considered to be ‘communicating to the public’ the content that has been uploaded by their users and are hence directly liable for any content that infringes copyright rules, as if they had uploaded it themselves. If a licence is obtained, it would then cover the user too, but only if they uploaded the content for ‘non-commercial’ purposes. In other words, rightholders can still sue commercial users with social media accounts, on top of the platforms.

Even the European Commission did not dare suggest such an impossible obligation to license all content, as compliance is unfeasible: whom do you licence what from? Not every category of copyrighted content has collecting societies representing its rightholders. In her initial reaction to this proposal, MEP Julia Reda notably mentions GitHub, the platform dedicated to hosting software. But the same is true for a multitude of services covered by this sweeping provision, as illustrated by EDiMA in the infographic below. Just imagine the blogging platform WordPress having to licence all the content uploaded by its bloggers. And what about massive open online course (MOOC) platforms? And recipe-sharing platforms (honestly, don’t touch the food ones!)?

Source: EDiMA

Looking at the variety of content these platforms accept, the sheer impossibility of the licensing obligation is obvious, and the threat to platforms in the EU evident.

It is also important to note that whilst MEP Voss mentions some carve-outs from the filtering obligation contained in the rest of the Article (namely for internet access providers, online marketplaces, cloud services that do not allow uploads to be publicly accessible, and research repositories), these carve-outs do not apply to the licensing obligation itself. And what better way to ensure no unlicensed content appears on a platform than by filtering it (this is truly becoming the ‘copyright circle of life’)?

Layer 2: On top of licensing, thou shalt filter

Removing the specific ‘content recognition’ wording does not mean the obligations imposed in the text do not entail the use of automated filters. And replacing ‘large amounts’ of content with ‘significant’ amounts does not change the ball game much: it actually makes the criterion even more subjective (after all, calling someone one’s ‘significant other’ is a very subjective label that says nothing about that person’s size). It certainly still does not link the notion of potential harm to copyright holders with the need to intervene.

Moreover, whilst content recognition is no longer explicitly stated in the Article itself, it is still mentioned in the accompanying Recitals, so I guess the elephant in the room is difficult to fully hide.

Combined with the provisions on direct liability in paragraph 1a, this filtering obligation will clearly be implemented by platforms in the most conservative manner, as companies tend to minimise any risk of liability. The chilling effects on speech and creativity in Europe will be tremendous, as will be the effect on other countries that will use this new ‘censorship model’ proposed by the EU as a great opportunity to justify their own censorship practices.

Layer 3: When looking at everything to find something, one is not looking at everything

That argument is so weak that we do not want to spend too much time on it. It basically relates to the fact that the e-Commerce Directive prohibits Member States from imposing ‘general monitoring’ obligations on hosting providers. This prohibition was further clarified in several Court of Justice of the European Union (CJEU) cases, but this Article desperately tries to bypass it through a variety of tricks, which are not even that innovative: the European Commission used the same line of argumentation in the Scarlet v SABAM case and got slammed by the CJEU (see Case C-70/10, Scarlet Extended).

The fallacy is as follows: to pretend that no general monitoring occurs, one claims that the monitoring is specific because it looks for specific infringements. This line of thought ignores the fact that the ‘general’ nature of the monitoring relates not to ‘what’ is monitored but to ‘who’ is monitored. In other words, when the obligation requires monitoring all user uploads to find some specific piece of content, that is still a general monitoring obligation. This view was also confirmed in an analysis from the Max Planck Institute, which was prepared in light of the concerns expressed by a number of Member States.

The jam component: some sweet words about fundamental rights to make everyone feel better

After having proposed all of these principles, MEP Voss probably thought that the pill was a bit too bitter to swallow, and that’s where the jam component kicks in: a sprinkle of fundamental rights, a mention of privacy, some wishful thinking on complaint processes, some regrets about filters blocking lawful content based on exceptions… All very nice and well, but insufficient to counter-balance the core of the provision and its threat to EU citizens and businesses alike.


Artificial Intelligence (AI) is hot. Although its capabilities have been steadily increasing for years, it was the victory of DeepMind’s AlphaGo program over the top Go expert Lee Se-dol last year that alerted many to the rapid pace of development in the AI field. The win was even more significant than the earlier defeat of the reigning world chess champion Garry Kasparov by IBM’s Deep Blue AI system in 1997. Where Deep Blue won by searching through 200 million possible moves per second, overwhelming its opponent with brute computational force, AlphaGo won by thinking like a human – only better. It even came up with such an unprecedented and brilliant tactic that a former European Go champion said: “It’s not a human move. I’ve never seen a human play this move. So beautiful.”

Beauty aside, AI is expected to generate huge economic gains in the coming years. According to research carried out by Accenture, AI could double annual economic growth rates in 12 developed economies by 2035, and boost labour productivity by up to 40%. China aims to create a $150bn AI industry by 2030, and is already challenging the US for global leadership in the field. A report from Sinovation Ventures notes: “Today’s consensus view is that the United States and China have begun a two-way race for AI dominance, making technology a key source of trade friction between the two countries”.

The Sinovation Ventures report lists four pre-requisites for AI. Two are fairly obvious: computational power and human expertise. Another – domain-specific focus – reflects the fact that today’s AI systems do not possess a generalised intelligence, but excel in narrow domains like Go. The fourth requirement is “A sea of data.” The report says: “By far the most important element is the availability of large, labelled data sets (examples include information about people who applied for loans and whether they repaid or defaulted; or people who submitted a customer complaint and whether they are satisfied or dissatisfied). AI uses these large data set as examples to teach its algorithms to optimize.”
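The report’s loan example can be reduced to a few lines of Python: a toy labelled data set and the simplest possible ‘learner’, a nearest-neighbour rule. The numbers and labels below are invented; real systems use vastly larger data sets and far richer models, but the principle – labelled examples teaching an algorithm – is the same.

```python
# Toy labelled data set: (income, debt_ratio) -> loan outcome (invented).
TRAINING = [
    ((30_000, 0.10), "repaid"),
    ((45_000, 0.20), "repaid"),
    ((12_000, 0.80), "defaulted"),
    ((9_000, 0.90), "defaulted"),
]

def predict(applicant):
    """1-nearest-neighbour: label a new case after its most similar
    labelled example. 'Learning from a sea of data' in miniature."""
    def dist(a, b):
        # Scale income so both features contribute comparably.
        return ((a[0] - b[0]) / 50_000) ** 2 + (a[1] - b[1]) ** 2
    return min(TRAINING, key=lambda ex: dist(ex[0], applicant))[1]

print(predict((40_000, 0.15)))  # -> repaid
print(predict((10_000, 0.85)))  # -> defaulted
```

With four examples the rule is a toy; with millions, the same idea becomes the machine learning the report describes – which is why access to large labelled data sets matters so much.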

It is the availability of large datasets for “machine learning” – training AI systems – that has taken the field from the raw, dumb power of IBM’s Deep Blue, to the “beautiful”, human-like responses of DeepMind’s AlphaGo. It is data that holds the key to success for AI companies – and for countries that wish to develop world-class AI industries. The authors of a major report on AI commissioned by the UK government, “Growing The Artificial Intelligence Industry In The UK”, agree. They write: “Growing the AI industry in terms of those developing it and deploying it requires improved access to new and existing datasets to train, develop and deploy code.” And: “Very simply, more open data in more sectors is more data to use with AI to address challenges in those sectors, increasing the scope for innovation.”

The main barrier to using data for AI work is not technical, but legal. As the report’s authors point out: “some data cannot be extracted from published research because access to that data can be restricted by contractor or copyright, making it unavailable as training data for AI. This restricts the use of AI in areas of high potential public value, and lessens the value that can be gained from published research, much of which is funded by the public.”

The report goes on to make an important point: “To date, assessments of the value of text and data mining of research and for new research do not appear to have included the potential value that can come from using data for AI.” To remedy that, the document calls on the UK government to “move towards establishing by default that for published research the right to read is also the right to mine data” – an idea supported by the UK Libraries and Archives Copyright Alliance. The report also says the UK government should recognise how much value could be added to the UK economy by making data available for AI through text and data mining (TDM), including by businesses, when it comes to framing copyright exceptions.
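“The right to read is the right to mine” is easier to grasp with a concrete act of mining. The sketch below – with an invented three-sentence ‘corpus’ standing in for published research – performs the most basic form of text and data mining: counting term frequencies programmatically, i.e. having a machine do the reading.

```python
import re
from collections import Counter

# Invented mini-corpus standing in for published research abstracts.
PAPERS = [
    "Deep learning improves tumour detection in radiology images.",
    "Tumour growth models benefit from machine learning methods.",
    "Radiology datasets enable deep learning at scale.",
]

def mine(corpus, top=3):
    """Count term frequencies across the corpus -- the computational
    'reading' that TDM exceptions do or do not permit."""
    counts = Counter()
    for text in corpus:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts.most_common(top)

print(mine(PAPERS))  # 'learning' tops the list with 3 occurrences
```

A human with lawful access could tally these words by hand; the legal question is whether a machine may do the same at scale.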

What applies to the UK is just as pertinent for the EU, which a 2017 report on the State of European Tech calls “home to the world’s leading AI research community”. The current proposal for TDM in the Copyright Directive does not allow companies to mine text and data freely available online. On top of that, it can be expected that it will make existing licensing arrangements even more complex and costly, instead of enabling anyone with legal access to read content also to mine that same content – that is, recognising that “the right to read is the right to mine.”

Hindering commercial AI research in this way will have a number of negative effects. It will make it harder for EU startups, especially small ones, to develop AI products that could compete against US players. Giant corporations like Google and Facebook will have ready access to key training data in the US, giving them an unfair advantage over EU companies. The current EU proposal for TDM will discourage foreign companies from setting up AI research labs in the EU, where they will not be able to use text and data for machine learning without negotiating permission first. It will also make leading AI researchers – already an extremely scarce resource – think twice about accepting posts at EU universities, since they will be unable to commercialise their work quickly, if at all.

By placing obstacles in the way of AI engineers and companies working at the leading edge of the field, the European Commission risks condemning the EU to be a backwater for what many believe will be the defining generational breakthrough for the next few decades. European companies and citizens will be forced once more to become dependent on advanced technologies developed elsewhere, instead of being able to support exciting home-grown products and services.

Once the US and China establish themselves as global leaders, there will inevitably be a new brain drain of Europe’s best and brightest young engineers to those regions, just as happened in the early days of computers and the Internet. That loss will make it well-nigh impossible to undo the harm caused by a short-sighted desire to placate a few small-scale legacy sectors like academic publishing, rather than thinking about laying the foundations for vast new industries of the future.

If the EU wishes to maximise the benefits that AI is expected to bring to its member states’ economies, it should free up data for machine learning by removing the limitations on TDM currently found in the Copyright Directive. To achieve that, it must enshrine in law that the right to read is the right to mine. Given the wide-ranging positive impact on business and society that Artificial Intelligence is predicted to bring, to do otherwise would hardly be very clever.

Featured image by Cryteria.


The Bulgarian Council Presidency’s discussion paper [PDF] on Articles 11 and 13, which is being circulated to the Member State delegations in light of the 12 February meeting of the Council Intellectual Property Working Party (see agenda here), is public on the Council website.

And guess what: it is bad!

Article 11 – Press Publishers’ Right: let us just ignore option B (presumption) even though half of the Member States want it

The Bulgarian [BG] Council Presidency considers that the Committee of the Permanent Representatives (COREPER) mandated the Council Intellectual Property Working Party to explore some amendments to Option A – a neighbouring right, whilst keeping Option B – a presumption of rights – on the table.

The latter, however, is done in such a shamefully offhand manner that even POLITICO Tech noticed in their coverage of the document that: “The compromise [Option A] was put forth despite almost half of EU countries voicing opposition to the right”.

This lack of consideration for the views around the table, and the fact that no guidance was given at COREPER to go down an Option A route, is honestly shocking, and one can only hope Member States do not let themselves be steamrolled in such a manner.

Sidelining Option B – the presumption of rights – in such a blatant manner is quite a harsh move, seeing that several Member States consider this to be a good way forward. The push for option A is even less justified when one takes into account that the recently uncovered 2016 study of the EC’s Joint Research Centre (JRC) demonstrates that Option A does not work: “the German and Spanish cases show that the law can create a right but market forces have valued this right at a zero price” (p. 25). This evidence comes on top of a more recently published European Parliament study that comes to a similar conclusion: “the evidence does not support a new right, it does support the introduction of a presumption” (p. 38).

Looking specifically at Option A, the following elements are considered key by the Bulgarians (aka the European Commission?):

  • the application of the criterion of size/length to grant protection to extracts of press publications;
  • whether uses performed by individual users should be covered or not by the protection granted to press publishers; and
  • the term of protection of the rights provided to press publishers.

The proposed amendments however go beyond these elements, as outlined below regarding the (lack of) need for originality, even though that element was still very much under discussion at previous meetings of Council.

Uses of extracts of press publications by service providers: Snippets, hyperlinks…and how short is short? (2nd sub-paragraph of Article 11 & Recital 34a [new])

The Presidency wants to acknowledge the increasingly important economic value of such uses and to clarify that press publishers ‘should have the right to authorise or prohibit such uses of extracts’. Framing this as a right could hence mean that the use of technologies that already enable this form of control (robots.txt and more advanced tools) could be deemed insufficient, even though in reality they enable exactly that.

In doing so, it (1) clarifies that uses of extracts limited to individual words or very short excerpts of text are exempted [Note: if the use of individual generic words had ever been protected by copyright, all of us could have stopped posting online a long time ago] and (2) removes the requirement that extracts meet the originality criterion, which is the foundation of copyright. The Presidency’s efforts to exempt very short excerpts could be seen as positive; in practice, however, this could lead to fragmentation across the EU, as the length of ‘a very short excerpt’ will have to be set by the courts, and URLs could still be covered.

In this context, it should be noted that the Copyright Arbitration Board of the German Patent and Trade Mark Office (DPMA) recommended that snippets may comprise at most 7 words, with anything exceeding those 7 words requiring a licence. The general feeling is that the Bulgarians have drafted amendments that suit the existing ‘German’ solution.
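If such a bright-line rule were adopted, enforcing it in software would be trivial – which is part of what makes it attractive to filter builders. A hypothetical check (the limit and the sample snippets are illustrative only, and this is obviously not legal advice):

```python
SNIPPET_LIMIT = 7  # word limit attributed above to the DPMA arbitration board

def needs_licence(snippet: str) -> bool:
    """Under the 'German' rule described above, an excerpt longer than
    seven words would require a licence (illustrative sketch only)."""
    return len(snippet.split()) > SNIPPET_LIMIT

print(needs_licence("EU copyright reform moves ahead"))  # 5 words: False
print(needs_licence("EU copyright reform moves ahead despite broad public criticism"))  # 9 words: True
```

Note how crude the proxy is: word count says nothing about originality or economic value, which is precisely the objection to dropping the originality criterion.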

Reduce the scope of users affected by the protection: wishful thinking but what is the reality behind it? (1st sub-paragraph of Article 11(1) and Recitals 31, 32 and 34)

The Presidency suggests limiting the exclusive right to authorise or prohibit acts of reproduction and making available to the public of press publications to digital uses performed by service providers. This is an attempt to make sure that individual users are not affected, although obviously the devil will be in the detail. If one looks at Article 13, where online platforms are deemed to be the ones ‘using’ the content uploaded by their users, things get a bit more complicated, especially considering the censorship filter of Article 13 must cover the content protected by Article 11.

Reduction of the 20-year term of protection: and suddenly, they become too shy to set a term? (Article 11(4))

The Presidency notes that most delegations have expressed support for a reduction. However, views still diverge on how long the term of protection should be, so no explicit protection term is being proposed for the time being. But then again, what term is justified for a measure that makes no sense?

In summary

One could think the Bulgarians have pretty much done the bidding of the publishers’ lobby as set out in their different letters and public statements:

  • Ensure that the press publisher’s right is a ‘prohibition right’. This is exactly how Thomas Höppner, a professor of commercial law and IT law, and the lawyer for German publishers in several court cases against Facebook and Google, put it at a European Parliament hearing: ‘This is a prohibition right. It is a right that makes sure there are not platforms coming up everywhere and anywhere that take advantage of content that has been published and make their business out of it. The first and foremost goal is to prevent these exploiting businesses – simply not have them.’ (watch the video recording)
  • Ensure the right covers everything, not just original content (there again, the Höppner intervention in the EP is crystal clear).
  • Ensure hyperlinks are covered, at least potentially, under the ‘communication to the public’ criterion, which creates sufficient legal uncertainty that service providers will have to treat hyperlinks as a risk. This fits neatly with the open letter published in Le Monde (behind a paywall, oh the irony), in which large news agencies stated ‘They offer internet users the work done by others, the news media, by freely publishing hypertext links to their stories. […] Solutions must be found. […] We strongly urge our governments, the European parliament and the commission to proceed with this directive’.
Article 13 – Upload Filter: Let’s just decide that the Ecommerce Directive and Charter do not apply when copyright is at stake

The Bulgarian Council Presidency wants to “look into the main elements of a possible compromise”, but remarks that “the option of going back towards the approach taken in the Commission proposal remains on the table also as a possible compromise”.

So-called ‘Clarification on the communication to the public and definition of content sharing services’ approach: how to deep dive into a puddle of mud and hope you’ll see through it

The Presidency claims that such a clarification could be achieved “through the combination of a definition in Article 2 of ‘online content sharing service provider’ and the use of certain criteria for communication to the public based on the case law of the CJEU in Article 13”. In their view, “using the definition of ‘online content sharing service’ would allow targeting in a clearer way the online services covered, while not affecting the notion of communication to the public”. In this context, “the clarifications as to which services are not targeted by the proposal could be left in a recital”, a solution which creates absolutely no legal certainty as recitals are non-binding and are simply there to guide courts if they feel like it. Member State delegations are asked to indicate whether they consider this approach appropriate.

Liability for service providers that communicate to the public and possible link to measures (possible limitation of liability under certain conditions): bye bye e-Commerce Directive and Charter!

The Bulgarians ask the EU Member State delegations “to indicate whether in cases of services which perform an act of communication to the public and are excluded from Article 14 e-Commerce Directive (ECD), a targeted limitation of liability should be provided for and whether the drafting proposal below goes in the right direction or should a different approach be taken and if yes, which one”. The Presidency clearly states that Article 14 of the ECD does not apply when an act of communication to the public is performed, as opposed to the initial text of the EC, which at least kept the caveat ‘unless they are eligible for the liability exemption provided in Article 14’ as a way of not fully closing the door. Well, the Bulgarian text slams the door on the e-Commerce Directive and states that the liability covers both acts of communication to the public and acts of making available.

Measures to be undertaken: if (b) is notice and stay down, what is (a)…surely not a censorship filter?

The Presidency notes that if the obligation to take measures is maintained, then the text based on the previous discussions and consolidated versions would be kept, subject to further adaptations and a final compromise. Please note that the wording of the BG Council Presidency hints that services are required to take specific measures, whilst the European Commission keeps claiming that services “are not under an obligation to apply specific measures of monitoring” – read the EC’s reply [PDF] to the October 2016 open letter from over 50 human rights and media freedom organisations. The enumeration of two scenarios in paragraph 1a of Article 13 also seems to confirm the necessity to (a) implement a measure that ‘prevents the availability’ of content and (b) implement a measure that handles notice and stay-down for content that slipped through the measures in scenario (a). One can hardly think of anything but a filter under scenario (a).
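Read that way, the two scenarios amount to a two-stage pipeline, which can be sketched as follows. This is one reading of the leaked text, not an official specification, and the fingerprints are invented:

```python
# Stage (a): a preventive filter over known fingerprints of protected works.
blocked_fingerprints = {"song123", "film456"}
# Stage (b): works notified by rightholders must 'stay down'.
stay_down = set()

def handle_upload(fingerprint: str) -> str:
    """Reject anything matching the filter (a) or the stay-down list (b)."""
    if fingerprint in blocked_fingerprints or fingerprint in stay_down:
        return "rejected"
    return "published"

def handle_notice(fingerprint: str) -> None:
    # After a rightholder's notice, the work may never reappear.
    stay_down.add(fingerprint)

print(handle_upload("song123"))     # rejected by filter (a)
print(handle_upload("mixtape789"))  # published: slipped through (a)
handle_notice("mixtape789")
print(handle_upload("mixtape789"))  # rejected under stay-down (b)
```

Whatever name stage (a) is given in the compromise text, it is hard to see it as anything other than an upload filter.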

Limitation of liability impact on users: cute but probably not worth the paper it’s written on

The text proposed in the latest consolidated version would be maintained, as the Member State delegations seem to agree that users should be covered by licences between rightholders and services. This feels, however, more like a feel-good measure than something that could effectively work from a contract law point of view. It also means users are not in the clear absent licensing agreements.

Knowledge: from ‘knowledge of an infringement’ to ‘you should have known better’

The explanatory text states that “the notion of knowledge, which remains in Article 13, could be further specified in a recital if necessary”. It should be noted that the knowledge criterion contained in paragraph 1 of Article 13 covers ‘full knowledge of the consequences of its action’, and not, as per usual, knowledge of the fact that content is infringing copyright.

In summary

The direction proposed by the Bulgarian Presidency looks oddly similar to the requests issued by the rightholders’ lobby, such as Gesac, on their umpteenth recent petition site, where they ask policy makers to:

  • ‘clarify that UUC [user uploaded content] platforms like YouTube are involved in reproducing and making our works available under copyright laws;
  • ensure that the safe harbour non-liability regime does not apply to them as it is meant for technical intermediaries only’.

Guess what: you can stop the petition, the Bulgarians (and the European Commission) have heard you loud and clear…and who cares about the collateral damage to the Internet as a whole?

As a reminder on that collateral damage, let’s recall that over 50 human rights and media freedom organisations expressed their concerns to the EU legislators in an open letter dating from October 2016. The letter warns that Article 13 “would violate the freedom of expression set out in (…) the Charter of Fundamental Rights” and “provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications”.

Conclusion: The Bulgarians are even more willing than the previous presidency to do the European Commission’s bidding (or is it France’s?)

At some point one wonders to what extent Council presidencies can just ignore half of the Member States in the room and go ahead with whatever script they were given by the European Commission, supported by a few big Member States and rightholder lobbies. Shouldn’t there at least be some form of ‘pretending’ that democracy is at play here?

The approach taken here by the Bulgarians seems to indicate that ‘no’ is the answer…something citizens and governments should not consider acceptable.

[Note: These infographics are being published in the context of the 2018 #CopyrightWeek, in light of the theme of 19 January on ‘Safe Harbors”, which revolves around the idea that: “safe harbor protections allow online intermediaries to foster public discourse and creativity. Safe harbor status should be easy for intermediaries of all sizes to attain and maintain”.]

The two infographics above were created by the Save the Link Campaign.

The EC’s proposal for a Directive on Copyright in the Digital Single Market is trying to force ‘voluntary’ private policing and filtering duties on all online platforms for all user-uploaded content, under Article 13. The wording is carefully picked, as it leaves online platforms no other choice than to implement content filtering technology, because they are obliged to ‘prevent’ the availability of certain content. Preventing this content from becoming available on their platform can only be achieved by filtering it at the source, namely during the upload. As such, the EU legislators are trying to coerce online platforms into taking ‘voluntary’ filtering measures, whilst themselves going scot-free because they bypass Article 15 of the e-Commerce Directive, which prohibits Member States from imposing general monitoring obligations.

All online platforms are thus forced to implement automated upload filters that will decide whether a user’s content goes up or not. This means the proposal is set to clash with another piece of legislation, namely the new EU data protection law: the General Data Protection Regulation (GDPR), which will enter into force in May 2018. Article 22(3) of the GDPR aims to protect users against the impact of automated decisions. There is, however, an exception when these measures are mandated by the government. But, as explained above, the Member States cannot impose such an obligation, as this would go against the e-Commerce Directive.

So, copyright legislation is turning into a Catch-22 situation (i.e. a situation where one is faced with contradictory rules) for online platforms:

  • on the one hand, all online platforms are being forced into ‘voluntarily’ filtering all user uploads, as this is the only solution to ‘prevent’ certain content from being uploaded, but this is in breach of Article 22(3) of the GDPR; and,
  • on the other hand, when ‘voluntarily’ filtering this content online platforms cannot benefit from the exception of Article 22(3) of the GDPR for measures imposed by governments, as the Member States are not allowed to impose such a filtering obligation under Article 15 of the e-Commerce Directive.

The conclusion: if this filtering obligation is mandatory, it violates the e-Commerce Directive; if it is voluntary, it violates the GDPR. Online platforms are thus caught between a rock and a hard place, violating either Article 13 of the copyright Directive or Article 22(3) of the GDPR.

Moreover, as depicted by these infographics, content filtering solutions are deeply flawed.

More flaws of Article 13 are described here.

Read also more about the need to preserve the e-Commerce Directive, and about the tensions between the copyright reform and the GDPR.

  These infographics were created by Ruth Coustick-Deal and Marianela Ramos Capelo of OpenMedia, and are made available under a Creative Commons BY-NC-SA 4.0 licence.
[Note: This analysis is being published in the context of the 2018 #CopyrightWeek, in light of the theme of 18 January on ‘Copyright as a Tool of Censorship”, which revolves around the idea that: “freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it”.]

The discussions at EU level on the review of EU copyright are dangerously intertwining copyright and the e-Commerce Directive (ECD). Looking specifically at the proposal for Article 13 of the Copyright Directive in the Digital Single Market (DSM), aka the Censorship Machine, one can’t stop hearing the lyrics of ‘Killing Me Softly’ in the background when thinking of what copyright is doing to freedom of expression and how it is making censorship the norm. Though the ‘softly’ may have to be requalified in light of the latest document issued by the Bulgarian Council Presidency a couple of days ago (more on that below).

A quick word on procedure: a Bulgarian ‘coup’?

On 1 January 2018, Bulgaria took over the Presidency of the Council of the European Union from Estonia (these handovers happen every six months). The latter had set a brisk pace of working group meetings with copyright attachés and national experts to redraft the various articles in the Copyright Directive. This led to a variety of iterations of Article 13 (see our blog posts here and here), the latest version running to over seven pages if one adds the Recitals to the mix. The result presented in December is a monstrosity (see our analysis here) of new definitions and principles, with carve-outs, clarifications and contradictions, all packaged under the interesting caveat by the Estonians at the end of their Presidency that:

“At this stage we have chosen not to provide for an explicit clarification as to whether the online content sharing services are excluded from Article 14 ECD as a number of questions remain open and further analysis is needed as to whether and how we can reconcile the different wishes that the delegations have expressed“

Talk about the Elephant in the room!

Anyway, as every proposal opened a new Pandora’s box of questions and complexities, the Bulgarians, it seems, have decided to go for the steamroller approach: removing (most of) the technical experts from the room and pushing for a decision at a ‘political level’ on the most complex issues under discussion. In other words: ‘move over, expertise and common sense, let the horse trading begin!’

A healthier alternative would surely have been to take a step back, consider all the questions that escaped Pandora’s box over the past six months, and decide that maybe more analysis was needed, in light of the weakness of the European Commission’s (EC) impact assessment and the growing concerns of all stakeholders.

On 30 November, over 80 organisations – representing human and digital rights, media freedom, publishers, journalists, libraries, scientific and research institutions, educational institutions including universities, creator representatives, consumers, software developers, start-ups, technology businesses and Internet service providers – sent a letter listing 29 prior letters which voiced their concerns. Since then, the telecoms operators represented by ETNO and GSMA have also entered the fray, highlighting notably the need to preserve the prohibition of a general monitoring obligation that stems from Article 15 ECD.

Making intermediaries directly liable for the content their users upload: the brutal switch from secondary to primary liability or how Censorship becomes the norm

Article 13, as initially proposed by the EC, was bad in many ways (see the background section below) but somehow attempted to give the impression that (some of) the ECD’s core principles might still survive.

With the Bulgarians putting questions to a political poll, such as ‘should there be a clarification in Article 13 that service providers that store and give access to user uploaded content perform, under certain conditions, an act of communication to the public’, the debate takes on an entirely new dimension.

To fully understand what’s at stake, and why this goes way beyond copyright, one has to understand the key building block on which intermediary liability has been built so far under the ECD’s provisions. An October 2017 study by the European Parliament’s Policy Department gives a good summary of the situation created by the ECD: ‘The eCommerce Directive (2000/31/EC) provides certain categories of Internet intermediaries with a limited exemption from secondary liability, i.e., liability resulting from illegal users’ behaviour’.

The reasons why this exemption was put in place are multiple, as set out by the study:

  1. on the one hand, to promote the activity of the intermediaries and preserve their business models; and,
  2. on the other, to prevent ‘excessive collateral censorship (i.e., preventing the intermediaries from censoring the expressions of their users). The last rationale pertains to the fact that secondary liability could induce intermediaries to excessively interfere with their users: the fear of sanction for illegal activities of the users could induce intermediaries to impede or obstruct even lawful users’ activities‘.

In other words, and to make it simple: if websites hosting user-uploaded content are deemed to be communicating that content to the public, they become directly liable for that content and will hence (1) censor any remotely doubtful content (including legal content), and/or (2) only accept content from big companies willing to sign a contract stating they will come to the (financial) rescue in case of conflict with rightholders.

Both outcomes would mean the end of the Internet as we know it, replaced by an alternative based on big companies vetting the business models of other big companies.

Putting an End to Copyright’s Big Little Lies: the red line must be the preservation of the E-commerce Directive and safeguards against censorship

The debate so far has been filled with ‘Big Little Lies’, the biggest being that the ECD was not under threat, whilst it very much and increasingly is, as each version of Article 13 morphs into a more dangerous variant. Such a threat targets all EU citizens and businesses in their daily behaviour: will we as citizens still be able to share our kids’ videos, our ideas for Ikea hacks, our open software code, our brilliantly tasty recipes? And will our businesses be able to compete with the existing (mostly US-based) established companies that might have pockets deep enough to face the risks this whole discussion entails?

More importantly: will the European Union be faced with a ‘Chinese Great Firewall’ of its own making, put there for the sake of a few rightholders and to the detriment of the general interest? Surely, that cannot be what the EU is about?

As we set out in our Christmas story, we think it is not too late to rekindle legislative wisdom. The route forward could be to start again from the EC’s initial proposal and to remove all the legal uncertainties it contains, by clarifying that:

  1. the intermediary liability regime set in place by the ECD still fully applies: no direct liability gets introduced for copyright, either directly or through a spectrum of confusing provisions which at best remove all legal certainty and at worst expressly kill the ECD. This does not mean the ECD should not be looked at in the future, but that it can only be modified after a thorough analysis of the impact of such changes on the entire Internet ecosystem, based on input from all affected sectors and all relevant experts.
  2. intermediaries must ensure that they remove expeditiously any copyright infringing material brought to their attention.
  3. content recognition-type technologies cannot be imposed by law, either because a provision requests them or because compliance with a provision can only be achieved through them.
  4. where content recognition technologies have been implemented, user safeguards in terms of complaint procedures and judicial redress will be guaranteed and enforced.

See, simple!

Background: Summary of the issues at stake in the European Commission original proposal on Article 13

Article 13 establishes an obligation to put in place privatised censorship of all content by an undefined number of online companies following an undefined procedure.

There are so many loose and dangerous ends in the proposed Article 13, that we have tried to summarize them in the table below:

Below is the text proposed by the European Commission, broken down clause by clause, with what each clause means in practice.

“Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall,”

Who? ‘Online players that store large amounts of user-uploaded content’ can cover a lot of very different types of players, ranging from commercial platforms to non-profits, and all types of hosted content, ranging from:

  • videos (YouTube, Vimeo, Daily Motion),
  • blogs (Tumblr, WordPress),
  • crowdsourced information (Wikipedia),
  • social media (Facebook, Twitter),
  • documents (DropBox, Google Drive),
  • pictures (Flickr), etc.

This covers all sorts of creations: literary works, music, choreographies, pantomimes, pictures, graphics, sculptures, sound recordings, architectural works, etc.

→ So this is not confined to Content ID-type software used on YouTube, which only scans music and video uploads to identify copyright infringements.

It also covers content uploaded by a user who is the rightholder of that content, or who has the right to upload it under an exception or limitation under EU law, as there is no mention of whether the content has been uploaded rightfully or not.

“in cooperation with rightholders,”

Who? ‘Rightholders’ covers a vast reality, ranging from the big labels and Hollywood studios to every individual creator who has not signed away their rights. That is a lot of people to sit around a table and ‘cooperate’ with, especially if you are a smaller company that would rather hire engineers than lawyers. Online companies could have to deal with thousands of claimants all wanting a share of their revenues, or simply face the prospect of such interactions and hence have a less attractive business case to defend before investors.

Collecting societies could perhaps be used to decrease the number of stakeholders involved, but they are not always known as smooth negotiators and do not necessarily represent the interests of all rightholders.

And what does ‘cooperation’ even mean when your interests are not necessarily aligned? And where are the users in this relationship?

“take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter”

To do what? The obligation here is to comply with an agreement with the rightholder, regardless of whether that agreement relates to actual copyright infringements. It also implies that online platforms ‘use’ the works uploaded by their Internet users, a qualification which is not that clear-cut from a legal point of view and is aimed at pretending they are not just ‘hosting’ the material.

“to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers.”

The works and other subject matter to be ‘filtered’ are those identified by rightholders. How that identification occurs is not stated, nor how claims of rights are checked (it is not unusual for several people to claim the rights over the same work, and in some cases, all of their claims are true).

“Those measures, such as the use of effective content recognition technologies,”

This language seems to point directly towards the type of Content ID software used by YouTube, even though the scope of what needs to be recognised goes dramatically beyond what Content ID is capable of handling.

Moreover, such automated tools can only match one file to another; they do not have the capability to recognise more complex issues, such as the fact that, whilst a copyright-protected file may have been used by a user, it does not infringe the rightholder’s copyright because it falls under an exception recognised by law (for example, parody).

“shall be appropriate and proportionate.”

Not really. Seeing that all of these measures will be (1) decided by private companies and (2) fall under the terms and conditions of the websites, the ‘appropriate and proportionate’ nature of the implemented measures is left to the appreciation of those private companies, with no control by judicial or administrative instances, nor by consumer representatives. This interpretation seems confirmed by Recital 39 of the proposed Directive.

“The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant,”

Who will judge relevance? The rightholders is our best guess.

“adequate reporting on the recognition and use of the works and other subject-matter.”

So, aside from investing money in censorship tools, online companies must also make sure they come up with reports to please the rightholders.
[Note: This analysis is being published in the context of the 2018 #CopyrightWeek, in light of the theme of 17 January on ‘Transparency”, which revolves around the idea that: “whether in the form of laws, international agreements, or website terms and standards, copyright policy should be made through a participatory, democratic, and transparent process”.]

On 17 January, the #CopyrightWeek theme is ‘transparency’. The starting point is that “(…) copyright policy should be made through a participatory, democratic, and transparent process”.

That is, in a perfect world. In reality, ideology, rather than evidence, has been guiding the European Commission’s (EC) proposal for a Directive on Copyright in the Digital Single Market.

EC Withholds © Evidence (Again)

Those who start to scratch the surface, such as Julia Reda – German Member of the European Parliament for the Greens/EFA Group – and Corporate Europe Observatory (CEO), are uncovering how the EC carefully cherry-picked the evidence that supports its ideological policy choices, whilst withholding evidence going against them. The EC officials must have confused policy-based evidence making with evidence-based policy making.

Just before the 2017 winter break, MEP Reda uncovered another attempt by the EC to sweep evidence under the carpet. Officials from the EC’s Directorate‑General for Communications Networks, Content and Technology (DG CNECT) were caught in the act when, at the request of their hierarchy, they ‘kindly’ reminded a researcher at the EC’s Joint Research Centre (JRC) not to publish a study on the highly debated press publishers’ right (Article 11) that contradicted the EC’s policy choice.

Following MEP Reda’s exposure of this charade, Vladimír Šucha, the JRC’s Director-General, had a sudden urge to send a random tweet into the world defending the publication criteria upheld by the JRC. This message was then conveniently retweeted by Roberto Viola, DG CNECT’s Director-General.

The goal of the JRC is actually to “support EU policies with independent scientific evidence throughout the whole policy cycle”. However, here independence only seems to go as far as hierarchy allows it to, as the EC itself acknowledges that the JRC is “not independent of the Commission’s internal processes”.

This raises the question: how many JRC studies remain unpublished because other EC officials consider the results not to be ‘robust’ enough? ‘Robust’ clearly being a synonym for ‘presenting evidence supporting the EC’s internal policy choices’.

Ignorance is Bliss
The EC prides itself on its commitment to ‘evidence-based policy making’. In its Better Regulation Agenda, the Juncker Commission explains that “applying the principles of better regulation will ensure that measures are evidence-based, well designed and deliver tangible and sustainable benefits for citizens, business and society as a whole” (p. 3).

However, the EC is very ‘picky’ about which evidence may guide it. The case at hand is a perfect illustration. An earlier and shorter version of the JRC study that MEP Reda dug up was already shared with DG CNECT in March 2016. The same document was then also shared in June 2016 with the EC’s Inter-Service Steering Group (which includes European Commission departments other than DG CNECT, as well as the EC’s legal services) preparing the copyright reform proposal.

The study, which finds no benefit to introducing an EU-wide ancillary copyright – or press publishers’ right, as it was dubbed in Article 11 – was thus available to numerous EC officials involved in the internal discussions on the proposals and the EC’s Impact Assessment, well before the official publication of the actual reform proposal on 14 September 2016. However, this study was apparently put aside, and the EC followed through with its intention to introduce an EU-wide press publishers’ right.

This should not come as a surprise. In September 2017, MEP Reda had already exposed that the EC had been withholding a €369,871 study on the impact of piracy since May 2015, as its results do not support the ‘voluntary’ private policing and filtering duties that it is trying to force upon online platforms under Article 13.

The EC also deemed it much easier to neglect all the opposing views that citizens submitted to its public consultation on the role of publishers in the copyright value chain and on the ‘panorama exception’ (March–June 2016).

In a response to a 2017 Parliamentary Question by S&D MEP Nessa Childers, EC Vice-President Andrus Ansip explained how easy it is to wave away results that go against your policy ideology, as he pointed out that: “the Commission adopts a cautious approach to quantitative data, as responses to consultations are generally not statistically representative of a target population”.

The EC’s strategy on copyright policy is clear to us: ignorance is bliss. However, this blatant disregard of scientific evidence and citizens’ views neglects the impact copyright policy has on citizens’ daily lives, as Glyn Moody noted in one of his editorials:

“(…) everyone who uses the Internet is profoundly impacted by copyright every moment they are online, and often when they are offline too. Changes to the key directive covering this area are, above all, a public policy issue affecting 500 million people, rather than purely the domain of the copyright industry.”

The EC’s attempts to withhold evidence and its contempt for citizens’ opinions are quickly diminishing the legitimacy of the policy making process. Moreover, these efforts also strip away the ability of the other EU institutions, namely the European Parliament and the Member States in Council, to take decisions that are grounded in sound evidence. This is especially detrimental at a time when Member States are being forced into taking political decisions on the press publishers’ right and upload filtering on online platforms, which will change the face of the Internet.

MEP Reda’s crusade isn’t over yet. The JRC study was shared again with DG CNECT in October 2016, with a request for feedback by mid-November 2016. It took DG CNECT until 12 June 2017 to provide comments. In that eight-month time span, it produced a brief antithesis of the study, depicting what it identified as weaknesses and discrepancies, in an attempt to put down any arguments that could contest its policy choice.

In the period from October 2016 until May 2017 there was sudden radio silence between the JRC and DG CNECT on this study. This ‘gap’ caught MEP Reda’s interest, so she filed a ‘confirmatory application’ requesting the EC to double-check whether all available documents had actually been released. The EC now has until 23 January to respond to this request.

MEP Reda also took up this opportunity to write a letter [PDF] to European Commission President Jean-Claude Juncker, Vice-President Andrus Ansip, and Commissioner Mariya Gabriel, to remind them of the fact that “the paramount principles of this [copyright] debate should be transparency and integrity”. In this letter, she also urges the EC to “take a much more proactive role in the dissemination of its own findings and to abandon any attempts to withhold or distort such findings, regardless of whether they are considered supportive of the Commission’s plans or not”.

“the paramount principles of this [copyright] debate should be transparency and integrity” – MEP Julia Reda

However, it should be noted that within the European Parliament (EP), similar evasive manoeuvres are being attempted in order to steer the debate in a specific direction.

During a recent hearing of the EP’s Legal Affairs (JURI) Committee, the JURI Secretariat judged it good practice to invite Professor Thomas Höppner to present evidence against the outcome of an independent academic study that it had commissioned through a tendering process – this whilst knowing that Professor Höppner’s main occupation is his litigation work at the law firm Hausfeld, where he represents, amongst other clients, a number of German newspaper publishers grouped in the Bundesverband Deutscher Zeitungsverleger (BDZV). Professor Höppner recently filed an application on behalf of the BDZV against Google’s appeal of the €2.42 billion fine imposed on it by the European Commission. Lead EP Rapporteur MEP Axel Voss – German EPP Group Member – came to the defence of Professor Höppner, stating to POLITICO that it was normal for professors to take on legal clients. Whether it is normal for European Parliament committees to invite such professors as ‘experts’ in a debate where they have a clear conflict of interest remains, however, an open question.

Now let’s look at why the EC actually tried to sweep this study under the carpet.

Contrary to some policy makers’ belief, newspapers do actually benefit from news aggregation platforms and a press publishers’ right is worth ZERO

The JRC study takes an economic approach to uncover why news publishers failed to monetise the neighbouring right introduced in Germany and Spain. Some of its key findings are that (pp. 25-26):

  • “(…) newspapers actually benefit from news aggregation platforms in terms of increased traffic to newspaper websites and more advertising revenue.”;
  • “News platforms that display ads share between 70 and 100% of ad revenue with newspaper publishers. That explains why publishers are eager to distribute their content through aggregators.”; [Note: “While Facebook Newsfeed is ad-driven, Google News is ad-free.” – p. 3]
  • “Research also shows that the question of indirect ad revenue from news content generated in general search can be addressed by modifying the length of text snippets in search.”;
  • “There is no evidence that the production of news articles has decreased, despite the fall in the number of newspapers and in revenue.”;
  • “the German and Spanish cases show that the law can create a right but market forces have valued this right at a zero price.”;
  • “(…) news aggregators promote diversity because they facilitate access to news across different sources.”; and,
  • “(…) the online news industry is far from stabilized yet and that there is wide scope for further innovation in exploring new revenue sources that go beyond that traditional choice between ads and subscriptions”.
A key question: Do News Aggregators Add or Take Revenue Away?

“Aggregators (…) generate additional traffic to news publishers’ websites and thereby may increase rather than reduce their online revenue” – JRC study (p. 9)

The study believes that one of the basic questions in this debate is “whether re-routing content through online news aggregator platforms increases or diminishes ad revenue for news publishers on their own website”. It considers this “an empirical question that cannot be settled by economic theory or legal reasoning”: only data can bring the answer.

Based on the available empirical evidence to date the answer is that (p. 9):

Aggregators are complementary rather than competing services to newspapers’ original websites. On balance, they generate additional traffic to news publishers’ websites and thereby may increase rather than reduce their online revenue

In its brief antithesis of the study, DG CNECT brings up the EC’s 2016 Eurobarometer survey to try to claim that “47 % [of users] (…) browse and read news extracts on these websites without clicking on links to access the whole article in the newspaper page, which erodes advertising revenues from the newspaper webpages”.

This is an argument that they keep on repeating, both in their impact assessment and in public debates. However, it’s a non-argument at multiple levels.

First, the wording of the question put forward and the answer options create a certain bias in the responses. Question 17 of this Eurobarometer survey asked: “When you access the news via news aggregators, online social media or search engines, what do you most often do?”

The participants were offered three options but could only choose one:

  • Browse and read the main news of the day, without clicking on links to access the whole articles. (47%)
  • Click on available links to read the whole articles on their original webpage. (45%)
  • You never access the news via news aggregators or online social media. (6%)

In other words, aside from the fact that multiple answers were prohibited, the participants were asked what they “most often do”, not what they “DO” or “DO NOT”. Moreover, most people would probably click on certain links and not on others, depending on the time they have and their interests. Finally, this question actually shows that 94% of users access or have accessed news through news aggregators or online social media!
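The arithmetic behind these figures is worth laying out explicitly. A minimal sketch (the percentages are those quoted above; the variable names and the reading of the 2% remainder are ours):

```python
# Eurobarometer Question 17 shares as quoted above (single choice only)
browse_only = 47    # read extracts without clicking through
click_through = 45  # click links to the original webpage
never_use = 6       # never use aggregators or social media for news

# Share of respondents who do (or did) get news via aggregators/social media
users = 100 - never_use                # 94

# The two "user" options together sum to 92, not 94: the options total 98%,
# so roughly 2% of respondents presumably gave no answer
engaged = browse_only + click_through  # 92

print(users, engaged)
```
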

Second, even the JRC rebuts their claim that, were the survey findings reliable, this “erodes advertising revenues from the newspaper webpages”. The study responds that:

“The empirical evidence cited in the first part of the paper proves the contrary: there is no substitution effect, the quantity effect dominates. This results in an expansion of revenue.”

“modification of the Spanish intellectual property law creates new market access barriers, both for existing news aggregators and for new service providers” – Spanish Competition Authority

A market failure?

The study also tries to assess if there is any kind of market failure. To do so, it looks at the assessments of the German and Spanish competition authorities, who had already addressed the possible abuse of market dominance by Google Search and Google News in order to determine whether an intervention by them or the national regulator was needed (p. 18ff). However, neither national competition authority perceived a market failure.

  • The German competition authority (Bundeskartellamt [BKartA]): “Google eventually asked all publishers to sign a written contract with a zero price license under the ‘opt-in’ clause. All newspapers signed in order to allow Google News and Google Search to continue linking to their news articles. The news publishers then filed a complaint at the German competition authority about the alleged abuse by Google of its dominant position in the German market. The competition authority (…) ruled that there was no need for action. (…) Although the news publishers are not remunerated for their contribution to the platform, the BKartA still considers this to be a market, though with a zero price.”
  • The Spanish competition authority (CNMC): “The CNMC concludes that there is no indication of a market failure in news aggregation because news publishers do not seem to actively oppose news aggregators. Consequently, there is no reason for regulatory intervention in that market. Moreover, the intervention through the modification of the Spanish intellectual property law creates new market access barriers, both for existing news aggregators and for new service providers.”
The zero price point: A byproduct of the copy-paste news model

The study concludes that “the German and Spanish cases show that the law can create a right but market forces have valued this right at a zero price” (p. 25). The study also points out that (p. 21):

“In Google News there are no ads and only publishers and consumers. Neither of them pays. However, publishers receive benefits from additional traffic and ad revenue on their own websites. (…) even a zero price, does not affect their participation because they still get benefits – as the empirical evidence suggests.”

This zero price point seems to be a byproduct of the free news model and the fast news cycle, as the study observes that “news has become so ubiquitous on the internet that few consumers are willing to pay for it” (p. 23). This can be linked to the “widespread practice among online news publishers to closely watch and copy immediately from each other” (p. 22).

Interesting in this context is the reference the study makes to research by Julia Cagé (Sciences Po), Nicolas Hervé (Institut National de l’Audiovisuel [INA]) and Marie-Luce Viaud (INA), who analysed the speed and modalities of online news dissemination and found that (p. 17):

“on average, it takes two hours for information published by an online media outlet to be published on another news site, but less than 45 minutes in half of the cases and less than 5 minutes in 25% of the cases. At least half of online dissemination is copy-and-paste and does not follow rules for citing and crediting. Information is costly to produce but cheap to reproduce.”

This could be the main reason why some newspaper publishers are pushing for a neighbouring right: copyright, with its originality criterion, is too complex, and that is a threshold many news articles would not meet nowadays. But then again, if originality is discarded as a criterion, can we truly claim that quality journalism is the objective?

Moreover, newspaper publishers also have to recognise that the Internet has enabled the ‘unbundling of newspapers’: in print editions news is packaged, whilst online “consumers have more choice in making their own selection of articles online as they can freely move between different newspapers”.

The study also notes that “the volume of freely available content is huge”, and as a result “the price of online content goes down as the volume of free supply increases” (p. 14).

A deal with the Devil or not?

The study explains that “news aggregation platforms are a Faustian deal between platforms, publishers and consumers” (see p. 24).

  • The upside: “The platform offers an increase in audience reach for news publishers; it offers consumers a reduction in transaction costs and access to a wide variety of articles. In return, it increases its ad revenues.”
  • The downside: “(…) the loss of editorial control.”
[Note: This editorial is being published in the context of the 2018 #CopyrightWeek, in light of the theme of 15 January on ‘Public Domain and Creativity’, which revolves around the idea that: “Copyright policy should encourage creativity, not hamper it. Excessive copyright terms inhibit our ability to comment, criticize, and rework our common culture”.]

When modern copyright was created in 1710 by Great Britain’s Statute of Anne, something remarkable happened that is often overlooked: as well as codifying copyright as we now know it, it also brought into being its negation. Before the Statute of Anne, publishers held a form of eternal copyright on the books they placed in the Stationers’ Register. By granting authors a time-limited government-enforced monopoly – 14 years by default, with an optional extension to a maximum of 28 years – the new law recognised that there was a moment when copyright ceased to apply; it therefore confirmed that works once published could later exist in a state without any monopoly protection.

“(…) the steady accretion of works in the public domain has formed an ever-larger reservoir from which creators could draw as they wished, with resultant benefits for both them and their audiences.”

We now call that conceptual and legal space the public domain, since works hitherto locked down by private copyright monopolies become freely available to everyone, to enjoy and to re-use as they wish. In doing so, the Statute of Anne fashioned an immensely rich artistic resource that could be drawn upon by later creators. Since all art builds to a lesser or greater degree on the ideas and achievements of those who have come before – nothing emerges in a vacuum – the steady accretion of works in the public domain has formed an ever-larger reservoir from which creators could draw as they wished, with resultant benefits for both them and their audiences.

Despite the evident power of adding works to this universal resource, the public domain has been under repeated attack. The most direct assault has come from the extension of copyright’s term. All around the world, the length of government protection has moved in one direction only: upwards. From the basic 14 years provided by the Statute of Anne, the copyright ratchet has now brought about a widespread term lasting for the whole lifetime of the creator plus a further 70 years.

Term extension is sometimes applied retroactively – removing works that have duly entered the public domain, so as to lock them down as monopoly goods once more. If so-called “copyright theft” exists in any meaningful sense, it is this practice, not the making of unauthorised copies of digital works, which the EU’s own study showed causes negligible harm to sales.

The steady expansion of copyright’s reach has had a profound effect on the public domain and how it can be enjoyed. In the years following the passing of the Statute of Anne, a work would remain protected for at most 28 years. That meant a new book would typically enter the public domain during the lifetime of its readers. At that point authors could show their admiration for the work in question – and its creator, who was probably still alive – by using it and building on it in myriad ways. Similarly, they could look forward to the prospect of their own work forming the basis for further creativity by the next generation, which they would probably live to see and enjoy.

“(…) today’s artists can only draw on works created by their long-dead predecessors, unless an artist opts for a more generous licensing approach such as those offered by the Creative Commons organisation.”

That is no longer the case in general. When a book, or music, or a painting, appears today, it is very unlikely that anybody will live long enough to be able to use it in their own works. The minimum length of copyright in most countries is typically 70 years, and that is the case only if the creator dies before a work is released. Since life expectancy is increasing, a more realistic estimate of how long the next generation of creators will have to wait before they can build on contemporary works is nearer 100 years. In other words, today’s artists can only draw on works created by their long-dead predecessors, unless an artist opts for a more generous licensing approach such as those offered by the Creative Commons organisation. For the digital age, where ordinary people have routinely become new kinds of creators, and frequently post text, sounds and images online, that’s a huge loss of potential source material.

Another attack on the public domain comes from a surprising quarter: cultural institutions, which ought to be among its chief defenders. The problem arises from digital images taken of analogue artefacts. When the latter enter the public domain, it is only logical that “factual” digital representations of them should also be in the public domain. That is, when the intent is simply to record the artefact’s physical appearance faithfully, rather than to produce an entirely new artistic creation based on the public domain work, in which case it would be granted its own copyright monopoly.

Some museums and galleries not only refuse to accept this, they go even further, and try to argue that members of the public have no right to make their own images of creations that are unequivocally in the public domain. There is an important case being heard by Germany’s top court to establish whether photos of works in the public domain taken by a Wikipedia supporter can be used freely by the online encyclopaedia. It’s absurd that this should even be a question, but it underlines how susceptible the public domain is to erosion by spurious claims of ownership.

Attempts to limit access to public domain works have been taking place for years, but there’s a new threat to the public domain that could prove to be one of the most serious. It is illustrated by the recent experience of the Australia-based music technologist Sebastian Tomczak. He uploaded to YouTube a ten-hour video that consists of nothing but white noise, defined by Wikipedia as “a random signal having equal intensity at different frequencies”. Its random nature means that there is nothing original about it, and that there can be no copyright. Despite that fact, Tomczak was hit with no less than five copyright complaints from companies claiming that his video was “infringing” on their creations, which also used white noise. Even though “their” white noise was also – by definition – random, and therefore not covered by copyright, YouTube’s complaint system was unable to appreciate that point, and treated the claims as potentially valid.
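The definition quoted above can even be checked numerically. A minimal Python sketch (illustrative only; the sample size and the 0.05 threshold are our own choices) generates two independent uniform white-noise signals and shows that their normalised correlation is close to zero, which is exactly why one randomly generated noise recording cannot meaningfully “match” another:

```python
import random

def white_noise(n, seed):
    """Generate n samples of uniform white noise in [-1, 1]."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

def correlation(a, b):
    """Normalised cross-correlation at lag zero."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den

# Two independently generated "recordings" of white noise
claimant = white_noise(100_000, seed=1)
tomczak = white_noise(100_000, seed=2)

r = correlation(claimant, tomczak)
print(abs(r) < 0.05)  # True: independent noise signals are essentially uncorrelated
```

For 100,000 samples the expected correlation magnitude is on the order of 1/√n ≈ 0.003, so any matching system that nonetheless flags one noise signal as a copy of the other is not detecting shared content at all.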

“If a work uses public domain materials, it could easily be blocked because of pre-existing claims by companies that have produced their own works using the same sources.”

Precisely the same is likely to happen for works that draw on the public domain if the EU’s proposed upload filters are imposed on Internet sites. If a work uses public domain materials, it could easily be blocked because of pre-existing claims by companies that have produced their own works using the same sources. Such a claim would quickly be dismissed if it ever came before a judge, but with upload filters that are (inevitably) automated, the tendency will doubtless be to err on the side of caution. A work that might look infringing because it includes public domain material used elsewhere therefore runs the risk of being widely blocked.

Although in theory those using public domain materials might be able to appeal against such an action, it would require them to know how to do that, and to have the time and the inclination to do so. One of the biggest strengths of public domain materials is that they can be used without permission by anyone – especially by those who know nothing about the finer points of copyright law, and who have limited financial resources. It is precisely these individuals who will be unwilling or unable to challenge erroneous blocking by upload filters. Over time, people may even avoid drawing on public domain materials for fear that their posts will be blocked, and that they may be subject to other punishments by sites hosting their material because of their repeated copyright “offences”.

Those pushing for upload filters will doubtless insist this outcome is not their intent, and that may be so. But given the impossibility of incorporating detailed legal knowledge about this famously complex area into online censorship systems, and the vulnerability of the public domain, which is particularly at risk because there is no organisation to defend it, it is inevitable that this rich resource, built up over three hundred years, will be badly affected by automated filters. If it adopts this approach, the EU will end up undermining the basic quid pro quo of copyright – that works can be used freely after a temporary monopoly has elapsed – and thus the public’s acceptance that the current framework is in some sense “fair”. Ironically, a draconian upload filter system brought in supposedly to defend copyright could end up leading to it being seriously de-legitimised.

Featured image by Nick Webb.


Copybuzz tells a Christmas story about how, once upon a time, everyone was safely harboured within tall, strong walls, while outside the city the deep dark forest was lurking. Technological progress allowed the inhabitants of the forest to survive: everyone was connected to everything and everyone else through services created and offered by a band of Merry Men who ruled the forest.

However, all of this was happening with absolutely no control by the king of the castle: and that, the king DID NOT LIKE! You see, the king had always been there to defend rights holders. Control meant money, and money meant power. Watch the video to discover how the king tries to turn the Merry Men into his private police in order to regain control.

We hope that in 2018, for the first time ever, the common good will prevail, leaving the Internet in its current state: open and full of noise! In 2018, let’s try and make things creative, enlightening, curious, scientific, diverse, and non-filtered … Let’s make it happen!


As numerous CopyBuzz posts attest, there are three sections of the proposed EU Copyright Directive that are particularly problematic: Articles 3, 11 and 13. Article 3 concerns text and data mining, while Article 11 is the infamous snippet tax, also known as the link tax. Those are both worrisome, but it is Article 13, the upload filter, that is arguably the worst of all.

A filter, by its very nature, is about stopping people from uploading material. No system is perfect, so it is inevitable that some things will be blocked even though they are perfectly legal. That, in turn, will lead to people self-censoring because they fear that something near the boundary of what is permitted will be blocked. The supposedly bright line between legal and illegal will blur, taking out a big chunk of legitimate creativity in the process, and damaging freedom of expression in the EU.

The call for an upload filtering system in itself reflects an ignorance about the technical reality of such approaches. Proponents of the idea like to cite YouTube’s Content ID as proof that upload filters can be built. What this overlooks is the fact that according to Google, Content ID is the product of over 50,000 hours of coding and some 60 million dollars. There are few other firms that could match the scale of investment made by Google. One leading system used for music filtering, Audible Magic, has been in development for over a decade. An EU requirement for mandatory upload filters could mean that US companies will end up with a monopoly for video and audio content filtering, deciding what can and cannot be posted online, an unsatisfactory situation for startups in the region.

Moreover, upload recognition systems comparable to Audible Magic and Content ID will be needed for every kind of material, not just audio and video. That will either require hundreds of millions of euros’ expenditure or lead to cheaper, flawed products. Trying to cut corners during development to save money would lead to systems that produce many false positives, with serious chilling effects for creativity in the EU.
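The over-blocking mechanism behind such false positives is easy to demonstrate. The toy sketch below is not any real filtering system; the shingle size and the block-on-any-match rule are deliberate simplifications of our own. It builds a crude text fingerprint index and shows how a registered work that merely quotes a public-domain passage causes an unrelated upload quoting the same passage to be flagged:

```python
# Toy illustration of a cheap "fingerprint" filter built from
# hashed 5-word shingles (overlapping word windows).

def shingles(text, n=5):
    """Return the set of hashed n-word windows of a text."""
    words = text.lower().split()
    return {hash(tuple(words[i:i + n])) for i in range(len(words) - n + 1)}

# A rightsholder registers a work that itself quotes a public-domain text
public_domain = "to be or not to be that is the question"
registered = "my remix album featuring to be or not to be that is the question"
index = shingles(registered)

# An unrelated upload quoting the same public-domain passage...
upload = "a school recital of to be or not to be that is the question"

overlap = len(shingles(upload) & index)
blocked = overlap > 0  # a cheap filter blocks on any shared fingerprint
print(blocked)  # True: legitimate public-domain use is flagged
```

A more careful system would need to know which fingerprints derive from public-domain or licensed material and exclude them, which is precisely the kind of legal context that is expensive to encode and that corner-cutting implementations omit.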

It’s not only the range of filtered material that is broad. The list of sites that will be subject to a requirement to monitor everything that is uploaded to them goes way beyond deep-pocketed services like YouTube. In particular, it will impose impossible burdens on key sites like Wikipedia and the GitHub development platform. As a non-profit organisation, Wikipedia simply doesn’t have the resources to allocate to costly filtering systems. GitHub’s ability to act as a relatively friction-free vehicle for open source will be seriously harmed by a requirement to check every single file that is uploaded to its servers for possible copyright infringements. That will have knock-on effects that will put a brake on free software development in the EU. Similarly, these issues will have a devastating impact on open access and academic collaboration in the region. The EU’s standing in the world of research will suffer as a result.

As if those practical problems weren’t enough, there’s another serious issue, which concerns the compatibility of the general upload filter idea with existing EU law. Article 14 of the EU’s E-commerce Directive says that “the service provider is not liable for the information stored at the request of a recipient of the service”, while Article 15 unequivocally states: “Member States shall not impose a general obligation on providers … to monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity.” The Copyright Directive’s requirement for major online services to install an upload filter that monitors everything for possible copyright infringement in order to avoid liability runs counter to both Article 14 and Article 15 of the E-commerce Directive, as well as other key rulings from the EU’s highest court on the issue.

The European Commission is well aware of this. In a desperate attempt to square the circle, it recently published its “Communication on Tackling Illegal Content Online – Towards an enhanced responsibility of online platforms”. As its title suggests, the Commission wants online services to take “enhanced responsibility” for removing material from their platforms. The supposedly voluntary nature of this monitoring – emphatically not “a general obligation” – is the trick by which the Commission claims online companies can carry out constant pro-active monitoring and filtering without losing the immunity for copyright infringement that the E-commerce Directive grants to sites functioning as passive conduits of user data.

However, even accepting this absurd idea of “passive” service providers that “pro-actively” filter all material, there is another big problem with the fact that the European Commission “strongly encourages” online platforms to do this, but will not require it by law. It was noted in a blog post by Dr Sophie Stalla-Bourdillon, Associate Professor in Information Technology/Intellectual Property Law at Southampton Law School, University of Southampton.

The issue involves one of the most important pieces of recent legislation passed by the EU, the General Data Protection Regulation (GDPR), which will be enforced from May next year. The GDPR is an update to the EU’s already strong privacy protections. It adds a number of important new features designed to enhance the rights of EU citizens in this field. One of them concerns “Automated individual decision-making“, where Article 22 of the GDPR lays down: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

Naturally, there are some exceptions to that right. For example, if the automated processing forms part of a contract, or if the person involved gives their explicit consent. Dr Stalla-Bourdillon’s blog post explains why the contract exception is unlikely to apply in the case of users uploading files to an online platform. There is clearly no consent to material being blocked by an automated filter, since the person uploading it wants it to appear online. That leaves just one other legal justification for carrying out the automated decision-making: if it is “authorised by Union or Member State law to which the [data] controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests”.

When it comes to automated processing carried out by upload filters installed to satisfy Article 13, the European Commission might try to claim it is legal under the GDPR because of that third exception, thanks to its “Communication on Tackling Illegal Content Online”, hoping that no one notices that the Communication is a policy document with no mandatory authority. But even if it did carry the force of law, the use of upload filters would no longer be voluntary but a requirement, in which case it is forbidden by Article 15 of the E-commerce Directive.

To summarise: Article 13’s automated general upload filters are either voluntary, in which case they are illegal under the GDPR, or they are mandatory, and therefore illegal under the E-commerce Directive. There’s no other possibility. What’s clear is that upload filters are illegal in all situations, and must therefore be dropped from the Copyright Directive completely.

Featured image by Nicu Buculei.

