Kluwer Copyright Blog (KCRB) is a publication of Kluwer Law International, providing information and news on European copyright law. The blog has assembled a group of leading experts, comprising practising lawyers and academics, to report on the latest developments.
Forthcoming in the November 2018 issue of Communications of the ACM, a computing professionals journal, is a column entitled “Legally Speaking: The EU’s Controversial Digital Single Market Directive” by Professor Pamela Samuelson, Berkeley Law School. The editors of Communications of the ACM have given permission for this column to be pre-published for the Kluwer Copyright Blog.
The proposed European Digital Single Market (DSM) Directive would mandate a new copyright exception to enable nonprofit research and cultural heritage institutions to engage in text- and data-mining (TDM). The European Commission and the Council recognize that digital technologies have opened up significant opportunities for using TDM techniques to make new discoveries by computational analysis of large data sets. These discoveries can advance not only natural but also human sciences in ways that will benefit the information society.
Article 3 would require EU member states to allow research and cultural heritage institutions to reproduce copyrighted works and extract information using TDM technologies, as long as the researchers had lawful access to the contents being mined. These researchers must, however, store such copies in a secure environment and retain the copies no longer than is necessary to achieve their scientific research objectives.
Importantly, rights holders cannot override the TDM exception through contract restrictions. (They can, however, use technology to ensure security and integrity of their networks and databases, which opens the possibility of technology overrides.) Article 3 also calls for rights holders, research organizations, and cultural heritage institutions to agree upon best practices for conducting TDM research.
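For readers unfamiliar with the technique, TDM simply means making copies of works so that software can analyse them computationally. A minimal, illustrative sketch (the corpus and code here are hypothetical, not drawn from any actual research pipeline):

```python
# Illustrative sketch of text- and data-mining (TDM): computing term
# frequencies across a tiny corpus to surface patterns. Real TDM pipelines
# run over large corpora of in-copyright works; the copies made along the
# way are the reproductions that Article 3's exception would excuse.
from collections import Counter
import re

corpus = [
    "Copyright law governs the reproduction of protected works.",
    "Data mining extracts patterns from large collections of works.",
    "Researchers mine text to discover patterns across disciplines.",
]

def tokenize(text):
    """Lowercase a document and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Count how often each term appears across the whole corpus.
term_counts = Counter(t for doc in corpus for t in tokenize(doc))

# The most frequent content words hint at the corpus's subject matter.
print(term_counts.most_common(3))
```

Even this toy example requires loading a full copy of each text into memory, which is precisely the act of reproduction the exception addresses.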
No TDM Privilege for Profit-Making and Unaffiliated Researchers
The DSM Directive assumes that profit-making firms can and should get a license to engage in TDM research from the owners of the affected IP rights. Although the DSM contemplates the possibility of public-private partnerships, it forbids those in which private entities have control over TDM-related collaborative projects. Unaffiliated researchers (say, independent data scientists or think-tank personnel) cannot rely on the DSM’s TDM exception.
Article 3 is likely to put the EU at a disadvantage in AI research because some countries have already adopted less restrictive TDM exceptions. Japan, for instance, allows text- and data-mining without regard to the status of the miner, and does not confine the scope of the exception to nonprofit “scientific research.” In the U.S., for-profit firms have been able to rely on fair use to make copies of in-copyright materials for TDM purposes, as in the Authors Guild v. Google case. This ruling did not limit TDM purposes to scientific research.
Commentators on the DSM Directive have expressed several concerns about the restrictions on its TDM exception. First, TDM licenses may not be available on reasonable terms for startups and small businesses in the EU. Second, some EU firms may ship their TDM research off-shore to take advantage of less restrictive TDM rules elsewhere. Third, some non-EU firms may decide not to invest in TDM-related research in the EU because of these restrictions. Fourth, in the highly competitive global market for world-class AI and data science researchers, the EU may suffer from “brain drain” if its most talented researchers take job opportunities in jurisdictions where TDM is broadly legal.
The DSM Directive’s proposed exception for TDM research is a welcome development for those who work at research and cultural heritage institutions. However, the unfortunate withholding of the exception from for-profit firms and independent researchers may undermine the EU’s aspiration to promote innovation in the AI and data science industries. It will be hard for EU-based entities to compete with American and Japanese firms whose laws provide them with much greater freedom to engage in TDM analyses.
The stated goals of the EU’s proposed Digital Single Market (DSM) Directive are laudable: Who could object to modernizing the EU’s digital copyright rules, facilitating cross-border uses of in-copyright materials, promoting growth of the internal market of the EU, and clarifying and harmonizing copyright rules for digital networked environments?
The devil, as always, is in the details. The most controversial DSM proposal is its Article 13, which would require online content sharing services to use “effective and proportionate” measures to ensure that user uploads to their sites are non-infringing. Failure to achieve this objective would result in the services being directly liable for any infringements. This seemingly requires those services to employ monitoring and filtering technologies, which would fundamentally transform the rules of the road under which these firms have long operated.
This column explains the rationales for this new measure, specific terms of concern, and why critics have argued for changes to make the rules more balanced.
Article 13’s Changes to Online Service Liability Rules
For roughly the past two decades, the European Union’s E-Commerce Directive, like the U.S. Digital Millennium Copyright Act, has provided Internet service providers (ISPs) with “safe harbors” from copyright liability for infringing uses of their services about which the ISPs had neither knowledge nor control.
Under these rules, ISPs must take down infringing materials after copyright owners notify them of the existence and location of those materials. But they do not have to monitor for infringements or use filtering technologies to prevent infringing materials from being uploaded and stored on their sites.
Because online infringements have greatly proliferated, copyright industry groups have strongly urged policymakers in the EU (as well as the U.S.) to impose stronger obligations on ISPs to thwart infringements. Their goal has been the adoption of legal rules requiring ISPs to use monitoring technologies to detect in-copyright materials and filtering technologies to block infringing uploads.
In proposing the DSM Directive, the European Commission has responded to these calls by deciding that certain ISPs should take on greater responsibility to help prevent infringements. Article 13 is aimed at those ISPs that enable online content sharing (think YouTube).
While not directly requiring the use of monitoring or filtering technologies, Article 13 can reasonably be interpreted as intending to achieve this result.
Which Online Services Are Affected?
The DSM Directive says that Article 13 is intended to target only those online content sharing services that play an “important role” in the online content market by competing with other services, such as online audio or video streaming services, for the same customers.
If the “main purpose” (or “one of the main purposes”) of the service is to provide access to “large amounts” of copyrighted content uploaded by users and it organizes and promotes those uploads for profit-making purposes, that service will no longer be protected by the E-Commerce safe harbor. It will instead be subjected to the new liability rules.
Concerns about the overbreadth of Article 13 led the Commission to narrow the definition of the online content sharing services affected by the rules. It now specifically excludes online encyclopedias (think Wikipedia), repositories of scientific or educational materials uploaded by their authors, open source software repositories, cloud services, cyberlockers, and marketplaces engaged in online retail sales of digital copies.
Article 13’s New Liability Rules
The most significant regulation in Article 13 is its subsection (4):
Member States shall provide that an online content sharing service provider shall not be liable for acts of communication to the public or making available to the public within the meaning of this Article when:
(a) it demonstrates that it has made best efforts to prevent the availability of specific works or other subject matter by implementing effective and proportionate measures … to prevent the availability on its services of the specific works or other subject matter identified by rightholders and for which the rightholders have provided the service with relevant and necessary information for the application of these measures; and
(b) upon notification by rightholders of works or other subject matter, it has acted expeditiously to remove or disable access to these works or other subject matter and it demonstrates that it has made its best efforts to prevent their future availability through the measures referred to in point (a).
Phrases in this provision such as “best efforts” and “effective and proportionate measures” are vague and open to varying interpretations, but they anticipate the use of technologies to demonstrate those “best efforts.”
Copyright industry groups can be expected to assert that it is necessary to use monitoring and filtering technologies to satisfy the requirements of Article 13(4). They will also point to an alternative way that online services can avoid liability: by licensing uploaded copyrighted content from their respective rights holders.
Affected online services will face an uphill battle to fend off efforts to interpret the ambiguous terms as imposing monitoring and filtering obligations. It is, of course, impossible for a service to license every copyrighted work that its users might upload. But the big media firms can use this new rule to extract more compensation from platforms.
Concerns About Article 13’s Liability Rules
Critics have raised two major concerns about this proposal. First, it will likely further entrench the market power of the leading platforms that can afford to develop filtering technologies such as YouTube’s Content ID, and deter new entry into the online content sharing market. Second, it will undermine user privacy and free speech interests, leading to the blocking of many parodies, remixes, fan fiction, and other creative reuses of copyrighted works that would, if examined by a neutral observer, be deemed non-infringing.
When the proposal was pending before the European Council in late May, several members, including representatives from Finland, Germany, and the Netherlands, opposed it and offered some compromise language, so it does not have consensus support. Since then, opponents have mounted a public relations campaign to urge EU residents to contact their Parliamentary representatives telling them to vote no in order to “save the Internet.”
Among the many critics of Article 13 has been David Kaye, the United Nations Special Rapporteur for Freedom of Expression. He wrote a nine-page letter explaining why Article 13 is inconsistent with the EU’s commitments under international human rights instruments.
In addition, Tim Berners-Lee, Vint Cerf, and 89 other Internet pioneers (plus me) signed an open letter urging the EU Parliament to drop Article 13:
By requiring Internet platforms to perform automatic filtering on all of the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.
More than 145 civil society organizations also came out against it, urging European voters to contact members of Parliament to oppose it.
The protests about Article 13 were successful enough to induce a majority of the European Parliament to vote for giving further consideration to the DSM directive.
While it is certainly good news that the EU Parliament decided against giving a rubber stamp to the DSM proposal in its current form, the battle over Article 13 is far from over. The EU Parliament will be taking up further proceedings about it in the fall of 2018, but its proponents can be expected to mount a new campaign for its retention.
Whether Article 13, if adopted as is, would “kill” the Internet as we know it, as some critics have charged, remains to be seen. Yet the prospect of bearing direct liability for the infringing activities of users will likely cause many sharing services to be overly cautious about what their users can upload, and new entry will be chilled. In its current form, Article 13 gives copyright enforcement priority over the interests of users in information privacy and fundamental freedoms.
About the Author:
Pamela Samuelson is the Richard M. Sherman Distinguished Professor of Law, University of California, Berkeley. She can be reached at firstname.lastname@example.org.
To make sure you do not miss out on posts from the Kluwer Copyright Blog, please subscribe to the blog here.
Real estate photographers failed to provide evidence that software provider CoreLogic, Inc., removed copyright management information (CMI) from licensed photos posted to listing services by real estate agents using CoreLogic’s software with the requisite mental state for liability under the Digital Millennium Copyright Act (DMCA), the U.S. Court of Appeals for the Ninth Circuit has held. The photographers did not affirmatively show that CoreLogic knew that its software’s failure to retain metadata in the photographers’ digital image files containing invisible watermarks would “induce, enable, facilitate, or conceal” copyright infringement, as required for liability under 17 U.S.C. §1202(b). There was no evidence from which one could infer that future infringement was likely to occur as a result of the removal or alteration of copyright management information. The Ninth Circuit affirmed a decision of the federal district court in San Diego, which ruled on summary judgment that CoreLogic could not be liable for violating the DMCA due to the photographers’ failure to meet the “mental state” requirement of Section 1202(b) (Stevens v. CoreLogic, Inc., June 20, 2018, Berzon, M.).
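The technical failure at issue can be sketched abstractly. The code below is a hypothetical illustration, not CoreLogic’s actual software: an image-processing step that copies pixel data but not embedded metadata drops CMI as a side effect, without the knowing facilitation of infringement that §1202(b) requires.

```python
# Hypothetical sketch: an image is modeled as pixel data plus embedded
# metadata fields. A processing step that copies only the pixels silently
# drops copyright management information (CMI), with no intent to conceal
# infringement -- the mental-state gap on which the case turned.

def resize_for_listing(image):
    """Return a 'processed' copy that keeps the pixels but not the metadata."""
    return {"pixels": image["pixels"], "metadata": {}}  # metadata not carried over

original = {
    "pixels": b"\x00\x01\x02",
    "metadata": {"Artist": "Jane Photographer", "Copyright": "\u00a9 2018"},
}

processed = resize_for_listing(original)
print("Copyright" in processed["metadata"])  # the CMI field is gone
```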
It is difficult to find an article on any topic in the field of intellectual property (IP) that does not call for reform. Many legislative efforts are afoot in the EU to “update” IP norms, including a proposed Directive on copyright in the Digital Single Market. The same is happening elsewhere but most of those reforms are at the national or regional level. Is international reform possible?
One argument to explain why it is not possible is that international law typically follows national law, so that incorporating new norms in an international instrument such as a new treaty presupposes that countries already agree. The treaty here is seen as de jure recognition of de facto state practice. Historically this has certainly been true on a number of occasions, but in IP it is not empirically borne out. New international IP instruments can do more than reflect an existing consensus; they can create a new one.
The 1961 Rome Convention, which protected the rights of performers, record companies and broadcasters, created a new legal regime that very few countries had put in place at the time. In other words, the Convention pulled the neighbouring (related) rights regime up rather than confirming its existence. There were arguments at the time that some musical performances should be copyright works. Still today the United States considers sound recordings as works (and tried but failed to make that an international rule during the TRIPS negotiations), leaving open vexing questions about originality.
Speaking of the TRIPS Agreement, adopted as part of the World Trade Organization (WTO) package in April 1994, it also pulled IP protection norms upwards in several jurisdictions. Reviled in the years following its application in developing countries (for most of them, 1 January 2000), TRIPS is now often used as a defense against demands for higher norms in regional trade and other agreements. With a better understanding of the flexibilities it contains, TRIPS is viewed by many commentators and policy makers worldwide as an acceptable standard.
Since TRIPS, the IP spotlight has moved (back) to the World Intellectual Property Organization (WIPO). Many new instruments were adopted there, including the 1994 Trademark Law Treaty (TLT) and its cousin, the 2006 Singapore Treaty on the Law of Trademarks, which are best seen as administrative instruments that do not fundamentally change substantive trademark law. In the same category (mostly administrative with some substantive impacts), I would put the 2015 negotiation of a new Act of the Lisbon Agreement on the protection of geographical indications. That Act, for reasons I’ve explained elsewhere, is not likely to succeed in expanding GI protection, especially in common law jurisdictions.
Since then, a number of substantive instruments have been successfully negotiated, all in the field of copyright and related rights: the 1996 WIPO Copyright Treaty, the 1996 WIPO Performances and Phonograms Treaty (WPPT), the 2012 Beijing Treaty on Audiovisual Performances, and the 2013 Marrakesh Treaty for visually impaired persons. Whether this is a good way to reform international copyright law I will discuss in a separate blogpost later.
Can reform actually happen internationally? Lobbies are powerful, but they are not an insurmountable obstacle. A comprehensive reform package could appeal, in the way that a good compromise can. There is an institutional problem as well, however.
IP policy requires both normative trade-offs and appropriate doctrinal implementation of policy choices.
Of the intergovernmental organizations (IGOs) active in IP, the WTO has the broadest portfolio. It can offer, and has offered, trade-offs between patents and textiles; sound recordings and bananas; or cotton and pharmaceuticals. It has made one change to TRIPS since 1994, namely the addition of the system allowing compulsory licensing of pharmaceuticals (art. 31bis). That said, the WTO’s “round” approach to reforms (where all sectors are on the table) makes it an unlikely forum for major progress in the foreseeable future.
The World Health Organization (WHO) has health as its prime objective and many of its recent efforts in IP have focused on access issues and skewed research investment. It cannot easily offer comprehensive reforms in a way that patent and data protection right holders would find attractive.
Let us not forget UNESCO, which has copyright in its brief as administrator of the now obsolete Universal Copyright Convention and has risen to prominence again in the area of intangible knowledge protection, where its mandate intersects with IP (copyright and TK). UNESCO’s membership seems unlikely to expand the organization’s role in IP reform anytime soon.
WIPO remains of course front and center. It can cooperate with other IGOs, as it has on health. WIPO can push, as it now does, for small-scale reforms. WIPO has stayed away from more comprehensive normative reforms. Yet it has the resources and expertise to identify, analyze and parse normative issues and to make policy proposals, which could be of a more comprehensive nature. Barring a more ambitious approach, it may be that the feeling of perpetual dissatisfaction, best seen as a sign of suboptimal international IP norms, will continue.
While awaiting the vote (on 5 July 2018) of the European Parliament on the Legal Affairs (JURI) Committee Proposal on Article 13 of the draft Directive on Copyright in the Digital Single Market – commented on previously by Christina Angelopoulos – in this post we will focus on the Proposal agreed on by the European Council. The Council proposal takes a clear position, creating a special liability regime for providers which are considered to be performing acts of communication to the public or making available to the public of works uploaded by their users. The mechanism envisaged by the Council is not entirely different from that proposed by the Parliament; the latter, though, lacks some of the clarity of the Council’s text. In spite of the improvements that the Council’s proposal brings in terms of clarity, serious concerns still exist. In addition to the potential violation of fundamental rights linked to the obligation for platforms to proactively prevent copyright works from being posted on their websites, the proposal does not address the issue of creators’ remuneration. Therefore, in the hope that the Parliament’s vote will reopen the dialogue on the wording of Article 13, legal mechanisms that guarantee the remuneration of creators will be proposed.
Online Content Sharing Service Provider (OCSSP)
Article 13 in the Council draft focuses on Online Content Sharing Service Providers (OCSSPs), which are defined in Article 2(5) as “a provider of an information society service whose main or one of the main purposes is to store and give the public access to a large amount of works or other subject-matter uploaded by its users which it organizes and promotes for profit-making purposes”. The Council, in recitals 37a and 37b, attempts to narrow down the scope of the provision: Article 13 applies to those providers whose main purpose is to profit from providing access to copyright-protected content by organising it and promoting it in order to attract a larger audience. Recital 37a explains that organising and promoting content involves, for example, indexing the content, presenting it in a certain manner and categorising it, as well as promoting it. The notion of OCSSP does not include services whose main purpose is not to provide access to copyright-protected content with the purpose of obtaining profit from this activity (e.g. internet access providers, cloud providers or online marketplaces). In the same manner, websites which store and provide access to content for not-for-profit purposes (e.g. “online non-for-profit encyclopaedias, scientific or educational repositories or open source software developing platforms which do not store and give access to content for profit-making purposes”) are not OCSSPs.
The Council provision mandates a case-by-case assessment of whether the provider falls into this category. However, it is difficult to predict how courts would interpret indeterminate concepts within the provision such as “large amount of works” and “main purpose to give access to copyright-protected works”, and this might cause uncertainty. Legal uncertainty might incentivise platforms (e.g. user-generated content platforms) to be overzealous in removing content in order to comply with the proposal’s requirements.
The New Parallel Liability Regime for OCSSPs
From the text suggested by the Council emerges a new parallel liability regime for OCSSPs. Article 13(1) states that OCSSPs perform an act of communication to the public or an act of making available to the public when they give public access to copyright-protected works uploaded by their users. Therefore, the Council, interpreting and expanding the Commission’s text, explicitly derogates from the notion of the right of communication to the public of works and the right of making available to the public under Article 3(1) and (2) of Directive 2001/29/EC (InfoSoc Directive) as construed by the CJEU. This intention is explicitly conveyed in recital 38: “This Directive … does not change the concept of communication to the public or of making available to the public under Union law nor does it affect the possible application of Article 3(1) and (2) of Directive 2001/29/EC to other services using copyright-protected content”. This “legal fiction” is the basis of the liability regime envisaged by the Council for OCSSPs, which in principle are expected either to get a licence from the rightholders or to prevent the availability of works on their services. The following graph provides an overview of the new liability regime for OCSSPs.
The first option for the OCSSP is to enter into licensing agreements with copyright holders. Where an OCSSP obtains authorisation from the rightholders to communicate or make available to the public works or other subject-matter, this authorisation also covers acts of uploading, by users acting on a non-commercial basis, of works acquired by the providers. This means, for example, that if the song Hallelujah by Leonard Cohen, performed by Cohen and produced by Columbia Records, has been licensed to the OCSSP, users can upload that song for non-commercial purposes without seeking authorisation from the rightholders.
The second path deals with a further paradigmatic departure from the acquis, provided for in Article 13(3) and Recital 38b: where the rightholders’ authorisation has not been obtained, in principle OCSSPs do not enjoy, in relation to copyright-relevant acts, the liability exemption regulated by Article 14 of Directive 2000/31/EC (E-Commerce Directive). Therefore, according to Article 13 these services have an obligation to prevent the availability of non-authorised works. This obligation is specified by the Council as summarised in the yellow box of the above graph and represents the most critical part of the proposal. The OCSSP is not liable for copyright infringements in two cases (Article 13(4)):
Where it demonstrates that, by implementing effective and proportionate measures, it has made its best efforts to prevent the availability of works. On this point, it is worth noting that the obligation for OCSSPs to prevent the availability of works is limited to specific works identified by rightholders. This may mean that rightholders which do not identify works for an OCSSP to “stay down” grant a sort of implied licence to the OCSSP, which therefore may be allowed to communicate to the public or make available to the public those works uploaded by users.
Where, upon notification by rightholders of works, it acts quickly to remove or disable access to these works.
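The two-limb structure just described can be summarised as a small decision function. This is an interpretive sketch of the Council’s Article 13(4), with illustrative parameter names, not an authoritative statement of the law:

```python
# Interpretive sketch of the Council's Article 13(4) liability flow for an
# OCSSP. Parameter names are illustrative; this is commentary, not legal advice.

def ocssp_liable(has_licence: bool,
                 best_efforts_preventive_measures: bool,
                 expeditious_takedown_and_staydown: bool) -> bool:
    """Return True if, on this reading, the provider would be liable."""
    if has_licence:
        return False  # authorised communication to the public
    # Without a licence, both limbs of Article 13(4) must be satisfied:
    # (a) best-efforts preventive measures for works identified by rightholders,
    # (b) expeditious removal on notice plus efforts to keep the work down.
    return not (best_efforts_preventive_measures and expeditious_takedown_and_staydown)

print(ocssp_liable(False, True, True))   # both safeguards in place: not liable
print(ocssp_liable(False, True, False))  # slow or absent takedown: liable
```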
On a positive note:
The provisions under Article 13(5) and Recital 38(e), while defining the obligation for OCSSPs, distinguish between big and small companies. Indeed, the effectiveness and proportionality of the measures referred to in point (a) above shall be assessed taking into account, among other factors:
Nature and size of OCSSPs;
Amount and type of works or other subject-matter uploaded by users;
Availability and costs of the measures and their effectiveness in light of technological developments in line with industry best practices.
According to recital 38(e), in some cases small and micro enterprises may not be expected to apply preventive measures but only to expeditiously act to remove unauthorised content.
In principle at least, exceptions and limitations shall remain unaffected.
Article 13(6) requires Member States to ensure that OCSSPs and rightholders “cooperate with each other in a diligent manner to ensure the effective functioning of the above mentioned measures”.
Nonetheless, two major concerns arise.
1) “Effective and Proportionate Measures”
The first big concern is related to the understanding of “effective and proportionate measures” to prevent availability of works. It must be acknowledged that (unlike in the Commission proposal) there is no explicit mention of the obligation of general monitoring and filtering. However, the wording “effective and proportionate measures” read in combination with recital 38e (where “effective technologies” are mentioned as a means to fulfil this obligation) might lead to the conclusion that at least big OCSSPs should apply general monitoring and filtering systems in order not to be found liable. On this point, it suffices here to recall the well-founded criticisms raised by leading copyright academics across the EU who have been highlighting the inconsistency of such an obligation with the EU Charter of Fundamental Rights.
Similarly, despite the statements of principle, applying monitoring and filtering systems to prevent the availability of works might in practice undermine the implementation of the statement “without prejudice to exception and limitations provided for in Union law”. In order to avoid being sued, platforms might tend to over-block content even in cases where content is covered by exceptions and limitations. This is true both in the case of “automated monitoring” and in that of “human monitoring” carried out by OCSSPs’ employees. It has been reasonably pointed out that content monitoring by OCSSPs “would inevitably be inaccurate”. Studies also show that this problem already exists under notice and takedown (NTD) systems and would grow worse under a monitoring system. One obvious problem with OCSSP monitoring is the difficulty of identifying exceptions and limitations – a task with which lawyers and judges struggle. Even though, according to Article 13(7), service providers should put in place a complaint and redress mechanism for users of the services in case of disputes over the application of the measures to their content, research on existing notice-and-takedown systems has demonstrated that similar mechanisms are rarely used by end-users and are in any event ineffective.
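The over-blocking problem follows from how matching technologies work: they compare content, not legal context. A deliberately naive fingerprint filter (the protected snippet and uploads below are invented for illustration) shows why a quotation is blocked just as readily as a pirate copy:

```python
# A deliberately naive upload filter, for illustration only: it blocks any
# upload containing a fingerprinted excerpt of a protected work. The filter
# cannot tell infringement apart from quotation, criticism, or parody.
import hashlib

PROTECTED_SNIPPET = "all the world's a stage"

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

BLOCKLIST = {fingerprint(PROTECTED_SNIPPET)}

def is_blocked(upload: str, window: int = len(PROTECTED_SNIPPET)) -> bool:
    """Slide a window over the upload and block on any fingerprint match."""
    upload = upload.lower()
    return any(
        fingerprint(upload[i : i + window]) in BLOCKLIST
        for i in range(len(upload) - window + 1)
    )

pirate_copy = "ALL the world's a stage"                                  # exact copy
review = 'A fine essay on "all the world\'s a stage" and its meaning.'   # quotation
original_post = "My holiday photos from Berlin."                         # unrelated

print(is_blocked(pirate_copy), is_blocked(review), is_blocked(original_post))
```

The quotation in the review, although plausibly covered by an exception or limitation, is indistinguishable to the matcher from the pirate copy; only the unrelated upload passes.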
Article 13 should be reworded to explicitly exclude any obligation of monitoring and filtering. The NTD procedure under Article 13(4)(b) in the Council’s text should also be specified in detail to achieve a high level of harmonisation within the European Union. In the interest of legal certainty, an elaborate legislative design of the NTD procedure, including a counter-notice procedure, is needed; measures to restrain potential abuse of NTD seem particularly sensible. The duties of the rightholders should also be substantiated: in particular, the requirements that a rightholder must meet in order legitimately to demand the removal of protected content should be regulated by law.
2) Lack of Consideration of Creators’ Interests
The provision proposed by the Council, like those of the Commission and the Parliament, only refers to “rightholders”, including both creators and derivative rightholders, without taking into account the whole picture as represented in the following graph.
Normally creators transfer or assign their rights to publishers and producers; however, they do not always get an equitable share of the profits generated by the exploitation of their rights through OCSSPs. This share is left to the parties to negotiate, and it may be that, due to the weak bargaining power of creators, they do not receive equitable remuneration for this particular exploitation of their works.
Legal mechanisms to ensure creators’ participation as well as performers’ participation in the profits generated through OCSSPs’ exploitation of works and performances may be included. Two options may be taken into account.
2. A second alternative mechanism – already known to the European legislator – could be borrowed, mutatis mutandis, from Directive 2006/115/EC (Rental Directive). Article 5 of the Rental Directive seeks to ensure that authors and performers benefit from their rental rights. Article 5 therefore provides for an unwaivable right to remuneration to which authors and performers remain entitled even after having transferred their rental rights to a producer. According to Article 5, Member States have the freedom to regulate whether and to what extent collecting societies shall administer the equitable remuneration right and who shall be responsible for the payment of the remuneration (see Krikke, in Dreier, Hugenholtz, Concise European Copyright Law (2nd edn, Wolters Kluwer 2016) 290 ff.).
“All kinds of content are today easily made available online. The dissemination of creative content through the internet encourages cultural diversity. However, this has to be balanced against an appropriate level of protection and fair remuneration for those creating the content” – Boil Banov, Minister of Culture of Bulgaria.
This quotation reflects the main objective of the Council’s proposal: to strike the right balance between the remuneration received by authors and performers and the profits made by internet platforms when those platforms make their works accessible. However, the Council fails to reach that goal, since creators’ interests are not taken into careful consideration. It is therefore to be hoped that the legislative discussion proceeds with the whole picture in view, including creators’ rights as well as users’ rights.
Last week, the Legal Affairs (JURI) Committee of the European Parliament voted in favour of Rapporteur MEP Axel Voss’s proposal on Article 13 of the draft Directive on Copyright in the Digital Single Market. The saga surrounding this infamous text is, however, far from over. A group of MEPs is currently challenging the JURI version of the proposal through the so-called rule 69c procedure. This means that on 5 July at 12:00, the plenary of the Parliament will vote on whether the JURI report provides an acceptable basis to start “trilogue” negotiations with the Council. This will be a yes/no vote. If the EP votes against the JURI mandate, every MEP will be able to file an amendment to the JURI report in the next plenary session in September. This would allow for Article 13 (as well as the equally contested Article 11) to be reworded.
Should Article 13 be amended? As I explain below, the answer to this question was already given by the Court of Justice of the European Union in 2012.
What websites does Article 13 target?
Article 13 in the Voss version of the draft directive focuses on so-called ‘online content sharing service providers’. These are defined by Article 2 of the proposal as internet service providers one of the ‘main purposes’ of which is to give the public access to copyright-protected works and which ‘optimise’ those works. ‘Optimisation’ is further defined in Recital 37a as including the promotion, display, tagging, curating and sequencing of works. Until now, many have argued that such acts did not give a platform an ‘active role’ of the kind that would expose it to liability if they were performed through automated technology. The Voss draft, however, closes the door on this interpretation by unequivocally stating that such optimisation amounts to an active role ‘irrespective of the means used therefor’.
This exposes most modern User-Generated Content (UGC) websites to liability. A safety valve might be found in the concept of ‘main purpose’ – unfortunately however, this is not defined in the text. Is the ‘main purpose’ of UGC sites to give access to copyright-protected works? I would argue that they intend to promote the exchange of content created by users – but copyright-infringing content is often also posted on such services. Politically, it is clear that the reforms are targeted at such platforms. It is hard to predict how courts would react if cases involving UGC sites were to come before them. The legal uncertainty this would create would incentivise such websites to comply with the proposal’s requirements anyway, so as to avoid lengthy and expensive legal battles.
The proposal does include a list of carve-outs for online encyclopaedias, educational and scientific repositories, providers of private cloud services, open source software developing platforms and online marketplaces. These exceptions are helpful, but they would still leave video-sharing websites, such as Dailymotion, online forums, such as reddit, and social networking sites, such as Twitter, out in the cold.
What obligations does the law impose on providers?
Article 13 seeks to make the targeted platforms directly liable for copyright infringements performed by their users online. It offers platforms two basic options: they must either enter into licensing agreements with copyright holders or take measures to prevent copyright works from being posted on their websites. As Prof. Martin Senftleben explains here, the first of these options is practically impossible: the platforms would have to acquire licenses from every copyright holder currently protected in the EU. Given the wide variety of works some platforms host, the rights clearance task would be enormous. The fragmented collecting society landscape in the EU raises further roadblocks.
This would leave providers with one option: to ensure that copyright-protected works and other subject matter are not posted on their platforms. Given the vast volumes of content that even smaller websites handle, the only practicable way of achieving this would be through filtering. The Voss draft appears aware of this. Although the wording tries to suggest that other options would be available, the only one named in the draft is the implementation of ‘effective technologies’ – i.e. filtering. It is therefore clear that Article 13 aims at imposing an obligation on platforms to filter.
What is wrong with that?
That the EU legislator would seek to oblige platforms to filter is perhaps somewhat surprising, given that this option has already been rejected by the CJEU in its case law as incompatible with: a) existing EU law and b) the Charter of Fundamental Rights of the European Union.
a) Incompatibility with the Prohibition on General Monitoring Obligations of Art. 15 E-Commerce Directive
‘Member States shall not impose a general obligation on providers, when providing the services covered by Articles 12, 13 and 14, to monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity.’
This is the so-called ‘general monitoring prohibition’ that has formed a cornerstone of the EU’s internet law for almost 20 years. The term ‘general monitoring’ is contrasted by Recital 47 ECD to monitoring ‘in a specific case’, which is permissible.
The prohibition has been further interpreted by the CJEU. The seminal cases in this area are SABAM v Scarlet and SABAM v Netlog. In these, the CJEU examined whether an injunction imposed on, respectively, an internet access provider and a hosting service provider, requiring them to install filtering systems, would be permissible under EU law. It is worth noting that Netlog concerned precisely the type of provider considered by Article 13. The Court concluded that such an injunction would oblige the intermediaries to actively monitor almost all the data relating to all of their users in order to prevent any future infringement of intellectual-property rights. It would therefore involve general monitoring, bringing it into conflict with Article 15 ECD.
The same conclusion would hold true of Article 13’s ‘effective technologies’. Indeed, even if a platform were to eschew filtering in favour of, for example, manual moderation, it would still not be able to check whether each piece of content on its website is infringing or not without, well, checking each piece of content. General monitoring is a necessary property of filtering.
It is worth noting the counter-argument that, in the case of Article 13, the monitoring would not be truly ‘general’, as it would not be targeted at preventing any future infringement in an abstract sense. Proponents of Article 13 suggest that it only envisages an obligation for the providers to remove works where they have previously received ‘relevant information’ by the right-holders. As a result, they conclude, platforms will not be obliged to filter at large, but only in order to prevent those copyright infringements brought to their attention by right-holders. This is an interpretation inspired by decisions of the German BGH, according to which, if a monitoring activity is targeted at preventing the infringement of a specific work, then it is itself also specific. Yet, even the extremist position of the BGH at least requires a court order before filtering can be imposed. This basic procedural guarantee is absent in the proposed directive, which would instead incorporate a duty to monitor directly into statutory law.
This logic does not add up. Even given a limitation to pre-identified works, the monitoring will necessarily affect the totality of the content available on the service as uploaded by any user. If everybody’s content is being monitored, whether that is to prevent any copyright infringement or only particular copyright infringements is immaterial: the monitoring is still general. This has been made clear by the CJEU in McFadden, where it defined ‘general monitoring’ as the monitoring of ‘all of the information transmitted’ by a provider, with no qualification depending on the specificity of the protected works. This is also the approach favoured by the French Cour de cassation, which in 2012 rejected the ‘notice-and-stay-down’ model as involving general monitoring.
Instead, ‘monitoring in specific cases’ should be limited to what the natural meaning of the term suggests: the monitoring of the activity of specific persons online. It must be recalled that Article 15 ECD applies horizontally to a variety of different kinds of online unlawfulness. While monitoring in specific cases may be less useful for copyright holders, it is often imperative in the investigation of cases involving the posting of images of child sexual abuse or terrorist content. This has recently been acknowledged by the Commission’s Communication on tackling illegal content online, which indicates that, in certain cases, platforms should abstain from removing illegal content, so as to enable investigations by national law enforcement authorities and the prosecution of perpetrators.
b) Incompatibility with the Charter
Even if problems did not arise with regard to Article 15 ECD however, filtering obligations would still be incompatible with the EU’s most basic legal principles. This is because, according to case law of the CJEU, in the enforcement of copyright, a ‘fair balance’ must be struck between the protection of copyright on the one hand and, on the other hand, the protection of the fundamental rights of other persons. In the SABAM cases, it was found that filtering violates this requirement.
According to the Court, this is because filtering obligations would require the intermediary to install ‘a complicated, costly, permanent computer system at its own expense’. This would bring the obligation out of balance with Article 16 of the Charter and the freedom of the intermediary to conduct its business. The filtering would also have involved ‘the identification, systematic analysis and processing of information connected with the profiles created on the social network by its users’. This information is protected personal data because, in principle, it allows those users to be identified. As a result, the filtering would also have been incompatible with Article 8 of the Charter on the protection of personal data. Finally, the system ‘might not distinguish adequately between unlawful content and lawful content, with the result that its introduction could lead to the blocking of lawful communications.’ This would make the filtering incompatible with a fair balance with Article 11 of the Charter on end-users’ freedom of expression and information.
The lack of a ‘fair balance’ would also plague the obligations to adopt effective content recognition technologies envisaged by Article 13 of the Proposal.
It should be noted that the cooperation with right-holders which the Proposal envisages could lessen some of the burden on the intermediaries’ freedom to conduct their business. As right-holders would take on the responsibility of indicating which copyrights should be protected, the intermediary would not have to filter for any infringement. Potentially, therefore, a greater balance – perhaps even a sufficient balance – between the protection of copyright and the platforms’ freedom to conduct their business may be injected into the system.
Nevertheless, the problems with end-users’ freedom of expression and data protection rights persist.
For one thing, in order to identify infringements among the totality of the content on the hosting service provider’s network, it would still be necessary to ‘identify, systematically analyse and process’ that totality of content. This would include end-users’ personal data. In this regard, it is worth considering the difference in nature between the right to privacy and the right to the protection of personal data. Where users upload non-infringing content onto the platform of a hosting service – e.g. a home video onto YouTube – it will almost inevitably contain personal data. That content would nevertheless have to be scanned and compared against the ‘fingerprint’ of the copyrighted work in the filter’s database to ensure that it is not infringing.
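The structural point can be made concrete with a deliberately simplified sketch (all names are hypothetical, and the exact hash-based matching is an illustrative assumption – real systems such as Content ID use far more sophisticated perceptual fingerprints): even a filter limited to works pre-identified by right-holders must still inspect every single upload, infringing or not.

```python
# Toy sketch of a fingerprint filter -- illustrative only, not any
# real platform's implementation. Uses exact hashing as a stand-in
# for perceptual fingerprinting.
import hashlib

def fingerprint(content: bytes) -> str:
    """Derive a fingerprint from uploaded content."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical database of fingerprints notified by right-holders.
NOTIFIED_FINGERPRINTS = {
    fingerprint(b"protected work A"),
    fingerprint(b"protected work B"),
}

def screen_upload(content: bytes) -> bool:
    """Return True if the upload may go online.

    Note: EVERY upload passes through this check -- the filter cannot
    know in advance which uploads match, which is why filtering for
    even a closed list of works entails monitoring all content.
    """
    return fingerprint(content) not in NOTIFIED_FINGERPRINTS

assert screen_upload(b"home video of my cat") is True
assert screen_upload(b"protected work A") is False
```

The design choice being illustrated is unavoidable: restricting the *database* to specific works does not restrict the *scanning*, which by construction covers all user uploads.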
Finally, although in recent years the technology has improved in this area, it is still not capable of correctly identifying whether a given use of a work benefits from the protection of an exception or limitation to copyright. The same problem, therefore, that was noted by the CJEU in the SABAM cases with regard to freedom of expression continues to exist. Technology is simply not yet at the stage where it is capable of identifying a parody or deciding whether one work criticises or reviews another.
Moreover, it should be considered that exceptions and limitations to copyright are not currently properly harmonised at the EU level. Article 5 of the Information Society Directive instead only provides a closed list of options from among which the Member States may choose. As a result, even if the technology could develop the ability to correctly discern an exception or limitation in sensitive context-dependent situations, different filtering systems would have to be devised for each individual Member State. In the SABAM cases, the CJEU emphasised that,
whether a transmission is lawful also depends on the application of statutory exceptions to copyright which vary from one Member State to another. In addition, in some Member States certain works fall within the public domain or may be posted online free of charge by the authors concerned.
Of course, the Voss report does attempt to introduce some safeguards for end-users’ rights. So, while Article 13(1) states that the measures adopted by platforms must lead to ‘the non-availability’ of infringing works, it also requires that non-infringing works remain available. Article 13(1b) further states that,
‘Members States shall ensure that the implementation of such measures shall be proportionate and strike a balance between the fundamental rights of users and rightholders and shall in accordance with Article 15 of Directive 2000/31/EC, where applicable, not impose a general obligation on online content sharing service providers to monitor the information which they transmit or store.’
Two big problems arise in this regard. First, as noted above, the text has already stated that the targeted platforms play an ‘active role’. This means that they are deprived of the protection of the defences of Articles 12, 13 and 14 ECD. Yet, Article 15 ECD states that it is only applicable to platforms ‘when providing the services covered by Articles 12, 13 and 14’. Whether this means that providers need only provide mere conduit, caching or hosting services to fall under Article 15, or must also abide by the conditions of Articles 12-14, is not entirely clear. The natural meaning of the sentence would suggest the first – in which case (as explained above) Article 13 is incompatible with the ECD. Alternatively, Article 15 never applies to the targeted providers, making the reference to it as a safeguard misleading.
More importantly, these safeguards are entirely incompatible with the basic purpose of Article 13: to require platforms to proactively remove infringing content. To achieve this aim, platforms will have to examine each piece of content posted by end users. This will amount to general monitoring. Realistically, providers will turn to filters to accomplish this general monitoring. Filters by definition can neither avoid systematically processing the personal data of users nor reliably recognise defences against copyright infringement. They are therefore incapable of allowing for a ‘fair balance’ between the fundamental rights of users and right-holders. This makes the entire provision a contradiction in terms.
In any case, perhaps the most important guarantee for fundamental rights foreseen in the text is introduced by Article 13(2). Although this does nothing to preserve end-users’ Article 8 rights, it does attempt to protect exceptions and limitations to copyright. It requires that platforms put in place effective and expeditious complaints and redress mechanisms. These are intended to allow users to object in cases where content that is protected by an exception or limitation is incorrectly taken down. The trouble here is that, as research on existing notice-and-take-down systems has demonstrated, complaints and redress mechanisms are rarely utilised by end-users. Indeed, in the fast-paced world of the internet, the use of e.g. a meme in a discussion forum is rendered useless minutes after it has been posted: the conversation will have moved on, complaining takes time, and restoring the content will do nothing to rectify the effect on freedom of expression.
Furthermore, the complaints and redress mechanisms envisaged by the Voss report suffer from fundamental failings. The text only requires that rightholders reasonably justify their decisions *after* a complaint has been lodged. This indicates that content may be removed whenever a filter finds a match *without any justification at all*. More troublingly yet, Recital 39c also suggests that it should fall to the rightholders themselves to decide whether the take down has been justified or not. As the text states,
‘[the mechanism] should prescribe minimum standards for complaints to ensure that rightholders are given sufficient information to assess and respond to complaints. Rightholders or a representative should reply to any complaints received within a reasonable amount of time.’
This means that copyright holders, i.e. one of the parties in the dispute, will be put in charge of deciding whether their own rights have been infringed or not!
How courts will approach these oxymoronic statements is unclear. It is possible that some courts might conclude that, if no filtering is available that can respect users’ rights, then no filtering needs to be implemented. Other courts may be tempted to enforce the main objective of the law anyway. And, again, incentives are important: well before they find themselves in court, providers will have a choice between either taking the risk that they may be held liable for infringing content or adopting filtering software, regardless of whether this means that lawful content will be taken down. It is likely that they will err on the side of caution, to the detriment of users’ rights.
Although it is of course hard to predict the future, none of this looks very good, especially for smaller businesses and internet users. The larger, more established and richer players have already invested in developing filtering technologies and have incorporated these into their business models (see e.g. YouTube’s Content ID). Competitors will either have to develop their own content recognition systems (which is very difficult) or buy existing systems from others. It is hard to tell how expensive this might be, as filtering developers don’t tend to make their pricelists public, preferring instead to offer bespoke deals to platforms. What is clear is that unsophisticated filtering systems, that result in higher rates of false negatives and positives, are much cheaper than the good ones.
The result will be bad for both smaller platforms and end-users. Smaller platforms will either be pushed out of the market or forced to pay large sums to filter providers. This supports big, established companies by forcing their competition to close up shop or buy their products. End-users’ rights will be affected, as everything they post online will be screened against a database of copyright-protected works to check for a match. This will have obvious chilling effects on freedom of expression online. It will also mean that posters will have to fight for each protected use of a copyright work: first the filter will block the posting, and only after using the complaints and redress mechanism and getting explicit permission from the right-holder will the content be allowed online – despite being entirely legal!
For all the above reasons, leading copyright academics across the EU have raised serious concerns about Article 13, arguing for its deletion or at least significant re-wording. Thankfully, the rather dismal future the provision spells out can still be avoided – but only if the European Parliament decides to side with the fundamental rights of EU citizens next week.
The debate on Art. 13 Draft DSM Directive has gained speed after the Commission’s initial 2016 proposal was supplemented by the Council’s proposal of May 25, 2018, and after the European Parliament’s JURI Committee on June 20, 2018 also voted on its own proposal for Art. 13 Draft DSM Directive. The plenary vote is due on July 4, 2018. While the language of the three proposals differs, the Commission, Council and Parliament seem to share the common standpoint that active role hosting providers should have a duty to prevent the availability of unauthorized copyright content.
The contrasting view of some politicians, e.g. MEP Ms. Reda in this blogpost, creates the impression that such a duty, including one discharged by means of Automatic Content Recognition technologies, would be something new and not already part of existing EU law. From a legal standpoint, this impression is not correct. Rather, EU law and the respective case law have so far allowed a duty to prevent the availability of unauthorized copyright content to be imposed on hosting providers under certain requirements. This includes neutral, non-active hosting providers.
Art. 15 E-Commerce Directive (ECD) provides for a prohibition on imposing general monitoring duties upon internet providers (for the text of Art. 15 see here). This is in particular true for passive hosting providers, which may – due to their non-active role – rely on the liability privileges of Art. 14 ECD. The CJEU understood Art. 14 as applying only to mere technical, automatic and passive operators providing data processing services to their customers (CJEU of 23 March 2010, joined cases C-236/08 to C-238/08 para. 114 – Google and Google France; CJEU of 12 July 2011, C-324/09 para. 113 – L’Oréal/eBay).
Art. 15 ECD plays an important role in particular in determining the scope of injunction claims, which remain available even if Art. 14 ECD applies. In contrast to Arts 12 to 14 ECD, Art. 15 ECD applies to injunction claims, in particular injunction claims raised pursuant to Art. 8(3) Copyright Directive in the field of copyright, and pursuant to Art. 11 third sentence Enforcement Directive for other IP rights. Pursuant to Art. 8(3) Copyright Directive and Art. 11 third sentence Enforcement Directive, right holders can ask providers to take measures to prevent future rights infringements. These Articles neither prescribe nor prohibit any particular measure to achieve that goal.
In this regard, Art. 15 ECD helps to balance the fundamental rights at stake by the internet provider, its users and the right holders (CJEU of 14 April 2011, C-70/10, para. 69 et seq. – Scarlet/SABAM; CJEU of 16 February 2012, C-360/10, para. 39 et seq. – SABAM/Netlog; CJEU of 15 September 2016, C-484/14, para. 87 – McFadden/Sony Music). For example, the CJEU has found that an injunction imposed on a hosting provider requiring it to install a filtering system obliging the hosting provider to actively monitor all the data relating to all of its service users, in order to prevent any future infringement of intellectual property rights is incompatible with Art. 15 ECD (CJEU of 16 February 2012, C-360/10, para. 38 et seq. – SABAM/Netlog).
But as Art. 15 ECD is an open provision which requires a careful balancing of rights, it does not stand in the way of more specific monitoring duties, in particular for hosting providers. For example, file hosters have been obliged by German and Italian courts to apply word filters after having been notified of the specific title of a copyright work made available without authorization by a user. This interpretation of Art. 15 seems convincing. As the filtering is confined to a specific title, it is not in conflict with the prohibition of general monitoring duties for internet providers. Recital 47 ECD in particular mentions that “monitoring obligations in a specific case” are not prohibited by Art. 15 ECD.
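The scope of such a title-specific word filter can be illustrated with a minimal sketch, assuming a notified title and hypothetical file names (this is not any court-mandated implementation): the check runs only against titles notified by a right-holder, rather than scanning all content for any conceivable infringement.

```python
# Illustrative sketch of a title-specific word filter of the kind
# described by German and Italian courts. All titles and file names
# are hypothetical.
import re

# Titles notified to the file hoster by right-holders.
NOTIFIED_TITLES = ["Some Notified Film Title"]

def normalize(text: str) -> str:
    """Lower-case and collapse separators so 'A.B_C' matches 'a b c'."""
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def flag_for_review(filename: str) -> bool:
    """Flag a hosted file for review if its name contains a notified
    title. The duty is specific: only the notified works are checked,
    not every possible copyright work."""
    name = normalize(filename)
    return any(normalize(title) in name for title in NOTIFIED_TITLES)

assert flag_for_review("Some.Notified.Film.Title.2012.mkv") is True
assert flag_for_review("holiday_photos.zip") is False
```

The sketch also shows why such duties remain debatable: even a filter confined to one title still has to look at the name of every uploaded file in order to find matches.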
Nevertheless, a clear delineation between prohibited general monitoring obligations and allowed specific monitoring obligations has not yet been established, in the absence of relevant CJEU case law. Confusingly, the French Federal Supreme Court (Cour de Cassation) has rejected stay down obligations for hosting providers as conflicting with the prohibition of general monitoring duties (Cour de Cassation Arrêt no. 831, 11-13.669, 12 July 2012 – Google France/Bach films; Cour de Cassation Arrêt no. 828 of 12 July 2012 – Google France/Bach films.).
It does not seem convincing to maintain that stay down and a duty to prevent the availability of specific works are always barred by the prohibition of general monitoring duties pursuant to Art. 15 ECD. In the end, it is a question of the technical solution used by the provider. For example, if the measures relate only to files of a certain type and thus only prevent the availability of such files, one cannot speak of general monitoring. If Art. 15 ECD were applied to all cases involving any processing of data, no room would be left for specific monitoring duties. What also speaks in favour of the freedom to establish specific monitoring duties to prevent infringements is the CJEU’s recognition of a balancing of rights specifically for Art. 15 ECD. If any and all prevention duties for hosting providers were prohibited by Art. 15 ECD, this would preclude any kind of duty of care for hosting providers and would reduce their duties to mere takedown. Such mere takedown duties would not be in line with EU law.
Rather, the CJEU has in several cases recognised prevention duties of hosting providers (CJEU of 12 July 2011, C-324/09 para. 131 – L‘Oréal/eBay; CJEU of 16 February 2012, C-360/10 para. 29 – SABAM/Netlog.). In particular for hosting providers after they were notified of a clear rights infringement, Art. 8 (3) Copyright Directive establishes duties beyond mere takedown, also for stay down and for prevention of similar clear rights infringements of the same kind. This is at least the established case law of the German Federal Supreme Court. In its L’Oréal/eBay decision, the CJEU confirmed the German case law for the sister provision Art. 11 3rd sentence Enforcement Directive. According to the court, the prevention duty included the duty to ensure that an online market place takes measures “which contribute, not only to bringing to an end infringements of these rights by users of the market place, but also to preventing further infringements of that kind”. (CJEU of 12 July 2011, C-324/09 paras. 127, 128 to 134 – L‘Oréal/eBay). In particular, according to the German case law, such (specific) prevention duties by hosting providers can include the application of word filters with regard to the works notified to the provider.
In the case of unjustified notices, it should also be noted that EU law already provides a solution here too. For access providers, the CJEU has confirmed a right of action for internet users where they face unjustified blocking of information through website blocking (CJEU of 27 March 2014, C-314/12 para. 57 – UPC Telekabel Wien). This decision concerned Art. 8(3) Copyright Directive and, more specifically, the weighing of the fundamental rights at stake. Art. 8(3) also applies to hosting providers. As a consequence, in the case of unjustified blocking of information by hosting providers, the existing regime of Art. 8(3) Copyright Directive already provides a right of action for the uploader.
As a result, reasonable and specific duties to prevent the availability of copyright works for neutral and passive hosting service providers seem possible under the current regime.
The case of active hosting providers
That said, prevention duties (which may include upload filtering for specific content) for active hosting providers should not fall behind the current status of the law for passive providers. Rather, it follows logically from CJEU case law that active hosting providers warrant an even stricter liability regime than neutral hosters.
CJEU Ziggo/Brein (“The Pirate Bay”) concerned a website blocking claim raised against a Dutch access provider under Art. 8(3) Copyright Directive. In this context, the CJEU analysed the website The Pirate Bay, an online index for digital content that facilitates peer-to-peer file sharing among users of the BitTorrent protocol. The court held that The Pirate Bay made a sufficient intervention in a communication by offering an index classifying the works under different categories, based on the type of work, genre and popularity, with the operators checking that works had been placed in the appropriate category. The operators also deleted obsolete or false torrent files and actively filtered some content (CJEU of 14 June 2017, C-610/15, para. 38 – Ziggo/Brein).
The role of The Pirate Bay as a platform to connect users of the BitTorrent protocol for infringing activity was evaluated by the CJEU as primary liability for communication to the public. Moreover, it appears that the criteria for primary liability (as established by the CJEU in The Pirate Bay) for communication to the public run parallel with the requirements for an “active role”, which excludes hosting providers from the liability privilege of Art. 14 ECD. This would also guarantee a sound interface without gaps between the EU liability rule for communication to the public and the liability privilege of Art. 14 ECD.
Therefore, according to the CJEU case law, active hosting service providers are, by their very nature, under stricter, primary copyright liability through their (active or deliberate) intervention in the making available of works on their platforms. It follows that, in order to avoid liability, active hosting service providers must take effective measures to prevent the unauthorised availability of works on their services. It would be a bizarre conclusion if primarily liable service providers had lesser duties than those already applied to passive hosting service providers (see 1. above).
It may be of interest in this respect that the German Federal Supreme Court (BGH) has also applied the requirements for filtering duties for providers within Art. 15 ECD to providers outside it. In the opinion of the BGH, search engines are not within the reach of Art. 15 ECD, but the prohibition on imposing general filtering obligations applies to them nonetheless. Nevertheless, specific filtering duties may be imposed on search engines. In particular, after a notification by the right holder, work-specific word filtering duties and also work-specific audiovisual filtering duties (if proportionate) may be imposed on the search engine (BGH of September 21, 2017, file no. I ZR 11/16 – “Vorschaubilder III” (“Thumbnails III”), see here).
It would therefore appear perfectly feasible, and within the framework of existing EU law, to impose more extensive duties to prevent the availability of copyright content on active hosting service providers, subject to a careful balancing of legitimate interests and rights. Active hosting providers with a business model that is legal in principle may in particular face specific filtering duties. Depending on the business model, such filtering duties may be extended: for example, a borderline business model that attracts infringements could face more general filtering duties.
Result: A duty to prevent the availability of protected works is a necessary part of copyright liability law
CJEU and national case law show that reasonable duties of care for hosting service providers, including the duty to prevent the availability of copyright works, are already an integral part of copyright liability. They have been applied for years without “breaking the internet”. But such “preventive” duties have to be applied in a differentiated manner:
– Duties of passive hosting providers should be assessed against Art. 15 ECD, and in particular the prohibition against “general” monitoring.
– Active hosting service providers fall under a stricter regime, but the scope of their duties depends on the legitimate interests at stake. For some active hosting providers, specific monitoring duties will suffice; in other cases they must do more. One example would be services deploying a business model based on the unauthorised availability of copyrighted works uploaded by their users. In order to avoid copyright liability, such services should take effective measures to prevent the availability of works on their services.
To make sure you do not miss out on posts from the Kluwer Copyright Blog, please subscribe to the blog here.
German Federal Supreme Court (Bundesgerichtshof – BGH) of 15 August 2013, I ZR 79/12, para. 56 – File-Hosting-Dienst II; BGH, I ZR 85/12, para. 61 – File-Hosting-Dienst III; Court of Appeal of Hamburg of 1 July 2015, 5 U 87/12, juris para. 547; see further Jan Bernd Nordemann in Fromm/Nordemann, Urheberrecht Kommentar (Commentary to the German Copyright Act), 11th edition 2014, Article 97 German Copyright Act, note 163a. Same opinion in Italy: Court of Rome, Verdict no. 8437/16.
German Federal Supreme Court (Bundesgerichtshof – BGH) GRUR 1030 para. 46 (2013) – File-Hosting-Dienst I; BGH GRUR 370 para. 29 (2013) – Alone in the Dark; see for a detailed analysis of the BGH case law Jan Bernd Nordemann, 59 (no. 4) Journal of the Copyright Society of the USA (2012) 773 at 778 et seq.; Jan Bernd Nordemann, Liability for Copyright Infringements on the Internet: Hostproviders (Content Providers) – The German Approach, 2 (2011) JIPITEC 37.
Same opinion: Ohly, The broad concept of “communication to the public” in recent CJEU judgments and the liability of intermediaries: primary, secondary or unitary liability?, GRUR Int. 2018, 517; Jan Bernd Nordemann, EuGH-Urteile GS Media, Filmspeler und ThePirateBay: Ein neues europäisches Haftungskonzept im Urheberrecht für die öffentliche Wiedergabe, GRUR Int. 2018, 527 et seq.; Jan Bernd Nordemann, Recent CJEU case law on communication to the public and its application in Germany: A new EU concept of liability, to be published in JIPLP 2018.
Rapper Jay-Z has won another round in his defense against claims that he infringed the copyright in a 1957 arrangement of an Egyptian composer’s song, “Khosara, Khosara” when he used a sample from the arrangement in the background music to his hit single “Big Pimpin’.” The U.S. Court of Appeals in San Francisco affirmed a district court’s judgment as a matter of law in favor of Jay-Z and other defendants because the plaintiff—the composer’s heir—lacked standing to bring the copyright claims. Pursuant to Egyptian law, when the heir transferred “all” of his economic rights in the song to the owner of an Egyptian music company in 2002, the transfer included the right to create derivative works adapted from “Khosara.” Any moral rights retained by operation of Egyptian law were not enforceable in U.S. federal court, and even if they were, the heir failed to comply with a compensation requirement of Egyptian law, which did not provide for his requested money damages and only provided for injunctive relief from an Egyptian court. The fact that the heir retained the right to receive royalties did not give him standing to sue for copyright infringement, the court said (Fahmy v. Jay Z, May 31, 2018, Bea, C.).
On 4 June 2018, one of the core concepts of copyright – the copyright work – was disputed at the Court of Justice of the European Union (CJEU). The “cheese battle”, which started in 2015 at the Court of First Instance in Gelderland (the Netherlands) between HEKS’NKAAS (applicant) and ‘Witte Wievenkaas’ (defendant), resulted in a copyright war at EU level. Not only did the issue lead to unprecedented events, such as the cheese-tasting by judges in a parallel dispute between Heks’nkaas and Magic cheese, but also to unforeseen implications: defining or refining the scope of copyright law.
The core question in this case is: can the taste of food be protected by copyright under EU law? The controversial nature of this question, together with the main arguments put forward by both parties during the national proceedings, can be found in my earlier post. The present piece summarizes what fifteen judges heard on a cheesy Monday afternoon.
The 2009 Infopaq judgment is the building block of this dispute, and served as a reference framework for all parties. Accordingly, for copyright protection, the subject-matter must be original in the sense that it is its author’s own intellectual creation. The Painer case further refined this definition to the extent that the subject-matter ought to reflect the author’s own personality and be an expression of free and creative choices.
The competing arguments at the hearing
Heks’nkaas argued that these criteria had always been applied consistently by the CJEU, and that the originality criterion therefore dictates whether copyright protection exists. The Court clarified in the Premier League case that football matches are excluded from copyright protection for lack of room for creative freedom. The InfoSoc Directive merely refers to a “work”, and copyright is prima facie constructed around the idea of the ‘creative human being’. Heks’nkaas therefore claimed that the medium used for the author’s creative expression does not play a role; accordingly, taste is undeniably an expression of creativity. To illustrate this, reference was made to the signature dishes of famous chefs such as Paul Bocuse or Ferran Adrià. Moreover, EU law does not, a priori, preclude taste from copyright protection; it already allows for the protection of imperceptible and inaudible subject-matter. For example, the Community Design Regulation affords protection to the texture of a product (Art 3(a)) as part of the protected design, while also explicitly recognising, under Art 96(2), the subject-matter’s parallel eligibility for copyright protection. This suggests that EU copyright law could extend to the texture of a work.
Concerning the subjective character of taste, Heks’nkaas argued that every work of art is perceived subjectively. To put this in Kant’s words, to which express reference was made: ‘Truth does exist but we cannot know it’. Musical scores are the translation of music; recipes are the translation of taste. However, a subjective gap stands between the translation and the work, which cannot be defined: it needs to be experienced. A lack of ability to accurately define smells or tastes is therefore not an objective justification for denying copyright protection under EU law. If it were, a considerable number of copyright-protected works would not deserve protection. In closing, Heks’nkaas argued that the purpose of the InfoSoc Directive is to afford a high level of protection to “intellectual creations”. Taste should therefore be included within its scope.
In opposition, Witte Wievenkaas pointed out that harmonisation of the originality criterion, under Infopaq, primarily serves internal market aims. To grant copyright protection to taste would run counter to these aims as it entails uncertainty. What if a certain taste is granted protection in country X but not in country Y? Two arguments were put forward: taste does not fit into the system of copyright, and taste lacks an appropriate means of translation.
Concerning the former argument, the defendant claimed that it would be impossible to ‘communicate taste to the public’ or to license the copyright on it. Moreover, different exceptions, such as the “citation” exception under Article 5(3)(b) of the InfoSoc Directive, would be inapplicable. Accordingly, by granting copyright to taste, the initial purposes of the Directive would be extended.
Concerning the latter argument, Witte Wievenkaas claimed that the work itself is unstable, as it is perceived differently depending on the person tasting it (man, woman, pregnant, sick); on when it is consumed (i.e. proximity to the expiration date); and on where (in the mountains, in the sun, etc.). Unlike a Mozart composition, which sounds the same whether heard at 10 or 40 degrees Celsius (this was explicitly mentioned), cheese has an evolving character. Witte Wievenkaas referred to the recent abolition, in the Netherlands, of the herring test and the oliebollen test to illustrate that taste cannot be objectively judged. If experts cannot judge it, how could judges possibly do so?
Taking the uncertainties surrounding taste into account, Witte Wievenkaas argued that granting protection would be undesirable for society at large. It would give rise to monopolies that are prone to misuse, to the detriment of small businesses. Instead of stimulating creation, it would curb innovation. Taste’s indefinability and vagueness would run counter to Article 10bis of the Paris Convention, which relates to unfair competition.
Turning to the Netherlands, its representative clarified that no standpoint would be taken. The first part of the pleading explained that Dutch copyright law is in line with the InfoSoc Directive and that both instruments are based on the Berne Convention. The non-exhaustive list of protectable works under Article 2(1) of the Convention has been further refined in case law.
The second part related to the concept of ‘work’ as protectable subject-matter and to the fact that it has been harmonised at EU level through Infopaq. However, referring to subsequent case law and to German law, the representative pointed out that the question remains whether the ‘type’ of work in which copyright protection can subsist has been exhaustively harmonised. If not, Member States have discretion as to what ‘type’ of work should be protected. Since the Directive has no system of mutual recognition in place, possibly leading to internal market hindrance, the Netherlands proposed that this question be addressed by the Court.
In order to determine whether ‘taste’ can be copyright protected, the Court should provide Member States with clearer indications regarding the ‘originality’ criterion. What constitutes a ‘work’? In particular, the Court should clarify whether additional requirements exist besides those set out in Infopaq. Is there a need for objective representation or stability of the work? Whereas trademark law requires such objective representation for registration purposes, which follows from Sieckmann, copyright exists without registration. Is a stable medium for “taste” therefore needed at all?
Accordingly, while deferring to the judgment of the Court, if the Court were to come to the decision that copyright could exist in taste, the representative requested that the Court better explain how to apply the Infopaq criteria to this type of work.
France, on the other hand, asked the Court to preclude the possibility for taste to be copyright protected. Taste does not meet the originality criterion set forth in Infopaq and does not fit within the ‘types of work’ listed under Art 2(1) of the Berne Convention. Although France accepted that this list is illustrative and non-exhaustive, it pointed out that none of the given examples are applicable to ‘taste’, which shows the intention of the legislator.
However, in the event that taste is not excluded from copyright protection, France argued that it would still need to meet an ‘objectivity’ criterion. The CJEU held in Sieckmann that the subject-matter must be described in an objective and unequivocal way. Not applying this requirement to taste would, just as for trademarks, lead to legal uncertainty. This line of reasoning mirrors the French Court of Cassation’s position in Lancôme, where it ruled that copyright does not extend to smell because of its unidentifiable character.
The United Kingdom
The UK position is aligned with the categorical French “no”. Its representative claimed that the question should be answered in two stages. Firstly, the work must belong to a ‘type of work’ that can be copyright-protected. Only if this question is answered in the affirmative, must regard be given to the originality test set out in Infopaq and Painer. To illustrate this, the representative referred to the 2012 SAS Institute case, in which the Court drew this distinction.
Concerning the ‘type of work’, the representative argued that the Berne Convention contains a de facto exhaustive list, because Art 2(6) thereof requires Member States to grant protection to all the works listed therein. In order to avoid cross-border issues, it is thus of critical importance for all protectable works to be cited in the list. Even if the Court finds the list to be non-exhaustive, copyright should apply only to visible or audible works (as the Berne examples illustrate). Reference was made to the idea-expression dichotomy, according to which copyright does not apply to ideas, only to their expression. Consequently, a recipe can be protected whereas its “taste” cannot.
In any event, the UK’s representative further argued that ‘taste’ cannot meet the ‘originality’ criterion. Taste lacks any form of fixation and evolves over time. Granting copyright protection would therefore result in practical difficulties: how does one pinpoint infringement? How does one compare two tastes objectively? Moreover, from a commercial point of view, it would open the door to abusive infringement claims.
Turning to the Commission’s view, this echoes the idea expressed by the Netherlands, that the notion of ‘work’ is an autonomous EU concept. It takes the point of view, however, that Member States have no discretion as to what ‘type’ of work can be protected. Allowing for such discretion would run counter to this autonomous concept and be contrary to internal market goals.
Concerning the requirements for copyright protection, the Commission argued that these should solely be dictated by the originality criterion set out in Infopaq and Painer. Despite this, it is necessary to first find out whether the subject-matter is a protectable work. Doing so both ensures that copyright is properly demarcated from other intellectual property rights and that legal certainty and internal market aims are served.
According to the Commission, the Berne Convention contains a non-exhaustive list of copyright-protectable works. However, taste does not fit within these illustrative examples. Like the UK, it pointed out that copyright only applies to the expression of a work, a position supported by the TRIPS Agreement, WIPO and the Berne Convention. Taste lacks such a form of expression, which renders copyright non-applicable to the present situation.
The complexity of this dispute was reflected in the judges’ probing questions. Each party was asked to elucidate its point of view. Interestingly, Heks’nkaas was asked whether the taste of wine should also be copyright protected. The party replied that this could a priori be the case, but that the process of winemaking does not leave sufficient room for creative freedom, as the taste is influenced by external factors such as the grape used, the climate and the region. Creativity is thereby strongly limited. This answer shows the many nuances of copyright, some of which could, without much difficulty, equally be applied to cheese.
What started as a ‘cheese battle’ might, in the near future, reveal the darkest secrets of copyright law. Is the concept of ‘work’ harmonised at EU level? Does copyright only allow for the protection of certain ‘types’ of work? If so, does the list in the Berne Convention serve as guidance on the types of works protected at EU level? In the event the Court fails to answer all of these questions, you might address your cheese-related concerns to any of the fifty law students from Amsterdam whose curiosity justified an eighteen-hour trip to Luxembourg (no exaggeration). Food lover, law nerd or both, they all found their guilty pleasure.
N.B.: the Advocate General’s opinion is expected on 25 July.
I would like to thank João Pedro Quintais and Stef van Gompel for their valuable insights and comments.
As a consequence of these rulings, the scope of copyright protection in the EU has become increasingly difficult to predict, at the expense of legal certainty, and EU copyright law’s delicate structure of rights and exceptions is becoming gradually unbalanced. Another critique is that in the digital environment the rights of communication to the public and of reproduction increasingly overlap, requiring providers of digital content services to negotiate multiple permissions from concurrent right holders for acts that – seen from an economic perspective – amount to single acts of usage (e.g. content streaming).
Concerns over the proper scope of the economic rights protected under EU law have inspired a large-scale collaborative academic research project, ‘Reconstructing Rights’, which ran from 2014 to 2017. The aim of the project was to ‘reconstruct’ the economic rights protected under EU copyright law, by bringing these rights more in line with economic and technological realities. The project – designed as a thinking exercise – brought together a group of leading scholars in the field of European copyright law and economics. Each member of the group was charged with drafting an ideal model of economic rights. The proposed models make fascinating reading, and attempt to ‘reconstruct’ in different ways, by combining economic analysis with insights from competition law, trademark law, unfair competition law or communication sciences. The results of the project, which also include an elaborate study on the economics of copyright in ‘borderline cases’, such as hyperlinking and text and data mining, have now been published in book form in Wolters Kluwer’s Information Law Series. The introductory chapter of the book, which summarises the models and main findings of the project, is available here.