It’s been an eventful couple of weeks for YouTube specifically, and for Internet moderation in general. It all started with YouTube’s controversial decision not to ban a famous content creator for instigating anti-gay and racist abuse. Then they were applauded in some circles for banning a number of neo-Nazi and […]
TechnoLlama by Andres Guadamuz - 3w ago

There is a very interesting dispute taking place in cryptocurrency circles right now that has nothing to do with price, hacking, exchanges, or any of the other usual hot topics surrounding this area. This time copyright is being used to try to answer one of the most important questions for […]
TechnoLlama by Andres Guadamuz - 3w ago

It’s no secret that many of the organisations involved in governing the Internet have had a strong involvement with US interests, both public and private. The Internet started as a US military project, and that country remained influential in key decision-making bodies, but most importantly, the Internet grew out of […]
TechnoLlama by Andres Guadamuz - 1M ago
There is an overwhelming narrative at the heart of the current push for Internet regulation, and it is that the Internet is like the Wild West, an unregulated anarchic cesspit filled with filth, terrorism, abuse, and Nazis. At every corner teenagers are presented with self-harm images prompting them to commit […]

The UK government has now released its long-awaited Online Harms White Paper, detailing some potential changes to the law regulating intermediaries to try to curb damaging material found on the Internet. To say that the white paper has been controversial would be an understatement. While there are specific problems with […]

What if I told you that there will be spoilers?

What if I told you that The Matrix was first released 20 years ago?

That can’t be real, you think. I still remember seeing it in the cinema! Morpheus would just look at you and say “What is real? How do you define ‘real’?”

In this reality, The Matrix was indeed first released on March 31, 1999. Yes, we are that old.

For those younger readers, it is very difficult to describe what a game-changer The Matrix was, how much it influenced cinema, fashion, clubbing, and music. It is easy to forget just how mindbogglingly cool it all was. The music, the costumes, the obvious anime influences, the visuals, the special effects, and even the hokey philosophy loosely based on Baudrillard’s Simulacra and Simulation.

I just watched it again, and it has aged surprisingly well for a film that is so badly acted (with the exception of Laurence Fishburne, who always shines effortlessly as Morpheus). It is perhaps because of Morpheus that the film has retained such an enduring meme imprint (memeprint? memability?). It’s perhaps no coincidence that some of the most salient parts of the movie are precisely his explanations of what The Matrix is.

For those not willing to watch it and who may still be reading this, The Matrix tells the story of Thomas A. Anderson (hacker name Neo), who is a programmer at a successful software company by day, and a masterful hacker by night. After an encounter with the evil Agents, Neo is taken to meet the most dangerous man in the world, a hacker by the name of Morpheus, who tells him that the world is not as it seems. The Matrix is the reality that surrounds us, “a prison of the mind”. In order to break free and learn the truth, Neo must make a choice between two pills: the blue pill will keep him trapped, while the red pill will show him “how deep the rabbit hole goes”. Neo takes the red pill, only to realise that The Matrix is a computer simulation that keeps humanity trapped: machines have enslaved humanity and harvest humans as an energy source. What follows is typical of the hero’s journey: learning, crisis, success.

Many of the terms above may seem familiar even to those who have never seen the film. It is a testament to the strength of the narrative that the story continues to resonate to this day. At the time it was released, the idea of a small rebellious group of people trying to awaken humanity from its slumber had more radical political undertones: the machines represented a monolithic establishment comprising the press, the government, and the corporations, all intent on enslaving the mindless masses. “Most brains are not ready for the truth”, Morpheus tells Neo, so only a small elite of cyber-warriors is left to try to uncover the truth and destroy The Matrix. It’s no coincidence that at the end of the film Rage Against the Machine scream at us to “Wake up!”

Today’s political reading of The Matrix is quite different. While the enemy remains the establishment, The Matrix has become real: it is the lies told by the mainstream media to keep us away from the truth. And the truth of course is anything that the group of brave resisters oppose: the evils of a globalist elite composed of the European Union / the Feminists / the Jews / Multiculturalism / Islam / Environmentalists / Cultural Marxists (delete as appropriate). To take the red pill has become shorthand for mostly right-wing groups that claim to have access to reality. While it started life as a metaphor for rebellious techno-anarchists, the Red Pill has now been fully appropriated by the alt-right and all similar groups.

But while some of the imagery has taken a darker tone, it is important to remember that its origins lie in a world before 9/11, a more innocent time where rebellion meant dressing in black leather, wearing shades inside, and listening to techno. We need to reclaim its core message: not the puerile belief that there are elites controlling our every thought, but that we are indeed living in The Matrix, an online environment full of lies and deceit. The machines are sometimes bots, but the lies are more likely to be shared and spread by your racist neighbours and opinionated uncles than by a shadowy cabal intent on world domination. There is rampant disinformation online, but for the most part it comes from users. We must wrestle back The Matrix’s rebellious and cool imagery, and make it ours again.

Make The Matrix Anarchic Again.


It would be fair to say that the Christchurch terrorist attack has been one of the most shocking events in recent history, not only because of the heinous act itself, but because the perpetrator live-streamed the attack on Facebook, and the video was then shared countless times online.

After a tragedy such as this, there is usually a desire to make immediate changes so that such events cannot happen again, and New Zealand reacted immediately by banning assault rifles. But the Internet question is more difficult to tackle, and policy-makers and many commentators seem baffled as to how to react.

In many ways, the story of the attack offers a perfect illustration of the difficulties of regulating the Internet. It has become evident that the terrorist was part of an online white supremacist sub-culture that radicalises young people and spreads its message through memes on various websites, most of them completely unregulated boards. In particular, the Christchurch terrorist advertised the attack on 8chan beforehand, providing a link to his manifesto and stating that he would be live-streaming the event on Facebook with a body-cam. As advertised, the assault was broadcast live for 17 minutes to a live audience of between 40 and 200 people, presumably all 8chan users. The video was not reported until 12 minutes after it had finished, and it was viewed by 4,000 people before it was finally removed. However, by the time it was removed it had already been downloaded and recorded in various formats, including screen recordings. It was then re-posted all over the Internet, including Twitter, Reddit, and YouTube, and it started being shared by people using messaging apps like Whatsapp. A massive digital clean-up operation ensued: within 24 hours Facebook had blocked 1.5 million attempts to re-upload the video, and YouTube removed an ‘unprecedented volume’ of videos, without specifying numbers.

The digital virality of the footage has prompted various leaders to call for more Internet regulation, with Australian PM Scott Morrison going as far as stating in a call for action that “It is unacceptable to treat the internet as an ungoverned space”. Expect some Internet regulation action to come soon. In a time of tragedy it is normal to look for someone to blame beyond the obvious perpetrator. Could this have been prevented? If so, how?

The first reactions point towards a general blame against tech giants, with Facebook and YouTube getting the largest share of the blame as the first conduits for the video, and there is considerable talk of putting a leash on online content. I have to admit that I am unsurprised by some of the reactions, and I am also certain that most proposed “solutions” will completely miss the mark.

The main reason for my scepticism is that there is some selective blame-allocation taking place in policy circles. While the Australian Prime Minister blames the Internet, it is quite ironic that he has failed to criticise Australian mainstream media for broadcasting the video; there is a review of this action, but the damage is done. Similarly, we seem to be ignoring the fact that we the public deserve quite a lot of blame: Facebook, Twitter and Google do not share the video, it is the users who upload and share it.

Similarly, there appears to be a complete misunderstanding of how extreme right-wing online forums operate. Anyone who has been following the rise of the alt-right and neo-Nazis online will know that these communities are quite sophisticated when it comes to spreading their message. Many mainstream memes begin their life in places such as 4chan, and there is often a very good understanding of the type of content that will go viral. The Christchurch terrorist was aware of this, and the streaming was done in a way that ensured widespread sharing. The manifesto itself is filled with in-jokes and references to the obscure sub-culture, including mentions of Bitconnect, PewDiePie, and various memes; the intention is almost entirely to bait mainstream media into mentioning these obscure references, as if the whole event were part of an ongoing joke for some of the participants. These communities use sites like Facebook only to amplify their message, but the actual discussion takes place in places that have practically no oversight, and which are almost entirely devoid of regulation.

Moreover, the video was shared so widely because the Internet is built on the idea of spreading information; by reaching a large audience, the video shows the Internet working precisely as intended. Censorship is difficult online, even after the application of filters. Facebook claims that it was able to filter about 80% of the shared content, but even 20% is enough to guarantee further spread: all it takes is one surviving copy to ensure the content remains out there.
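To make the arithmetic behind that claim concrete, here is a minimal branching-process sketch in Python. The parameters (ten re-upload attempts per surviving copy, an 80% block rate) are my own illustrative assumptions, not Facebook’s real figures; the point is simply that a filter can block most attempts and still lose.

```python
import random

def simulate_spread(filter_rate: float = 0.8,
                    attempts_per_copy: int = 10,
                    generations: int = 6,
                    seed: int = 42) -> list[int]:
    """Toy branching process: each surviving copy prompts a number of
    re-upload attempts, and each attempt is blocked with probability
    filter_rate. All parameters are illustrative assumptions."""
    random.seed(seed)
    copies = 1  # the original upload
    history = [copies]
    for _ in range(generations):
        attempts = copies * attempts_per_copy
        copies = sum(1 for _ in range(attempts) if random.random() > filter_rate)
        history.append(copies)
    return history

# With 10 attempts per copy and an 80% filter, each copy yields 2
# survivors on average, so the count roughly doubles every generation.
print(simulate_spread())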

Should we give up?

Of course not, but we need a more sober look at what is really going on, and this means also taking a very good look at ourselves and at what we really expect from technology. Firstly, we should understand that the likes of Facebook offer tools that allow users to share information. We could try to impose more restrictions on how these companies do it, but in the end this cannot be done without severely hindering the main function of online spaces. If we want a “safer” Internet, we must be prepared to give up quite a few perks of a more open Internet. No live streaming. Filtered user-generated content. National firewalls. No sharing to a public audience, only to your circle of friends and followers. Monitored private communications. No anonymity allowed. Subscription services using verified “real life” information, or only allowing verified users to broadcast. Remove all intermediary liability exceptions.

This may seem like an acceptable compromise to some, but even then the threat will not end. If we regulate the tech giants more and more, the Internet will still allow users to congregate outside of these commercial structures. Furthermore, these actions will almost surely not affect sites like 8chan. But perhaps more importantly, full control of a global decentralised network is futile, because what we will be regulating are the centralised nodes that we commonly use, not the network itself.

I honestly do not have a viable solution, and I do not think that one is possible with our current system. While I am aware that some people may want to live in a fully controlled Internet, that is not how it works, and if you achieve any sort of sanitised network, it won’t be the Internet as we know it. Making Internet intermediaries more liable will also not result in the removal of harmful content, as for the most part this does not originate there.

But perhaps we do need to look at our own practices more. I am personally glad that I have yet to come across the Christchurch attack video, which may seem remarkable given the amount of time I spend online. I have disabled automated video playback wherever possible. I am lucky that nobody I follow on any social media shared the video (that I am aware of). Every time I encounter anyone in my timelines who shares what I would consider objectionable content, I promptly mute or unfollow; on a few extreme occasions I have contacted the person to tell them precisely that I object to the racist, misogynist, or nationalist content they shared. I practically never use Whatsapp or join Facebook groups, so there is also limited scope for someone to share such content with me. These are just personal practices, but for now they seem to have helped me avoid extreme content online, and while I cannot expect others to follow them, we cannot rely only on tech companies to keep us safe.

But we live in the time of regulatory over-reaction, so at the very least, expect things to get worse.


Whenever I have presented on artificial intelligence in the last few years, I am often asked whether training an AI with data can infringe copyright. Take for example Bot Dylan, a machine learning project that has been trained on 23,000 folk songs. Does the music it produces infringe copyright? The accepted answer is no: training an AI by making it listen to music is no different to a musician listening to various songs and being influenced (unless they cross some blurred lines, see what I did there?).

But the problem is often not the resulting output; it is all about having lawful access to content that can be scraped and analysed by the machine learning algorithm. Storing and using large amounts of data in this manner could indeed infringe copyright, and it is the reason why there are increased calls to have some sort of data mining exception to copyright. The UK already has one in place, and one is included in the upcoming Digital Single Market Directive.

This is becoming an increasingly important subject as tech companies try to join the AI race and large datasets become a hot commodity. There was quite an argument about machine learning tools and training datasets recently due to the Edmond de Belamy AI painting. On that occasion, a French artist collective called Obvious used pre-existing machine learning tools to generate a series of paintings, one of which was sold at auction for over $400,000 USD. The portraits used to train the AI were in the public domain, so the possible infringement question never arose for those, but there were arguments at the time that Obvious could be infringing copyright in the algorithms used. In my opinion, this was not the case, as all the tools had been released under open source licences.

We are currently witnessing another extremely interesting case, uncovered in a very thorough article for NBC News, which reports that millions of photographs posted to the picture-sharing site Flickr are being used without consent to train machine learning algorithms. Reporter Olivia Solon writes:

“The latest company to enter this territory was IBM, which in January released a collection of nearly a million photos that were taken from the photo hosting site Flickr and coded to describe the subjects’ appearance. IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.
But some of the photographers whose images were included in IBM’s dataset were surprised and disconcerted when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms.”

Leaving aside ethical and privacy considerations, one of the most interesting questions raised by this report is that of the copyright implications of these practices. Can IBM do this legally? If so, how?

Firstly, let me start by stating that this is a rather complex area, so I will be over-simplifying quite a bit. If you want some more detail about the legal implications of data mining, this may be a good place to start. When we are talking about training artificial intelligence, it is recognised that having access to a good-sized dataset appropriate for the task is beneficial, and this is why companies such as Google continue to provide services that allow them to access vast troves of information. Data mining is one way of finding data with which to train the algorithms, but in order to do so the researchers need to have access to data, yet this data could be proprietary.

Data can be anything that is the subject of the research: music, pictures, paintings, text, poetry, scientific literature, figures, drawings, sketches, etc. Data mining is not about an individual work; it is all about the accumulated reading of a collection of works. So in order to analyse this information and turn it into something useful, there has to be a process that “reads” the data. There are lots of different processes and techniques, but many of these require the miner to at least copy the data temporarily (not all techniques do; some can process the data on the fly).
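To see why temporary copies are usually involved, here is a minimal text-mining sketch in Python. The corpus directory is hypothetical and any real pipeline is far more involved, but the shape is the same: each work is copied into memory so it can be read, and only the aggregate statistics survive.

```python
from collections import Counter
from pathlib import Path

def mine_word_frequencies(corpus_dir: str) -> Counter:
    """Aggregate word frequencies across a directory of text files.

    Each work is read in full -- a temporary copy in memory -- before
    the aggregate statistics are computed; the copies are discarded
    and only the accumulated counts are kept."""
    counts: Counter = Counter()
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")  # the temporary copy
        counts.update(word.lower() for word in text.split())
    return counts

# Hypothetical corpus directory; no individual work survives the analysis,
# which is the sense in which mining is about the collection, not the work.
print(mine_word_frequencies("corpus/").most_common(10))
```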

The legal situation of this type of access to data varies from one jurisdiction to another. In the US it has been argued that data mining falls under fair use as being transformative, and I tend to agree with that opinion. In the UK we have a fair dealing exception for non-commercial data mining, and other jurisdictions have adopted, or are thinking of adopting, similar measures (the DSM directive contains one such proposition, although heavily diluted). So in many circumstances, non-commercial data mining to train an AI will be legal. But as this is still a highly uncertain area of the law, and as many companies want to train neural networks for commercial purposes, those enterprises and researchers will want to use data that is either in the public domain, or under a permissive licence, such as a Creative Commons licence.

This is precisely what IBM did. Flickr is a sharing site that is famous for being an early Creative Commons adopter, allowing its users to release their shared pictures under “some rights reserved” licences. For the most part, this didn’t mean much to the average user; my own photostream is under a CC licence, and has remained largely unnoticed (as far as I can tell). A few years ago, Flickr released 100 million pictures that had been shared on its website under CC licences. For machine learning researchers, this is a treasure trove because in theory it can be used and reused for commercial purposes without fear of infringement. IBM took this data and narrowed it down to 1 million pictures containing faces and annotations, and made it available to researchers as the “Diversity in Faces” dataset.

The legal question is whether IBM can do this, so we need to look at the licences in more detail. The source photographs have been shared using a range of CC licences. It is a good time to remind readers that there are six types of CC licence, ranging from the very permissive Attribution-only licence (BY) to the more restrictive Attribution-NonCommercial-NoDerivatives licence (BY-NC-ND). CC-BY allows all sorts of reuses as long as the author receives attribution, while the more restrictive licences allow reuses only for non-commercial purposes, and in some instances do not allow derivatives, or require the work to be shared under the same licence (share-alike). Interestingly, the majority of the pictures shared in the Flickr dataset have a non-commercial restriction (about 66%, see source).
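As a toy illustration of how the six licences differ, here is a rough compliance check in Python. This is my own simplification, not Creative Commons tooling: it treats a licence as a set of restriction flags and ignores the share-alike duty, which conditions derivatives rather than forbidding a use.

```python
def cc_permits(licence: str, commercial: bool, derivative: bool) -> bool:
    """Very rough check of a proposed use against a CC licence string.

    Simplifications: every CC licence requires attribution (BY);
    NC forbids commercial uses; ND forbids derivatives. SA is not
    modelled, since it obliges derivatives to carry the same licence
    rather than forbidding them outright."""
    flags = set(licence.upper().removeprefix("CC-").split("-"))
    if commercial and "NC" in flags:
        return False
    if derivative and "ND" in flags:
        return False
    return True

# The most permissive licence allows commercial derivatives...
assert cc_permits("CC-BY", commercial=True, derivative=True)
# ...while the most restrictive allows neither.
assert not cc_permits("CC-BY-NC-ND", commercial=True, derivative=False)
assert not cc_permits("CC-BY-NC-ND", commercial=False, derivative=True)
```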

The Flickr dataset itself is not shared under a CC licence; it is accessible after signing up for an Amazon Web Services account and, most importantly, after agreeing to specific terms of use for the dataset. This makes a lot of sense, as older CC licences do not work well with databases, and more importantly, datasets as such may not be protected in some jurisdictions, including the US. Therefore, the better way to protect such data is through contract law, by imposing restrictions in terms of use. As these go, they are not onerous, and mostly seem to require attribution when re-using the data. For example, the ToU states:

“You may use the Dataset to review, analyze, summarize, interpret and create works from the Dataset.  You may publish your observations, commentary, analyses, summaries, and interpretations of, and works from, the Dataset.”

So we are back to having to analyse whether using this dataset to train an AI would breach the CC licence under which a user shared the work on Flickr. My initial answer is that such uses are permitted. The main purpose of the Creative Commons licences is to allow re-uses of a work with as few restrictions as possible, enabling a free culture environment where sharing benefits society. So when someone posts a picture under a CC licence, we have to assume that they are allowing further re-uses and mash-ups of their work. As an avid CC user, I like making my pictures available to the public, and I do not really think about potential downstream uses. I do impose a non-commercial restriction because I am bothered by my work being used for commercial purposes.

So let’s assume that one of my pictures is included in one of the datasets (I haven’t looked), and let’s also say that the picture has the most restrictive CC licence, BY-NC-ND. Nobody can re-use my picture for commercial purposes, they must attribute me, and they cannot modify the picture. The Flickr dataset fulfils those requirements: it is non-commercial, the picture is not being modified to create a derivative, and the metadata contains adequate attribution.

So now assume the picture is included in the IBM dataset and used to train machine learning algorithms for face-recognition software, and let’s further assume that some of those uses are commercial. I would argue that the terms of my licence are still intact. Firstly, the IBM dataset is being offered to researchers for free, so the non-commercial element is maintained. The pictures appear to have retained their metadata, so I am also being attributed. The problem starts if my picture is changed in a way that would be considered a derivative work, as this would go against the terms of my licence. There could also be a problem if my picture is used by a researcher for commercial purposes.

Great! As the licence is breached, I can sue the commercial researchers for copyright infringement. Riches beckon!

Not so fast. If we were talking about my individual picture, I might have a case, but this is not a single use; it is part of a dataset of millions of images, and my lone individual picture is of no interest. What matters is the accumulation of images, which is where the value of machine learning resides. And if we take all the pictures as a whole, then the individual terms and conditions of each CC licence are less important. Researchers who have had legitimate access to the dataset by complying with the terms and conditions set out by its creators can reuse it in any way permitted, and as we mentioned, those terms are quite permissive. Furthermore, the IBM and Flickr datasets fall under the protection of fair use in the US, and of data mining exceptions where these exist, so researchers using them could do so freely, in some cases even for commercial purposes.

This opens up a new question: whether an individual photographer whose picture has been made available in either dataset could sue for copyright infringement. I do not think so. To be able to claim individual infringement, there must be evidence that the image has been used in a way that breaches the licence, and that may not be the case; the mere inclusion of an image in a dataset is not a breach of the licence.

There is also the issue that the training of an AI does not constitute a derivative work in its own right, and therefore the author of an individual picture would have a very difficult time proving that the resulting outputs of the training are a direct result of their image. Furthermore, the outputs from the data mining may not be subject to any sort of protection in their own right, or the protection may differ from copyright. If the machine learning algorithm is used to produce a portrait, then that could very well be in the public domain as it is not original (see more about this here). If it produces software, a database, a model, an algorithm, or any similar technical effect, the protection may be under trade secrets, patents, database right (in Europe), or even copyright. But the result may have no connection whatsoever with my own picture, and therefore it would be impossible to claim infringement.

These are my first ideas based on the facts presented by the investigation. I am currently writing about this very subject, so much of the law is still fresh in my head, but I would be curious to see what others think.

One thing is clear: data mining is increasingly becoming a very important legal subject for copyright. Perhaps we are all obsessed with Article 13, when the most relevant article for the future is the one dealing with data mining (Article 3, for those interested in looking it up).


Last week we talked about the agreed text of the new copyright directive, particularly the problems presented by Article 13. Perhaps what is lost in all of the discussion is precisely why such a toxic proposal is being discussed and where it comes from.

It is impossible to understand the current situation without understanding the existing regime that regulates the behaviour of users and service providers online. While there is a rich history behind this, the general idea is that platforms have a limited immunity from liability for acts committed by their users, as long as they are not aware that a possibly actionable situation is taking place. So if I defame someone in YouTube comments, or if I upload an infringing image to Facebook, the service providers will not be liable for my actions as long as they are not aware that this took place. As soon as they are notified, they could become liable.

This system has worked for almost two decades, and has allowed tech companies big and small to operate with relative certainty that they will not be sued out of existence, but it has also allowed a weird and wonderful ecology of user-generated content to develop. The system is not perfect, but for the most part it has served both users and platform operators well.

But not everyone is content with the status quo: almost from the start, content owners have complained that the system allows intermediaries to profit from rampant copyright infringement committed by users, and that it has given those platforms disproportionate power and money, while creators do not get a fair share of the profits. Behind this narrative are quite a few interesting assumptions that seem to be conflated in the discussion. Proponents of a change in the intermediary liability model argue that copyright infringement is rampant online, and that the platforms are profiting from such practices. At the same time, they argue that whenever legal services are developed, the power accrued by tech giants makes it impossible to get proper earnings (the so-called value gap).

So we have quite an interesting situation in which these two arguments get conflated over and over; they are deployed interchangeably by copyright owners and their proponents, and both are used as an excuse for the proposal of Article 13 and the deployment of upload filters. In the recitals for the new directive, we actually get both justifications. Most telling is Recital 37, which repeats the problems caused by the rampant infringement of copyright by the users of those platforms, and then states almost seamlessly that a licensing system will be put in place to allow creators to get a fair share of the money.

So the proposed system is quite a clever political exercise in which piracy is being used to justify the creation of filtering mechanisms, but the objective is not to deploy filtering; the whole point of the exercise is to force platforms to enter into licensing agreements with copyright owners, or be deemed directly liable for the infringement committed by their users. They are using the spectre of piracy (which by all accounts is falling across the board) to justify the deployment of a system that will force tech giants to share some of their profits with creators. Filtering is not the desired result, it is the stick used to gain compliance.

“Nice Internet you have there, it would be a shame if something were to happen to it.”

It’s a clever game of switcheroo, but it represents a huge gamble. The proponents of Article 13 are hoping that the threat of direct liability and the prospect of having to deploy expensive filters will force tech giants to the negotiating table, and they will comply by giving large sums of money in licensing fees to maintain the status quo. But the result could be very different to what is desired.

I suspect that a few companies like Google and Facebook will deploy filtering and refuse to pay licensing fees; after all, these companies already possess some monitoring capabilities. A second tier of large companies with no filters may make the investment. The third tier, the smaller and medium companies that fall under the proposal, will just abandon European markets altogether.

But my prediction is that nobody will pay licensing fees because that way madness lies.


You shall not pass!

And so it has come to this, the great copyright battle of our time. After a troubled process that has spanned various stages of development, we finally have a proposed compromise text for the Digital Single Market Directive, the latest step towards a future directive overhauling copyright for the digital age. After the text was approved by the Parliament last year in a disappointing vote, there was a period of negotiation between the Council, the Commission and the Parliament to try to come up with a final text to put to a vote.

The negotiation was difficult. As we have discussed several times on this blog, some articles have been controversial as they set out obligations that could have nefarious effects on the Internet. Some countries were reluctant to sign on to such proposals, and opposition started to grow in various circles during the process. The impasse was broken when France and Germany agreed on a common position, and once that had been achieved, the rest fell into line.

There are many concerns about the compromise text, but for now we will concentrate on the dreaded Article 13, which is set to create an upload filtering requirement for intermediaries that will result in a very different Internet experience in Europe. The proposed text of Article 13 has proven to be extremely controversial because it is like trying to kill an ant with a bazooka. While there is a problem with some users infringing copyright online, the proposed solution is to impose restrictions that could have lasting effects on how we experience online content.

The text begins by defining the “online content sharing service provider”, which will be the subject of the regulation. These are defined as follows:

“‘online content sharing service provider’ means a provider of an information society service whose main or one of the main purposes is to store and give the public access to a large amount of copyright protected works or other protected subject-matter uploaded by its users which it organises and promotes for profit-making purposes. Providers of services such as not-for profit online encyclopedias, not-for profit educational and scientific repositories, open source software developing and sharing platforms, electronic communication service providers as defined in Directive 2018/1972 establishing the European Communications Code, online marketplaces and business-to business cloud services and cloud services which allow users to upload content for their own use shall not be considered online content sharing service providers within the meaning of this Directive.”

So Art 13 is supposed to cover large online services that allow users to upload content, such as Facebook, Google, Twitter, etc. Some services are expressly excluded, such as auction websites (eBay), online encyclopaedias (Wikipedia), educational repositories, cloud services (Dropbox), and open source repositories (GitHub). So far so good, it won’t affect small and medium enterprises, right? Not really: the size of the company is not mentioned, just the fact that the service gives public access to a “large” amount of copyright-protected works for profit. Most online companies start small: Instagram, Twitch, Whatsapp, etc. As long as your startup has an upload capability and a potentially large audience, it could be covered by this definition. I am worried that this will affect small sharing websites (think platforms such as imgur), but most importantly, it will stifle newcomers in Europe.

The article then proposes a substantial change to intermediary liability:

“Member States shall provide that an online content sharing service provider performs an act of communication to the public or an act of making available to the public for the purposes of this directive when it gives the public access to copyright protected works or other protected subject matter uploaded by its users. An online content sharing service provider shall therefore obtain an authorisation from the rightholders referred to in Article 3(1) and (2) of Directive 2001/29/EC, for instance by concluding a licencing agreement, in order to communicate or make available to the public works or other subject matter.”

This is momentous. For those unfamiliar with the intermediary liability regime, the current system grants immunity from liability for content uploaded by users as long as the service provider is not aware that the content may be infringing, which explains the various notice-and-take-down mechanisms in existence. The above completely changes the way in which this regime operates, and it makes service providers directly liable for the content uploaded by their users. The article suggests that service providers should enter into licensing agreements with content owners, which sounds feasible until one realises that this would mean any content owner; there are not just a few creators out there, we are talking TV, film, music, photography, anything.

The objective of Art 13 is clear. The copyright industry wants to get a slice of the online revenue pie, and they want tech giants to pay them licensing fees, even when content has been shared by users that “are not acting on a commercial basis or their activity does not generate significant revenues”. If the service provider doesn’t want to pay licensing fees, then they will be liable unless they put steps in place to stop the potential infringement.

The directive doesn’t mention filters, but it clearly means filters. For example, intermediaries have to provide evidence of the following to be exempt from direct liability:

“(a) made best efforts to obtain an authorisation, and
(b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information, and in any event
(c) acted expeditiously, upon receiving a sufficiently substantiated notice by the rightholders, to remove from their websites or to disable access to the notified works and subject matters, and made best efforts to prevent their future uploads in accordance with paragraph (b).”

So service providers have to get a licence, or clear authorisation, which is onerous and expensive; failing that they have to ensure the works will be unavailable (again, this means filters, or intrusive and extensive content moderation); and they should have a mechanism for removing content, which is the system in existence now.

What happens if you upload something legitimate and it is removed? For example, your work is a parody, or it is being used for educational purposes. Art 13 mentions that these uses shall not be affected, but this is just paying lip service to criticisms that the directive will erode exceptions and limitations. It is very difficult to code a system that filters out content AND also protects exceptions. The system is supposed to allow providers to offer redress for content that has been removed illegitimately, but it is difficult to see how intermediaries will be able to comply with both requirements easily.

Finally, while Art 13 states that it will not impose a monitoring requirement, it is evident that all of the above will require invasive and pervasive monitoring systems to be put in place.

So what is likely to happen?

If Article 13 is adopted as drafted, it is almost certain that it will have an immediate effect on how the Internet operates. One only needs to look at what happened after the adoption of the GDPR to see that we may be faced with further balkanisation of the Internet. For many months after the new data protection regulation came into force, a sizeable number of websites restricted access to content for European users. If the DSM becomes law, it is easy to see that something similar will happen. Internet intermediaries will be faced with these choices:

  1. Enter into licensing agreements. This will be expensive and time-consuming, and it doesn’t ensure immunity from liability, as content could be uploaded that belongs to a copyright owner with which there is no agreement yet. It also implies a cost that will have to be passed on to the consumer, and it could be particularly punishing for smaller firms.
  2. Create a filtering and monitoring mechanism (a simplified sketch of what this involves follows this list). This would be inevitable, even though the text never mentions such a system, as it appears to be the only way service providers could continue to operate while making best efforts to keep uploaded protected works unavailable. Only one large service provider has such a system in place: Google, with YouTube’s ContentID. Everyone else will have to pay for expensive filtering capabilities.
  3. Give up. When faced with the two expensive and resource-intensive options above, it is almost certain that quite a few services will just leave European customers to their own devices and start making their services unavailable in Europe. This would appear to be the most logical solution for smaller providers that do not want to worry about the convoluted and complex system that is being proposed, and honestly, who could blame them?
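To give a sense of what the filtering option entails, below is a deliberately crude sketch of an upload filter in Python. This is not how ContentID works: real systems rely on perceptual fingerprinting that survives re-encoding and editing, which is a large part of what makes them expensive; exact hashing is used here only to show the basic shape of the check, with a hypothetical blocklisted work.

```python
import hashlib

# Fingerprints of works notified by rightsholders (hypothetical entry).
# Real filters use perceptual fingerprints so that re-encoded or
# lightly edited copies still match; an exact hash is trivially
# defeated by changing a single byte.
BLOCKLIST = {
    hashlib.sha256(b"notified infringing work").hexdigest(),
}

def accept_upload(upload: bytes) -> bool:
    """Publish an upload only if its fingerprint is not blocklisted."""
    return hashlib.sha256(upload).hexdigest() not in BLOCKLIST

print(accept_upload(b"original user content"))     # True: published
print(accept_upload(b"notified infringing work"))  # False: blocked
```

Even this toy version hints at the compliance problem discussed above: the check knows nothing about parody or educational use, so an exception-protected upload that matches a fingerprint is blocked all the same.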

The next step is that the Directive will be discussed at committee level at the European Parliament, and then it will be put to a vote. Hopefully sanity will prevail and Article 13 will be defeated and cast back to the fiery chasm from whence it came.
