
A federal district court in California has added to the small body of case law addressing whether it’s permissible for one party to use another party’s trademark as a hashtag. The court held that, for several reasons, the Ninth Circuit’s nominative fair use analysis did not apply to one company’s use of another company’s trademarks as hashtags. Whether a hashtag is capable of functioning as a trademark, the topic of two of Socially Aware’s most popular posts, is—of course—another issue entirely.

In what The New York Times describes as “the latest in a line of rulings allowing companies to use arbitration provisions to bar both class actions in court and class-wide arbitration proceedings,” the Supreme Court held that the arbitration provisions in the employment agreements of workers at the lighting fixture retailer Lamps Plus prevent those workers from banding together to sue the company for allegedly failing to protect their data. The details of the data breach make for an interesting read.

The U.K.’s data regulator has proposed rules that would prevent social media platforms from allowing children to “like” posts. Here’s why.

Officials in a Georgia city might pass a law that would allow elected and appointed officials and employees of the city to sue—at the city’s taxpayers’ expense—anyone who defames them on social media.

Instagram influencer Gianluca Vacchi, who is not a fictional character, but—according to the complaint he filed in a federal court in New York—“an international social media celebrity, influencer, fashionista, and disk jockey”—is suing E*Trade for allegedly depicting a character in its commercials who is “stunningly identical” to him. The suit claims copyright infringement, Lanham Act false association and unfair competition, and violation of New York’s right of publicity and privacy.

The Chinese social media company Weibo restored access to this type of content on its platform after significant backlash from its users.

In the age of smartphones and social media, what can trial lawyers do to secure a jury that relies only on the evidence presented in court?

Artificial intelligence is informing which items McDonald’s includes on its outdoor digital menu displays.

The California State Bar is considering using artificial intelligence, too. The bar hopes that AI can help it to more efficiently determine which attorney misconduct complaints to pursue, and perform a function that affects every wannabe lawyer.

Should Wendy’s put Spicy Chicken Nuggets back on its menu? Social media users have spoken (with some prompting from Chance the Rapper).

The post Trademarks as hashtags; influencer sues company allegedly depicting him in an ad; new uses for AI technology appeared first on Socially Aware Blog.


The Directive on Copyright in the Digital Single Market (Directive) was finally approved by all EU legislative bodies on April 15, 2019. “Modernizing EU copyright rules for European culture to flourish and circulate” was a key initiative of the European Commission’s Digital Single Market (DSM), which, according to the Commission’s President Jean-Claude Juncker, has now been completed by the Directive as “the missing piece of the puzzle.” The Directive was approved just in time for the elections to the EU Parliament taking place in May 2019. Member States are required to implement the Directive’s provisions into national law within 24 months.

Various Member States have issued, along with their approval of the Directive, statements regarding their interpretation of the Directive and voicing quite different views about the upcoming implementation process. While Germany strongly opposes the notion of upload filters, it appears that France is in favor of a copyright protection mechanism that includes upload filters. At the same time, it remains a pressing question whether currently available algorithm-based filters would even be able to sufficiently differentiate between infringing and non-infringing content.

These dissonant views, as well as the sometimes vague wording of the Directive, have raised concerns that Member States will implement the Directive in different ways, leading to a lack of harmonization across the EU. The Directive, however, expressly requests a harmonized implementation as well as stakeholder dialogues, led by the EU Commission and the Member States.

The Dissonant Vote

  • European Parliament, March 26: The Directive was adopted without amendments, with 348 votes in favor and 274 against. The preceding motion on whether to consider amendments to the press publishers’ right and the platform liability regulation, however, was rejected by a margin of only five votes (13 parliamentarians reportedly pressed the wrong button, meaning the motion would otherwise have passed).
  • Council of the European Union, April 15: The Directive cleared its final hurdle by passing the last Council vote. Six Member States rejected the Directive (Italy, Sweden, Finland, Poland, the Netherlands, and Luxembourg), and Belgium, Estonia, and Slovenia abstained. However, the proposal was backed by 19 countries, representing 71.26% of the voting power.

What the Directive Will Change

The final, adopted version of the Directive coincides with the trilogue compromise (see a detailed assessment of the main provisions in our Client Alert “The EU Copyright Directive hits the Homestretch”) and provides the following main changes. Its articles have been renumbered as a result of purely editorial changes:

  • Online content-sharing service providers’ liability for copyright-infringing content, Article 17 (ex–Art. 13)
    Online content-sharing services are subject to direct liability for copyright-infringing content uploaded by their users if they fail to prove that they made “best efforts” to obtain the right holder’s authorization or fail to demonstrate that they made “best efforts” to ensure the unavailability of such content. They are also liable if they fail to act expeditiously to take down uploads of works for which they have received a takedown notice.
  • Ancillary copyright for press publishers, Article 15 (ex–Art. 11)
    Press publishers are granted an ancillary copyright for press publications, covering the reproduction and making available of such content by information society service providers (excluding only hyperlinks accompanied by “individual words or very short extracts”).
  • Further provisions
    The Directive also introduces exceptions and limitations (e.g., for text and data mining, including in favor of commercial enterprises); provisions regarding collective licensing; and recall, transparency, and fair remuneration rights for authors.

Harmonized Implementation?

The Directive is expressly designed to provide a harmonized legal framework in order to prevent the fragmentation of the European market. Article 17(10) of the Directive anticipates that the Commission will conduct stakeholder dialogues with all interest groups, especially online service providers and rights holders, on best practices regarding the obligations under Article 17. Drawing on these dialogues, the EU Commission is required to issue guidelines on the application of “best efforts” requirements, as well as the cooperation with rights holders (i.e., the rights holders’ notification of protected works to online service providers and the negotiation of license agreements).

  • Germany has already declared that it will take an active role in these stakeholder dialogues, and that it presumes the promotion of a harmonized implementation of Article 17 to be one of the main goals of these dialogues. The statement issued by Germany voices strong opposition to the notion of upload filters as a mechanism for ensuring the permanent staydown of infringing content. If such “technical solutions” are nonetheless used in order to comply with Article 17, the German statement asks that the data protection requirements of the General Data Protection Regulation (GDPR), as well as the overall principle of proportionality, be taken into account. The statement also proposes the development of open-source technologies with open interfaces (APIs) to provide for standardization and to prevent the market domination of a few established filtering technologies. Furthermore, Germany has raised the issue that the definition of the term “online content sharing service provider” urgently requires further clarification, and that it intends to expressly exclude certain types of platforms and service providers.
  • France is expected to follow a stricter approach. In a speech given just one day after the final vote in the European Parliament, the Minister for Culture announced that the French High Authority for the Dissemination of Works and the Protection of Rights on the Internet, jointly with the Higher Council for Literary and Artistic Property and the National Center for Cinema and the Moving Image, will launch a project promoting “content recognition technologies.” The Minister underlined that said project is essential to enabling Article 17, and that there is “no time to waste on this topic.”
  • The Netherlands, Luxembourg, Poland, Italy, and Finland, all of which rejected the Directive in the final Council vote, declared the Directive to be a step backwards for the DSM, failing to strike a fair balance between the protection of rights holders and the interests of citizens and companies, or to provide legal certainty.
  • The stance of the UK is less clear, which may be due to the expected Brexit – as a result of which the UK would no longer be obliged to transpose the Directive into national law. Leading Conservative politician and former Foreign Minister Boris Johnson insisted that the UK not apply Article 17, arguing it would be “terrible for the internet.”

Outlook – Stakeholder Dialogues & Implementation Process

The stakeholder dialogues accompanying the implementation process will represent an important opportunity for stakeholders to raise their voices: (i) to ensure that the concrete implementation of the so-far vague wording of Article 17 of the Directive reflects an appropriate solution; and (ii) for those platforms whose qualification as an online content-sharing service provider is uncertain, to ensure that they are included in the list of platforms to which Article 17 does not apply.

The post The EU Copyright Directive Passes – But Member States Remain Split on Upload Filters appeared first on Socially Aware Blog.


Often hailed as the law that gave us the modern Internet, Section 230 of the Communications Decency Act generally protects online platforms from liability for content posted by third parties. Many commentators, including us here at Socially Aware, have noted that Section 230 has faced significant challenges in recent years. But Section 230 has proven resilient (as we previously noted here and here), and that resiliency was again demonstrated by the Second Circuit’s recent opinion in Herrick v. Grindr, LLC.

As we noted in our prior post following the district court’s order dismissing plaintiff Herrick’s claims on Section 230 grounds, the case arose from fake Grindr profiles allegedly set up by Herrick’s ex-boyfriend. According to Herrick, these fake profiles resulted in Herrick facing harassment from over 1,000 strangers who showed up at his door over the course of several months seeking violent sexual encounters.

Herrick sued Grindr, claiming that the company was liable to him because of the defective design of the app and the failure to police such conduct on the app. Specifically, Herrick alleged that the Grindr app lacked safety features that would prevent bad actors such as his former boyfriend from using the app to impersonate others. Herrick also claimed that Grindr had a duty to warn him and other users that it could not protect them from harassment stemming from impersonators.

Herrick asserted a large number of claims, including negligence, intentional infliction of emotional distress, negligent infliction of emotional distress, failure to warn, failure to respond, product liability and negligent design, copyright infringement, fraud, negligent misrepresentation, promissory estoppel, and false advertising. The case attracted considerable attention, including concern from social media companies, due to the prospect of platform operators being held liable for the bad conduct of their users, with some commentators theorizing that the case could change the legal landscape of tech and free speech.

Grindr moved to dismiss Herrick’s suit on the grounds that all his claims (other than the copyright infringement claim) were barred under Section 230. On January 25, 2018, a federal district court in New York granted Grindr’s motion to dismiss, and Herrick appealed shortly thereafter. Unfortunately for Herrick, and for those who would advocate for a narrow interpretation of Section 230, the Second Circuit sided with the trial court, ruling against Herrick in a 3-0 Summary Order.

Despite Herrick’s attempts to avoid the application of Section 230, the Second Circuit applied a standard three-part Section 230 analysis, noting “[i]n applying the statute, courts have broken it down into three component parts, finding that it shields conduct if the defendant [A] is a provider or user of an interactive computer service, [B] the claim is based on information provided by another information content provider and [C] the claim would treat the defendant as the publisher or speaker of that information.”

Interactive Computer Service

The court cited well-established authority to reaffirm that the definition of interactive computer service includes social networking platforms and online matching sites that, like Grindr, provide users with access to a common server. Furthermore, Herrick’s Amended Complaint expressly conceded that Grindr was an interactive computer service.

Claim Arises From Actions of a Third-Party Content Provider

Herrick argued that his claims arose from Grindr’s management of its users, as opposed to specific user content. The court, however, held that Herrick’s product liability claims arose directly from the fake profiles that Herrick’s ex-boyfriend created, along with specific direct messages the ex-boyfriend exchanged with other users. Thus, the court ruled, the second prong of the test was satisfied because the basis for Herrick’s claims that Grindr’s product was defective and dangerous arose directly from content provided by a third party.

The court further reasoned that Herrick’s claims for negligence, intentional infliction of emotional distress, and negligent infliction of emotional distress related in part to the app’s geolocation function. The court went on to conclude that this function was also based on information provided by a third party, because the geolocation function is “based on real-time streaming of a user’s mobile phone’s coordinate data,” which was provided by Herrick’s ex-boyfriend.

Publisher/Speaker of Offensive Content

The court reiterated that the core purpose of Section 230 is to bar lawsuits seeking to hold a service provider liable for conducting traditional editorial functions (e.g., deciding to publish, remove, or alter third-party content). Thus, claims premised on the fact that a service provider refused to remove offensive content produced by a third party are barred.

Herrick again attempted to argue that his claims were premised not on these traditional editorial functions or Grindr’s role as a publisher of third-party content, but on the design and operation of the app itself—specifically, its lack of safety features. The court rejected this argument, reasoning that claims based on the “structure and operation” of an interactive computer service were barred by Section 230 because the lack of safety features reflects “choices about what content can appear on the website and in what form, which are editorial choices,” citing the First Circuit’s decision in Jane Doe No. 1 v. Backpage.com.

The court also rejected Herrick’s argument, based on Doe v. Internet Brands, that his failure to warn claim was not barred by Section 230. The court distinguished Doe v. Internet Brands, noting that in that case “there was no allegation that the defendant’s website transmitted potentially harmful content . . . [so] the defendant was therefore not an ‘intermediary’ shielded from liability under § 230,” whereas “Herrick’s failure to warn claim is inextricably linked to Grindr’s alleged failure to edit, monitor, or remove the offensive content provided by his ex-boyfriend . . . [and,] accordingly, is barred by § 230.”

Herrick’s Other Claims

The appeals court’s opinion focused heavily on the Section 230 analysis, but the court also rejected Herrick’s claims for (1) failure to respond, holding that Grindr was exercising traditional editorial functions or, in the alternative, that it did not assist in the development of the unlawful conduct; (2) fraud and misrepresentation, holding that Grindr’s Terms of Use made no claim that Grindr would remove illicit content and that this claim lacked causation; and (3) promissory estoppel, holding that this claim was barred for lack of detrimental reliance.

* * * *

While Section 230 continues to face headwinds both in the courts and legislatively, Herrick shows that the statute continues to provide robust immunity for social media sites and other platform providers and online intermediaries. Indeed, it is amazing to think that, more than 20 years after its enactment, Section 230 continues to serve its original purpose of enabling online services to thrive without the threat of crippling liability for the bad acts of their users, particularly given that today’s online services are so different from those that existed when President Clinton signed the Communications Decency Act into law in 1996. But we will undoubtedly see additional attempts to rein in Section 230, so watch this space for further developments.

The post Appeals Court Again Upholds Section 230 Protections in Case Against Grindr appeared first on Socially Aware Blog.


A new law in Australia makes a social media company’s failure to remove “abhorrent violent material” from its platform punishable by significant fines. The law also states that the executives at social media companies who fail to remove the content could be sentenced to jail time.

The European Parliament voted to approve the Copyright Directive, a directive that, although vaguely worded, affords copyright holders significant new protections online, and requires online platforms to police content more thoroughly than ever before. Find out exactly what impact industry advocates predict the law will have, and how long it will be until it’s implemented.

Learn how companies can collect and use biometric data without becoming an easy target for litigation, according to my co-editor Julie O’Neill and our colleague Max Phillip Zidel.

As part of the FTC’s continuing efforts to ensure consumers are aware of when an online endorser has been compensated in connection with an endorsement, the agency recently settled a complaint against a subscription service that allegedly offered its product for free to consumers who posted positive online reviews.

In the wake of reports about social media influencers purchasing fake followers and fake likes, as well as failing to adequately label endorsed content, online celebrities are embracing more relatable posts, potentially in an effort to appear more trustworthy.

To better compete with digital media platforms, the top 40 television markets in the United States will introduce a broadcasting standard that will enable interactive and targeted advertising.

Snap Inc., whose Snapchat app currently excludes users younger than 13 but generally does not verify ages, has announced that it is working with British lawmakers to prevent underage children from signing up for its service.

What’s up with Google’s new streaming game platform?

A photographer is suing supermodel Gigi Hadid for copyright infringement for posting a photo of herself to Instagram.

Fruit of the Loom is holding a contest on Instagram in search of the best jingle for their Breathable Boxer Briefs. See how much the underwear manufacturer promises to award the winning songwriter.

The post Social Links: An EU law to protect copyright owners online; collecting biometric data without running afoul of the law; influencers’ attempts to appear more authentic appeared first on Socially Aware Blog.


In early March 2019, the Department of Justice (DOJ) revised its Foreign Corrupt Practices Act (FCPA) Corporate Enforcement Policy (the Policy). First announced in November 2017, the Policy is designed to encourage companies to self-report FCPA violations and to cooperate with DOJ FCPA investigations. The Policy and its recent revisions were incorporated into the United States Attorneys’ Manual (USAM), now referred to as the Justice Manual (JM), which is the internal DOJ document that sets forth policies and guidance for federal prosecutors.

One of the most notable aspects of the original Policy was its requirement that companies seeking to obtain remediation credit prohibit employees from using ephemeral messaging systems unless appropriate retention mechanisms were put in place. According to the original Policy, a company would receive full credit for remediation only “if [it] prohibit[ed] employees from using software that generates but does not appropriately retain business records or communications.”

We heard many concerns from the business community and defense bar about this prohibition, which was seen as inconsistent with the way many parts of the world conduct business. Many people we heard from could not, for example, imagine their employees doing business in Brazil without WhatsApp or in China without WeChat. But storing all messages sent on such programs poses technological and financial challenges, and could increase a company’s vulnerability to cyber breaches.

In seeming response to these concerns, the DOJ removed the outright prohibition against ephemeral messaging and revised the Policy so as to give companies more leeway to develop a system that better fits their business needs while still complying with the Policy’s underlying goal — to deter employees from going “off the grid” to further a foreign bribery scheme, and to preserve the evidence in the event that a foreign bribery scheme does take place.

Under the revised Policy, companies seeking remediation credit must “implement[] appropriate guidance and controls on the use of personal communications and ephemeral messaging platforms that undermine the company’s ability to appropriately retain business records or communications or otherwise comply with the company’s document retention policies or legal obligations.”

The revised Policy thus gives companies the ability to choose the technology, policies and controls for ephemeral messaging that work best for their businesses. For example, a company may limit the use of ephemeral messaging systems to logistical issues, and prohibit their use for substantive business communications (unless the employee takes steps to preserve such communications). A company seeking to preserve ephemeral messaging may also wish to adopt a written retention policy to ensure that ephemeral messages are stored—and deleted—consistently and in a way that balances the costs and challenges of storage against other business needs.
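To make this concrete, here is a minimal sketch of how such a retention policy might be expressed in software. It is purely illustrative: the message categories, retention periods, and type names are our own assumptions, not requirements drawn from the revised Policy.

```typescript
// Hypothetical retention rules for ephemeral messaging; the categories and
// periods below are illustrative assumptions, not DOJ-mandated values.
type MessageKind = "logistical" | "substantive";

interface RetentionRule {
  kind: MessageKind;
  archiveBeforeExpiry: boolean; // copy to a compliant store before the message disappears
  retentionDays: number;        // how long the archived copy is kept
}

const retentionPolicy: RetentionRule[] = [
  // Logistical chatter ("running late", "meet in the lobby") may be allowed to expire.
  { kind: "logistical", archiveBeforeExpiry: false, retentionDays: 0 },
  // Substantive business communications are preserved before deletion.
  { kind: "substantive", archiveBeforeExpiry: true, retentionDays: 365 * 5 },
];

function ruleFor(kind: MessageKind): RetentionRule {
  // The non-null assertion is safe because both kinds are covered above.
  return retentionPolicy.find((r) => r.kind === kind)!;
}
```

Encoding the policy this way makes retention decisions consistent and auditable, which supports the balance between storage costs and preservation obligations that the revised Policy allows companies to strike for themselves.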

On that point, devising policies and controls for ephemeral messaging is not just about maximizing remediation credit in the relatively rare event that a company becomes the subject of an FCPA enforcement action—it also makes good business sense. Like the DOJ, companies have an interest in discouraging employees from using ephemeral messaging to avoid detection of improper behaviors—whether that be bribery, self-dealing or any other form of non-compliant behavior—and in making sure important business discussions are appropriately memorialized.

Prior to the original Policy, many companies had not addressed or thoroughly considered how to integrate these new technologies into their business processes. Thus, although the Policy got off to a bit of a rocky start, it did turn the spotlight on an issue that companies are well advised to consider and address.

The post How to Comply with the Revised Ephemeral-Messaging Provision in the FCPA’s Corporate Enforcement Policy appeared first on Socially Aware Blog.

By Amber Harezlak and Aaron Rubin

As consumers increasingly communicate and interact through social media platforms, courts have had to grapple with how to apply existing laws to new ways of communicating, as well as disseminating and using content. Sometimes, however, traditional legal standards apply to these new platforms in a straightforward manner. At least, that is what the court found in Dancel v. Groupon, Inc., a putative class action against Groupon, Inc., alleging that Groupon’s use of images originally posted on the social media site Instagram violated users’ rights under the Illinois Right of Publicity Act (IRPA).

Groupon, a website that offers consumers deals on goods and services, built a widget intended to provide its users a window into businesses for which Groupon offered deals. The widget used Instagram’s API to find photos that Instagram users had taken at particular locations, and then displayed those images under the deals offered on Groupon’s own website. When a visitor to the Groupon page hovered his or her mouse over the Instagram images, the Groupon user could see the username of the person who posted the photo on Instagram and an associated caption, if there was one.
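Because the mechanics of the widget matter to the claim, a simplified sketch may help. The endpoint below is a hypothetical stand-in shaped loosely like Instagram’s legacy location-media API; none of the names reflect Groupon’s or Instagram’s actual code.

```typescript
// Hypothetical sketch of a location-photo widget; the API host, response
// shape, and rendering details are assumptions, not Groupon's implementation.
interface LocationPhoto {
  imageUrl: string;
  username: string; // e.g., "meowchristine"
  caption?: string;
}

async function fetchLocationPhotos(locationId: string, token: string): Promise<LocationPhoto[]> {
  const res = await fetch(
    `https://api.example.com/locations/${locationId}/media/recent?access_token=${token}`
  );
  const body = await res.json();
  return body.data as LocationPhoto[];
}

function renderPhoto(photo: LocationPhoto): HTMLImageElement {
  const img = document.createElement("img");
  img.src = photo.imageUrl;
  // Surfacing the uploader's handle on hover, as the Groupon widget did, is
  // the very use of "identity" on which the IRPA claim turned.
  img.title = photo.caption ? `@${photo.username}: ${photo.caption}` : `@${photo.username}`;
  return img;
}
```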

Dancel, who maintains an Instagram account with the username “meowchristine,” took a selfie of herself and her boyfriend in front of a restaurant and posted it on Instagram with a tag noting the name of the restaurant. Groupon later displayed this photograph, among others, in connection with its deal for the same restaurant.

Dancel moved to certify a class of all persons in the United States who maintained Instagram accounts and whose photographs were acquired and used on a Groupon webpage for an Illinois business, and a subclass of all members whose likeness appeared in any such photograph. The court, however, denied Dancel’s motion for class certification pursuant to Federal Rule of Civil Procedure 23(b)(3), which requires a plaintiff seeking certification to demonstrate that common questions of law or fact predominate in the class (as opposed to questions affecting individual members of the class).

According to the court, there was no single, common answer as to whether Instagram usernames could establish the identity of a particular person under the IRPA. The use of an “identity” under the IRPA requires that the use be sufficient to identify the person to a reasonable audience, but the court found that the answer to this question was an individual one, making the claim inappropriate for a class action.

While this ruling is informative with respect to the relatively narrow issue of class actions alleging violations of the IRPA based on the use of social-media handles and related images in a commercial context, the result might be different in other jurisdictions or with slightly different facts. State law governs the right of publicity, and variations in state statutes and state courts’ interpretations of common law rights could affect the outcome of a similar claim in another jurisdiction. For example, California courts interpret “identity” under the common law right of publicity broadly, to mean anything that evokes a person’s identity. In addition, the court in Dancel dismissed without explanation the idea that a photo of a person would clearly “identify” that person under the IRPA for purposes of class certification, though it is not obvious that another court would reach the same conclusion.

Moreover, the court’s ruling concerned the issue of class certification, but did not directly address the merits of the underlying right of publicity claim itself. While class actions present potentially greater liability for companies, even individual claims can be problematic. Accordingly, companies that use user-generated content in connection with their social media posts should take note of this case for the questions it did not answer, and assess their use of such content with an eye to the potential for publicity-rights-related claims.

More generally, Dancel is a good reminder that user-generated content posted on social media platforms is not necessarily freely available for use in other contexts. We saw this in the well-known case AFP v. Morel, in which a wire service and photo agency were held to have infringed the copyright in photographs taken from Twitter without the permission of the photographer who posted them. And another recent case, Goldman v. Breitbart News Network, LLC, held that merely embedding a tweet (i.e., linking to a tweet) containing a photograph without the photographer’s permission constitutes infringement.

The post What’s in a (User)Name? appeared first on Socially Aware Blog.


New York is now one of the 43 states where “revenge porn,” the posting of explicit photographs or videos to the Internet without the subject’s consent, is punishable by law. See how far the states have come – find out how many had criminalized revenge porn as of 2014, when Socially Aware first covered the issue.

YouTube announced that it will not allow channels that promote anti-vaccination videos to run advertisements because such videos violate the platform’s policy, which, among other things, disallows the monetization of “dangerous content.” Many of the companies whose ads appeared alongside anti-vaccination content say they were not aware it was happening. Find out how that could be possible.

Senator John Kennedy (R-LA) has introduced a bill that would give Internet users considerably more control over their personal data by mandating that social media companies inform registrants—in simple, easy-to-understand terms—that they are entering into an agreement licensing their personal data to the company. Dubbed the Own Your Own Data Act, the legislation would also require social media platforms to make it easy for their registrants to cancel the licensing agreement and obtain the collected data and any analysis of it.

Another privacy bill, this one proposed by Senators Ed Markey (D-MA) and Josh Hawley (R-MO), would amend the Children’s Online Privacy Protection Act (COPPA) to completely prohibit the running of targeted advertisements on websites targeted to children. Find out how else the bill would amend COPPA, and how long companies would have to comply with the amendment if it became law.

The debate over whether politicians have a right to block people on social media rages on.

The United States isn’t the only country whose president favors social media as a vehicle for sharing his views.

A #TwitterLaw symposium is being held at the University of Idaho College of Law next month. Road trip, anyone?

Even the British Royal Family has to contend with social media trolls.

The post YouTube disallows ads on anti-vax content; privacy bills aim to extend children’s protections from Internet harm, secure users’ control over data appeared first on Socially Aware Blog.

By Alex van der Wolk, Mercedes Samavi, et al.

One of the next big items in Europe will be the expansion of “ePrivacy,” which, among other things, regulates the use of cookies on websites. While the ePrivacy reform is still being worked on by EU lawmakers, one of the items the ePrivacy Regulation is expected to update is the treatment of “cookie walls.” Recently, the Austrian and UK data protection authorities (DPAs) addressed the use of cookie walls, albeit with markedly different findings and conclusions.

Cookie Walls

A cookie wall blocks individuals from accessing a website unless they first accept the use of cookies and similar technologies. The practice of using cookie walls is not prohibited under the current ePrivacy Directive.
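For readers unfamiliar with the mechanics, the gating logic behind a cookie wall is simple. The sketch below is illustrative only; the element IDs and the subscription check are hypothetical.

```typescript
// Illustrative cookie-wall gate: content stays hidden until the visitor
// either consents to cookies or holds a paid, cookie-free subscription.
function enforceCookieWall(): void {
  const consent = localStorage.getItem("cookieConsent");      // "granted" or null
  const isSubscriber = document.cookie.includes("paidSession="); // hypothetical marker

  const content = document.getElementById("content")!;
  const overlay = document.getElementById("cookie-wall-overlay")!;

  if (consent === "granted" || isSubscriber) {
    content.style.display = "block";
    overlay.style.display = "none";
  } else {
    // No consent and no subscription: block the page. This conditioning of
    // access on consent is precisely what the EDPB wants prohibited.
    content.style.display = "none";
    overlay.style.display = "block";
  }
}
```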

However, the European Data Protection Board (EDPB), the successor to the Article 29 Working Party, has issued a non-binding opinion that the use of cookie walls should be prohibited under new EU ePrivacy rules. The EDPB argues that cookie walls run contrary to the General Data Protection Regulation (GDPR): “In order for consent to be freely given as required by the GDPR, access to services and functionalities must not be made conditional on the consent of a user to the processing of personal data or the processing of information related to or processed by the terminal equipment of end-users, meaning that cookie walls should be explicitly prohibited.”

However, the negotiations around the upcoming ePrivacy Regulation are still ongoing, so it is unclear whether cookie walls will be explicitly prohibited in the final version.

The Facts

Two recent cases in Europe concerned the online offerings of newspapers: Der Standard in Austria and the U.S.-based Washington Post in the UK.

On each newspaper’s website, individuals are presented with the choice of either a free-access option with cookies or a paid-for access option without cookies. There is no free-access option without cookies.

The Austrian DPA’s view

In November 2018, the Austrian DPA dismissed a complaint by an individual who had argued that Der Standard’s cookie wall rendered the individual’s consent not freely given and thus invalid under Article 7(4) GDPR.

The Austrian DPA indicated that cookie walls are not prohibited and that Der Standard’s cookie wall provides a degree of choice that results in freely given consent. First, an individual is in full control of the situation: Der Standard only places cookies after the individual makes a conscious and informed decision to allow the placement of cookies. Second, the individual can withhold consent by either entering into a paid subscription or leaving Der Standard’s website.

In addition, the Austrian DPA noted that the price of a paid-for access option without cookies should be taken into consideration. If the price is too high, it means that the paid option becomes a negative consequence of withholding consent to cookies, which could invalidate the individual’s consent; here, the Austrian DPA considered Der Standard’s prices to be “not unreasonably high.” In fact, giving consent to cookies results in a positive outcome for the individual, because they gain unlimited access to the newspaper’s articles.

The Austrian DPA did not, however, discuss what would happen if an individual withdrew their consent to a cookie wall. This suggests that there were no concerns in this particular case about whether an individual can validly withdraw consent. (In practice, when an individual withdraws consent, Der Standard’s website simply presents the cookie wall again.)

The UK DPA’s approach

According to a reported statement, available here, the UK DPA – the Information Commissioner’s Office (ICO) – took a markedly different approach from the Austrian DPA. Towards the end of 2018, the ICO was reported to have issued a warning to the Washington Post. Given that the Post operates out of the United States, and therefore not within the ICO’s direct jurisdiction, the ICO could only issue a statement (rather than trigger any enforcement action). Nevertheless, even though it does not have the same standing as an enforcement action, the ICO’s statement is a good litmus test of how the ICO may react to UK websites with cookie walls.

The ICO purportedly viewed the consent of the Post’s readers as closely linked to their ability to access the Post’s website, because accepting cookies is the only way to access the articles (apart from paying a monthly fee). In light of this setup, the ICO concluded that the Washington Post was in breach of the GDPR principles because it did not give individuals “a genuine choice and control over how their [personal] data are used.” This, according to the ICO, meant that consent to cookies cannot be freely given and is therefore invalid under Article 7(4) of the GDPR.

How Should Organizations React?

In the context of ePrivacy and its ongoing updates, there is no clear regulatory consensus around the prohibition of cookie walls. The different approaches taken by the UK and Austrian DPAs do not signal accord (or even coordination) amongst the DPAs on cookie walls’ impact on consent. This is surprising, given that this is exactly the sort of area where a harmonized interpretation of the GDPR is expected. It would therefore be helpful for the EDPB to step in and clarify these discrepancies.

In the meantime, organizations should keep a close eye on ePrivacy developments, particularly to monitor for further developments on potential prohibitions of cookie walls or other cookie practices.

The post The Cookie Wall Must Go Up. Or Not? appeared first on Socially Aware Blog.


The cost of violating the Children’s Online Privacy Protection Act (COPPA) has been steadily rising, and companies subject to the law should take heed. Last week, the Federal Trade Commission (FTC) announced a record-setting $5.7 million settlement with the mobile app company Musical.ly for a myriad of COPPA violations, exceeding even the $4.95 million COPPA settlement obtained by the New York Attorney General in December 2018. Notably, two Commissioners issued a statement accompanying the settlement, arguing that, going forward, the FTC should prioritize holding executives personally responsible for their roles in deliberate violations of the law.

COPPA is intended to ensure parents are informed about, and can control, the online collection of personal information (PI) from their children under age thirteen. Musical.ly (now operating as “TikTok”) is a popular social media application that allows users to create and share lip-sync videos to popular songs. The FTC cited the Shanghai-based company for numerous violations of COPPA, including failure to obtain parental consent and failure to properly delete children’s PI upon a parent’s request.

COPPA Clearly Applied to the Musical.ly App

COPPA applies to an operator of a website, mobile application, or other online service that either (1) is directed to children under thirteen, or (2) has actual knowledge that it collects PI from children under thirteen. According to the FTC complaint, Musical.ly satisfied both prongs. First, citing the factors set forth in the FTC’s COPPA Rule for determining whether an online service is directed to children, the FTC charged that the Musical.ly app was “directed to children” because it targeted children “as one audience.” Specifically, a large portion of users were underage, the app had song categories like “Disney” and “School,” and the app was used by celebrities who are popular with tweens. The FTC further stated that the “core activity of the app” – creating lip-sync videos – is a “child-oriented activity.”

That final point may arguably be a stretch. (Who doesn’t love karaoke?) But the FTC also accused Musical.ly of having actual knowledge of its users’ ages. The company received thousands of complaints from parents whose children used the app without their consent, and merely perusing the user profile pages and photos (many of which featured a user’s self-reported age, birthdate, or school) would reveal that many users were underage. Finally, in 2016, the company was made aware that many of its most “followed” users were under thirteen.

What Not to Do When You’re Subject to COPPA

The FTC complaint details the myriad ways in which Musical.ly allegedly failed to comply with COPPA, and companies should take care not to mirror any of its mistakes.

The company allegedly failed to both obtain parental consent before collecting PI from children and post appropriate notices of its practices with respect to its collection of PI from children. Thousands of parents complained to the company that they were never asked for consent before their children signed up. From the app’s launch in 2014 until July 2017, Musical.ly did not request the age of its users, but it did require users to enter a short bio and submit their email, phone number, first and last names, and a photo. After July 2017, the company screened new users for age, but it did not confirm the age of existing users, nor did it seek parental consent for profiles clearly belonging to children under thirteen.
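For context, a neutral age screen of the kind Musical.ly added in 2017 can be sketched in a few lines. The consent handoff below is a placeholder: COPPA’s methods for obtaining verifiable parental consent are considerably more involved, and only the age threshold comes from the statute.

```typescript
// Simplified age-gate sketch; COPPA_AGE_THRESHOLD comes from the statute,
// everything else is an illustrative assumption.
const COPPA_AGE_THRESHOLD = 13;

function ageInYears(birthDate: Date, now: Date = new Date()): number {
  const age = now.getFullYear() - birthDate.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > birthDate.getMonth() ||
    (now.getMonth() === birthDate.getMonth() && now.getDate() >= birthDate.getDate());
  return hadBirthdayThisYear ? age : age - 1;
}

function onSignup(birthDate: Date): "proceed" | "require_parental_consent" {
  // Verifiable parental consent must be obtained BEFORE collecting PI from
  // a child under 13, not after the fact.
  return ageInYears(birthDate) < COPPA_AGE_THRESHOLD
    ? "require_parental_consent"
    : "proceed";
}
```

Note that, as the FTC’s complaint illustrates, screening only new users is not enough; apparently underage existing accounts must be addressed as well.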

As reflected in complaints received by the company, the lack of parental notice and consent appalled many parents, particularly given how the application permitted the disclosure of PI. In addition to enabling children to publicly share videos of themselves, all user profiles were by default set to “public,” while “private” profiles hid only a user’s uploaded videos (that is, the “private” user’s profile and contact information were still viewable). By default, any user could directly message any other user, leading to reports of adults contacting minors. Moreover, up until 2016, the app collected geolocation data, which it used to display a list of other users within a 50-mile radius of the user (with whom the user could then interact).
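The 50-mile feature described above is a standard radius query. The sketch below shows the underlying great-circle math; the user records and coordinates are hypothetical, not Musical.ly’s implementation.

```typescript
// Haversine-based radius filter; user records and coordinates are hypothetical.
interface NearbyUser {
  username: string;
  lat: number;
  lon: number;
}

const EARTH_RADIUS_MILES = 3959;

function milesBetween(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_MILES * Math.asin(Math.sqrt(a));
}

function usersWithinRadius(me: NearbyUser, all: NearbyUser[], radiusMiles = 50): NearbyUser[] {
  return all.filter(
    (u) => u.username !== me.username && milesBetween(me.lat, me.lon, u.lat, u.lon) <= radiusMiles
  );
}
```

Pairing this kind of query with children’s geolocation data collected without parental consent is what made the feature so problematic under COPPA.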

The company also allegedly violated COPPA’s data deletion and retention obligations: if a parent contacted Musical.ly to close his or her child’s account, Musical.ly would close the account but not delete the child’s videos or profile information from its servers. As the FTC recently reminded businesses, COPPA requires deletion of children’s PI if the information no longer serves the purpose for which it was collected.

A Record-Setting Settlement Means “Think Twice About COPPA”

Given the number of missteps by Musical.ly, it may not be surprising that the company ultimately agreed to a record-setting $5.7 million settlement with the FTC. The company agreed to delete the PI of children under thirteen lacking the required parental consent, and it is now subject to a multi-year consent order that imposes a variety of compliance, reporting, and recordkeeping obligations on it.

In the FTC’s own blog post describing the settlement, companies were warned to “think twice before concluding ‘We’re not covered by COPPA.’” Indeed, the Musical.ly case highlights the many ways in which a consumer-facing service might be subject to the law. The settlement also closely follows on a $4.95 million COPPA settlement from the New York Attorney General, potentially signaling an upwards trend in COPPA settlement appetite by regulators. Even the $5.7 million settlement with Musical.ly likely represents a mere fraction of the company’s total potential liability under COPPA: the FTC is authorized to seek up to approximately $40,000 per violation (i.e., per child) in civil penalties (in addition to injunctive relief) for violations of COPPA.

The two Democratic Commissioners also issued a public statement suggesting that the Musical.ly settlement should have gone even further. Commissioners Slaughter and Chopra argued that the FTC should move away from the status quo, in which individuals at companies largely avoid personal liability for grievous violations of the law, and hold individuals personally liable if they “made or ratified decisions to knowingly violate the law.” The statement was not specifically limited to COPPA. While such an expansion of enforcement may be unlikely to occur during the administration of the current Republican-led FTC, the Commissioners’ statement serves as a useful reminder to all companies: effective privacy compliance starts at the top of any organization.

The post Thank You, Next Enforcement: Music Video App Violates COPPA, Will Pay $5.7 Million appeared first on Socially Aware Blog.


In 2019, the European Court of Justice (CJEU) is expected to clarify one of the key open issues in EU copyright law: the extent to which online platforms such as YouTube can be liable for copyright infringement caused by user-generated content—content uploaded to the Internet by users, such as music, videos, literature, photos, or the streaming of live events such as concerts. The CJEU decisions are eagerly awaited both by media and copyright owners and by online platform operators—and will mark yet another stage in the ongoing battle of the creative industries against copyright infringement in the online world.

SUMMARY

In September 2018, the German Federal Court of Justice (Bundesgerichtshof, BGH) suspended proceedings in a widely-publicized case concerning YouTube’s liability for copyright infringing user-uploaded content and referred a series of questions regarding the interpretation of several EU copyright provisions to the CJEU for a preliminary ruling. A few days later, the BGH also suspended proceedings in five other high-profile cases concerning the liability of the file hosting service uploaded.net for user files containing copyright infringing content and submitted the same questions again to the CJEU.

Previous rulings by the CJEU have addressed both the application of the safe harbor principle set out in EU E-Commerce Directive 2000/31/EC, which shields hosting providers from liability for hosted unlawful third-party content of which they have no actual knowledge (see, for example, eBay/L’Oreal; Netlog/SABAM; and Scarlet/SABAM) and, separately, the extent of infringement of copyright by hosting of, or linking to, copyright-infringing third-party content under the EU Copyright Directive (see GS Media/Sanoma; Filmspeler; and The Pirate Bay). But it is still unclear under which conditions the providers of the various online platforms that store and make available user-generated content can rely on the safe harbor privilege applying to hosting providers to avoid liability, or whether they must not only take down the infringing content when they obtain knowledge of such content but also compensate the rights holders of such content for damages for copyright infringement.

The questions that the BGH submitted to the CJEU aim to clarify these uncertainties by bringing together the different requirements established by the previous CJEU rulings for (i) affirming a direct copyright infringement by the online platform providers under the EU Copyright Directive and (ii) denying the application of the safe harbor privilege as well as the legal consequences of such a denial (such as the extent of liability for damages). The CJEU will have to consider the differences between the YouTube and uploaded.net business models. The CJEU will hopefully provide much clearer guidelines on key issues such as:

  • to what extent can providers of online services engage with the user content hosted by them;
  • which activities will trigger a liability for copyright infringement irrespective of actual knowledge of a specific infringement;
  • whether they must actively monitor the content uploaded by users for copyright infringements (e.g., by using state-of-the-art efficient filter technologies) to avoid damage claims by rights holders.

In addition, we expect these cases to have an effect on the interpretation of the new Art. 13 of the revision of the EU Copyright Directive that will likely be adopted by the EU legislative institutions in the second quarter of 2019. The current trilogue negotiations among the EU institutions indicate that, under such new Art.13, providers of online content sharing services will be directly liable for copyright infringements by content uploaded to the platform by their users and will not be granted safe harbor under the EU E-Commerce Directive. The providers would then have to ensure that content for which the providers have not obtained a license from the respective rights holders for use on their platforms cannot be displayed on their platform. This means that the providers would have to monitor all content files when uploaded to their platform, making filter technology mandatory for the majority of the platforms (see our previous Client Alert on the draft amendment to the EU Copyright Directive).
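To see why mandatory filtering is so contentious, consider the simplest possible filter: exact-hash matching against a database of notified works. The sketch below is deliberately naive; production systems use perceptual audio/video fingerprinting rather than exact hashes, and the digest set here is a hypothetical placeholder.

```typescript
import { createHash } from "node:crypto";

// Naive upload filter: block any file whose SHA-256 digest matches a work
// notified by rights holders. The digest set is a hypothetical placeholder.
const noticedWorkHashes = new Set<string>([
  // SHA-256 digests supplied by rights holders would be loaded here.
]);

function sha256(data: Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

function screenUpload(fileBytes: Buffer): "publish" | "block" {
  // Exact matching catches only identical files; it cannot draw the
  // infringing/non-infringing distinctions (quotation, parody, license)
  // that even state-of-the-art filters are doubted to be able to draw.
  return noticedWorkHashes.has(sha256(fileBytes)) ? "block" : "publish";
}
```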

BACKGROUND

YouTube: The plaintiff Frank Peterson, a German music producer, has an exclusive artist contract with the singer Sarah Brightman, under which he holds various exclusive rights under copyright to her recordings. Unknown users uploaded several videos to YouTube containing works from her newly released studio album “A Winter Symphony” as well as recordings of her concert tour “Symphony Tour.” On Peterson’s demand, YouTube blocked some of the videos, which were then re-published on YouTube by users a few days later. As a result, Peterson sued YouTube seeking injunctive relief and claiming disclosure of user information as well as compensation for damages resulting from the copyright infringement. The Hanseatic Higher Regional Court (court of appeal) granted injunctive relief obliging YouTube to take down and to prevent the re-publication of the videos on its platform and requested YouTube to provide Peterson with information on the users who had uploaded the videos under pseudonyms. The damages claim, however, was dismissed. In line with previous decisions of this and other German courts of appeal on YouTube, the court decided that YouTube is not directly liable for copyright infringement because YouTube neither committed the copyright infringements by uploading or appropriating the videos nor was aware of any specific infringements that it had not blocked after the plaintiff notified YouTube of such infringements of his rights.

“uploaded”: “uploaded” provides an online cloud service offering users free storage for all kinds of files regardless of their content. Upon uploading, the user is supplied with a unique download link for each uploaded file. Unlike the circumstances in The Pirate Bay case, “uploaded” provides neither an index of, nor a search function for, the uploaded user files. However, it permits its users to share their download links for their files on third-party websites that offer categorized link collections including information about the content stored under these links, so that other users can access the uploaded files on the “uploaded” system. The service is offered at no charge with a limited download capacity/speed and as a paid version for registered users without such limitations. In addition, “uploaded” incentivizes downloads of files by third-party users by paying the users who upload files a fee of up to €40 per 1,000 downloads. The majority of the files hosted on the defendant’s system (up to 90%; the exact number is under dispute between the parties) are subject to third-party copyrights and were uploaded by users without the rights holders’ consent. Rights holders notified “uploaded” many times of infringing files available on its systems (notices for more than 9,500 works subject to copyright infringement were submitted). The plaintiffs (several publishers, the German mechanical and performance rights organization GEMA, and a German film company) argued that “uploaded” is responsible for the infringement of their copyrights in several content files uploaded by users. They seek injunctive relief and claim disclosure of user information as well as compensation for damages from “uploaded”. The Higher Regional Court of Munich (court of appeal) granted injunctive relief obliging “uploaded” to take down and to prevent the re-publication of the files in question but dismissed all claims for damage compensation and disclosure of user information. The court argued that, pursuant to the applicable German liability principles, the defendant did not directly infringe the plaintiffs’ copyrights and is only liable for secondary copyright infringement.

REQUESTS BY BGH FOR PRELIMINARY RULINGS

In both cases, the plaintiffs filed appeals against these judgments with the BGH. The BGH decided that these cases require guidance on the interpretation of EU law provisions by the CJEU, suspended the proceedings and referred the following questions to the CJEU:

1. Do the Providers commit an act of “communication to the public” under the Copyright Directive?

The BGH asks the CJEU whether the provider of an online video platform such as YouTube on the one hand, and the provider of a file hosting service such as “uploaded” on the other hand (each hereinafter referred to as a “Provider” and jointly the “Providers”), commit an act of “communication to the public” under Art. 3(1) Copyright Directive.

Does the Provider play a “central role” in making the content available to the public?

Art. 3(1) provides that authors of a work shall be granted “the exclusive right to authorize or prohibit any communication to the public of their works, … including the making available to the public of their works in such a way that members of the public may access them from a place and at a time individually chosen by them.” Consequently, any “communication to the public” of a work without the rights holder’s consent constitutes a copyright infringement.

The Recent CJEU Rulings: In its recent rulings (see above), the CJEU emphasized that an individual assessment on a case-by-case basis is required taking into account several complementary criteria, which are interdependent, to determine whether the platform provider commits an “act of communication to the public”. Firstly, the indispensable role of the user who uploaded and made available the infringing content, and thereby directly communicated the content to the public without the rights holder’s consent, must be taken into account. According to the CJEU, a provider’s activities relating to third-party content are considered to constitute an “act of communication” when the provider acts in full knowledge of the consequences of its action to give its users access to the copyrighted work without the rights holder’s consent (the CJEU calls this the “deliberate nature of the intervention”). These criteria are met if the intervening provider plays a “central/essential role” in making the user content available to the public and acts deliberately. Important indications for such a central role and deliberate activity of a file sharing service are, inter alia, comprehensive classification, indexing and presentation of the hosted files, provision of a search function, deletion of obsolete or faulty files, the intention to generate profit with the user content and the awareness that the platform provides access to a very large number of files infringing third-party copyrights.

Key specifics of the two cases: It is worth looking at the specifics of the two cases that the BGH presented to the CJEU. The two cases have some similarities: The users of both services make copyright protected content available to the public without the respective rights holders’ consent; both YouTube and “uploaded” generate revenues with their platforms; the content upload process is fully automated without any involvement or control by the provider prior to publishing the user uploaded content on the platform; the terms of use for both services prohibit their users from uploading and using the service for content that would infringe third-party rights, including copyrights. However, the BGH also outlines some significant differences:

  • YouTube: The BGH expressly states that, in its opinion, YouTube does not play a “central role” as required by the CJEU in its recent rulings for an “act of communication to the public,” provided that YouTube does not have actual knowledge of the specific infringing user content or, upon obtaining such knowledge, removes or blocks the infringing user content without undue delay. Though YouTube processes and presents search results in the form of rankings and contextual categories and recommends videos to its registered users derived from the videos they previously watched, such search and recommendation functions are completely automated, as is the monetization of the videos by advertising. Moreover, YouTube has taken technical measures to prevent and cease the availability of infringing user content on its platform. It informs its users during the automated upload process that the upload of content infringing third-party rights is prohibited by the terms of use, offers several options (including a notification button on the platform) for users to report unlawful videos, and provides rights holders with automated tools on the platform to identify, block, or claim user-uploaded content infringing their rights (the so-called Content ID system).
  • “uploaded”: With regard to “uploaded,” the BGH points out that the Provider is aware that a considerable number of copyright-infringing files are available for download from its servers and that its remuneration model (based on the popularity of the files) creates an incentive to upload copyright-protected content. While “uploaded” provides neither an index of, nor a search function for, the uploaded user files, other users can access the files via the above-mentioned link collections hosted on third-party websites. The BGH also emphasizes that the option to upload files anonymously increases the likelihood that users will upload content infringing third-party rights. In this context, the court also asked the CJEU whether the fact that files infringing third-party copyrights account for 90–96% of the overall use of the service (i.e., irrespective of the total number of files stored) is relevant to the assessment of the first question. In our opinion, the CJEU’s previous rulings in The Pirate Bay and Filmspeler cases indicate that a provider may be assumed to act deliberately, in full knowledge of the consequences of its action, if its platform is primarily used for publishing infringing content and the provider was aware of this fact (e.g., from the large number of notices from affected rights holders or from user blogs and forums).

2. Are the Providers eligible for the safe harbor privilege under the E-Commerce Directive?

If the CJEU were to decide that the Providers did not commit an act of “communication to the public,” the BGH asks whether these services are eligible for the safe harbor privilege under Art. 14(1) E-Commerce Directive.

Does the Provider play an “active role” in processing the third-party content?

Art. 14(1) states that an online service that “consists of the storage of information provided by a recipient of the service” shall not be “liable for the information stored at the request of the recipient of the service, on condition that: (a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent; or (b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information.”

According to rulings by the CJEU, this liability privilege is limited to host providers that play a neutral role by processing the users’ content on their platforms in a merely technical and automatic manner. It does not privilege a provider that plays an “active role of such a kind as to give it knowledge of, or control over” the content or the data relating to the content, e.g., by providing assistance with the users’ offers, such as optimizing the presentation of, or promoting, a user’s specific content (see eBay/L’Oreal). The BGH does not indicate whether it believes that YouTube or “uploaded” plays such an active role.

Is knowledge of the specific infringing user content required?

If the CJEU were to find that a platform provider plays a neutral role in processing the user content, the BGH asks whether the actual knowledge of the infringing user content, and the awareness of the facts or circumstances from which the infringing user content becomes apparent, must relate to the specific content or infringement in question (e.g., to specific music videos or specific files). In the opinion of the BGH, a provider’s mere general awareness that users have published some unlawful content on its platform, without knowledge of the specific unlawful item, does not suffice to exclude the liability privilege. We also believe that previous rulings of the CJEU indicate that it takes the same position (see eBay/L’Oreal, paras. 120–124, where the CJEU refers to “the offers for sale in question,” for which the provider must have obtained information “on the basis of which a diligent economic operator should have realized that such offers were unlawful”).

These questions are very important for clarifying the substantial uncertainties that remain as to how the safe harbor privilege relates to Art. 3(1) Copyright Directive: Are the criteria for affirming a “central role” of the provider and “acting in full knowledge of the consequences of its action” under Art. 3(1) the same as the criteria for affirming an “active role” and “actual knowledge of the illegal information” on the part of a provider under Art. 14(1) E-Commerce Directive?

3. Must the rights holder notify the Provider of the infringement of its copyright in its work(s) before it can obtain a cease-and-desist court order?

If the CJEU considers a platform provider to be a neutral host provider eligible for the safe harbor privilege, the BGH asks whether Art. 8(3) Copyright Directive requires a rights holder to first notify the Provider of an infringement of the rights holder’s copyright by specific user content before the rights holder is entitled to obtain a court order requiring the Provider to take down such infringing user content.

Art. 8(3) of the Copyright Directive sets forth that rights holders shall be “in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe a copyright or related right.” While the provision does not expressly require that the provider be notified of a specific infringement, the BGH states that, in its opinion, the national laws and courts of the EU Member States may provide for such a notification requirement (as is the case under German law). In addition, the BGH deems such a notification requirement necessary to avoid a conflict with Art. 15(1) E-Commerce Directive, which prohibits the EU Member States from imposing a general obligation on host providers to monitor the third-party content hosted on their platforms.

4. Are the Providers liable for damages caused by the infringement when they did not commit an act of communication to the public without the rights holder’s consent but are not eligible for the safe harbor privilege?

If the CJEU considers a platform provider (i) not to have communicated the copyright-protected work to the public without the rights holder’s consent under Art. 3(1) Copyright Directive but (ii) to be excluded from the safe harbor privilege under Art. 14(1) E-Commerce Directive because the Provider played an active role in processing the infringing user content, the BGH asks the CJEU whether

  • the Provider may be considered to be an “infringer” of intellectual property rights under Art. 11 and Art. 13 of the Enforcement Directive, and
  • if the Provider qualifies as an infringer under these provisions, whether the requirements of Art. 13 are met where, under German law, such a Provider is obliged to compensate damages only if it acted with willful intent (Vorsatz) with regard to both (i) the act committed by the user who uploaded the infringing content to the Provider’s platform and (ii) the Provider’s own act of supporting that infringing act.

Rights holders’ claims against infringers and intermediaries: Art. 11 requires the EU Member States to ensure that “the judicial authorities may issue against the infringer” of an intellectual property right “an injunction aimed at prohibiting the continuation of the infringement…” and “that rights holders are in a position to apply for an injunction against intermediaries whose services are used by a third party to infringe an intellectual property right ….” Art. 13 requires the EU Member States to “ensure that the competent judicial authorities… order the infringer who knowingly, or with reasonable grounds to know, engaged in an infringing activity, to pay the right holder damages appropriate to the actual prejudice suffered by him/her as a result of the infringement.”

While a provider that commits an “act of communication to the public” without the rights holder’s consent under Art. 3(1) Copyright Directive qualifies as an infringer under these provisions and faces not only cease-and-desist orders but also damages claims under national laws, a provider that is eligible for the safe harbor privilege under Art. 14(1) E-Commerce Directive is considered an intermediary that is not liable for damages, provided that it does not have actual knowledge of the infringing user content and is not aware of facts from which the infringement is apparent. In the opinion of the BGH, if a provider does not itself commit an act of communication of a work to the public without the rights holder’s consent but merely supports such an act by a user, the provider is, under German law, liable for damages only if it acted with willful intent with regard to both the user’s infringing act and its own act of supporting it (see the second bullet above).
