Privacy Law Blog - Proskauer Rose
Bonnie Yeomans, Stephanie Kapinos and Kevin Milewski

Earlier this month, the FTC sent a letter to Wildec, LLC, the Ukraine-based maker of several mobile dating apps, alleging that the apps were collecting the personal information and location data of users under the age of 13 without first obtaining verifiable parental consent or otherwise complying with the Children’s Online Privacy Protection Act (COPPA). The letter pressed the operator to delete personal information on children (and thereafter comply with COPPA and obtain parental consent before allowing minors to use the apps) and to disable any search functions that allow users to locate minors. The letter also advised that the practice of allowing children to create public dating profiles could be deemed an unfair practice under the FTC Act. The three dating apps in question were subsequently removed from Apple’s App Store and Google’s Google Play Store following the FTC allegations, a response that illustrates the real-world effects of mere FTC allegations; it might ultimately compel Wildec, LLC to comply with the statute and cause other mobile apps to reexamine their own data collection practices. Wildec has responded to the FTC’s letter by “removing all data from under age accounts” and now prevents minors under the age of 18 from registering on the dating apps.

Since COPPA was first passed in 1998 (with the COPPA Rule implemented in 2000 and later revamped in 2013), children’s privacy has been on the FTC’s radar, and the agency expanded its enforcement scope to the mobile sphere in 2011 when it brought its first COPPA case involving an app. Generally speaking, websites and online services covered by COPPA must post privacy policies, provide parents with direct notice of their information practices, and get verifiable consent from a parent or guardian before collecting personal information from children. Since the revised COPPA Rule came into effect, the FTC has been looking more closely at less traditional areas where violations may occur.
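For operators of child-directed sites and apps, the core of COPPA compliance is a gate: personal information from a user known to be under 13 should not be collected until verifiable parental consent is on record. The following Python sketch illustrates that gating logic only; the class, field, and function names are our own hypothetical illustration rather than an FTC-endorsed implementation, and real verifiable-consent mechanisms are considerably more involved.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class UserProfile:
    """Hypothetical user record for a child-directed app."""
    user_id: str
    birth_date: date
    parental_consent_verified: bool = False  # set only after a verifiable consent method succeeds
    collected_data: dict = field(default_factory=dict)


def age_in_years(birth_date: date, today: date | None = None) -> int:
    today = today or date.today()
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )


def collect_personal_info(user: UserProfile, key: str, value: str) -> bool:
    """Store a piece of personal information only if the COPPA consent gate is satisfied.

    For users under 13, collection is refused until verifiable parental consent
    has been recorded; the caller should instead start a parental-consent workflow.
    """
    if age_in_years(user.birth_date) < 13 and not user.parental_consent_verified:
        return False  # defer collection; no personal information is stored
    user.collected_data[key] = value
    return True
```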

Indeed, in recent months, the FTC and privacy advocates have taken aim at just these modern children’s privacy issues:

  • In April 2019, Unixiz, Inc., the operator of i-Dressup.com, agreed to settle FTC allegations that it violated COPPA by failing to obtain parental consent before collecting personal information of children under 13 and by failing to take reasonable steps to secure consumer information (resulting in a data breach). i-Dressup allowed users, including children, to play dress-up games and enter an online community where users could create personal profiles and interact with other users. When the site was unable to obtain parental consent for under-13 users, those users were given a “Safe Mode” membership that barred them from the social features of the site, yet the FTC alleged that i-Dressup still collected their personal information despite the lack of parental consent. The FTC also alleged that i-Dressup, among other things, stored and transmitted users’ personal information in plain text and failed to perform vulnerability testing of its network, shortcomings that resulted in a security breach. Under the proposed settlement, i-Dressup agreed to pay a $35,000 civil penalty, implement a comprehensive data security program, and obtain biennial assessments.
  • In February 2019, the operator of the video social networking app Musical.ly (now known as TikTok) agreed to pay $5.7 million, the largest civil penalty ever obtained under COPPA, to settle FTC charges alleging that it collected personal information from children despite having knowledge that many children using the app were under 13. The Musical.ly app allowed users to create short videos lip-syncing to music, share those videos, and otherwise interact with other users. According to the complaint, while the site allowed users to change their default setting from public to private so that only approved users could follow them, users’ profile pictures and bios remained public. Beyond the civil penalty, the settlement required the operators to comply with COPPA going forward and take offline all videos made by users under 13. Moreover, following the settlement, the operator announced changes that will place younger U.S. users into a limited, separate app that contains certain privacy protections.

What’s up next for FTC enforcement? IoT-connected and voice-activated electronic devices and toys have caught the agency’s attention in the last several years. The 2013 COPPA Rule amendment added several new types of data to the definition of personal information, including a photograph, video, or audio file that contains a child’s image or voice. Seeing these new technologies in the market, in 2017 the agency released an enforcement statement noting that it would not take an enforcement action against an operator for failing to obtain parental consent before collecting an audio file of a child’s voice when the file is collected solely as a replacement for written words, so long as it is held for a brief time and only for that purpose. Still, IoT toys remain an unsettled issue in children’s privacy. This past week, privacy advocates sent a complaint to the FTC requesting an investigation into the Amazon Echo Dot Kids product, a version of its Alexa home assistant. The complaint alleged that the device violated COPPA by collecting children’s voice recordings, associating them with data from browsing habits, and retaining them indefinitely. It also alleged that Amazon failed to give notice and obtain parental consent for information collected through third parties; Amazon recommended that parents review third-party service policies, but the complaint found that only about 15% of the third-party services targeted towards children had actually posted privacy policies. Amazon has denied the allegations and stated that it complies with COPPA.

We await the resolution of the Echo Dot inquiry and will continue to monitor developments in children’s privacy, particularly with respect to IoT toys and other new technologies that necessarily feature data collection capabilities that implicate COPPA.


Christina Kroll

Senate Bill 561’s smooth sail through the California legislature came to an end on Thursday, May 16. On the eve of the deadline for all fiscal committees to hear and report on the bills introduced in their house, the Senate Appropriations Committee decided to hold the bill, meaning SB 561 will not pass out of the Senate this session.

Notably, the controversial bill was a proposed CCPA amendment that threatened to expand the Act’s private right of action by allowing consumers to bring actions when any of their CCPA rights were violated.  Currently, the CCPA only permits consumers to bring actions in the event of a data breach.  For a detailed review of the CCPA please view our previous posts.

The bill would have also affected the way that the Attorney General could enforce the CCPA.  The CCPA provides businesses that have violated the Act 30 days to cure the violation, and allows businesses and third parties to request guidance from the Attorney General on how to comply with the Act.  The bill would have eliminated the 30-day window and would have put the onus on the Attorney General to promulgate any guidance.  A more detailed review of SB 561 is available in a previous post.

While this does not necessarily mean it’s the end of the road for SB 561 – after all, the CCPA was resurrected from an inactive file – it does mean good-bye for now.


Laura E. Goldsmith and Mathilde Pepin

In late March, the French Data Protection Authority, Commission Nationale de l’Informatique et des Libertés (“CNIL”) released a model regulation (the “Model Regulation”) governing the use of biometric access controls in the workplace.  Unlike many items of personal information, biometric data (such as a person’s face or fingerprints) is unique and, if stolen or otherwise compromised, cannot be changed to avoid misuse.  Under Article 9 of the GDPR, biometric data collected “for the purpose of uniquely identifying a natural person” is considered “sensitive” and warrants additional protections.  The GDPR authorizes Member States to implement such additional protections.  As such, the French Data Protection Act 78-17 of 6 January 1978, as amended, now provides that employers – whether public or private – wishing to use biometric access controls must comply with binding model regulations adopted by the CNIL, the first of which is the Model Regulation.

The Model Regulation, which the CNIL finalized and adopted following a public consultation, specifies robust requirements for the processing of biometric data for workplace access controls.  Such access controls include the use of a biometric authentication system to allow entry into the workplace (or sensitive workplace areas) or access to certain databases, equipment or computer networks.  Below are some of the key aspects of the Model Regulation:

  • Justify the use of biometrics: The Model Regulation requires employers to justify the use of biometrics based upon the specific context of the workplace (e.g., the presence of dangerous machinery, valuables, confidential materials, or products subject to strict regulation) and demonstrate why the use of other traditional authentication devices (e.g., badges or passwords) is not adequate from a security standpoint. Such justification must be expressly documented by the employer, including the rationale for selecting one biometric feature over another for authentication. The Model Regulation also outlines the various types of biometric access control systems – based on the method of data transmission and storage – and the accompanying data security risks of holding the biometric templates in a central database.  It states that only critical environments would warrant stronger protections involving central databases holding biometric template data.  Otherwise, the biometric data must be stored on a medium which would remain under the individual’s exclusive possession (e.g., badges or smart cards) without any durable copies retained by the employer or its service providers.
  • Maintain strong data security: The Model Regulation details many ways in which employers must maintain robust organizational and technological data security procedures. The enumerated security measures relate to the data, organization, hardware, software and computer channels, and the employer must audit, at least annually, the implementation of these measures. The Model Regulation also stipulates maximum retention periods for biometric data. For example, raw biometric data (such as a photo or audio recording) cannot be retained any longer than necessary to create a biometric template that can be analyzed by the system’s software. Moreover, any resulting biometric templates must be encrypted and eventually deleted once an employee no longer works at the organization (a minimal code sketch of this retention-and-encryption pattern follows this list). The Model Regulation also outlines the types of individual personal data that may reside on a biometric control device and the types of log data that may be collected.
  • Remember GDPR obligations: Beyond the Model Regulation, employers must still comply with applicable provisions of the GDPR with regard to any biometric access control system. Such compliance might include data breach notification obligations, recordkeeping requirements and compliance with the individual’s data protection rights. Specifically, the CNIL noted that the collection of biometric data for access control is likely to create a high risk for the rights and freedoms of the individuals. In light of that, a data protection impact assessment must be carried out by the employer/data controller prior to the implementation of any biometric access control and updated at least every three years.
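As an illustration of the retention and encryption pattern described in the data security bullet above (raw captures discarded as soon as a template has been derived, templates stored encrypted and deleted when employment ends), here is a minimal Python sketch using the cryptography library’s Fernet recipe. The data model, function names and key handling are hypothetical illustrations and are not drawn from the Model Regulation itself.

```python
from cryptography.fernet import Fernet  # pip install cryptography


def enroll_employee(raw_capture: bytes, extract_template, key: bytes) -> bytes:
    """Derive a biometric template from a raw capture, encrypt the template,
    and avoid keeping any durable copy of the raw capture."""
    template = extract_template(raw_capture)      # vendor-specific feature extraction
    encrypted_template = Fernet(key).encrypt(template)
    del raw_capture                               # drop the local reference; in practice, ensure
                                                  # no durable copy of the raw capture is retained
    return encrypted_template


def offboard_employee(template_store: dict, employee_id: str) -> None:
    """Delete the encrypted template once the employee leaves the organization."""
    template_store.pop(employee_id, None)


# Hypothetical usage with a stand-in feature extractor
key = Fernet.generate_key()
store = {"emp-001": enroll_employee(b"raw-fingerprint-bytes", lambda raw: b"template-bytes", key)}
offboard_employee(store, "emp-001")
```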

The above summarizes some of the principal aspects of the Model Regulation at a high level. Before instituting biometric access controls within the scope of the Model Regulation, employers should therefore closely read its text, along with the CNIL’s FAQ, which provides additional practical commentary, for the specific requirements.

We note that the protection of biometric data also garners attention in the U.S., where several states have enacted biometric privacy statutes, most notably Illinois, whose statute contains a private right of action and has produced a wave of biometric privacy suits, including those against employers for using biometric timekeeping devices without adequate notice and consent.  Back in the EU, the Model Regulation for biometric access controls in the workplace may conceivably serve as a model for other Member States to follow, and we will continue to follow such potential developments and further actions by the CNIL.


Laura E. Goldsmith

Unwanted robocalls reportedly totaled 26.3 billion calls in 2018, sparking a growing number of consumer complaints to the FCC and FTC and increased legislative and regulatory activity to combat the practice. Some automated calls are beneficial, such as school closing announcements, bank fraud warnings, and medical notifications, and some caller ID spoofing is justified, such as for certain law enforcement or investigatory purposes and domestic violence shelter use. However, consumers have been inundated with spam calls – often with spoofed local area codes – that display fictitious caller ID information or circumvent caller ID technology in an effort to increase the likelihood that consumers will answer or to otherwise defraud them. To combat the rash of unwanted calls, Congress and federal regulators have advanced several measures in 2019, and states have tightened their own telecommunications privacy laws in the past year. For example, within the last week, the Arkansas governor signed into law S.B. 514, which boosts criminal penalties for illegal call spoofing and creates an oversight process for telecommunications providers.

Even before this wave of regulatory and legislative attention, there were already a number of federal laws that sought to restrict certain unwanted calls, such as the Telephone Consumer Protection Act (TCPA), Truth in Caller ID Act, and various regulations surrounding the Do Not Call Registry and the Telemarketing Sales Rule, which, depending on the statute, are enforced by the FCC or FTC.  In 2017, the FCC adopted rules to allow phone companies to proactively block illegal robocalls and, in February 2019, issued a proposed rulemaking to further combat illegal spoofed texts and international calls and a report on the regulatory and industry progress in combatting illegal robocalls. On the legislative front, within the last week, the Senate Commerce Committee favorably reported a bipartisan bill, S.151 (the TRACED Act), which would, among other things, increase the statute of limitations for FCC enforcement actions against illegal robocallers, require the FCC to promulgate rules regarding the blocking of unauthenticated calls and compel providers to take further actions to stem unwanted robocalls, including implementing the SHAKEN/STIR authentication framework, a technology that uses certificates to validate the source of each call.
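At a technical level, SHAKEN/STIR has the originating carrier sign a compact token (a “PASSporT”, carried as a JWT) asserting the calling and called numbers and an attestation level, using a key backed by a certificate that downstream carriers can fetch and verify. The Python sketch below, using the PyJWT and cryptography libraries, shows the general shape of creating and verifying such a token; the claim values, attestation level, and certificate URL are illustrative assumptions rather than details taken from any carrier deployment.

```python
import time
import uuid

import jwt  # pip install "pyjwt[crypto]"
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# A P-256 key pair standing in for the carrier's certificate-backed signing key.
private_key = ec.generate_private_key(ec.SECP256R1())
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)

# Illustrative PASSporT-style claims: attestation level, originating and
# destination telephone numbers, issue time, and an opaque origination ID.
claims = {
    "attest": "A",
    "orig": {"tn": "12025550100"},
    "dest": {"tn": ["12025550199"]},
    "iat": int(time.time()),
    "origid": str(uuid.uuid4()),
}
headers = {
    "typ": "passport",
    "ppt": "shaken",
    "x5u": "https://certs.example.net/carrier-cert.pem",  # hypothetical certificate URL
}

token = jwt.encode(claims, private_pem, algorithm="ES256", headers=headers)

# A downstream carrier would fetch the certificate referenced by x5u and verify
# the signature before deciding how to treat or label the call.
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
)
verified = jwt.decode(token, public_pem, algorithms=["ES256"])
print(verified["attest"], verified["orig"]["tn"])
```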

On the state level, regulators and legislatures have also taken action, including the recently-passed Arkansas bill that increases protections under existing consumer protection statutes and bolsters criminal penalties for illegal call spoofing. The Arkansas law principally makes it unlawful for a person to “cause a caller identification service to transmit misleading or inaccurate caller identification information if the purpose is to defraud, cause harm, or wrongfully obtain anything of value,” or to use a third party to display or cause to be displayed spoofed caller ID information for any purpose, absent certain law enforcement and public safety exceptions. Moreover, telecommunication providers are required to report yearly to state regulators about how they are implementing current technologies – such as SHAKEN/STIR – to block illegal robocalls. Arkansas’s legislative action on call spoofing follows other states’ enactments in the past year that have strengthened existing laws. For example, in 2018, South Carolina enacted the “Telephone Privacy Protection Act,” which, among other things, prohibited call or text spoofing; Louisiana passed S.B. 204, which enhanced remedies against caller ID spoofing; and Maryland passed the “Caller ID Spoofing Ban of 2018.” Other states, including Illinois, Massachusetts and Ohio, are currently debating bills that prohibit certain robocalling practices.

It remains to be seen whether the federal and state regulation and the adoption of authentication technology will curtail spam calls and texts.  Entities transmitting automated calls or using caller ID spoofing for non-solicitation purposes should reexamine and monitor the changing legal landscape in this area.


Erika C. Collins, Daniel Ornstein, Lloyd B. Chinn, Pinchos Goldberg and Vanessa P. Avello

Per our previous post, the European Parliament and the Member States agreed to adopt new rules that would set the standard for protecting whistleblowers across the EU from dismissal, demotion, and other forms of retaliation when they report breaches of various areas of EU law. According to a press release issued by the European Parliament on April 16, 2019, the Parliament approved these changes by an overwhelming majority. The new rules require that employers create safe reporting channels within their organization, protect whistleblowers who bypass internal reporting channels and directly alert outside authorities, including the media under certain circumstances, and require that national authorities provide independent information regarding whistleblowing. This legislation marks a significant departure from the jurisdiction-specific approach that has resulted in disparate protection across Europe, with some jurisdictions, like Germany and France, offering relatively limited protection when compared to other jurisdictions, such as the UK. These changes, if approved by the EU ministers, will set a uniform baseline and therefore considerably increase whistleblower protections in the EU. Member States will have two years to achieve compliance. We will continue to monitor this development.



Tiffany Quach and Alexa Meera Singh

On January 30, 2019, the Office of the New York Attorney General (“NY AG”) and the Office of the Florida Attorney General (“Florida AG”) announced settlements with Devumi LLC and its offshoot companies (“Devumi”), which sold fake social media engagement, such as followers, likes and views, on various social media platforms. According to the NY AG, such social media engagement is fake in that “it purports to reflect the activity and authentic favor of actual people on the platform, when in fact the activity was not generated by actual people and/or does not reflect genuine interest.”

These settlements are the first in the United States to find that selling fake social media engagement constitutes illegal deception and that using stolen social media identities to engage in online activity is illegal. The NY AG emphasized that the New York settlement sends a “clear message that anyone profiting off of deception and impersonation is breaking the law and will be held accountable.”

Devumi grossed $15 million from 2015 to 2017 by selling fake social media engagement through bots and “sock puppet” accounts. A sock puppet account is an account through which one person pretends to be someone else; in practice, a single user controls many such accounts. Some of the fake accounts used real social media profiles without the profile owners’ consent or knowledge.

The company also sold endorsements from social media “influencers” without disclosing that the endorsements were paid for. An influencer is a social media user with a substantial following, who monetizes their online popularity by endorsing goods, services and events to their followers.

According to the settlements, these business practices deceived customers by affecting their perception of opinions, ideas and goods and thus affecting their decision-making concerning what goods to buy and which opinions and ideas had garnered public support. Devumi further deceived its own customers, some of whom were unaware that they were purchasing inauthentic endorsement and fake social media activity.

Under the New York settlement:

  • The NY AG found that Devumi’s conduct violated:
    • New York Executive Law §63(12) and New York General Business Law §§ 349 and 350, which prohibit misrepresentation, deceptive acts or practices and false advertising, and
    • New York Penal Law §190.25, which prohibits impersonating a real person and, with the intent to obtain a benefit or defraud another, doing an act or communicating by internet website or electronic means;
  • Devumi did not admit or deny any wrongdoing and agreed to pay a $50,000 fine to cover the cost of the inquiry;
  • Devumi is prohibited from:
    • engaging in the advertising, promotion, offering for sale or sale of social media engagement from non-existent people that purports to be from existing people;
    • misrepresenting, expressly or by implication, that social media engagement from non-existent people that purports to be from existing people is authentic social media engagement;
    • misrepresenting, expressly or by implication, that an endorser of a product or service is an independent user or ordinary consumer of the product or service; or
    • failing to disclose any material connection between an endorser and (1) Devumi, (2) any other individual or entity affiliated with the product or service, or (3) the product or service;
  • Devumi must take reasonable steps to ensure compliance with state law if and when engaging with endorsers and endorsements. To sufficiently comply, Devumi must:
    • provide endorsers with a clear written statement of his or her responsibilities to disclose a material connection with Devumi in any online video, social media posting, or other communication, and Devumi must obtain a signed and dated statement from the endorser, acknowledging receipt of that statement and expressly agreeing to comply with it;
    • establish a system, which at a minimum must include reviewing online videos and social media postings, to monitor and review the representations and disclosures of such endorsers;
    • conduct an initial review of all endorsements before compensating any endorser for an endorsement campaign; and
    • terminate and cease payment to any endorser who does not clearly and conspicuously disclose his or her material relationship with Devumi, or misrepresents him or herself in any manner; and
  • Upon the NY AG’s request, Devumi must produce reports showing the results of its monitoring efforts.

Under the Florida settlement:

  • The Florida AG has investigated Devumi’s practices pursuant to the Florida Deceptive and Unfair Trade Practices Act (“FDUTPA”);
  • Devumi shall comply with FDUTPA;
  • Devumi must pay $50,000 in investigative fees; and
  • Devumi is prohibited from:
    • purchasing, guiding, controlling, or managing any social media accounts generated using any natural person’s personal information, regardless of whether such social media accounts originated within Devumi, or via third-party providers;
    • utilizing any natural person’s personal information without that natural person’s express, written consent to generate social media account activity intended to increase consumers’ popularity or social media presence;
    • scraping any natural person’s personal information without express, written consent, regardless of whether that information is available to the public;
    • advertising, selling, or offering to sell products or services from natural persons when the social media account activity intended to increase consumers’ popularity or social media presence is from bots;
    • making any misrepresentation regarding or fraudulently endorsing any individual, product or service;
    • providing paid endorsements without clearly and conspicuously disclosing that the recipient compensated Devumi for such endorsements;
    • making any representation in the course of providing paid endorsements which would lead a consumer acting reasonably to believe that the social media accounts making the endorsements are owned or operated by natural persons when in fact such accounts are bots; and
    • misrepresenting that Devumi’s products or services are approved by any social media platform or are risk-free.

Notably, neither settlement addresses the legality of customers purchasing fake social media engagement.


Erika C. Collins, Daniel Ornstein, Lloyd B. Chinn and Vanessa P. Avello

According to a press release issued by the European Commission today, the European Parliament and the Member States have agreed to adopt new rules that set the standard for protecting individuals who blow the whistle on breaches of EU law from dismissal, demotion, and other forms of retaliation. This reform, which was first proposed by the European Commission in April 2018, seeks to replace the patchwork of whistleblower protections that currently exist across the Member States with a uniform approach. If formally adopted by the Parliament and Council, the new rules would protect those who report violations of various areas of EU law, including data protection, and Member States could extend protection to other areas of the law as well. Employers would have an obligation to create safe reporting channels within the organization, and whistleblowers, while encouraged to report internally first, also would be protected when reporting to public authorities. Additionally, whistleblowers could safely report violations directly to the media if no action was taken, if a report to the authorities would be futile, or when the violation is an “imminent” or “manifest” danger to the public interest. Lastly, the new rules would require that national authorities inform citizens and train public authorities on various aspects of whistleblowing. We will continue to monitor this development.


Kelly McMullon

With less than a month to go until the UK is due to leave the EU (at 11pm GMT, midnight CET, on 29 March 2019), there is still much uncertainty as to whether, and if so how, the UK will exit the EU (commonly dubbed “Brexit”). In light of this uncertainty, we outline what will happen, and what should be considered, depending on how things play out, especially given the important votes due to take place in the UK Parliament this week.

What happens if there is a deal?

Currently, as the UK is part of the EU and so has implemented the General Data Protection Regulation (the “GDPR”), there are unrestricted personal data flows between the UK and the rest of the EU.

If the UK and EU are able to agree a deal as to how Brexit will be implemented (officially the “Agreement on the Withdrawal of the United Kingdom from the European Union”, or “withdrawal agreement”), the EU and UK will enter into a transition period (to 31 December 2020, or possibly later) during which they will seek to agree a new long-term trade deal.

During this transition period the UK must abide by all EU rules. With respect to data protection considerations that means that personal data can continue to flow freely during this transition period. The EU will use this time to assess whether the UK’s data protection practices are essentially equivalent to the EU’s and “endeavour to adopt” an adequacy decision to seek to ensure the continued free flow of personal data after the transition period.

The EU has recognized a limited number of countries as providing “adequate” protection for individuals’ personal data, such that personal data can be transferred freely from the EU to these non-EU jurisdictions. The list currently includes Israel, transfers made under the Privacy Shield framework in the USA, Switzerland, and most recently Japan.

The UK will have to be assessed like any other country that wishes to receive an “adequacy decision”. The UK has a head start given that it has implemented the GDPR, but the result of the adequacy assessment is not a foregone conclusion. The EU will look at all aspects of UK data privacy protection including the rule of law and the access public authorities have to personal data. On the latter, for instance, the European Court of Justice has been concerned about the access the UK’s security services can have to personal data. The UK Government has sought to resolve this concern.

Meanwhile, the UK will incorporate the GDPR into UK law with references to EU bodies/legislation instead referring to the appropriate UK bodies and incorporated legislation.

What happens if there is no deal?

In a “no deal” Brexit, there will be no transitional arrangements in place. Though the UK will still incorporate the GDPR into UK law, the UK will be seen as a third country by the EU. Businesses should therefore consider the following areas that could require action:

Privacy Documentation: Consider, and if necessary update, privacy-related documentation and agreements, including references to the EU, UK, and the European Economic Area (“EEA”), references to relevant privacy legislation, and associated terminology. The EEA is the EU Member States plus Iceland, Liechtenstein and Norway.

International Transfers: Consider and map any personal data flows between the UK and EU/EEA. The UK Government has already made clear that with respect to transfers from the UK to the EU/EEA, the UK will view the EU/EEA as adequate. The UK will also view as adequate the laws of any other country that has already received an adequacy decision, though exporters from the countries considered adequate will need to comply with any local law.

With respect to personal data transfers from the EU/EEA to the UK, the EU/EEA will consider the UK as a third country. The EU/EEA will not consider the UK “adequate”, and so those transfers will need to occur on another lawful basis, such as binding corporate rules (“BCRs”) or standard contractual clauses. The EU/EEA will, in time, assess the UK’s adequacy, though there would be no agreed timeframe and the process can take a number of years.

Therefore, in this situation, there will be a need for businesses to ensure that there are appropriate safeguards in place (or that there is an exception that can be relied upon) to lawfully transfer personal data from the EU/EEA to the UK.

If that safeguard is binding corporate rules, consider where the current lead authority is. If the current lead authority is the UK, then that will need to change to an EU lead authority. BCRs will also need to be updated to ensure that the UK is considered a third country.

The UK and the EU would both need to approve any future BCRs.

One-Stop Shop: If the UK was previously the lead authority, then the business will need to consider if any of its other operations in the EU could be the lead authority instead.

EU Representative: If a business is based in the UK and does not otherwise have operations in the EU but targets data subjects in the EU or monitors the behaviour of data subjects in the EU, the business should consider whether it needs to appoint an EU representative. The UK will also replicate this provision, such that if a business is based outside of the UK but targets data subjects in the UK or monitors the behaviour of data subjects in the UK, then a UK representative should be appointed.

Breach Reporting: If a breach occurs in both the UK and in the EU, then the business will need to report the breach to both the ICO and the relevant data protection authority/ies in the EU. This could lead to fines being imposed by both the ICO and the EU data protection authority/ies.

I’ve heard that Brexit could be delayed or even halted, what happens then?

At the time of writing, it is not clear whether Brexit will be delayed or halted, but what is clear is that, unless and until the UK exits the EU, the status quo will continue. Whilst the UK is a member of the EU and EEA, personal data can continue to flow freely.

If you have any questions as to what your business needs to do next, then please contact Kelly McMullon or another member of Proskauer’s Privacy & Cybersecurity Practice Group.


Kristen J. Mathews, Stephanie Kapinos and Kevin Milewski

California already has some of the strongest data privacy laws in the United States, but within the past week state legislators, with the backing of the California Attorney General Xavier Becerra, have proposed two new bills that would strengthen California’s data privacy laws even more. One bill (SB 561) would amend key sections of the California Consumer Privacy Act (the “CCPA”), which we have previously blogged about when it was first enacted and when it was subsequently amended, and the other bill (AB 1130) would expand the definition of “personal information” under California’s data breach notification law to include biometric information and government-issued ID numbers (e.g., passport numbers).

California Consumer Privacy Act Amendment

SB 561 (the “CCPA Bill”) would modify some key elements of the CCPA, which was first passed on June 28, 2018 and is slated to become operative on January 1, 2020. The CCPA Bill expands the private right of action under the CCPA and limits two protective measures for companies previously built into the law.

Private Right of Action

The CCPA Bill allows for an expanded private right of action by California citizens under the Act. As the law is currently written, only the California Attorney General can sue for most violations (note: there is a private right of action under Section 1798.150 limited to consumers whose personal information “is subject to an unauthorized access and exfiltration, theft, or disclosure as a result of the business’ violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information”). Under the existing CCPA, a consumer may only bring a private lawsuit if they first provide the business with 30 days’ written notice identifying the specific provisions of the CCPA that have been violated. If the business cures the breach, the private lawsuit may not be initiated. However, the CCPA Bill would remove this 30-day cure period, as detailed further below.

The CCPA Bill expands Sections 1798.150(a)(1) and 1798.150(c) to allow a private right of action under the CCPA for “any consumer whose rights under this title are violated”, not just for violations involving unauthorized access, theft, or disclosure of information. The Attorney General’s goal in this regard is to provide more recourse to consumers when the CCPA is violated.

Attorney General Opinions

The CCPA Bill revises the option under Section 1798.155(a) for a business or third party to seek the opinion of the Attorney General for guidance on how to comply with the CCPA. The amendment would strike this option and instead require the Attorney General to publish general public guidance about the law.

30-Day Cure Period

The CCPA Bill also deletes the 30-day cure period currently provided for under the law. Section 1798.155(b) allows businesses in violation of the CCPA 30 days after being notified of alleged noncompliance to cure the alleged violations before a civil action can be commenced. The CCPA Bill would allow for enforcement under the CCPA immediately, without prior notice.

Data Breach Notification Law Amendment

In addition to changes under the CCPA, on February 21, 2019, AB 1130 (the “Notification Bill”) was introduced; it would expand California’s definition of “personal information” under its breach notification law to include biometric information and government-issued identification numbers (presumably to include such information as passport numbers, as the law already states that a driver’s license or state ID card number falls under the definition of personal information when combined with an individual’s name). The bill is seemingly a direct response to recent breaches that potentially compromised the passport numbers of millions of California residents.

Under California’s data breach notification law, notification obligations are only triggered for breaches involving “personal information,” which is currently defined as a first name or initial and last name in conjunction with a social security number, driver’s license number, California identification card number, account number or financial card number in combination with a password, medical information, health insurance information, or information collected through an automated license plate recognition system.

The Notification Bill proposes to expand the definition of personal information by adding “other government-issued identification numbers[s]” and “[u]nique biometric data generated from measurements or technical analysis of human body characteristics, such as a fingerprint, retina, or iris image, or other unique physical representation or digital representation of biometric data.”
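To make the notification trigger concrete, the following hypothetical Python sketch encodes the “name plus a listed data element” logic as it would read if the Notification Bill passes; the element labels are a simplification for illustration only, not the statutory text or legal advice.

```python
# Data elements that, combined with a first name or initial and last name,
# would constitute "personal information" under the amended definition
# (simplified, illustrative labels).
TRIGGERING_ELEMENTS = {
    "social_security_number",
    "drivers_license_number",
    "california_id_card_number",
    "other_government_issued_id_number",   # added by AB 1130 (e.g., passport number)
    "financial_account_number_with_credential",
    "medical_information",
    "health_insurance_information",
    "license_plate_recognition_data",
    "unique_biometric_data",               # added by AB 1130
}


def notification_may_be_triggered(breached_fields: set[str]) -> bool:
    """Rough check: was a name exposed together with at least one listed element?"""
    name_exposed = "first_name_or_initial_and_last_name" in breached_fields
    return name_exposed and bool(breached_fields & TRIGGERING_ELEMENTS)


# Example: a breach exposing names together with passport numbers would now qualify.
print(notification_may_be_triggered({
    "first_name_or_initial_and_last_name",
    "other_government_issued_id_number",
}))  # True
```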

While some other states have expanded the scope of their own breach notification laws in recent years, the Notification Bill is significant because California has long served as a guidepost for other states drafting or amending their own data breach notification laws. Many other states already include government-issued identification numbers and biometric data in their definitions of personal information, but California’s amendment could inspire additional states to expand their laws.

With both proposed bills, it is clear that data privacy remains high on the agenda of California legislators and the Attorney General. We will continue to monitor updates on California privacy laws, particularly as the CCPA effective date gets closer.
