By Andrea BATALLA

Article 35.1 of the General Data Protection Regulation (“GDPR”) provides that organisations processing personal data have to carry out privacy impact assessments where processing activities are likely to pose a high risk to the rights and freedoms of individuals. In particular, privacy impact assessments are aimed at identifying the activities that carry such a risk and at establishing the most appropriate control measures to minimise that risk before processing activities begin.

Until now, the criteria for determining whether this risk existed were not clear. Almost a year after the GDPR became applicable, the Spanish Data Protection Agency (“AEPD”) published, on 6 May 2019, an indicative list of processing activities that the AEPD considers likely to generate a high risk to the rights and freedoms of the persons whose data are processed. In particular, the more criteria on the list are met by a specific processing activity, the greater the risk involved and the greater the certainty that a privacy impact assessment is needed. Thus, the AEPD has determined that a privacy impact assessment will be necessary where a processing activity meets two or more of the criteria on the list.

The list received a favourable opinion from the European Data Protection Board and is published on the AEPD’s website.

The first GDPR fine in Italy was issued by the Garante for the failure to implement adequate security measures following a data breach on the so-called Rousseau platform, which operates the websites of the Movimento 5 Stelle party.

The facts of the case relating to the Rousseau platform

Several websites affiliated with the Italian political party Movimento 5 Stelle are run, through a data processor, on the platform named Rousseau. The platform suffered a data breach during the summer of 2017, which led the Italian data protection authority, the Garante, to require the implementation of a number of security measures, in addition to an obligation to update the privacy information notice to give additional transparency to the data processing activities performed.

The lack of privacy-related security measures challenged

While the privacy information notice was updated in a timely manner, the Italian data protection authority raised concerns about the lack of implementation on the Rousseau platform of some of the following GDPR-related security measures:

  1. a vulnerability assessment, to be repeated periodically, which was performed on the platform but, according to the Garante, left open issues around the use of old software that was no longer updated by the supplier, making the implementation of patches complicated and time-consuming;
  2. a system aimed at strengthening the passwords used for the creation of accounts and at reducing the risk of brute-force attacks, which was adopted on the platform;
  3. secure protocols and digital certificates to protect data in transit and reduce the risk of users being lured to fake sites, which were put in place on the Rousseau platform;
  4. solutions aimed at increasing the security of password storage, which had relied on weak cryptographic algorithms; this is no longer an issue for the majority of users (a minimal illustration of stronger password storage follows this list);
  5. auditing measures requiring records of the accesses and operations performed (the so-called logs) to be kept on the database of the Rousseau system, to guarantee the integrity of the data and at least the ex-post control of the activities carried out on the system, which remained an unresolved issue.
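The decision does not disclose which algorithms were actually in use on the platform, but the contrast drawn in point 4 above is, in substance, between fast, unsalted hashes and salted, deliberately slow password storage. The following is a minimal Python sketch of that kind of upgrade; the function names and scrypt parameters are illustrative assumptions, not details taken from the Garante’s decision.

```python
import hashlib
import hmac
import os

# Illustrative only: contrasts an unsalted, fast hash (easy to brute-force)
# with a salted, memory-hard key-derivation function for password storage.

def weak_hash(password: str) -> str:
    # Unsalted MD5: identical passwords produce identical hashes, and
    # precomputed (rainbow) tables or GPU brute force recover them quickly.
    return hashlib.md5(password.encode()).hexdigest()

def store_password(password: str) -> tuple[bytes, bytes]:
    # Salted scrypt: a unique salt per user and a deliberately expensive
    # derivation make offline guessing far more costly.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)
```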

In particular, the lack of measures relating to the storage of log files recording the activities performed by the IT support personnel of the platform was the most significant matter in dispute. Access by the platform’s IT support personnel could be tracked for only some pages, and no record was kept of the operations they performed. A sketch of the kind of operation logging at issue follows.
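As an illustration of the kind of logging the Garante found missing, here is a minimal, hypothetical Python sketch of an append-only audit log that records which operator performed which operation on which record; the field names and file layout are assumptions for illustration, not requirements stated in the decision.

```python
import getpass
import hashlib
import json
import time

# Hypothetical append-only audit log: every access or change by support
# personnel is written as one JSON line, so activities can be reviewed
# ex post. Chaining each entry to the hash of the previous one makes
# after-the-fact tampering easier to detect.

LOG_FILE = "admin_audit.log"

def append_audit_entry(operator: str, operation: str, target: str,
                       prev_hash: str = "0" * 64) -> str:
    entry = {
        "timestamp": time.time(),
        "operator": operator,    # individual account, not a shared one
        "operation": operation,  # e.g. "read", "update", "export"
        "target": target,        # record or page acted upon
        "prev_hash": prev_hash,  # links this entry to the previous one
    }
    serialized = json.dumps(entry, sort_keys=True)
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(serialized + "\n")
    return hashlib.sha256(serialized.encode()).hexdigest()

if __name__ == "__main__":
    # Example: record one support operation under the current OS user's name,
    # then chain a second entry to it.
    h1 = append_audit_entry(getpass.getuser(), "read", "member/1234")
    h2 = append_audit_entry(getpass.getuser(), "update", "member/1234", prev_hash=h1)
```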

Additionally, the Garante challenged the fact that system administrators were using shared accounts with broad privileges to operate the platform. This was an issue, particularly in light of the possibility for those administrators to access special categories of personal data, such as data revealing political opinions.

Finally, the security measures aimed at anonymizing the activities performed through the e-voting system were also considered inadequate.

The first GDPR fine issued in Italy

Based on the findings indicated above, the Garante held that there was a breach of Article 32 of the GDPR due to the lack of appropriate technical and organizational measures on the Rousseau platform, and it issued a €50,000 fine.

Interestingly, the fine was issued not against Movimento 5 Stelle, which is the data controller for the platform, but against the Rousseau association, which is the data processor. For the first time, the data protection authority did not consider the data controller liable for everything done by the data processor, and recognized that the data processor can be liable without any liability attaching to the data controller.

Also, the decision is interesting as it gives a more precise understanding of the security measures that privacy authorities expect to be in place for a platform processing large amounts of personal data. Indeed, while the Italian privacy code was quite specific about the required minimum security measures, the GDPR requires an assessment of the adequacy of the security measures adopted by the data controller in the light of the accountability principle.

Finally, it is worth mentioning that the proceeding was initiated before May 2018, but the Italian data protection authority issued a fine under the GDPR because the Rousseau platform had not adopted the security measures required by an order issued after 25 May 2018. As a consequence, pending proceedings may also lead to follow-up fines under the EU General Data Protection Regulation.

For more information, you can contact Giulio Coraggio at giulio.coraggio@dlapiper.com

The United Arab Emirates (UAE) federal government has issued Federal Law No. 2 of 2019 on the Use of Information and Communication Technology (ICT) in Health Fields (“ICT Health Law”).

The objectives of this law are to:

  • ensure the optimal use of ICT in health fields;
  • ensure the safety and security of health data and information.

It is to be supplemented by implementing regulations, which are yet to be published.

The following are some key features of the ICT Health Law.

Broad application

The law uses two broad definitions which are central to its application. These are:

  • Competent Entity – “Any entity in the State providing medical services, health insurance or national health insurance services, brokerage services, claims management services or electronic services in the medical field of any entity related, whether directly or indirectly, to the implementation of the provisions hereof”; and
  • Health Information – “The health information that were processed and were given a visual, audible or readable indication, and that may be attributed to the health sector, whether related to the health or insurance facilities or entities or to the health services beneficiaries”.

The ICT Health Law therefore appears to have a very broad application within the UAE. As it is a federal law, and explicitly applies to free zones, where there is an inconsistency between the ICT Health Law and an emirate-level or free zone law, the ICT Health Law will apply to the extent of the inconsistency. This raises questions about the treatment of data coming from, for example, the Dubai Health Care City (“DHCC”), as well as from other free zones with personal data laws, such as the Dubai International Financial Centre (“DIFC”) and Abu Dhabi Global Market, particularly around the transfer of data out of those free zones to other countries, as discussed in the following section.

Data Localisation

The ICT Health Law imposes a general prohibition on the transfer, storage, generation or processing of Health Information and data related to the provision of health services in the UAE to countries outside the UAE.  As noted above, Health Information is given a broad meaning, covering all data that “may be attributed to the health sector”.

This should be contrasted with the DHCC’s Health Data Protection Regulation, which does allow transfers of patient health information to countries determined to have an adequate level of protection, or where the transfer is authorised by the patient or otherwise necessary for the ongoing provision of healthcare services to the patient. None of these exceptions is expressly catered for in the ICT Health Law.

The ICT Health Law does, however, provide for an exception where the relevant health authority and the Ministry of Health and Prevention have agreed on certain cases where the transfer, storage, generation or processing of the health information outside the UAE may be allowed.  It is not yet clear whether such approvals will be made on a case-by-case basis, or whether this will be in the form of a list of permitted types of transfers, or whether the same exceptions will apply across each health authority in each emirate.

Failure to comply with the requirements of the ICT Health Law in respect of data localisation is punishable by a fine of between AED 500,000 and AED 700,000 (approximately USD 136,000 to USD 190,500).

Given that many healthcare service providers and health sector participants will already be using cloud-based systems, it is not clear to us whether the Ministry or relevant health authorities plan to enforce this requirement immediately, or whether there will be a grace period.

Confidentiality and restrictions on use of health data

In addition to existing patient confidentiality rules in the UAE, the ICT Health Law also imposes an obligation on those “circulating” information about a patient to keep it confidential and to only use it for health purposes, unless the written consent of the patient has been obtained for such use or disclosure. However, an exception to this is set out in the law where the patient information is to be used for scientific and clinical research purposes, provided that the identity of patients is not disclosed and that the ethics and rules of scientific research are respected.

Minimum Retention Period

The ICT Health Law also sets out retention requirements in relation to health information. Article 20 of the law specifies that health information should be kept for a period commensurate with the purpose to which it relates, but that this period should be no less than 25 years from the date on which the last health procedure was carried out on the person to whom the information relates. This is longer than the retention periods in other medical laws in the UAE, and longer than the current civil and criminal limitation periods in the UAE.

Data Exchange

Finally, the ICT Health Law also seeks to create a centralized health data exchange which will be coordinated by the Ministry of Health and Prevention. Certain actors in the healthcare sector will be required to participate in the exchange and may be required to share certain of the Health Information they obtain from patients with the exchange.

It is not yet clear what data will be required to be shared, nor what is the framework which will apply for doing so.  It is expected that further information in respect of the centralized data exchange will be set out in the implementing regulations.

Key Takeaways

The ICT Health Law is an important law that may have a profound impact on how healthcare is currently provided in the UAE, as well as on how innovation in the sector, such as tele-health, might be fostered. Its implementation will be of particular importance to those in the health sector that are using or providing cloud and IT services based outside the UAE.

For further information, please contact Eamon Holley (Partner, Dubai) or Ben Nolan (Associate, Dubai).

by Patrick Van Eecke & Gilles Hachez

A little less than a month ago, the Belgian House of Representatives appointed the new commissioner and directors of the Belgian Data Protection Authority (DPA), as we explained in our blogpost here.

Today, a little less than eleven months after the establishment of the DPA, the new commissioner, Mr David Stevens, and the new directors are finally taking the oath. While Mr Stevens expects that it will still take some time to get the Belgian DPA running at full speed (the 60 people working for the DPA still need to be divided among the five different entities making up the Belgian DPA), his message is crystal clear: the era of “sitting back and relaxing” is over. The revamped DPA will now take a more active stance, and not just “keep the machine running”.

To illustrate the need for a new approach, Mr Stevens pointed to the comparatively low number of data breach notifications in Belgium (442 as of January 2019, compared with 21,000 in the Netherlands in 2018). From now on, he added, the Belgian DPA wants to be a modern supervisor, one that will reach out and make clear to sectors and companies what is expected of them, while at the same time expecting companies to meet it halfway.

According to Mr Stevens, the lack of an Executive Committee certainly led some companies to delay their compliance with the new European data protection rules. While recalling that the Belgian DPA has the power to issue fines of up to 20 million euros or 4% of total worldwide annual turnover (whichever is higher), the new commissioner signalled that, if necessary, the Belgian DPA would not hesitate to fine those not playing by the rules. It seems the new appointments have finally given the Belgian DPA the teeth it needed, and judging from Mr Stevens’ comments, it is prepared to use them.

For further information, please contact patrick.vaneecke@dlapiper.com or gilles.hachez@dlapiper.com 

The European Data Protection Board has released for consultation a new set of guidelines on the topic of the processing of personal data in the context of online services (“Guidelines“). In particular, the Guidelines focus on the circumstances in which it is appropriate to use performance of a contract (Art. 6(1)(b)) as the lawful basis for processing personal data, and those in which other bases, such as consent (Art. 6(1)(a)) or legitimate interests (Art. 6(1)(f)) are preferable.  The Guidelines attempt to correct perceived ‘bad practice’ resulting from an overly broad application of the performance of a contract basis.  They can also be seen as building on previous advice which the EDPB and its forerunner, the Article 29 Working Party, have given on the issue of lawful bases for processing, in particular Opinion 06/2014.  Lawful basis has also come up recently, in relation to consent, in the Planet49 case, and these new Guidelines also address consent (at least tangentially) in an online context.

Key Takeaways

Some of the key points from the guidelines are:

  • Necessity. The concept of what is necessary for the performance of a contract is not simply equivalent to what is written into a contract. This is a well-established point, but one which is worth emphasising. A website operator cannot artificially expand the scope of Art. 6(1)(b) through the way in which it drafts the online Terms and Conditions, for example by listing processing activities which are not strictly necessary in order to deliver the services requested by the customer.

  • Contracted services versus wider business model. To help understand the distinction between what is, and what is not, necessary to perform a contract for the purposes of Art. 6(1)(b), the guidelines draw a helpful distinction between what is necessary to deliver the contracted services to the individual, on the one hand, and what is necessary for the controller’s wider business model, on the other hand. Two examples of activities which are likely to fall into the latter category are (i) service improvement / development of the website; and (ii) advertising of related services (including online behavioural advertising facilitated by cookie technologies) on the website.

  • Purpose limitation. The principle of purpose limitation (i.e. only collecting personal data for certain specified purposes) dictates that, where a controller wishes to rely on Art. 6(1)(b), the contract must clearly and specifically state the relevant purposes for processing. Vague expressions (‘delivering the services’ or ‘administering the contract’) are to be avoided. This notably aligns very closely with the EDPB’s previous guidance in relation to transparency, which cautioned against vague expressions and similarly imprecise language in the context of privacy notices generally.

  • Entering into a contract ≠ consent. The EDPB is concerned about a risk of misunderstanding, by either the data subject or the controller, that a data subject’s acceptance of, or agreement to, a set of T&Cs will constitute consent for the purposes of data protection (i.e. Art. 6(1)(a)). Website sign-up forms which are woolly or misleading on this point are common. If express agreement to T&Cs is desirable for legal evidential reasons, then this should be dealt with separately from an acknowledgement of the privacy notice / privacy policy (the contents of which should be clear about the actual lawful basis for processing).

Context Specific Examples

The guidelines conclude by focusing on some common processing scenarios in an online context, and analysing whether (or not) Art. 6(1)(b) would be applicable.

As indicated above, the guidelines are clear that ‘service improvement’ type activities (e.g. collecting metrics on how data subjects use and engage with the website in order to improve the website) are not necessary for the performance of a contract – this is fairly uncontroversial, and most controllers would properly opt for legitimate interests or potentially consent. Also uncontroversial (as it was dealt with in Opinion 06/2014), although perhaps more surprising to some controllers, is the guidance that fraud prevention processes will not normally benefit from Art. 6(1)(b).  In an online context, these might range from customer address verification checks, to sophisticated behavioural analysis tools combating e-commerce payment fraud. The EDPB’s view is that, whilst they may be essential for the safe operation of the wider business model, they go beyond what is objectively necessary for the performance of a contract with a data subject.

Personalised Services

The most interesting example chosen is in relation to the personalisation of content. A huge array of websites offer services which are at least partly personalised to an identified user, with the personalisation based on their past activities on the website, their purchase history (so-called ‘recommendation engines’) or their expressly stated preferences. In the EDPB’s view, where the personalisation is an “intrinsic aspect of an online service”, the Art. 6(1)(b) basis will be available to support the underlying data processing. However, the EDPB carefully distinguishes between services where the personalisation is the core component of the service (for example, a news aggregation service which pulls in content from different sources based on your stated preferences, think Apple News or Feedly) and those where the personalisation is ancillary. For example, the core service offered by a typical e-commerce site is the sale of products; showing a user a personalised virtual storefront, with a personalised list of products, is not normally intrinsic to that service, and is therefore not necessary for the performance of the contract for Art. 6(1)(b) purposes.

James Clark, Senior Associate, DLA Piper UK LLP

By Denise Lebeau-Marianna and Caroline Chancé

On 11 April 2019, the French Data Protection Supervisory Authority (CNIL) published two draft standards intended to provide practical guidance in relation to the processing of personal data for HR management and whistleblowing systems.

The purpose of these standards is to:

  • Assist businesses in their compliance process, and
  • Help controllers with the preparation of data protection impact assessments (PIA), where required;

by detailing what the CNIL considers as compliant and best practice.

The standards, which in a way replace the former “simplified standards” and “single authorizations” (formerly known as simplified declarations with the CNIL, prior to the entry into application of the GDPR), describe in particular:

  • The purposes for which the personal data may be processed,
  • The legal grounds which may be used,
  • The categories of personal data that may be processed,
  • The recipients of the personal data,
  • The acceptable retention periods,
  • How to inform data subjects,
  • How to manage their rights,
  • The recommended security measures to be implemented, and
  • Whether or not a PIA is required.

They also provide practical examples.

Such standards are not mandatory (unlike the standard regulations – see, e.g., our post on the standard regulation for biometric systems in the workplace recently adopted by the CNIL). But if a controller decides to depart from them, based on its specific situation, it must be able to demonstrate that there is a real need for doing so and take all appropriate measures to ensure that the processing complies with data protection rules.

For now, the standards are mere drafts, open to public consultation until:

  • May 10 for the draft standard on HR management,
  • May 31 for the one on whistleblowing schemes.

At the end of the public consultation, the CNIL may modify its drafts before they are approved and published in their final version. We will provide a more detailed analysis of the two standards once they have been definitively adopted.

For more information, please contact Denise Lebeau-Marianna or Caroline Chancé.

In recent years, internet platforms which host user-generated content (including major social media sites) have been the subject of widespread scrutiny. Whether fairly or unfairly, a range of social ills, from cyberbullying, to election interference, to terrorism, have been linked in some way with these platforms. Of particular concern has been a perception that these sites hide behind their legal categorisation as mere ‘intermediaries’ or ‘distributors’ of content. Unlike publishers (e.g. traditional media outlets) – who are required to exercise editorial control and responsibility over their content – these websites remain one step removed. However, the incredible power and influence that many of these websites undoubtedly have, and their ever-present role in the daily lives of their users, have led to demands for greater regulation and oversight, in order to force the websites to do more to curb the perceived harms caused by the content which they host.

In the United Kingdom, the Department for Digital, Culture, Media and Sport (“DCMS“) has this week published its ‘Online Harms White Paper’. This paper sets out a framework for tackling online content which is harmful to individual users, particularly children, or threatens key values within the UK.

The Framework (an overview)

The Government will establish a new statutory duty of care which will require companies to take more responsibility for the safety of their users. All companies who fall within the scope of this framework will need to be able to show that they are fulfilling their duty of care.

Clear and accessible terms and conditions will need to be displayed to users, including to children and other vulnerable users.  An independent regulator will then assess how effectively these terms are being enforced as part of any regulatory action. This regulator will have an array of powers in order to allow it to take effective enforcement action against companies who have breached the new statutory duty of care.

To assist the companies in complying with the new legal duty, the regulator will provide a code of practice.

Who will be affected?

The Government is proposing that the framework should apply to companies that allow users to share or discover user-generated content or interact with each other online. As this would cover a substantial number of companies, the regulator would take a risk-based approach, focusing on companies which pose the greatest risk of harm to users.

The Regulator

The Government is currently considering whether the regulator should be a new or an existing body. In the medium term, the Government anticipates that the regulator will be funded by industry, and it is also exploring options such as fees, charges or levies to make the regulator more sustainable. The expectation is that, once its funding is secure, the regulator will be able to fund the production of the codes of practice, the enforcement of the duty of care and the preparation of transparency reports.

Powers of the Regulator

The Government hopes for the regulator to have a range of enforcement powers. These would include the power to impose significant fines which would assist in ensuring that all companies within scope of the framework comply with their duty of care.

Comment

As these proposals evolve, it will be interesting to see how some of the broad objectives of the framework are turned into concrete obligations, and how these obligations will align with existing legal frameworks – in particular data protection, consumer protection and intellectual property laws.  Already, the UK’s data protection regulator (the Information Commissioner’s Office, “ICO“) has issued a public statement on the paper, in which it welcomes the paper, but also clearly points to the fact that it has already been doing work in this space (in particular its on-going investigation into political campaigning and alleged election interference, which has already led to well-publicised enforcement action against Facebook).  Overseas, the Irish Data Protection Commission (the lead supervisory authority for many well-known social media sites) is investigating Facebook, Instagram, Twitter and LinkedIn in relation to potential breaches of the GDPR.[1]   Clearly, the UK Government will need to think carefully about how a new set of obligations and a new regulator can best work alongside the GDPR and the ICO.

James Clark, Senior Associate, and Christopher Wilkinson, Trainee Solicitor, DLA Piper UK LLP

[1] See the DPC’s 2018 Annual Report https://www.dataprotection.ie/sites/default/files/uploads/2019-03/DPC%20Annual%20Report%2025%20May%20-%2031%20December%202018.pdf

By Denise Lebeau-Marianna (Partner) & Alexandre Balducci (Associate)

  1. Why did the CNIL adopt a specific regulation for the use of biometric data processing in the workplace?

In accordance with Article 9(4) of the General Data Protection Regulation (GDPR), which provides that “Member States may maintain or introduce further conditions, including limitations, with regard to the processing of genetic data, biometric data or data concerning health”, the French Supervisory Authority (CNIL) has been granted, by the revised French data protection act 78-17 of 6 January 1978 (FDPA), the power to issue “standard regulations to ensure the security of personal data processing systems and to regulate the processing of genetic data, biometric data and health data”.

On the basis of this power, the CNIL adopted on 10 January 2019, further to a sectoral consultation with public bodies and private organisations, its first standard regulation, which lays down legally binding rules applicable to data controllers subject to French law who use biometric systems to control access to premises, devices and applications at work (the “Regulation“).

The Regulation was published by the CNIL on 28 March 2019 and is accompanied by an FAQ which provides practical guidance to companies on how to comply with these requirements.

  2. What is the exact scope of such Regulation?

The Regulation prescribes specific requirements for the processing, by a public or private employer, of biometric data to control access to work premises and to the information systems or applications used in the context of the business tasks entrusted to data subjects (i.e., employees, agents, interns and contractors).

The provisions of the Regulation are in line with the CNIL’s previous guidelines regarding the processing of biometric data in the workplace.

The CNIL points out that the rules resulting from the Regulation do not exclude the application of the general obligations under the GDPR, notably regarding compliance with the general principles for processing personal data, data subject rights and international transfers. The Regulation merely supplements the GDPR, and its application is compulsory when an organisation wishes to implement a processing of personal data that falls within its scope.

  3. What are the requirements set forth by the Regulation?

Given the particular sensitivity of biometric data, the Regulation sets out stringent obligations for data controllers regarding the conditions of processing such biometric data in the workplace:

  • Limited purposes: the purposes of the processing are strictly limited to access control of premises requiring restricted access or access control to a limited number of devices and IT professional applications, which must be clearly identified by the organization.
  • Proportionality: the organization must demonstrate that it is not possible to achieve the above purposes by means other than the processing of biometric data. The data controller must document why such a high level of protection is needed given the context at hand and evidence why the processing of biometric data is the most relevant means of ensuring security.
  • Data Minimization: the access control system based on biometric data must rely on limited categories of personal data – which are listed in the Regulation – for each of the following categories: (i) identification of the data collected by the employer or its personnel and (ii) data generated by the system (log files).
  • Restrictions applicable to biometric data: only biometric authentication based on morphological characteristics of data subjects may be used, and the biometric means selected (e.g., use of iris recognition rather than fingerprints) must be documented and justified. Biometric authentication based on biological sampling (e.g., saliva or blood) is prohibited for the purposes of the Regulation.
  • Types of biometric templates: for the purpose of the Regulation, a template is a set of measurements of an individual’s morphological characteristics. The Regulation defines three template types which correspond to various levels of data subject control over the way their biometric data is processed by the employer:
    • “Type 1” template is the most protective of individual rights: it is stored on a medium which remains in the individual’s exclusive possession (e.g., a token or badge).
    • “Type 2” template is under the shared control of the individual and the employer: there is a centralized template database which is encrypted and may only be activated under the control of the individual concerned.
    • “Type 3” template is under the exclusive control of the employer and creates the highest risks for individual privacy in the event of a data breach, as there is a centralized template database.

The Regulation specifies that the “Type 1” template must be used as a matter of principle, whereas “Type 2” and “Type 3” templates shall only be used in exceptional and justified circumstances, for critical environments where the loss of a token or badge would have particularly serious consequences (e.g., access to a nuclear plant or a surgical suite). A hypothetical sketch of a “Type 2” arrangement appears after this list.

  • Restricted access to the data: internal “authorisation profiles” must be implemented to access biometric data. Only personnel with a legitimate business need, depending on their function, may access biometric data (access rights differ depending on the personnel collecting the biometric data);
  • Limited retention: the organization must comply with mandatory retention periods and retention modalities: e.g., raw biometric data used to create biometric templates shall be destroyed as soon as a template has been created; the biometric template shall be deleted once the data subject’s access authorisation has ended or has been withdrawn; log data and identification data shall be retained for six months after being recorded, but may be kept in archive mode if required by law or for litigation purposes, in compliance with the applicable statute of limitations.
  • Data Subject Information: data subjects must be provided with specific and individual information prior to the enrolment of their biometric characteristics in the system.
  • Security measures: the data controller must take all useful precautions, taking into consideration the nature of the data and the risks raised by the processing for data subjects and their rights, to preserve the availability, integrity and confidentiality of the data. The data controller is thus required to implement a list of measures exhaustively described in the Regulation or to evidence that the measures taken are equivalent thereto. Such measures must include (i) measures related to the data, (ii) measures related to the organization, (iii) measures related to the hardware, (iv) measures related to the software, and (v) measures related to the IT channels, including notably state-of-the-art encryption. All these measures must be audited on a yearly basis and revised in accordance with the CNIL recommendations, as updated from time to time.
  • Data Privacy Impact Assessment (DPIA): In line with the CNIL list of data processing activities that require a DPIA (see our previous post on the topic here), any employer who wishes to implement a processing of biometric data within the scope of the Regulation must conduct a DPIA in this respect, and update it at least every three years.
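To make the distinction between the template types more concrete, below is a minimal, hypothetical sketch of a “Type 2” arrangement, in which the employer stores only an encrypted template and the decryption key is derived from a secret held on the employee’s badge, so that the centralized database cannot be used without the individual’s participation. It assumes the third-party Python cryptography package; none of the names or parameters come from the Regulation or the FAQ.

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hypothetical "Type 2" storage: the employer's database holds only a salt and
# an encrypted template; the key is derived from a secret kept on the badge,
# so matching can only happen when the individual presents the badge.

def _derive_key(badge_secret: bytes, salt: bytes) -> bytes:
    raw = hashlib.scrypt(badge_secret, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded key

def enrol(template: bytes, badge_secret: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    token = Fernet(_derive_key(badge_secret, salt)).encrypt(template)
    return salt, token  # only these two values are stored centrally

def activate(salt: bytes, token: bytes, badge_secret: bytes) -> bytes:
    # Decryption succeeds only with the secret held by the individual.
    return Fernet(_derive_key(badge_secret, salt)).decrypt(token)
```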

To clarify the reading of the Regulation, the CNIL also provides an FAQ (covering, e.g., the definition of biometrics, of a standard regulation and of biometric data processing, the legal ground to use for such data processing, and how to determine the status of controller and processor). For more information, please read the Regulation and the FAQ.

For more information, please contact Denise Lebeau-Marianna (Partner) and Alexandre Balducci (Associate), DLA Piper France LLP

The EU’s High Level Expert Group on Artificial Intelligence (“AI HLEG“) has published its much anticipated guidelines on the ethical use of AI, with the title ‘Ethics Guidelines for Trustworthy AI‘ (“Guidelines“). The Guidelines articulate a set of non-binding but influential principles for the ethical development and implementation of AI systems in Europe. They are particularly focused on AI systems which impact human beings, either because the systems make decisions which affect humans, or because they replace roles previously performed by humans. They represent a roadmap for future law and policy making in this area, much of which is likely to take place in the fields of privacy, security and data regulation.

Background

The AI HLEG was set up by the EU Commission in 2018, and tasked with producing a set of guidelines to support the Commission’s vision of an ethical, secure and cutting-edge AI industry in Europe.  An initial draft of the Guidelines was submitted for public consultation, and received detailed feedback from a number of stakeholders, including industry bodies with a particular interest in AI systems with a human impact, such as Insurance Europe, private companies particularly invested in the field, and from academics.

Key Principles and Requirements

The Guidelines create the concept of ‘trustworthy AI’, and identify three core components to any trustworthy system: (i) that it is lawful; (ii) that it is ethical; and (iii) that it is robust.  Focusing on points (ii) and (iii), the Guidelines develop seven requirements for a trustworthy AI system:

  1. Human agency and oversight;
  2. Technical Robustness and safety;
  3. Privacy and data governance;
  4. Transparency;
  5. Diversity, non-discrimination and fairness;
  6. Societal and environmental well-being; and
  7. Accountability.

It is notable that many of these broad ethical requirements can be mapped closely on to existing privacy law obligations, in particular those contained in the GDPR.  For example: the right to challenge and review automated decision making; the right to object to profiling; the right to transparency and to receive detailed information about the use of your personal information; the requirement for technical and organisational security to guard against harm; and the requirement for accountability on the part of data controllers.

Practical Application

The final section of the Guidelines sets out an ‘assessment list’, which is essentially a practical checklist designed to concretise the broad requirements listed above into a set of controls which can be used to assess an AI system in its development phase. During a pilot phase, the assessment list will be tested by stakeholders to see how well it works in practice, leading to feedback in early 2020. In general, any organisation looking to either develop or implement new AI systems would be well served to have regard to both the requirements and the assessment list, as these serve as helpful indicators of the likely alignment of the AI system both with future regulations which may emerge out of the Guidelines, and with current laws, such as the GDPR, which share many of the same themes.

James Clark, Senior Associate, DLA Piper UK LLP

By Joonas Dammert

Background

The Finnish Parliament approved the new general Act on the Secondary Use of Social Welfare and Health Care Data (Laki sosiaali- ja terveystietojen toissijaisesta käytöstä, based on government proposal HE 159/2017) in March 2019. The Act will become effective within the following weeks.

The Act is a welcome change to the old regime, in which national provisions concerning the subject matter were scattered across different statutes, namely the Patient’s Rights Act (1992/785), the Act on Electronic Processing of Social and Health Care Customer Data (2007/159), the Biobank Act (2012/688) and the Medicines Act (1987/395). This fragmentation has, unsurprisingly, led to a heavy administrative burden for the secondary users of social and health care data, owing to parallel and slow licence procedures with various authorities.

The new Act codifies the relevant legislation and broadens the possibilities to, under certain conditions, utilize and combine for secondary purposes personal data collected in relation to public or private social and health care operations.

As for the data subjects, the main purpose is to ensure full compliance with the applicable data protection legislation while processing sensitive social and health care data for secondary purposes. The Act complements the GDPR and introduces reinforced data security requirements and strict authorization procedures.

The Act governs the transfers of personal social and health care data from the data controllers responsible for the primary purpose of processing to an established IT ecosystem controlled by the licence authority. The data controllers concerned are mainly administrators of major national registers, inter alia: the Social Insurance Institution (KELA), the Population Register Centre (Väestörekisterikeskus), Statistics Finland (Tilastokeskus), the Finnish Centre for Pensions (Eläketurvakeskus), the National Supervisory Authority for Welfare and Health (Valvira), the Finnish Institute of Occupational Health (Työterveyslaitos) and the Finnish Medicines Agency (Fimea).

The secondary use of personal data stored at the registers of the aforementioned data controllers shall be allowed for permitted purposes under a fixed-term revocable licence. The decisions on licenses are subject to an appeal. The licence authority shall be a new ‘one-stop-shop’ operating under the supervision of the Ministry of Social Affairs and Health (Sosiaali-ja Terveysministeriö).

A licence may be applied for educational purposes, information management, and innovation and development activities that go beyond the traditional research purposes reflected in Article 89 of the GDPR.

The licence available for innovation and development activities has been promoted as an important opportunity for businesses to utilize and combine social and health care data with their existing technical and commercial data, as well as to reap the benefits of a ‘one-stop-shop’ mechanism where they can acquire a licence for data obtained from different data controllers. All of this means better opportunities for innovative product development by, e.g., start-ups and pharmaceutical companies, which may generate considerable external societal advantages as well.

Data subjects are protected against secondary use by a requirement that explicit consent shall be the only applicable legal basis for processing in connection with innovation and development activities, since none of the other legal bases under Article 9(2) of the GDPR would be applicable.

Data subjects should be able to manage their consents through a dynamic digital ecosystem hosted by a pre-selected service provider, in order to communicate better with the licence authority and to make alterations and withdrawals. The consent must cover the processing operations of both the licence authority and the secondary user, and those operations are strictly subject to the terms of the consent. The basic idea is that a data subject can flexibly consent to the original and secondary use at once, or later on.

Potential secondary users pursuing innovation and development purposes may alternatively request information in an anonymized form. In these cases, the processing does not rely on consent, and the licence authority must determine whether anonymization is possible under Articles 9(2)(g) and 86 of the GDPR. In this way, the principles of publicity and privacy are balanced on a case-by-case basis.

Furthermore, the licence applicant must have an authorized person in charge and a pre-approved utilization plan in place in order to be eligible for a licence.

Procedure

The licence authority has the power to issue licences and to supervise compliance with the licence terms. The authority is also responsible for compiling, combining and transferring the data for licensees. In addition to the licence procedures, the authority has the power to compile and anonymize data from different registers based on the information requests described above.

Another key operator as regards the licence procedures is the service provider mentioned above, which maintains the digital ecosystem for the licence procedure and the subsequent secondary use of data. The service provider shall control the licensees’ user rights in the ecosystem and keep user registers.

The government proposal concerning the Act provoked plenty of public discussion regarding sensitive data and privacy. The parliamentary vote was an extremely tight one, as the Act passed by a vote of 92 to 80, with the left-wing parties showing the most notable opposition.

For further information, please contact joonas.dammert@dlapiper.com
