
Editors’ Note: The following article was originally published as part of Lex Mundi’s Blockchain Whitepaper Series, which you can find here.

What data privacy concerns should practitioners have relating to blockchain technology? Answering that question involves first understanding the personal information implicated by a specific blockchain application, and then analyzing the relevant legal regimes that govern that information.

Personal Information

Data privacy concerns do not attach to all information, only to personal identifying information. This means different things in different legal regimes, but it typically does not mean business commercial information, trade secret information, or intellectual property. While these kinds of information are sensitive and can be subject to various regulatory and contractual protections, they are not the kind of information that raises data privacy concerns. Rather, information like an individual’s name, address, identification number, financial information, health and biometric information, and so on is the kind of information that implicates data privacy concerns. Blockchain as used in financial and commercial transactions, or as might be used in robust supply chain technology, might not necessarily implicate data privacy concerns if the information does not contain and cannot otherwise be linked to personal identifying information. In other words, if all a block contains is hashed information about the details of a transaction, and that information is not otherwise linked to a person, it will not implicate data privacy concerns. But if personal identifying information is provided, or can be linked to an individual, then data privacy concerns are implicated.
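To make the distinction concrete, here is a minimal sketch (in Python, using only the standard library; the transaction fields and the linking table are invented for illustration and do not correspond to any particular blockchain implementation). A block that stores only a digest of transaction details reveals nothing about a person on its face; the privacy analysis changes only once something maps that digest, or an address, back to an identifiable individual.

```python
import hashlib
import json

def hash_transaction(details: dict) -> str:
    """Return a SHA-256 digest of canonicalized transaction details."""
    canonical = json.dumps(details, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# A block containing only hashed transaction details: no personal
# identifying information is present on its face.
tx = {"asset": "widget lot 42", "price": 1000, "quantity": 10}
block = {"prev_hash": "0" * 64, "tx_hash": hash_transaction(tx)}

# An off-chain record tying the digest (or a wallet address) to a named
# individual re-introduces personal identifying information, and with it
# the data privacy concerns discussed above.
link_table = {block["tx_hash"]: "Jane Doe, 123 Main St."}
print(block, link_table)
```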

Blockchain Applications

Cryptocurrency and Smart Contracts – Anonymity v. Pseudonymity

Despite its superficial promise of anonymity, blockchain technology – a distributed ledger technology that obfuscates personal and other data but provides information about transactions to every node with access to the ledger – has the potential to create privacy headaches depending on the specific application. Cryptocurrency provides the best (and, at this point in time, most relevant) example. The promise of cryptocurrency – and one of its potential dangers – was that it would allow for anonymous transactions. But in reality, blockchain technology in the cryptocurrency space provides pseudonymity. On the one hand, this means that parties can engage in transactions without revealing their actual identities. But if a determined party has access to public-domain information regarding particular transactions, that party might be able to de-anonymize individuals based on the context surrounding those transactions. Pseudonymity, in other words, does not provide a user of blockchain technology with an absolute assurance that her identity will not be discovered.

Smart contracts function much the same way and raise the same concerns about true anonymity: a transaction can be defined by information such as price and type of good without any personal identifying information. But a third party wishing to learn the identity of an individual on either side of a transaction represented in a smart contract might be able to do so if it can locate the appropriate information in the public domain.
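The linkage risk can be illustrated with a toy example (Python; the addresses, amounts, and outside records are invented, and this is not how real chain-analysis tools work in any detail). Anyone with access to the ledger can group activity by a pseudonymous address; a single piece of public-domain information tying that address to a person then de-anonymizes every transaction made from it.

```python
from collections import defaultdict

# Public ledger entries: pseudonymous addresses, no names.
ledger = [
    {"from": "addr_9f3c", "to": "addr_77aa", "amount": 2.5},
    {"from": "addr_9f3c", "to": "addr_1b20", "amount": 0.8},
    {"from": "addr_55de", "to": "addr_9f3c", "amount": 1.1},
]

# Step 1: group outgoing activity by address (visible to every node).
activity = defaultdict(list)
for tx in ledger:
    activity[tx["from"]].append(tx)

# Step 2: join against outside, public-domain information (a forum post,
# an invoice, a merchant record) that links one address to a real person.
public_records = {"addr_9f3c": "Jane Doe"}

for addr, txs in activity.items():
    owner = public_records.get(addr, "unknown")
    print(f"{addr}: {owner}, {len(txs)} outgoing transaction(s)")
```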

Private Blockchain Applications

While the applications we have considered concern public ledgers, where anyone can gain access, private blockchains, which are limited to specific users, diminish some risks but heighten others. Private blockchains diminish the risk that someone with access to a node can discover personal identifying information using information from the public domain. But private blockchain applications can heighten risk depending on the data they hold. For example, a private blockchain might be used by a health insurance company or health provider to maintain electronic health records (EHRs), which contain personal health information. A private blockchain containing EHRs could dramatically increase efficiency by giving those with access – the insurer and the provider – easy access to the chain of information in a patient’s history. But the sensitivity of that information would implicate a number of legal regimes and would require robust security to maintain privacy.

Legal Regimes

United States

In the U.S., there is no overarching data privacy protection law at the federal level. Rather, at the federal level, data privacy is regulated by sector-specific laws, such as HIPAA for personal health information or the GLBA for banking and financial information. Entities like the Federal Trade Commission have broad enforcement powers relating to data privacy, but those powers derive from their mandate to protect against unfair and deceptive commercial practices. In the absence of a federal data privacy law, each state has its own data privacy law, but these tend to focus on data breaches rather than create robust data privacy regimes (California has recently passed a new law that will act as a comprehensive data privacy regime when it goes into effect). As a result, data privacy concerns with blockchain technology in the U.S. turn on whether the blockchain is secure (that is, whether it can be or has been subject to a data breach) and whether it contains personal identifying information that is subject to a federal sector-specific law.

European Union

Perhaps the most worrisome aspect of blockchain is the potential application of the General Data Protection Regulation, or GDPR. Under the GDPR, data subjects (that is, any individual in the EU) have a number of data privacy rights, including the right to have personal information corrected or deleted. Additionally, the GDPR places certain restrictions on data transfers across borders – if a country outside the EU is deemed not to have adequate data privacy protections, restrictions are placed on transfers to that country. Blockchain can hardly be thought compatible with the GDPR. The nature of the technology simply does not allow for deletion or correction of information. Further, with regard to public blockchains, there is no way to police cross-border transfers of information; information that appears on the blockchain in a new transaction appears on each node, wherever that node might be.

In some respects, these concerns are theoretical. To the extent that transactions in a blockchain application do not contain personal information, GDPR risks are diminished. But a private blockchain will be subject to these restrictions if the ledger is subject to the GDPR and contains personal information. If a company is considering blockchain technology for its data processing and that processing implicates the GDPR, the infancy of both the GDPR and blockchain applications can make these issues difficult to navigate. But EU supervisory authorities (that is, each country’s data privacy regulator) are taking notice. For example, the French data privacy authority recently published guidance on blockchain technology, strongly recommending data minimization (that is, limiting the amount of data that exists in each block). Practitioners should keep abreast of such guidance to best advise their clients.


Start-up companies know that, when potential investors kick the tires, they will look carefully at the company’s business model and IP portfolio.  These days, investors are also likely to look at whether the company is in compliance with privacy and data security laws.  Cybersecurity has become increasingly important for businesses of all sizes.  While identity thieves may focus on the target-rich environments of large-scale enterprises, any company that stores personal data or sensitive business information is vulnerable.  Moreover, even in the absence of a data breach or other adverse event, companies that fail to adequately protect sensitive data can find themselves subject to liability.

For start-up companies, it is important to protect personal data and sensitive information right out of the gate.  Putting appropriate systems in place at the outset will be much more cost-effective than having to change everything a couple of years later, and, more importantly, will better protect your company from loss and third-party liabilities.

Every company that collects personal information – including information about its own employees and customers – needs to have a plan for keeping that information secure.  Such a plan must have proactive and reactive elements.  Proactively, the company must understand what kinds of sensitive information it has and take reasonable precautions to safeguard it.  Reactively, the company must have a plan for handling a data breach or other unauthorized disclosure of sensitive information, and must be able to implement that plan extremely quickly; this is not just good business, but a legal requirement in all 50 states.

While we cannot cover all aspects of data security and privacy here, we offer these 10 tips that may be particularly relevant to start-ups.

  1. Inventory and lock up your sensitive information. Make an inventory of the information that your company has related to individuals, including its employees, contractors, customers, partners, and vendors.  Such information includes Social Security numbers, financial account passwords, driver’s license numbers, health information, and credit card numbers.  In some jurisdictions, any information relating to a living individual who can be identified (such as name plus contact information) could trigger privacy laws and must be treated as sensitive.  Sensitive information, whether stored electronically or on paper, should be secured.  Any duplicates or electronic backups must be similarly secured.  It is prudent to avoid making more copies than are needed to run the business, while making sure to preserve the ability to restore deleted files if needed.
  2. Limit access on a “need to know” basis. Think about who really needs access to your company’s sensitive information, and limit access to those individuals.  This sounds obvious, but such formalities are often overlooked in the start-up context.  Identifying the specific individuals with access to sensitive data on the written inventory is an important part of building a cybersecurity plan, and will help your company create and follow appropriate protocols.
  3. Use antivirus programs. Make sure that each electronic device used in your company’s business is equipped with antivirus software, and keep that software up to date.
  4. Update your software. Make sure that your company installs every software update as soon as it is available.  It was widely reported that the massive Equifax data breach occurred as a result of Equifax’s failure to install a patch – one that had been available for two months – to its web-facing Apache software.  Do not let this happen to you!
  5. Use encryption. When sending highly sensitive information outside the company, encrypt it first (see the sketch following this list for one simplified way to do this).
  6. Deal carefully with vendors. If your company uses vendors, consider whether they need to have access to your company’s sensitive information.  If not, don’t provide access.  If access is required, limit it to what is really needed.  In any event, you need to make sure that the vendor has appropriate safeguards in place.  As we noted in a recent blog post, you are known by the company you keep, and a vendor’s carelessness may well be attributed to your company.
  7. Don’t forget the paper. When we think of privacy and data security, we usually think of what resides on computers and other electronic devices.  But the unauthorized disclosure of paper copies of sensitive information can be just as devastating.  Make sure that such papers are locked up, provide keys only to employees with a need to access the information, and dispose of any hard copies by shredding rather than by throwing them away in a trash can or recycling bin.
  8. Be aware of the reporting laws. In order to prepare a plan of action in the event of a data breach, you need to know what laws apply and what the reporting requirements are.  Some jurisdictions have extremely short reporting deadlines, such as 72 hours in the European Union (which can apply to US-based companies if data about Europeans is disclosed).
  9. Know what country your data is in, so as to avoid inadvertently triggering the laws of other jurisdictions. If your business is based in the United States and does not utilize sensitive data from people in other countries, it is best to keep your data within the United States.  This means that it may not be appropriate to use off-the-shelf cloud services such as Amazon Web Services or Microsoft Azure, at least without taking steps when setting up your account to specify that your company’s data must be kept onshore (the sketch following this list shows one way to pin a storage bucket to a U.S. region).  If your company stores data in the European Union, it may be unable to freely move that data anywhere else, and the legal requirements of the GDPR can be onerous.
  10. Repeat. Revisit your privacy and data security plan at least once per year, as well as every time the company experiences a significant change such as an acquisition or taking on a new line of business.
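As a concrete (and necessarily simplified) illustration of tips 5 and 9, the sketch below encrypts a file with a symmetric key before it leaves the company and stores the ciphertext in a cloud bucket pinned to a U.S. region. It assumes the third-party cryptography and boto3 packages, an invented bucket name, and AWS credentials already configured; a real deployment would also need key management, access controls, and logging.

```python
import boto3
from cryptography.fernet import Fernet

# Tip 5: encrypt sensitive data before it leaves the company.
key = Fernet.generate_key()   # keep this key in a secrets manager, not in code
cipher = Fernet(key)
with open("payroll_export.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# Tip 9: keep the data onshore by pinning the bucket to a U.S. region.
REGION = "us-west-2"          # any U.S. region; chosen here only as an example
s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket="example-startup-sensitive-data",   # invented bucket name
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
s3.put_object(
    Bucket="example-startup-sensitive-data",
    Key="payroll_export.csv.enc",
    Body=ciphertext,
)
```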

Although time and funds are often scarce at the start-up stage, if your company is highly regulated (e.g., health care or financial services), or deals with data from people outside of the United States, getting legal advice early on can prevent costly problems from occurring later.  You will also want to contact an attorney immediately if your company experiences a data security incident or an unauthorized disclosure.  In such situations, time is of the essence.


On 21 January 2019, the French Data Protection Authority (the “French DPA”) fined Google LLC 50 million euros for breach of the GDPR.

As we reported on this blog, just after the GDPR became applicable, noyb.eu (None of Your Business), the non-profit privacy organization set up by Max Schrems, the Austrian lawyer who initiated the action against Facebook that led to the invalidation of the Safe Harbor, and a French organization called “La Quadrature du Net” filed the first complaints based on the GDPR. These complaints, filed with various European DPAs, targeted major technology companies such as Google, Facebook, Instagram, WhatsApp and LinkedIn. The French DPA is the first to render a decision against one of these tech giants.

In its decision, the DPA explains that it investigated the Android user’s “click path”, from the creation of a Google account to the day-to-day use of the smartphone, and found that Google was in breach of two of the GDPR’s main principles:

  • Lack of transparency and inadequate information

Under the GDPR, data controllers must disclose certain information to individuals whose personal data is processed, and that information must be written in a concise, transparent, intelligible and easily accessible way, using clear and plain language.

According to the French DPA, the information provided by Google to its users is not sufficiently clear and plain. The DPA also noted that key information, such as the data processing purposes, the data storage periods or the categories of personal data used for Google ads personalization, is excessively disseminated across several documents (sometimes requiring 5 or 6 clicks by the user before reaching the actual information).

  • Lack of valid consent regarding the ads personalization

The GDPR provides that any data processing must be carried out on the basis of one of the legal bases listed in the GDPR, which include consent.

Google argued that it relies on users’ consent to process data for ads personalization purposes. However, the French DPA found that this was not a valid legal basis, because Google users’ consent is not sufficiently informed and is neither “specific” nor “unambiguous”.  In particular, the French DPA noted that Google users are asked to tick the boxes “I agree to Google’s Terms of Service” and “I agree to the processing of my information as described above and further explained in the Privacy Policy” in order to create a Google account; the French DPA concluded that this method of securing consent was inappropriate because it was “bundled”.

This is not the first fine issued for breach of the GDPR, but it is by far the biggest, although still far from the maximum, which is 4% of worldwide annual turnover. The French DPA explained that the amount of the fine and the publicity of the decision are justified by “the severity of the infringements observed regarding the essential principles of the GDPR: transparency, information and consent”.


Can a fingerprint alone provide “testimony” about a person?  Earlier this month, a federal court in California said yes.  But the court was not engaging in a highly-localized form of palm-reading; rather, the question arose in the ever-evolving field of how to balance law enforcement needs and individual citizens’ privacy interests as new technologies emerge.

The United States District Court for the Northern District of California has been a hotspot for privacy-related litigation, but this case—In the Matter of the Search of a Residence in Oakland, California, No. 4-19-70053 (N.D. Cal. Jan. 10, 2019)—arose out of a simple warrant request.  Law enforcement agents, investigating two individuals suspected of extortion via Facebook messenger, sought a warrant to search the suspects’ house and seize their electronic devices.  They further sought “the authority to compel any individual present at the time of the search to press a finger (including a thumb) or utilize other biometric features, such as facial or iris recognition, for the purposes of unlocking the digital devices found in order to permit a search of the contents.”

The extent to which digital devices such as cell phones are subject to search is, of course, a major Fourth Amendment issue.  As digital devices become methods by which crimes can be committed (as in the extortion accusation here) their value to law enforcement grows, while at the same time their increasing centrality to our lives raises the stakes on the privacy interests involved as well.

So, can law enforcement compel someone to unlock a digital device with their fingerprint or other biometric identifier?  One way of approaching the question is to recognize that in this situation, a fingerprint is functioning like a password or a numeric passcode.  The law is settled that someone cannot be compelled to give up a passcode to a device, but perhaps not for the most intuitively obvious reason:  The doctrine relies not so much on the Fourth Amendment’s guarantee of privacy, but rather the Fifth Amendment’s guarantee against self-incrimination.  It relies on the idea that a password is, in itself, testimonial, as the very act of sharing a password can reveal something about its creator’s innermost thoughts.  Imagine, for example, that Colonel Mustard’s laptop password was “Me+LeadPipe+Conservatory”—compelling him to write that down for the police would be tantamount to forcing a confession.  Murder confessions in passwords may be rare, but most of us probably have at least one password that would reveal something private about ourselves (even if it’s just an unhealthy devotion to chocolate syrup).

But, on the other hand, (again setting aside palmistry), a fingerprint does not reveal anything about the inner workings of its owner’s mind.  And, for this reason, as the court noted, the collection of fingerprints and other biometric identifiers (such as blood samples or voiceprints) is not usually considered to raise an issue of self-incrimination.

The court chose function over form.  Recognizing that when it comes to unlocking devices, a fingerprint does the same thing as a passcode, the court found that the use of a fingerprint to unlock a device is a “testimonial” act.  How?  According to the court, the act of unlocking a phone with a fingerprint “concedes that the phone was in the possession and control of the suspect, and authenticates ownership or access to the phone and all of its digital contents.”  Call it the “Cinderella Rule” (which perhaps comes with an “OJ corollary”): you can “testify” you own (or don’t own) a shoe/glove/device, with all the implications that may follow, using not words but limbs.

In a way, this seems like a Fifth Amendment solution to a Fourth Amendment problem.  The real issue is that our mobile devices contain so much of our lives that we want to make sure privacy interests are respected.  In daily life, we make that happen by installing passcodes or using biometric identifiers to lock the devices.  In a legal context, the Fifth Amendment protection for passcodes (and now biometrics) is just a doctrinal justification to ensure that privacy interests receive their proper protection in the Fourth Amendment analysis.

Or is it?  Perhaps the Fifth Amendment protection against involuntary self-incrimination is just the right fit as biometrics advance and become more common.  It may be convenient to unlock a phone with a finger, without having to memorize a passcode, but it’s also a lot harder to get a new fingerprint than to come up with a new passcode.  You can walk into a restaurant and choose to give a pseudonym to the maître d’, but if that restaurant has a camera that uses facial recognition to identify customers, you can’t choose to use a new face.  There is a recognition, therefore, embodied in the ever-growing body of privacy laws dealing with biometrics, that biometric identifiers should only be used with the consent of the subject.  And so it is perhaps fitting that, as courts grapple with striking the right balance between law enforcement interests and privacy interests, that the Constitution’s venerable protection against involuntary self-incrimination plays a central role.


On January 10, 2019, Advocate General Szpunar issued his much awaited opinion in the Google case that was referred to the European Court of Justice by the French “Conseil d’Etat”, the highest administrative court of the country.  The Conseil d’Etat essentially asked the European Court of Justice to follow up on its Google Spain decision: is the right to be forgotten – i.e., the right of individuals to request an operator of a search engine to remove links to web pages containing information relating to them – national, European or worldwide?

Since the Google Spain decision, many individuals have asked Google to delete information that can be obtained when their names are entered in the search engine. In France, a number of individuals whose requests had been turned down filed complaints.  On May 21, 2015, the French Data Protection Authority (“the French DPA”) ordered Google to delete the links from the results that could be obtained through all the extensions or domain names of the search engine (including google.com, for example). The reasons given by the DPA were that there is a single global data processing operation, that the various domain names are simply a technical path to access the data, and that this is necessary for an effective implementation of the right to be forgotten.

Google took the view that it had no obligation to do this, and on March 10, 2016 the French DPA fined Google €100,000 for not complying with its previous decision.  Google appealed, and on July 19, 2017 the “Conseil d’Etat” held that there was indeed a single data processing operation and referred three questions to the European Court of Justice.

A Worldwide Right to be Forgotten?  Non mais….

The first question is, basically, whether the French DPA is right: when an individual requests delinking, does the delinking have to be effective worldwide, irrespective of the place where the search is made, even when the search is made from a country outside the EU?  The Advocate General’s answer is a “No, but….” European laws do not have extra-territorial effect except in certain areas such as antitrust. Data protection and the freedom of information are fundamental rights protected by the European Charter of Human Rights, but giving them extra-territorial effect would create a dangerous precedent. However, there may be circumstances in which the interest of the EU requires that the Data Protection Directive be enforced beyond the EU’s borders. The Advocate General took the view that this case did not fall within that exception.

National or European domain names?  Oui.

The second question is whether the search engine has to delete the links that would normally be accessed using the domain name of the country where the claimant made the delinking request or, more broadly, the links accessible using all EU domain names. In other words, is the territorial scope national or European?  Not surprisingly, the Advocate General said that in a single common market, the right to be forgotten had to be enforced at the EU level.

National or European IP addresses?

The third question is very practical. Even if the search engine restricts access to links on EU domain names, internet users based in the EU can still access the links simply by using a non-EU domain name. For example, a French user can very well use google.ca (the Canadian domain name) rather than google.fr (the French one). There is a technical solution to this, known as “geo-blocking”. Geo-blocking makes it possible to restrict access depending on the origin of the user’s IP address, irrespective of the local domain name that is used. The question is whether search engines must use the geo-blocking technique when they implement the right to be forgotten and, if so, whether they should restrict access for IP addresses of the country where the claimant made the delinking request or for all EU IP addresses.

The Advocate General noted that the Google Spain decision did not help in that it had not gone into the details of what Google had to do in practice but the Court had nevertheless made clear that the protection of personal data must be fully effective. As a consequence, the search engine has to do whatever is necessary and this includes “geo-blocking”, irrespective of the local domain name which is used.
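To show what geo-blocking amounts to in practice, here is a minimal sketch (Python; the country lookup, the delisting store, and the search function are invented placeholders, not anything Google has disclosed). The filter keys off the origin of the user's IP address, so a delisted link is suppressed for an EU user whether she searches on google.fr or google.ca.

```python
# Hypothetical sketch of IP-based geo-blocking for delisted search results.
EU_COUNTRIES = {"FR", "DE", "ES", "IT", "NL"}   # abbreviated for the example

# Links delisted under the right to be forgotten (invented data).
DELISTED = {"jane doe": {"https://example.com/old-article"}}

def lookup_country(ip_address: str) -> str:
    """Placeholder for a real IP geolocation service (e.g., a GeoIP database)."""
    return "FR" if ip_address.startswith("82.") else "US"

def search(query: str, results: list, ip_address: str, domain: str) -> list:
    # The decision turns on the user's location, not on the domain name used,
    # so google.ca reached from a French IP is filtered just like google.fr.
    if lookup_country(ip_address) in EU_COUNTRIES:
        blocked = DELISTED.get(query.lower(), set())
        return [r for r in results if r not in blocked]
    return results

print(search("Jane Doe",
             ["https://example.com/old-article", "https://example.com/profile"],
             ip_address="82.64.0.1", domain="google.ca"))
```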

A decision is expected to be rendered by the Court in a few months. In a majority of cases, the Court follows the opinion of the Advocate General. It did so, for example, in the Schrems v. Facebook case, which led to the invalidation of the Safe Harbor, but not in the Google Spain case.

In any event, it should also be kept in mind that the rules discussed in this case are those set out in the 1995 Directive which applied when the relevant facts took place, but which has now been replaced by the GDPR – in which the right to be forgotten is enshrined, as we commented on this blog – and that the territorial scope of the GDPR is much broader than that of the 1995 Directive.


Data breaches – always critically important to those with responsibility for storing, transporting and protecting electronic information – have become an all-consuming topic of late. Stories about data theft dominate political headlines, boardroom discussions, and family meetings around the dinner table.  They, of course, have also been the subject of government investigations and private litigation.

The current environment is not unlike other moments in our recent past that seemed to have captured the attention of Wall Street, K Street and Main Street, including the financial reporting scandals of the early 2000s.  There are some important parallels between the concerns raised by those accounting scandals involving companies like Enron and WorldCom, and the concerns raised by data breaches and theft today.  With the accounting scandals, the integrity of our financial markets – impacting the institutional and retail investor alike – was at stake.  Today, data breaches threaten the integrity of the very lifeblood – electronic information – that makes virtually all of our personal, political and business transactions and interactions possible.

During the financial reporting scandals in the early 2000s, regulators and private litigants looked not only to the entities and their executives committing the fraud, but also to the so-called “watchdogs” – financial statement auditors.  As the SEC has remarked regarding financial statement auditors:

Auditors serve as critical gatekeepers – experts charged with making sure that the process that companies use to prepare and report financial information are ones that are built on strength and integrity.  Investors rely on auditors and need them to do their job and do it very well.[i]

Statements like these have been echoed and amplified by private litigants for decades, with allegations that the auditor must have known about the fraud, or should have discovered it. It did not matter to the plaintiffs’ bar what the fraud was, or whether it touched on a subject within the auditor’s purview.

In response, the accounting profession developed a series of standards and requirements that minimized the auditor’s exposure in litigation for things not squarely within the area of an auditor’s responsibility.  Those standards, promulgated by the American Institute of Certified Public Accountants (AICPA), clarify the scope of an auditor’s responsibility, define the responsibilities of the audit client, and dictate how the results of the auditor’s work should be communicated.  They are intended to “reduce[ ] the risk that management may inappropriately rely on the auditor to protect management against certain risks or to perform certain functions that are management’s responsibility.” AU-C § 210, Terms of Engagement, A22.

To date, there has not been a flood of litigation or government inquiries directed at cybersecurity auditors when data breaches occur.  But cybersecurity professionals conducting incident response procedures or security risk assessments would be wise to adopt the maxim, hope for the best, prepare for the worst.  To that end, here are three measures developed by accounting firms that can be easily incorporated into a cybersecurity audit engagement letter.

  • Define the scope, objectives, and limitations of the work.

AICPA professional standards require that auditors document the terms of an audit engagement in a written engagement letter to confirm “that a common understanding of the terms of the audit engagement exists” between the auditor and client.  AU-C § 210.  The written engagement agreement must speak to the scope and objective of the audit engagement, and make clear that, “because of the inherent limitations of an audit … an unavoidable risk exists that some material misstatements may not be detected, even though the audit is properly planned and performed in accordance with [generally accepted auditing standards].”  AU-C § 210.10.    The AICPA standards provide a sample engagement letter with language addressing these requirements.

We will conduct our audit in accordance with auditing standards generally accepted in the United States of America (GAAS). Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free from material misstatement … Because of the inherent limitations of an audit, … an unavoidable risk that some material misstatements may not be detected exists, even though the audit is properly planned and performed in accordance with GAAS.

Through this language, the auditor identifies (i) which professional standards will govern the work (GAAS), (ii) the objective of the work to be performed (to obtain “reasonable assurance” that the financial statements are free of material misstatement), and (iii) the “unavoidable risk” that even if the audit is correctly planned and performed, a material misstatement may go undetected.  The AICPA also requires financial statement auditors to describe in the engagement letter the expected form and content of the report to be issued at the conclusion of the work.

Likewise, cybersecurity auditors should incorporate these features into their engagement agreements.  For instance, where a cybersecurity professional is performing a security risk assessment, the engagement agreement should explicitly state which frameworks its process will follow, including those accepted under HIPAA and by the Office for Civil Rights, the Payment Card Industry Security Standards Council, and the Centers for Medicare & Medicaid Services.  The agreement should also clarify where there are inherent limitations that would prevent any audit or assessment, even one perfectly planned and executed, from catching all security-related issues.  Finally, cybersecurity auditors should preview the expected content of any report to be issued after the work is complete.

  • Define the client’s responsibilities.

Financial statement auditors are required by professional standards to have their clients confirm in the engagement letter certain of the client’s responsibilities in connection with the audit.  AU-C § 210.10.  For example, the audit client’s management must acknowledge that they are responsible for preparing the financial statements, as well as giving the auditor access to all relevant information and persons within the organization from whom the auditor deems it necessary to obtain evidence.  AU-C § 210.06.b.

Cybersecurity auditors should also insist that their engagement agreements detail their client’s responsibilities that are important to the audit or assessment.  For instance, the client should agree that it has the responsibility to provide all necessary information concerning any security incident which prompted the need for the cybersecurity professional’s services.  The client should also agree to grant access to all systems within the scope of the audit, as well as to all individuals with sufficient authority to schedule testing and deal with other issues as they arise.  A cybersecurity auditor may also want the client to agree to maintain a stable configuration over their network until the audit is complete (or as stable as is reasonably feasible).  Finally, the cybersecurity auditor should make clear in the engagement agreement that the client’s failure to comply with its obligations may preclude the auditor from completing its work.

  • Precisely communicate the result of the auditor’s work.

Even with a perfectly articulated engagement agreement that incorporates the first two elements described above, the work necessary to minimize litigation risk is far from complete.  The end product of the auditor’s work must also precisely communicate the auditor’s conclusions and define, again, the scope of the work supporting the conclusions, and any inherent limitations.  It is not enough to rely on the language of an engagement agreement, because a report generated at the conclusion of the work is a separate statement that an unhappy client, or possibly a third party, could claim contains misrepresentations.  In that scenario, a cybersecurity auditor will want to point to any limitations and caveats in the final report itself to make clear that the report as a whole is accurate and a fair representation of the work performed and conclusions reached.

For these reasons, AICPA professional standards require financial statement auditors to identify in their audit report which financial statements covering which periods were audited, explain management’s responsibilities in preparing the financial statement, and explain the nature of the auditor’s work and inherent limitations thereof.  In addition, the AICPA maps out precisely the content of an “unmodified” or “clean” audit opinion:

When expressing an unmodified opinion on financial statements, the auditor’s opinion should state that the financial statements present fairly, in all material respects, the financial position of the entity as of the balance sheet date and the results of its operations and cash flows for the period then ended[.]

AU-C § 700.35 (emphasis added).  This language is key, because it ties directly to the scope of the work that the auditor agrees to perform in the engagement agreement (as required by AU-C § 210, see above).  To wit, the auditor agreed to “plan and perform the audit to obtain reasonable assurance about whether the financial statements are free from material misstatement.”  Thus, the description of work performed, responsibilities, and scope of the opinion described in the engagement agreement mirror those set forth in the final report.  Cybersecurity professionals should similarly structure their final reports to avoid any potential confusion between the content and scope of their reports and the nature and limitations of the work agreed to by the parties at the outset of the arrangement.

[i] SEC Chair Mary Jo White, Remarks at the Securities Enforcement Forum, October 9, 2013


On January 10, 2019, Massachusetts Governor Charlie Baker signed a new law that amends the state’s data breach reporting law and requires credit reporting agencies such as Equifax to provide a free credit freeze to consumers.  The new law, “An Act Relative to Consumer Protection from Security Breaches,” also requires companies to offer up to three years of free credit monitoring to victims of a security breach, and forces companies to disclose breaches through a timely and public notification.

For businesses reporting data breaches, the types of information that must be provided to the state have been expanded, and now include:

  • the name and address of the person or agency that experienced the breach of security;
  • the name and title of the person or agency reporting the breach of security;
  • their relationship to the person or agency that experienced the breach of security;
  • the type of person or agency reporting the breach of security;
  • the person responsible for the breach of security, if known;
  • the type of personal information compromised, including, but not limited to, Social Security number, driver’s license number, financial account number, credit or debit card number or other data;
  • whether the person or agency maintains a written information security program; and
  • a report to the Attorney General and the Director of Consumer Affairs and Business Regulation certifying that their credit monitoring services comply with this newly amended law.

Breaches involving Social Security numbers now carry an additional requirement: credit monitoring services must be provided at no cost for a period of not less than 18 months (42 months if the breach involved a consumer reporting agency).

And if the person or agency that experienced a breach of security is owned by another person or corporation, the notice to the consumer must now include the name of the parent or affiliated corporation.


HIPAA was signed into law on August 21, 1996, over 22 years ago.  As a 22 year-old, HIPAA is no longer a child, but not quite a full-fledged adult.  And, as a 22 year-old, it could be considered a part of the Millennial generation.  As we look to the year ahead for HIPAA, what can its status as a Millennial tell us about what is to come?

Wikipedia says Millennials are characterized by “increased use and familiarity with communications, media, and digital technologies.”  That sounds like the current issues that are challenging HIPAA covered entities:  communications (e.g., the growing use of email and texting by patients); media (e.g., the impact of social media on the provision of health care); and digital technologies (e.g., EHRs, blockchain).  Of course, Millennials also like craft beer and poke bowls, so this analogy does have some limits.

What else is in store for HIPAA in 2019?

  • More data from non-HIPAA regulated data sources (e.g., remote monitoring devices and wearables), which will challenge HIPAA’s goal of greater interoperability and create more concerns about privacy and data security.
  • Nevertheless, there will be more data exchange and more sophisticated uses of data (as Cigna’s merger with Express Scripts and CVS’s merger with Aetna start to be effectuated).
  • More methods of accessing and moving data.
  • Increasing state privacy regulation (e.g., the California Consumer Privacy Act) and a Democratic House of Representatives will drive a push for revisions and updates to the HIPAA statute and regulations.
  • State attorneys general will take a larger role in enforcing HIPAA, as the ones from Arizona, Arkansas, Florida, Indiana, Iowa, Kansas, Kentucky, Louisiana, Minnesota, Nebraska, North Carolina, and Wisconsin did in December 2018, when they sued Medical Informatics Engineering, Inc., operating as Enterprise Health, LLC and K&L Holdings, and NoMoreClipboard, LLC, and joined an existing civil suit over a HIPAA breach impacting 3.9 million individuals.
  • More and bigger breaches will occur (because there’s more data, more uses of data, more movement of data, and more value to data).
  • More and bigger efforts by the plaintiffs’ class action bar to turn HIPAA breaches into $$$.

Editors’ Note:  This is the sixth in our third annual series examining important trends in data privacy and cybersecurity during the new year.  Our previous entries were on cryptocurrency, emerging threats, state law trends, comparing the GDPR with COPPA, and energy and security.  Up next:  federal enforcement.

Social media companies’ and search engines’ revenue models are based on creating valuable advertising platforms for marketers.  These platforms allow advertisers to reach a broad and engaged user-base at a fraction of the cost of traditional advertising, and allow them to do so on highly targeted bases.  Advertisers can market their products based on users’ search terms, demographics, location, affiliations, interests, and much more.  The extensive amount of personal data utilized by online advertising platforms creates attendant data-privacy concerns for users and lawmakers.

Data-privacy concerns are heightened in the context of political advertising.  As a result of the foreign interference in the 2016 U.S. Presidential election, users are not only wary about who is funding and organizing the political advertisements interspersed in their social media feeds, but also how much personal data those advertisers have access to, and how that data can be used.  Since the 2016 election, online advertising platforms such as Facebook, Twitter, and Google have responded to these (and other) concerns by adopting comprehensive political advertising policies.  While federal lawmakers have yet to act, some states have enacted online political advertising regulatory regimes.

In 2019, a few more states may enact online political advertising reforms, but with a divided government, federal legislation is unlikely to come to fruition.  Accordingly, the most consequential changes to online political advertising regulations in 2019 will likely come in the form of self-regulation.  And evolving social and community norms surrounding data privacy will contribute to any such changes.

Post-2016 Election Regulation of Online Political Advertising

In the wake of the 2016 election, several states enacted laws to regulate online political advertising.  These laws generally impose disclosure requirements on online advertising platforms, and obligate large platforms to maintain a database of political advertisements.  For example, California’s 2018 Social Media DISCLOSE Act requires online platforms to include “paid for by” disclaimers or hyperlinks to payers’ identifying information in certain California political advertisements.  It also requires platforms to maintain publicly-available databases of political advertisements, including information about the payer’s identity, the cost of the ad, and the reach of the ad.  New York’s Democracy Protection Act similarly institutes disclosure and disclaimer requirements for online political advertisements.  The law also obligates platforms to create databases for independent-expenditure advertisements, and requires platforms to verify that independent-expenditure advertisers are registered with the state board of elections.   Washington State and Maryland have also implemented online political advertising disclosure reforms.

A handful of state reforms notwithstanding, most of the post-2016 regulation of online political advertisements has come in the form of self-regulation.  The largest online advertising platforms – Twitter, Facebook, and Google – have developed (and are continuously modifying) robust policies that involve disclosure requirements, public databases of political advertisements, advertiser-identity verification processes, and, for Google, some ad-targeting restrictions.

Twitter’s policy requires advertisers who want to air “political content” ads in the U.S. to complete a certification process, which varies depending on the type of political content ad.  An individual who wants to air an issue ad – an ad that “refer[s] to an election or a clearly identified candidate,” or “that advocate[s] for legislative issues of national importance” – must provide Twitter with a U.S. government-issued photo ID and a U.S. mailing address.  An organization must supply its EIN or Tax ID number and a U.S. mailing address.  For “political campaigning ads” – in relevant part, those “that advocate for or against a clearly identified candidate for Federal office” – individuals must provide a U.S. passport, a government-issued photo ID with a U.S. mailing address, and a notarized form affirming the accuracy of the submitted information.  An organization that is not registered with the FEC and that wants to run a political campaigning ad must have a natural person submit his or her passport number, other identifying information, and a U.S. mailing address.  Once this identifying information is submitted, Twitter sends a paper form to the provided mailing address to verify its legitimacy.

For Facebook, any advertiser that wants to run an “election-related or issue ad” must comply with an authorization process that includes identity and location confirmation.  To confirm that the advertiser has a U.S. location, Facebook requires the advertiser to enter its address, then sends a letter to that address.  The letter directs the advertiser to a URL where it must enter a code included in the letter.  To confirm the advertiser’s identity, Facebook requires the advertiser to upload an image of a U.S. driver’s license, state identification card, or passport, to enter a zip code, and to enter the last four digits of a Social Security number.

Google’s political advertising policy also requires verification through the submission of individual or organizational identifying information for certain types of political advertising.  Uniquely, Google also requires advertisers to complete this verification process before they can target users based on users’ political affiliations, ideologies, and opinions.  If John Doe wants to market his political rally by advertising to Republicans on Google, he will first have to verify his identity and location in the U.S.  For that matter, if ACME Corp. wants to market its widgets to pro-choice advocates, it will have to do the same.

Forecasting 2019: Industry Self-Regulation & Evolving Data Privacy Norms

For social media companies (and other online advertising platforms), 2019 will likely be an important and challenging year when it comes to online political advertising.  As an initial matter, the status of state-level regulation is in flux.  Recent Democratic pick-ups in Maine, Colorado, New Mexico, Nevada, Connecticut, and Illinois make those states possible candidates for online political advertising reforms.  New regulatory regimes may prompt platforms to adopt special advertising rules for some jurisdictions, or to forego advertising in some state elections altogether.  At the same time, a Maryland lawsuit calls into question the constitutionality of state disclosure regimes, as applied to media organizations.

On the federal level, social media companies such as Facebook have expressed support for the Honest Ads Act, which would increase disclosure requirements, mandate political advertisement databases, and require platforms to make reasonable efforts to ensure that foreigners do not buy political advertisements.  While Congressional Democrats will almost certainly shepherd the Act through the House, it is exceedingly unlikely to survive in the Republican-controlled Senate.

With a lack of federal regulation and only minimal state regulation, changes in online political advertising regulation in 2019 will likely come in the form of self-regulation.  Whether and how social media companies (and other online advertising platforms) change their political advertising policies will depend on how social and community norms evolve along a number of fronts.  Most relevantly for present purposes, this includes data privacy norms.

Social media companies’ self-regulation of political advertising requires a difficult balancing act.  On the one hand, the companies’ revenue models revolve around advertising; and more particularly, an advertising product that allows marketers to reach highly targeted audiences.  On the other hand, social media companies must ensure that their advertising policies conform to social and community norms, lest their user bases become disaffected, causing a drop in user numbers or user engagement, and, correspondingly, a less desirable audience for advertisers.  Further, social media companies may tailor their policies to address concerns raised by lawmakers, so as to not invite more stringent regulation.  These factors create an incentive to restrict political advertising practices in some situations.

While there are several ingredients that go into this chemistry of pro- and anti- self-regulatory incentives, evolving data-privacy norms play an important role in forecasting industry self-regulation of political advertising in 2019.  In the coming year, data-privacy regimes in Europe and California will frequently be in the news, and public scrutiny of data policies will surely persist.  Furthermore, campaign finance and political advertising practices are likely to be at the center of a Democratic presidential primary in which several candidates will be pushing for democratic reforms.  As these inputs cause social and community norms to evolve, online political advertising policies may need to evolve along with them.

In particular, data-privacy norms related to online political advertising could shift based on what personal data should be utilized for political advertising purposes and who should be allowed to use that personal data for political advertising.  As to the former, online advertisers are allowed to microtarget their ads based on a host of highly-specific user information.  Community and social norms may evolve to become less tolerant of microtargeting when advertisements include political messaging or are targeted based on political affiliations or beliefs.  This may prompt platforms to institute restrictions surrounding what personal data can be used to target political advertising, and how political personal data can be used in advertising.

As to who users will tolerate receiving targeted political advertisements from, lawmakers and users have thus far been mostly concerned with foreign actors exploiting social media to interfere in our democracy.  That is why platforms have taken steps to attempt to verify that political advertisers are U.S. persons.  But as norms around money-in-politics continue to shift in 2019, lawmakers and users may also grow weary of how domestic organizations – often anonymously – utilize user data for political messaging.  These changed norms could prompt platforms to restrict the targeting capabilities of certain types of political organizations; namely, so-called “Dark Money” groups or “SuperPACs.”

Conclusion

In sum, evolving norms surrounding data privacy and money-in-politics may intersect in 2019 to prompt significant changes in online political advertising policies.  The precise nature and extent of these changes are difficult to predict; however, online advertising platforms should be attuned to these evolving norms in order to respond with appropriate policy changes.


Many companies share personal information they gather directly from individuals with “business partners” who use the information for their own direct marketing purposes. This is the case, for example, for companies that provide services on the internet free of charge but gather and sell the data related to their users to business partners. As the Washington Post recently learned, companies with this business model may find it challenging to comply with the European requirements, especially considering the new definition of consent set out in the GDPR.

The French Data Protection Authority (the “French DPA”) recently published a list of 5 basic principles to be complied with when sharing personal information with business partners:

  1. Before sharing personal information with business partners, you must ensure that the individual gave his/her consent.
  2. The data collection form must provide a way for the individuals to identify the partners.

    For example: by providing the full list of business partners on the data collection form or by providing a link to the full list.

  3. You must update the individuals when there is a change in the list, especially for additions of new business partners.

    The French DPA suggests providing in each direct marketing message a link to an up-to-date list. In addition, each new partner must provide information on how it processes the individual’s personal information when the first contact is made or, at the latest, within a month of such contact.

  4. The consent obtained from the individuals on behalf of business partners is only valid for such partners.

    This means that these partners cannot share the personal information they received with their own partners without obtaining a new informed consent from the individuals, who must be told who these new partners are.

  5. When the business partners first contact the individuals, they must specify from whom they obtained the individual’s personal information and how the individual can exercise their GDPR rights. In particular, they must inform the individuals of their right to object at any time to the processing of their personal information for direct marketing purposes.

    The right to object may be exercised by the individuals by contacting either the new partner or the company that collected the information initially.

These principles are drawn from the ePrivacy Directive, which was implemented into the legislation of each European Union Member State (in France the relevant provision is Article L. 34-5 of the Postal and Electronic Communications Code). The ePrivacy Directive should be replaced in the coming months by the ePrivacy Regulation, which will apply directly in all Member States, in the same way as the GDPR, allowing greater harmonization of direct marketing rules in the EU.
