Control Freaks and Bond Villains

The hippy ethos that birthed early management of the internet is beginning to look quaint. Even as a military project, the core internet concept was a decentralized network of unlimited nodes that could reroute itself around danger and destruction. No one could control it because no one could truly manage it. And that was the primary feature, not a bug.

Well, not anymore.

I suppose it shouldn’t surprise us that the forces insisting on dominating their societies are generally opposed to an open internet where all information can be free. Dictators gonna dictate.

On July 17, 2019, the government of Kazakhstan began intercepting all HTTPS internet traffic inside its borders. Local Kazakh ISPs must force their users to install a government-issued certificate on all devices, allowing government agents to decrypt users' HTTPS traffic, examine its content, re-encrypt it with the government certificate, and send it on to its intended destination. This is the electronic equivalent of opening every envelope, photocopying the material inside, stuffing that material into a government envelope and (sometimes) sending it on to the expected recipient. Except with websites.
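For readers curious what this looks like from the user's side, here is a minimal sketch, not of the interception mechanism itself, but of how one might inspect the certificate a site actually presents. The hostname "example.com" and the function name are placeholders of my own; the point is that when traffic is decrypted and re-encrypted in transit, the issuer shown will be the interception authority rather than the public certificate authority the site normally uses.

```python
# A minimal sketch: inspect the issuer of the TLS certificate a site presents.
# If traffic is being intercepted and re-signed, the issuer will be the
# interception CA instead of the site's usual public CA.
import socket
import ssl

def get_certificate_issuer(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the issuer fields of the certificate presented."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed leaf certificate
    # 'issuer' is a tuple of relative distinguished names, e.g. (('organizationName', ...),)
    return {name: value for rdn in cert["issuer"] for (name, value) in rdn}

if __name__ == "__main__":
    issuer = get_certificate_issuer("example.com")
    print("Certificate issued by:", issuer.get("organizationName"), "/", issuer.get("commonName"))
```

Note that if the government root certificate has not been installed, the connection in a sketch like this would simply fail certificate verification; once the root is installed, the handshake succeeds and the interception authority appears as the issuer.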

According to ZDNet, the Kazakh government, unsurprisingly, said the measure was “aimed at enhancing the protection of citizens, government bodies and private companies from hacker attacks, Internet fraudsters and other types of cyber threats.” As Robin Hood could have told you, the Sheriff’s actions taken to protect travelers and control brigands can easily result in government control of all traffic and information, especially when that was the plan all along. Security Boulevard reports that “Since Wednesday, all internet users in Kazakhstan have been redirected to a page instructing users to download and install the new certificate.”

This is not the first time that Kazakhstan has attempted to force its citizens to install a root certificate; in 2015 the Kazakhs even applied to Mozilla to have a Kazakh root certificate included in Firefox (Mozilla politely declined).

Despite creative technical workarounds, we all know that Kazakhstan is not alone in restricting the internet access of its citizens. For one (gargantuan) example, China’s roughly 800 million internet users face deeply restricted access, and, according to the Washington Post, the Chinese citizenry can’t reach Google, Facebook, YouTube or the New York Times, among many, many, many others. The Great Firewall of China involves legislation, government monitoring, technological limitations and cooperation from internet and telecommunications companies. China recently clamped down on WhatsApp and VPNs, which had returned a modicum of control and privacy to the people. And China has taken these efforts two steps beyond nearly anyone else in the world by building a culture of investigation and shame, where citizens can find their pictures on a local billboard for boorish traffic or internet behavior, or find themselves in jail for questioning the ruling party online. All this is well documented.

Twenty-three countries in Asia and seven in Africa restrict torrents, pornography, political media and social media. The only two European nations with the same restrictions are Turkey and Belarus. Politicians in the U.S. and Europe had hoped that the internet would serve as a force for freedom, knowledge and unlimited communication. Countries like Russia, Cuba and Nigeria also see the internet’s potential, but they prefer to throttle the net to choke off this potential threat to one-party rule.

For these countries, there is no such thing as private. They think of privacy in context: you may keep thoughts or actions private from companies, but not from the government. On a micro level, it reminds me of family dynamics. When your teenagers talk about privacy, they mean keeping information private from the adults in their lives, not from friends, strangers, or even companies. Controlling governments sing the song of privacy: as long as information is not kept from them, it may be hidden from others.

The promise of internet freedom slips further away from more people each year as dictators and real-life versions of movie villains figure out how to use the technology to surveil everyday people and to limit access to “dangerous” ideas of liberty. ICANN, the internet governance organization set up by the U.S. two decades ago, has proven itself bloated and ineffective at protecting the interests of private internet users. In fact, it would be surprising if the current leaders of ICANN even felt that such protections were within its purview.

The internet is truly a global phenomenon, but it is managed at local levels, leaving certain populations vulnerable to spying and manipulation by their own governments. Those running the system seem to have resigned themselves to allowing national governments to greatly restrict the human rights of their own citizens.

A tool can be used in many different ways.  A hammer can help build a beautiful home or can be the implement of torture and murder. The internet can be a tool for freedom of thought and expression, where everyone has a publishing and communication platform.  Or it can be a tool for repression. We have come to accept more of the latter than I believed possible.

Post Script —

Also, after a harrowing last few years in which freedom to speak on the internet (and social media) has exploded into horrible real-life consequences, large and small, even the most libertarian and laissez-faire of First World residents is slapping the screen to find some way to moderate the flow of ignorance, evil, insanity, inanity and stupidity. This is the other side of the story and fodder for a different post.

And it is also probably time to run an updated discussion of ICANN and its role in internet management. We heard a great deal about internet leadership in 2016, but not so much lately. Stay tuned.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more global and domestic internet developments, see the National Law Review Communications, Media & Internet law page.

No Means No

Researchers from the International Computer Science Institute found up to 1,325 Android applications (apps) gathering data from devices despite being explicitly denied permission.

The study looked at more than 88,000 apps from the Google Play store and tracked data transfers after permission had been denied. The 1,325 apps used tools, embedded within their code, that harvest personal data from Wi-Fi connections and from metadata stored in photos.
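To make that side channel concrete, here is a minimal sketch of how much a single photo can reveal. It assumes a recent version of the Pillow imaging library is installed and uses "photo.jpg" as a placeholder path; an app with ordinary storage access could read a photo's EXIF metadata and recover where it was taken even after the user denied the location permission.

```python
# A minimal sketch (assuming the Pillow library): read latitude/longitude from
# a photo's EXIF metadata -- no location permission involved, only file access.
from PIL import Image

GPS_IFD = 0x8825  # EXIF pointer to the GPS information block

def _to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds plus a hemisphere ref to a signed float."""
    degrees = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -degrees if ref in ("S", "W") else degrees

def gps_from_photo(path):
    """Return (latitude, longitude) from a JPEG's EXIF data, or None if absent."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)
    if not gps or 2 not in gps or 4 not in gps:
        return None
    # Tag numbers per the EXIF spec: 1/2 = latitude ref/value, 3/4 = longitude ref/value
    lat = _to_degrees(gps[2], gps[1])
    lon = _to_degrees(gps[4], gps[3])
    return lat, lon

if __name__ == "__main__":
    print(gps_from_photo("photo.jpg"))  # placeholder path
```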

Consent presents itself in different ways in the world of privacy. The GDPR is clear in defining what counts as valid user consent. Recital 32 notes that “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data…” Under the CCPA, consumers can opt out of having their personal data sold.

The specificity of consent has always been a tricky subject. For decades, companies have offered customers the right to opt in to or out of “marketing,” often in exchange for direct payments. Yet the promises have been slickly unspecific, so that a consumer never really knows what particular choices are being selected.

Does the option include data collection, and if so, how much? Does it include email, text, phone and postal contact for every campaign or just for some? The GDPR’s specificity provision is supposed to address this problem, but companies are choosing either not to offer these options or to ignore the consumer’s choice altogether.
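As an illustration of what "specific" consent could look like in practice, here is a minimal sketch of a purpose-level consent record. The purposes, channels and field names are hypothetical, not drawn from any statute or any company's actual system; the point is simply that each purpose and contact channel gets its own recorded, timestamped choice instead of one blanket "marketing" checkbox.

```python
# A hypothetical, purpose-level consent record: one timestamped choice per
# (purpose, channel) pair, rather than a single blanket "marketing" opt-in.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    # Each key is a (purpose, channel) pair; each value is the subject's choice.
    choices: dict = field(default_factory=dict)

    def record(self, purpose: str, channel: str, granted: bool) -> None:
        """Store the choice with a timestamp so it can be proven and withdrawn later."""
        self.choices[(purpose, channel)] = {
            "granted": granted,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }

    def allows(self, purpose: str, channel: str) -> bool:
        """Default to no contact unless the subject affirmatively opted in."""
        entry = self.choices.get((purpose, channel))
        return bool(entry and entry["granted"])

# Usage: the subject agrees to email about product updates but nothing else.
record = ConsentRecord(subject_id="subject-123")
record.record("product_updates", "email", True)
print(record.allows("product_updates", "email"))      # True
print(record.allows("third_party_sharing", "email"))  # False
```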

Earlier this decade, General Motors caused a media dust-up by admitting it would continue collecting information about specific drivers and vehicles even if those drivers refused the OnStar system or turned it off. Now that policy is built into the OnStar terms of service. GM owners are left without a choice on privacy and are bystanders as their driving and geolocation data are collected and used.

Apps can monitor people’s movements, finances, and health information. Because of these privacy risks, app platforms like Google and Apple make strict demands of developers, including the safe storage and processing of data. Seven years ago, Apple, whose app store has almost 1.8 million apps, issued a statement claiming that “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines.”

Studies like this remind us mere data subjects that some rules were made to be broken. Even engaging with devices that have become necessities in our daily lives may cause us to share personal information. Worse still, simply saying no to data collection does not seem to suffice.

It will be interesting to see over the next couple of years whether tighter consent laws like the GDPR and the CCPA can cajole app developers not only into offering specific choices to their customers, but also into actually honoring those choices.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more on internet and data privacy concerns, see the National Law Review Communications, Media & Internet page.

Lessons in Becoming a Second-Rate Intellectual Power – Through Privacy Regulation!

The EU’s endless regulation of data usage has now spilled over into academia, providing another lesson in kneecapping your own society by overregulating it. And they wonder why none of the big internet companies arose from the EU (or ever will). This time, the European data regulators seem to be doing everything they can to hamstring clinical trials and drive the research (and the resulting tens of billions of dollars of annual spend) outside the EU. That’s bad for pharma and biotech companies, but it’s also bad for universities that want to attract, retain, and teach top-notch talent.

The European Data Protection Board’s Opinion 3/2019 (the “Opinion”) fires an early and self-wounding shot in the coming war over the meaning and application of “informed consent” under the GDPR. The Board insists on defining “informed consent” in a manner that would cripple most serious health research on humans and human tissue that might otherwise take place in European hospitals and universities.

As discussed in a U.S. law review article by former Microsoft Chief Privacy Counsel Mike Hintze, Science and Privacy: Data Protection Laws and Their Impact on Research (14 Washington Journal of Law, Technology & Arts 103 (2019)), and noted in a recent IAPP story by Hintze and Gary LaFever, both the strict interpretation of “informed consent” and the GDPR’s right to withdraw consent can cripple serious clinical trials. Further, according to LaFever and Hintze, researchers have raised concerns that “requirements to obtain consent for accessing data for research purposes can lead to inadequate sample sizes, delays and other costs that can interfere with efforts to produce timely and useful research results.”

A clinical researcher must have a “legal basis” to use personal information, especially health information, in trials. One of the primary legal-basis options is simply gaining the test subject’s permission to use the data. But this is not so simple.

On its face, the GDPR requires clear affirmative consent for using personal data (including health data) to be “freely given, specific, informed and unambiguous.” The Opinion clarifies that nearly all operations of a clinical trial – start to finish – are considered regulated transactions involving use of personal information, and special “explicit consent” is required for use of health data. Explicit consent requirements are satisfied by written statements signed by the data subject.

That consent would need to include, among other things:

  • the purpose of each of the processing operations for which consent is sought,
  • what (type of) data will be collected and used, and
  • the existence of the right to withdraw consent.

The Opinion is clear that its authors believe clinical trials inherently involve an imbalance of power between the data subject and the sponsor of the trial, so that consent for use of personal data would likely be coercive and not “freely given.” This raises the specter not only that a data subject can pull out of a trial at any time (or insist that his or her data be removed upon completion of the trial), but that EU privacy regulators may simply cancel the right to use personal health data on the ground that consent could not have been freely given where the trial sponsor held an imbalance of power over the data subject. Imagine spending years and tens of millions of euros conducting clinical trials, only to have the results rendered meaningless because, suddenly, the remaining participants no longer constitute a sufficient sample size.

Further, if the clinical trial operator does not obtain permission to use personal information for analytics, academic publication or presentation, or any other use of the trial results, then the operator cannot use the results in those ways. This means that either the trial sponsor insists on broad permissions to use clinical results for almost any purpose (which would raise the specter of coercive permissions), or the trial is hobbled by an inability to use data for opportunities that arise later. All in all, using subject permission as the legal basis for use of personal data creates unnecessary problems for clinical trials.

That leaves the following legal bases for use of personal data in clinical trials:

  • a task carried out in the public interest under Article 6(1)(e) in conjunction with Article 9(2)(i) or (j) of the GDPR; or

  • the legitimate interests of the controller under Article 6(1)(f) in conjunction with Article 9(2)(j) of the GDPR.

Not every clinical trial will be able to establish it is being conducted in the public interest, especially where the trial doesn’t fall “within the mandate, missions and tasks vested in a public or private body by national law.”  Relying on this basis means that a trial could be challenged later as not supported by national law, and unless the researchers have legislators or regulators pass or promulgate a clear statement of support for the research, this basis is vulnerable to privacy regulators’ whims.

Further, as Hintze and LaFever observe, relying on this legal basis “involves a balancing test between those legitimate interests pursued by the controller or by a third party and the risks to the interests or rights of the data subject.” So even the most controller-centric of legal supports can be reversed if the local privacy regulator feels that a legitimate use is outweighed by the interests of the data subject. I suppose the case of Henrietta Lacks, if it arose in the EU today, would be a clear situation where a non-scientific regulator could squelch a clinical trial because the data subject’s right to privacy was considered more important than any trial using her genetic material.

So none of the “legal basis” options is either easy or guaranteed not to be reversed later, once millions in resources have been spent on the clinical trial. Further, as Hintze observes, “The GDPR also includes data minimization principles, including retention limitations which may be in tension with the idea that researchers need to gather and retain large volumes of data to conduct big data analytics tools and machine learning.” That means privacy regulators could step in, decide that a clinician has been too ambitious in her use of personal data in violation of data minimization rules, and shut down further use of the data for scientific purposes.

The regulators emphasize that “appropriate safeguards” will help protect clinical trials from interference, but I read such promises in the inverse. If a hacker gains access to data in a clinical trial, or if some of this data is accidentally emailed to the wrong people, or if one of the 50,000 laptops lost each day contains clinical research, then the regulators will pounce with both feet and attack the academic institution (rarely a paragon of cutting-edge data security) as demonstrating a lack of appropriate safeguards. Recent staggeringly high fines against Marriott and British Airways demonstrate the presumption of the ICO, at least, that an entity suffering a hack or losing data some other way will be viciously punished.

If clinicians choosing where to site human trials knew about this all-encompassing privacy law and how it throws the very nature of their trials into suspicion and possible jeopardy, I can’t see why they would risk holding trials with residents of the European Economic Area. The uncertainty and risk involved in aggressively intrusive privacy regulators now taking a specific interest in clinical trials may drive important academic work overseas. A data breach at a European university, or an academic enforcement action based on the rules cited above, would drive the risks home.

In that case, this particular European shot in the privacy wars is likely to end up pushing serious researchers out of Europe, to the detriment of academic and intellectual life in the Union.

Damaging friendly fire indeed.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Privacy Concerns Loom as Direct-to-Consumer Genetic Testing Industry Grows

The market for direct-to-consumer (“DTC”) genetic testing has grown dramatically over recent years as more people use at-home DNA tests. The global market for this industry is projected to hit $2.5 billion by 2024. Many consumers subscribe to DTC genetic testing because these tests can provide insights into genetic background and ancestry. However, as more consumers’ genetic data becomes available and is shared, legal experts are growing concerned that the safeguards implemented by U.S. companies are not enough to protect consumers from privacy risks.

States vary in the manner in which they regulate genetic testing. According to the National Conference of State Legislatures, the majority of states have “taken steps to safeguard [genetic] information beyond the protections provided for other types of health information.” Most states restrict how certain parties may collect, retain, or disclose genetic information without consent. Rhode Island and Washington require that companies receive written authorization to disclose genetic information. Alaska, Colorado, Florida, Georgia, and Louisiana have each defined genetic information as “personal property.” Despite these safeguards, some of these laws still do not adequately address critical privacy and security issues related to genomic data.

Many testing companies also share and sell genetic data to third parties – albeit in accordance with “take-it-or-leave-it” privacy policies.  This genetic data often contains highly sensitive information about a consumer’s identity and health, such as ancestry, personal traits, and disease propensity.

Further, despite promises made in privacy policies, companies cannot guarantee privacy or data protection. While many companies share genetic data only with the consumer’s explicit consent, other companies have less strict safeguards. In some cases, companies share genetic data on a “de-identified” basis. However, concerns remain about whether genetic data can be effectively de-identified. Even when a company agrees to share only de-identified data, privacy concerns may persist because an emerging consensus holds that genetic data cannot truly be de-identified. For instance, some report that powerful computing algorithms accessible to big-data analysts make it very difficult to prevent de-identified data from being re-identified.

To complicate matters, patients have historically come to expect that their health information will be protected because the Health Insurance Portability and Accountability Act (“HIPAA”) governs most patient information. Given those expectations of privacy under HIPAA, many consumers assume that this information is maintained and stored securely. Yet HIPAA does not typically govern the activities of DTC genetic testing companies, leaving consumers to agree to privacy and security protections buried in click-through privacy policies. To protect genetic privacy, the Federal Trade Commission (“FTC”) has recommended that consumers hold off on purchasing a kit until they have scrutinized the company’s website and privacy practices regarding how genomic data is used, stored and disclosed.

Although the regulation of DTC genetic testing companies remains uncertain, it is increasingly evident that consumers expect robust privacy and security controls. As such, even in the absence of clear privacy or security regulations, DTC genetic testing companies should consider implementing robust privacy and security programs to manage these risks. Companies should also approach data sharing with caution. For further guidance, companies in this space may want to review the Privacy Best Practices for Consumer Genetic Testing Services issued by the Future of Privacy Forum in July 2018. Further, the legal and regulatory privacy landscape is expanding and evolving rapidly, so DTC genetic testing companies and the consumers they serve should watch for changes in how genetic information may be collected, used and shared over time.

 

©2019 Epstein Becker & Green, P.C. All rights reserved.
This article was written by Brian Hedgeman and Alaap B. Shah of Epstein Becker & Green, P.C.

Federal Privacy Law – Could It Happen in 2019?

This was a busy week for activity and discussions on the federal level regarding existing privacy laws – namely the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). But the real question is, could a federal privacy law actually happen in 2019? Cybersecurity issues and the possibility of a federal privacy law were in the spotlight at the recent Senate Judiciary Committee hearing. This week also saw the introduction of bipartisan federal legislation regarding Internet of Things (IoT)-connected devices.

Senate Judiciary Committee Hearing on GDPR and CCPA

Let’s start by discussing this week’s hearing before the Senate Judiciary Committee in Washington. On March 12, the Committee convened a hearing entitled GDPR & CCPA: Opt-ins, Consumer Control, and the Impact on Competition and Innovation. The Committee received testimony from several interested parties who discussed the pros and cons of both laws from various perspectives. One thing was clear: technology has outpaced the law, and several of those who testified argued strongly for one uniform federal privacy law rather than a patchwork of 50 different state laws.

Some of the testimony focused on the impact of the GDPR on businesses and on the broader economy, and some witnesses felt it is still too early to know its full impact. Others discussed ethical concerns regarding data use, competition, artificial intelligence, and the need for meaningful enforcement by the Federal Trade Commission (FTC).

One thing made clear by the testimony is that people want their data protected, and they may even want to prevent it from being shared and sold, but the current landscape makes that difficult for consumers to navigate. The reality is that many of us simply can’t keep track of every privacy policy we read or every “cookie” we consent to. It’s also increasingly clear that putting the burden on consumers to opt in or opt out, or to puzzle out where our data is going and how it’s used, may not be the most effective means of legislating privacy protections.

Model Federal Privacy Law

Several of the presenters at the Senate hearing included legislative proposals for a federal privacy law. (See the link included above to the Committee website with links to individual testimony). Recently, the U.S. Chamber of Commerce also released its version of a model federal privacy law. The model legislation proposal contains consumer opt-out rights and a deletion option, and would empower the FTC to enforce violations and impose civil penalties for violations.

IoT Federal Legislation Is Back – Sort of

In 2017, federal legislation regarding IoT was introduced but didn’t pass. This week, the Internet of Things Cybersecurity Improvement Act of 2019 was introduced in Congress in a bipartisan effort to impose cybersecurity standards on IoT devices purchased by the federal government. The bill’s supporters acknowledge the proliferation of internet-connected things and devices and the risks that IoT cybersecurity vulnerabilities pose to the federal government. This latest federal legislation applies to federal government purchases of IoT devices, not to a broader audience. We recently discussed the California IoT law that was enacted last year. Effective January 1, 2020, manufacturers of IoT devices sold in California must equip each device with “reasonable security feature or features” to “protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.”

The convergence of the new California law and the prospect of federal IoT legislation raises the question of whether changes at the state and federal levels will be enough to drive the industry to increase the security of all IoT devices. The even bigger question is whether there is the political will in 2019 to enact a comprehensive federal privacy law. That remains to be seen as the year progresses.

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
This post was written by Deborah A. George of Robinson & Cole LLP.

Save the Internet Act of 2019 Introduced

On 6 March 2019, Democrats in the House and Senate introduced the “Save the Internet Act of 2019.” The three-page bill (1) repeals the FCC’s Restoring Internet Freedom Order released in early 2018, as adopted by the Republican-led FCC under Chairman Ajit Pai; (2) prohibits the FCC from reissuing the RIF Order or adopting rules substantively similar to those adopted in the RIF Order; and (3) restores the Open Internet Order released in 2015, as adopted by the Democratic-led FCC under Chairman Tom Wheeler.

Major Impacts:

  • Broadband Internet Access Service (BIAS) is reclassified as a “telecommunications service,” potentially subject to all provisions in Title II of the Communications Act.

  • The three bright line rules of the Open Internet Order are restored: (1) no blocking of access to lawful content, (2) no throttling of Internet speeds, exclusive of reasonable network management practices, and (3) no paid prioritization.

  • Reinstates FCC oversight of Internet exchange traffic (transit and peering), the General Conduct Rule that authorizes the FCC to address anti-competitive practices of broadband providers, and the FCC’s primary enforcement authority over the Open Internet Order’s rules and policies.

  • Per the Open Internet Order, BIAS and all high-speed Internet access services remain subject to the FCC’s exclusive jurisdiction, and the revenues derived from these services remain exempt from USF contribution obligations.

  • The prescriptive service disclosure and marketing rules of the Open Internet Order, subject to the small service provider exemption, would apply in lieu of the Transparency Rule adopted in the RIF Order.

FCC Chairman Pai promptly issued a statement strongly defending the merits and benefits of the RIF Order.

KH Assessment

  • From a political perspective, Save the Internet Act of 2019 garners support from many individuals and major edge providers committed to net neutrality principles but faces challenges in the Republican-controlled Senate.

  • In comments filed in the proceeding culminating in the RIF Order, the major wireline and wireless broadband providers supported a legislative solution that codified the no-blocking and no-throttling principles, but not the paid-prioritization prohibition or the classification of BIAS as a telecommunications service.

It is highly unlikely that the legislation will be enacted as introduced. Though still unlikely, there is a better chance that a legislative compromise may be reached.

 

© 2019 Keller and Heckman LLP.

CCPA Part 2 – What Does Your Business Need to Know? Consumer Requests and Notice to Consumers of Personal Information Collected

This week we continue our series of articles on the California Consumer Privacy Act of 2018 (CCPA). We’ve been discussing the broad nature of this privacy law and answering some general questions: What is it? Whom does it apply to? What protections are included for consumers? How does it affect businesses? What rights do consumers have regarding their personal information? What happens if there is a violation? This series is a follow-up to our earlier post on the CCPA.

In Part 1 of this series, we discussed the purpose of the CCPA, the types of businesses affected, and the rights of consumers regarding their personal information. This week we’ll review consumer requests and businesses’ obligations regarding data collection: the categories and specific pieces of personal information the business has collected, and how those categories of personal information will be used.

We begin with two questions regarding data collection:

  • What notice does a business need to provide to the consumer to tell a consumer what personal information it collects?
  • What is a business required to do if that consumer makes a verified request to disclose the categories and specific pieces of personal information the business has collected?

First, the CCPA requires businesses to notify a consumer, at or before the point of collection, as to the categories of personal information to be collected and the purposes for which the categories of personal information shall be used. A business shall not collect additional categories of personal information or use personal information collected for additional purposes without providing the consumer with notice consistent with this section. Cal. Civ. Code §1798.100.

Second, under the CCPA, businesses shall, upon request of the consumer, be required to inform consumers as to the categories of personal information to be collected and the purposes for which the categories of personal information shall be used. The CCPA states that “a business that receives a verifiable consumer request from a consumer to access personal information shall promptly take steps to disclose and deliver, free of charge to the consumer, the personal information required by this section. The information may be delivered by mail or electronically, and if provided electronically, the information shall be in a portable and, to the extent technically feasible, in a readily useable format that allows the consumer to transmit this information to another entity without hindrance. A business may provide personal information to a consumer at any time, but shall not be required to provide personal information to a consumer more than twice in a 12-month period.” Section 1798.100 (d).

Section 1798.130(a) states that to comply with the law, a business shall, in a form that is reasonably accessible to consumers, (1) make available to consumers two or more designated methods for submitting requests for the information required to be disclosed, including, at a minimum, a toll-free telephone number and, if the business maintains an Internet web site, a web site address; and (2) disclose and deliver the required information to a consumer free of charge within forty-five (45) days of receiving a verifiable request from the consumer.

Many have suggested during the rule-making process that there should be an easy-to-follow, standardized process for consumers to make their requests, so that it is clear to both consumers and businesses that a consumer has made a verified request. This would be welcome, as it would make this aspect of compliance simpler for the consumer as well as for the business.

When businesses respond to consumers’ requests, several practices will help them respond and document their responses: a clear website privacy policy that explains the types of information collected, a documented process for consumers to make a verified request, a protocol for responding to consumer requests, audit logs of consumer requests and business responses, a dedicated website link, and clear, understandable language in privacy notices.
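As an illustration only (the field names and helper functions below are hypothetical, not taken from the statute or any regulator's guidance), here is a minimal sketch of the kind of request log the statute's timing rules suggest: each verified request is recorded with its receipt date, a 45-day response deadline is computed, and the business checks whether it has already fulfilled two requests from the same consumer in the preceding 12 months.

```python
# A hypothetical sketch of a CCPA consumer-request log: tracks the 45-day
# response window and the "no more than twice in a 12-month period" limit.
from dataclasses import dataclass, field
from datetime import date, timedelta

RESPONSE_WINDOW_DAYS = 45      # Cal. Civ. Code 1798.130(a)(2)
MAX_REQUESTS_PER_YEAR = 2      # Cal. Civ. Code 1798.100(d)

@dataclass
class RequestLog:
    # consumer_id -> list of dates on which verified requests were received
    requests: dict = field(default_factory=dict)

    def record_request(self, consumer_id: str, received: date) -> dict:
        """Log a verified request and report the deadline and whether it must be honored."""
        history = self.requests.setdefault(consumer_id, [])
        one_year_ago = received - timedelta(days=365)
        recent = [d for d in history if d >= one_year_ago]
        must_honor = len(recent) < MAX_REQUESTS_PER_YEAR
        history.append(received)
        return {
            "deadline": received + timedelta(days=RESPONSE_WINDOW_DAYS),
            "must_honor": must_honor,  # a business may still respond voluntarily
        }

# Usage: a third request within 12 months need not be honored.
log = RequestLog()
print(log.record_request("consumer-42", date(2019, 1, 10)))
print(log.record_request("consumer-42", date(2019, 5, 1)))
print(log.record_request("consumer-42", date(2019, 9, 15)))  # must_honor: False
```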

As we continue to explore the CCPA and its provisions, we strive to understand the law and translate the rights conferred by the law into business operations, processes and practices to ensure compliance with the law. In the coming weeks, we’ll focus on understanding more of these provisions and the challenges they present.

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
This post was written by Deborah A. George of Robinson & Cole LLP.

Six Flags Raises Red Flags: Illinois Supreme Court Weighs In On BIPA

On January 25, the Illinois Supreme Court held that a person can seek liquidated damages based on a technical violation of the Illinois Biometric Information Privacy Act (BIPA), even if that person has suffered no actual injury as a result of the violation. Rosenbach v. Six Flags Entertainment Corp. No. 123186 (Ill. Jan. 25, 2019) presents operational and legal issues for companies that collect fingerprints, facial scans, or other images that may be considered biometric information.

As we have previously addressed, BIPA requires Illinois businesses that collect biometric information from employees and consumers to, among other things, adopt written policies, notify individuals, and obtain written releases. A handful of other states impose similar requirements, but the Illinois BIPA is unique because it provides individuals whose data has been collected with a private right of action for violations of the statute.

Now, the Illinois Supreme Court has held that even technical violations may be actionable.  BIPA requires that businesses use a “reasonable standard of care” when storing, transmitting, or protecting biometric data, so as to protect the privacy of the person who provides the data. The rules are detailed. Among other things, BIPA requires businesses collecting or storing biometric data to do the following:

  • establish a written policy with a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information;
  • notify individuals in writing that the information is being collected or stored and the purpose and length of time for which the biometric identifier will be collected, stored, and used;
  • obtain a written release from the individual; and
  • not disclose biometric information to a third party without the individual’s consent.

The Illinois Supreme Court has now held that a plaintiff may be entitled to up to $5,000 in liquidated damages if a company violates any of these requirements, even without proof of actual damages.

In Rosenbach, the plaintiff’s son’s fingerprint was scanned so that he could use it to enter the Six Flags theme park under his season pass. Neither the plaintiff nor her son signed a written release or was given written notice as required by BIPA. The plaintiff did not allege that she or her son suffered a specific injury but claimed that, had she known Six Flags collected biometric data, she would not have purchased a pass for her son. She brought a class action on behalf of all similarly situated theme park customers and sued for maximum damages ($5,000 per violation) under BIPA. The Illinois appellate court held that the plaintiff could not maintain a BIPA action because technical violations did not render a party “aggrieved,” a key element of a BIPA claim.

In a unanimous decision, the Illinois Supreme Court disagreed. The court held that “an individual need not allege some actual injury or adverse effect, beyond violation of his or her rights under the Act, in order to qualify as an ‘aggrieved’ person and be entitled to seek liquidated damages and injunctive relief pursuant to the Act.” Even more pointedly, the court held that when a private entity fails to comply with BIPA’s requirements regarding the collection, retention, disclosure, and destruction of a person’s biometric identifiers or biometric information, that violation alone, in the absence of any actual pecuniary or other injury, constitutes an invasion, impairment, or denial of the person’s statutory rights.

This decision – along with the 200 class actions already filed – shows how important it is for vendors and companies using fingerprint timeclocks or other technologies that may collect biometric information to be aware of BIPA’s requirements.

 

© 2019 Schiff Hardin LLP

Privacy Legislation Proposed in New York

The prevailing wisdom after last year’s enactment of the California Consumer Privacy Act (CCPA) was that it would result in other states enacting consumer privacy legislation. The perceived inevitability of a “50-state solution to privacy” motivated businesses previously opposed to federal privacy legislation to push for its enactment. With state legislatures now convening, we have identified what could be the first such proposal: New York Senate Bill 224.

The proposed legislation is not nearly as extensive as the CCPA and is perhaps more analogous to California’s Shine the Light Law. The proposed legislation would require a “business that retains a customer’s personal information [to] make available to the customer free of charge access to, or copies of, all of the customer’s personal information retained by the business.” It also would require businesses that disclose customer personal information to third parties to disclose certain information to customers about the third parties and the personal information that is shared. Businesses would have to provide this information within 30 days of a customer request and for a twelve-month lookback period. The rights also would have to be disclosed in online privacy notices. Notably, the bill would create a private right of action for violations of its provisions.

We will continue to monitor this legislation and any other proposed legislation.

Copyright © by Ballard Spahr LLP.

This post was written by David M. Stauss of Ballard Spahr LLP.

Law Firm Security: Privacy & Data Security Laws that Affect Your Law Firm

At this point in the cybersecurity game, it’s a given that to prevent a breach, law firms must take every precaution to protect their own data as well as the valuable data of their clients. What may not be as clear are the obligations that law firms, or any other third party, owe to certain organizations under industry-specific privacy and data security laws and regulations. These obligations are imposed by industry bodies, government laws, and agency policies to ensure that third parties are not the weak link that leaves an organization vulnerable to cybersecurity attacks.

Privacy and Data Security Laws and Regulations

Although many organizations are subject to these laws, this article addresses the most high-profile laws and regulations, including the following:

Health Insurance Portability and Accountability Act (HIPAA)

HIPAA applies to covered entities such as health plans, health care clearinghouses and certain health care providers. Because these entities do not operate in a vacuum and often rely on the services of third-party businesses, HIPAA includes provisions that allow them to share information with business associates, including law firms.

A business associate “is a person or entity that performs certain functions or activities that involve the use or disclosure of protected health information on behalf of, or provides services to, a covered entity,” according to the U.S. Department of Health & Human Services website.

Before information is shared with a business associate, the covered entity must first receive satisfactory assurances that the information will be used only for the purposes for which it was obtained, that it will be safeguarded, and that it will help the covered entity perform its duties. These satisfactory assurances must be in writing to ensure compliance with privacy and data security laws.

Gramm-Leach-Bliley Act (GLBA)

The GLBA was enacted to require financial institutions to explain their information-sharing practices to their customers and to safeguard vulnerable customer data from a security breach.

Under the Safeguards Rule of the GLBA, all financial institutions must protect the consumer information they collect from a security breach. The data collected usually includes names, addresses and phone numbers; bank and credit card account numbers; income and credit histories; and Social Security numbers.

Further, financial institutions are required to ensure that the parties with whom they do business, such as law firms, can also safeguard the data entrusted to them. Financial institutions must “select service providers that can maintain appropriate safeguards. Make sure your contract requires them to maintain safeguards, and oversee their handling of customer information,” according to the FTC’s guidance on complying with privacy and data security laws.

The FTC provides a detailed list of tips that financial institutions, as well as third-parties, can use to set up a strong security system to prevent a data breach of a customer’s information.

Payment Card Industry Data Security Standard (PCI-DSS)

The PCI Security Standards Council was founded by American Express, Discover Financial Services, JCB International, MasterCard, and Visa, Inc. with the intent to “develop, enhance, disseminate and assist with the understanding of security standards for payment account security,” according to its website.

The standards apply to all entities that store, process or transmit cardholder data, which of course includes law firms. The website lists 12 requirements that must be maintained:

  1. Install and maintain a firewall configuration to protect cardholder data.
  2. Do not use vendor-supplied defaults for system passwords and other security parameters.
  3. Protect stored cardholder data.
  4. Encrypt transmission of cardholder data across open, public networks.
  5. Use and regularly update anti-virus software or programs.
  6. Develop and maintain secure systems and applications.
  7. Restrict access to cardholder data by business need-to-know.
  8. Assign a unique ID to each person with computer access.
  9. Restrict physical access to cardholder data.
  10. Track and monitor all access to network resources and cardholder data.
  11. Regularly test security systems and processes.
  12. Maintain a policy that addresses privacy and data security laws and regulations for employees and contractors.

Federal Reserve System

The Federal Reserve System issued the Guidance on Managing Outsourcing Risk publication to address concerns about third-party vendors and service providers and the risks of a data breach. The Federal Reserve defines service providers as “all entities that have entered into a contractual relationship with a financial institution to provide business functions or activities.”

The publication indicates that a financial institution’s service provider risk management program should be commensurate with the level of risk presented by each service provider. “It should focus on outsourced activities that have a substantial impact on a financial institution’s financial condition; are critical to the institution’s ongoing operations; involve sensitive customer information or new bank products or services; or pose material compliance risk,” according to the publication.

An effective program should include the following:

  1. Risk assessments;
  2. Due diligence and selection of service providers;
  3. Contract provisions and considerations;
  4. Incentive compensation review;
  5. Oversight and monitoring of service providers; and
  6. Business continuity and contingency plans.

Federal Deposit Insurance Corporation (FDIC)

The FDIC issued a Guidance for Managing Third-Party Risk in which the agency makes clear that an institution’s board of directors and senior management are responsible for the activities and risks associated with third-party vendors, including a breach of a third party’s systems. Among other third-party relationships, the publication singles out significant ones where “the relationship has a material effect on the institution’s revenues or expenses; the third party performs critical functions; the third party stores, accesses, transmits, or performs transactions on sensitive customer information.” All of these could involve law firms that work with financial institutions.

The publication summarizes the risks that third-party entities may pose, including strategic risk, reputational risk, operational risk, transaction risk, credit risk, compliance risk, and other risks. It also outlines a risk management process with four elements: (1) risk assessment, (2) due diligence in selecting a third party, (3) contract structuring and review, and (4) oversight.

Conclusion

Being treated as a third-party cybersecurity risk may be foreign territory for most law firms. But many organizations rely on privacy and data security laws and regulations to protect systems that could be vulnerable to a cybersecurity breach. It behooves law firms to be aware of these laws and regulations and to implement them as thoroughly and as expeditiously as possible.

© Copyright 2018 PracticePanther