Privilege Dwindles for Data Breach Reports

Data privacy lawyers and cybersecurity incident response professionals are losing sleep over the growing number of federal courts ordering disclosure of post-data breach forensic reports. Following the decisions in Capital One and Clark Hill, another district court has recently ordered the defendant in a data breach litigation to turn over the forensic report it believed was protected under the attorney-client privilege and work product doctrines. Together, these three decisions underscore that maintaining privilege over forensic reports may come down to the thinnest of margins, something organizations should keep in mind given the ever-increasing risk of litigation that can follow a cybersecurity incident.

In May 2019, convenience store and gas station chain Rutter’s received two alerts signaling a possible breach of its internal systems. The same day, Rutter’s hired outside counsel to advise on potential breach notification obligations. Outside counsel immediately hired a forensic investigator to perform an analysis to determine the character and scope of the incident. Once litigation ensued, Rutter’s withheld the forensic report from production on the basis of the attorney-client privilege and work product doctrines. Rutter’s argued that both it and its outside counsel understood the report to be privileged because it was made in anticipation of litigation. The Court rejected this notion.

With respect to the work product doctrine, the Court stated that the doctrine only applies where identifiable or impending litigation is the “primary motivating purpose” of creating the document. The Court found that the forensic report, in this case, was not prepared for the prospect of litigation. The Court relied on the forensic investigator’s statement of work which stated that the purpose of the investigation was to “determine whether unauthorized activity . . . resulted in the compromise of sensitive data.” The Court decided that because Rutter’s did not know whether a breach had even occurred when the forensic investigator was engaged, it could not have unilaterally believed that litigation would result.

The Court was also unpersuaded by the attorney-client privilege argument. Because the forensic report only discussed facts and did not involve “opinions and tactics,” the Court held that the report and related communications were not protected by the attorney-client privilege. The Court emphasized that the attorney-client privilege does not protect communications of fact, nor communications merely because a legal issue can be identified.

The Rutter’s decision comes on the heels of the Capital One and Clark Hill rulings, which both held that the defendants failed to show that the forensic reports were prepared solely in anticipation of litigation. In Capital One, the company hired outside counsel to manage the cybersecurity vendor’s investigation after the breach; however, the company already had a longstanding relationship and pre-existing agreement with the vendor. The Court found that the vendor’s services and the terms of its new agreement were essentially the same both before and after the outside counsel’s involvement. The Court also relied on the fact that the forensic report was eventually shared with Capital One’s internal response team, demonstrating that the report was created for various business purposes.

In response to the data breach in the Clark Hill case, the company hired a vendor to investigate and remediate the systems after the attack. The company also hired outside counsel, who in turn hired a second cybersecurity vendor to assist with litigation stemming from the attack. During the litigation, the company refused to turn over the forensic report prepared by the outside counsel’s vendor. The Court rejected this “two-track” approach, finding that the report prepared by outside counsel’s vendor had not been prepared exclusively for use in preparation for litigation. As in Capital One, the Court found, among other things, that the forensic report was shared not only with inside and outside counsel, but also with employees inside the company, its IT department, and the FBI.

As these cases demonstrate, the legal landscape around responding to security incidents has become filled with traps for the unwary.  A coordinated response led by outside counsel is key to mitigating a data breach and ensuring the lines are not blurred between “ordinary course of business” factual reports and incident reports that are prepared for litigation purposes.

© 2021 Bracewell LLP

For more articles on cybersecurity, visit the NLR Communications, Media, Internet, and Privacy Law News section.

Patentability of COVID-19 Software Inventions: Artificial Intelligence (AI), Data Storage & Blockchain

The coronavirus pandemic revved up previously scarce funding for scientific research. Part one of this series addressed the patentability of COVID-19 related Biotech, Pharma & Personal Protective Equipment (PPE) Inventions and whether inventions related to fighting COVID-19 should be patentable. Both economists and lawmakers are critical of the exclusivity period granted by patents, especially in the case of vaccines and drugs. Recently, several members of Congress requested “no exclusivity” for any “COVID-19 vaccine, drug, or other therapeutic.”[i]

This segment addresses the unique intellectual property issues raised by coronavirus-related software inventions, specifically artificial intelligence (AI), data storage, and blockchain.

Digital Innovations

Historically, Americans have adhered to personalized healthcare and lacked the incentive to set up a digital infrastructure similar to Taiwan’s, which has fared far better in combating the spread of a fast-moving virus.[ii] But as hospitals continue to operate at maximum capacity and social distancing is prolonged, the software sector is teeming with digital solutions for increasing the virtual supply of healthcare to a wider network of patients,[iii] particularly as HHS scales back HIPAA regulations.[iv] COVID-19 has also spurred other types of digital innovation, such as using AI to predict the next outbreak and electronic hospital bed management.[v]

One area of particular interest is the use of blockchain and data storage in a COVID/post-COVID world.  Blockchains can serve as secure ledgers for the global supply of medical equipment, including respirators, ventilators, dialysis machines, and oxygen masks.[vi]  The Department of Homeland Security has also deemed blockchain managers in food and agricultural distribution as “critical infrastructure workers”.[vii]

Patentability

Many of these digital inventions will have a hard time with respect to patentability, especially those related to data storage, such as blockchains. In 2014, in Alice v. CLS Bank, the Supreme Court found that computer-related inventions directed to “abstract ideas” are ineligible for patent protection.[viii] Because computer-implemented programs execute steps that could theoretically be performed by a human being and are merely automated by a machine, the Supreme Court concluded that patenting such software would be patenting human activity. The Court has long considered that type of patent protection too broad and dangerous.

Confusion

The aftermath of Alice has been widespread confusion among members of the patent bar, as well as the USPTO, as to how computer-related software patents should be treated.[ix] The USPTO attempted to clear up some of this confusion with a series of Guidelines in 2019.[x] Although well received by the IP community, the USPTO’s Guidelines are not binding outside the agency, meaning they have little dispositive effect when parties must bring their cases to the Federal Circuit and other courts.[xi] Indeed, the Federal Circuit has made clear that it is not bound by the USPTO’s guidance.[xii] The Supreme Court will not provide further clarification either; it denied cert on all pending patent eligibility petitions in January of this year.[xiii]

The Future

Before the coronavirus outbreak, Congress was working on patent reform.[xiv] But the long-awaited legislation was set aside once again as legislators focused on measures needed to address the pandemic. On top of that, Senators Tillis and Coons, who have spearheaded the patent reform effort, are now facing reelection battles, making future congressional leadership on patent reform uncertain.

Conclusion

Patents receive a lot of flak for being company assets, and like many assets, patents are subject to abuse.[xv] But patents are necessary for innovation, particularly for small and medium-sized companies, by carving out a safe haven in the marketplace from the encroachment of larger companies.[xvi] American leadership in medical innovation had been declining for some time prior to the pandemic[xvii] due to the cumbersome US regulatory and legal environments, particularly for tech start-ups seeking private funding.[xviii]

Not all data storage systems should receive a patent, and no vaccine should receive a patent so broad that it snuffs out public access to alternatives. The USPTO considers novelty, obviousness, and breadth when dispensing patent exclusivity, and it revisits the issue of patent validity downstream through inter partes review. There are measures in place for ensuring good patents, so let that system take its course. A sweeping prohibition of patents is not the right answer.

The opinions stated herein are the sole opinions of the author and do not reflect the views or opinions of the National Law Review or any of its affiliates.


[i] Congressional Progressive Leaders Announce Principles On COVID-19 Drug Pricing for Next Coronavirus Response Package, (2020), https://schakowsky.house.gov/media/press-releases/congressional-progressive-leaders-announce-principles-COVID-19-drug-pricing (last visited May 10, 2020).

[ii] Christina Farr, Why telemedicine has been such a bust so far, CNBC (June 30, 2018), https://www.cnbc.com/2018/06/29/why-telemedicine-is-a-bust.html and Nick Aspinwall, Taiwan Is Exporting Its Coronavirus Successes to the World, Foreign Policy (April 9, 2020), https://foreignpolicy.com/2020/04/09/taiwan-is-exporting-its-coronavirus-successes-to-the-world/.

[iii] Joe Harpaz, 5 Reasons Why Telehealth Is Here To Stay (COVID-19 And Beyond), Forbes (May 4, 2020), https://www.forbes.com/sites/joeharpaz/2020/05/04/5-reasons-why-telehealth-here-to-stay-COVID19/#7c4d941753fb.

[iv] Jessica Davis, OCR Lifts HIPAA Penalties for Telehealth Use During COVID-19, Health IT Security (March 18, 2020), https://healthitsecurity.com/news/ocr-lifts-hipaa-penalties-for-telehealth-use-during-COVID-19.

[v] Charles Alessi, The effect of the COVID-19 epidemic on health and care – is this a portent of the ‘new normal’?, HealthcareITNews (March 31, 2020), https://www.healthcareitnews.com/blog/europe/effect-COVID-19-epidemic-health-and-care-portent-new-normal and COVID-19 and AI: Tracking a Virus, Finding a Treatment, Wall Street Journal (April 17, 2020), https://www.wsj.com/podcasts/wsj-the-future-of-everything/COVID-19-and-ai-tracking-a-virus-finding-a-treatment/f064ac83-c202-40f9-8259-426780b36f2c.

[vi] Sara Castellenos, A Cryptocurrency Technology Finds New Use Tackling Coronavirus, Wall Street Journal (April 23, 2020), https://www.wsj.com/articles/a-cryptocurrency-technology-finds-new-use-tackling-coronavirus-11587675966?mod=article_inline.

[vii] Christopher C. Krebs, Memorandum on Identification of Essential Critical Infrastructure Workers During COVID-19 Response, Cybersecurity and Infrastructure Security Agency (March 19, 2020), available at https://www.cisa.gov/sites/default/files/publications/CISA-Guidance-on-Essential-Critical-Infrastructure-Workers-1-20-508c.pdf.

[viii] Alice v. CLS Bank, 573 U.S. 208 (2014), available at https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf.

[ix] David O. Taylor, Confusing Patent Eligibility, 84 Tenn. L. Rev. 157 (2016), available at https://scholar.smu.edu/cgi/viewcontent.cgi?article=1221&context=law_faculty.

[x] 2019 Revised Patent Subject Matter Eligibility Guidance, United States Patent Office (January 7, 2019), available at https://www.federalregister.gov/documents/2019/01/07/2018-28282/2019-revised-patent-subject-matter-eligibility-guidance.

[xi] Steve Brachmann, Latest CAFC Ruling in Cleveland Clinic Case Confirms That USPTO’s 101 Guidance Holds Little Weight, IPWatchDog (April 7, 2019), https://www.ipwatchdog.com/2019/04/07/latest-cafc-ruling-cleveland-clinic-confirms-uspto-101-guidance-holds-little-weight/id=107998/.

[xii] Id.

[xiii] U.S. Supreme Court Denies Pending Patent Eligibility Petitions, Holland and Knight LLP (January 14, 2020), https://www.jdsupra.com/legalnews/u-s-supreme-court-denies-pending-patent-55501/.

[xiv] Tillis and Coons: What We Learned At Patent Reform Hearings, (June 24, 2019), available at https://www.tillis.senate.gov/2019/6/tillis-and-coons-what-we-learned-at-patent-reform-hearings.

[xv] Gene Quinn, Twisting Facts to Capitalize on COVID-19 Tragedy: Fortress v. bioMerieux, IPWatchDog (March 18, 2020), https://www.ipwatchdog.com/2020/03/18/twisting-facts-capitalize-COVID-19-tragedy-fortress-v-biomerieux/id=119941/.

[xvi] Paul R. Michel, To prepare for the next pandemic, Congress should restore patent protections for diagnostic tests, Roll Call (April 28, 2020), https://www.rollcall.com/2020/04/28/to-prepare-for-the-next-pandemic-congress-should-restore-patent-protections-for-diagnostic-tests/.

[xvii] Medical Technology Innovation Scorecard: The Race for Global Leadership, PwC (January 2011), https://www.pwc.com/il/en/pharmaceuticals/assets/innovation-scorecard.pdf.

[xviii] Elizabeth Snell, How Health Privacy Regulations Hinder Telehealth Adoption, HealthITSecurity (May 5, 2015), https://healthitsecurity.com/news/how-health-privacy-regulations-hinder-telehealth-adoption.


Copyright (C) GLOBAL IP Counselors, LLP

For more on patentability, see the National Law Review Intellectual Property law section.

Venmo’ Money: Another Front Opens in the Data Wars

When I see stories about continuing data spats between banks, fintechs, and other players in the payments ecosystem, I tend to muse about how the more things change, the more they stay the same. And so it is with this story about a bank, PNC, shutting off the flow of customer financial data to a fintech, in this case Millennials’ best friend, Venmo. And JP Morgan Chase recently made an announcement dealing with similar issues.

Venmo has to use PNC customers’ data in order to allow (for example) Squi to use it to pay P.J. for his share of the brews. Venmo needs that financial data in order for its system to work. But Venmo isn’t the only one with a mobile payments solution; the banks have their own competing platform, Zelle. If you bank with one of the major banks, chances are good that Zelle is already baked into your mobile banking app. And unlike Venmo, Zelle doesn’t need anyone’s permission but that of its customers to use those data.

You can probably guess the rest.  PNC recently invoked security concerns to largely shut off the data faucet and “poof”, Venmo promptly went dark for PNC customers.  To its aggrieved erstwhile Venmo-loving customers, PNC offered a solution: Zelle.  PNC subtly hinted that its security enhancements were too much for Venmo to handle, the subtext being that PNC customers might be safer using Zelle.

Access to customer data has until now been a formidable barrier to entry for fintechs and others whose efforts to make the customer payment experience “frictionless” depend in large measure on others being willing to do the heavy lifting for them. The author of the Venmo article suggests that pressure from customers may force banks to yield any strategic advantage that control of customer data gives them. So far, however, consumer adoption of mobile payments is still minuscule in the grand scheme of things, so that pressure may not be felt for a very long time, if ever.

In the European Union, regulators have implemented PSD2, which forces a more open playing field for banking customers. But realistically, it can’t be surprising that the major financial institutions don’t want to open up their customer bases to competitors and get nothing in return – except a potential stampede of customers moving their money. And some of these fintech apps haven’t jumped through the numerous hoops required to become a bank holding company or federally insured, meaning unwitting consumers may have less fraud protection when they move their precious money to a cool-looking fintech app.

A recent study by the Pew Trusts makes it clear that consumers are still not fully embracing mobile for any number of reasons. The prime reason is that current mobile payment options still rely on the same payments ecosystem as credit and debit cards, yet mobile payments don’t offer as much consumer protection. As long as that is the case, banks, fintechs, and merchants will continue to fight over data, and the regulators are likely to weigh in at some point.

It is not unlike the early mobile phone issue when one couldn’t change mobile phone providers without getting a new phone number – that handcuff kept customers with a provider for years but has since gone by the wayside. It is likely we will see some sort of similar solution with banking details.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on fintech & banking data, see the National Law Review Financial Institutions & Banking law page.

Facing Facts: Do We Sacrifice Security Out of Fear?

Long before the dawn of recorded history, humans displayed physical characteristics as identification tools. Animals do the same to distinguish each other. Crows use facial recognition on humans. Even plants can tell their siblings from unrelated plants of the same species.

We present our physical forms to the world, and different traits identify us to anyone who is paying attention. So why, now that identity theft is rampant and security is challenged, do we place limits on the easiest and best ID system available? Are we sacrificing future security due to fear of an unlikely dystopia?

In one of the latest cases rolling out of Illinois’ private right of action under the state’s Biometric Information Privacy Act (BIPA), Rogers v. BNSF Railway Company[1], the court ruled that a railroad hauling hazardous chemicals through major urban areas needed to change, and probably diminish, its security procedures governing whom it allows into restricted space. Why? Because the railroad used biometric security to identify authorized entrants, because BIPA requires the railroad to obtain the consent of each person authorized to enter restricted space, and because BIPA is not preempted by federal rail security regulations.

The court’s decision, based on the fact that federal rail security rules do not specifically regulate biometrics, is a reasonable reading of the law. However, because BIPA provides no exception for biometric security, it will impede the adoption and effectiveness of biometric-based security systems and force some businesses to settle for weaker security. This case illustrates how BIPA reduces security in our most vulnerable and dangerous places.

I can understand some of the reasons Illinois, Texas, Washington, and others want to restrict the unchecked use of biometrics. Gathering physical traits – even public traits like faces and voices – into large searchable databases can lead to overreaching by businesses. The company holding the biometric database may run tests and make decisions based on physical properties. If your voice shows signs of strain, maybe the price of your insurance should rise to cover the risk that stress puts on your body. But this kind of concern can be addressed by regulating what can be done with biometric readings.

There are also some concerns that may not have the foundation they once had. Two decades ago, many biometric systems stored biometric data as direct copies, so that if someone stole the file, that person would have your fingerprint, voiceprint, or iris scan. Now, nearly all of the better biometric systems store readings as derived mathematical templates that can’t be interpreted by systems outside the one that took the sample. So some of the safety concerns are no longer valid.
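To make the design shift described above concrete, here is a purely illustrative sketch of template-based storage: only a derived numeric template is kept, the raw reading is discarded, and matching is done by distance rather than exact comparison. The function names and the simplistic chunk-averaging “extraction” are invented for illustration; real systems use far more sophisticated, usually proprietary, feature extraction.

```python
import math

def extract_template(raw_sample):
    # Hypothetical "feature extraction": collapse the raw reading into a
    # small numeric template. Real systems use proprietary algorithms; the
    # point is that only the template is stored, and the original raw
    # sample cannot be reconstructed from it.
    chunk = max(1, len(raw_sample) // 8)
    return [sum(raw_sample[i:i + chunk]) / chunk
            for i in range(0, len(raw_sample), chunk)][:8]

def matches(stored_template, new_sample, tolerance=0.5):
    # Biometric matching is fuzzy: no two readings are identical, so
    # templates are compared by distance, not exact equality.
    candidate = extract_template(new_sample)
    return math.dist(stored_template, candidate) <= tolerance
```

A stolen template from such a system is far less useful to a thief than a stolen fingerprint image, which is the change the newer systems rely on.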

I propose a more nuanced thinking about biometric readings. While requiring data subject consent is harmless in many situations, the consent regime is a problem for security systems that use biometric indications of identity. And these systems are generally the best for securing important spaces.  Despite what you see in the movies, 2019 biometric security systems can be nearly impossible to trick into false positive results. If we want to improve our security for critical infrastructure, we should be encouraging biometrics, not throwing hurdles in the path of people choosing to use it.

Illinois should, at the very least, provide an exception to BIPA for physical security systems, even if that exception is limited to critical facilities like nuclear, rail and hazardous shipping restricted spaces. The state can include limits on how the biometric samples are used by the companies taking them, so that only security needs are served.

The field of biometrics may scare some people, but it is a natural outgrowth of how humans have always told each other apart. If we limit its use for critical security, we are likely to suffer for the decision.

[1] 2019 WL 5699910 (N.D. Ill.).


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on biometric identifier privacy, see the National Law Review Communications, Media & Internet law page.

The CCPA Is Approaching: What Businesses Need to Know about the Consumer Privacy Law

The most comprehensive data privacy law in the United States, the California Consumer Privacy Act (CCPA), will take effect on January 1, 2020. The CCPA is an expansive step in U.S. data privacy law, as it enumerates new consumer rights regarding collection and use of personal information, along with corresponding duties for businesses that trade in such information.

While the CCPA is a state law, its scope is sufficiently broad that it will apply to many businesses that may not currently consider themselves to be under the purview of California law. In addition, in the wake of the CCPA, at least a dozen other states have introduced their own comprehensive data privacy legislation, and there is heightened consideration and support for a federal law to address similar issues.

Below, we examine the contours of the CCPA to help you better understand the applicability and requirements of the new law. While portions of the CCPA remain subject to further clarification, the inevitable challenges of compliance, coupled with the growing appetite for stricter data privacy laws in the United States generally, mean that now is the time to ensure that your organization is prepared for the CCPA.

Does the CCPA apply to my business?

Many businesses may rightly wonder if a California law even applies to them, especially if they do not have operations in California. As indicated above, however, the CCPA is not necessarily limited in scope to businesses physically located in California. The law will have an impact throughout the United States and, indeed, worldwide.

The CCPA will have broad reach because it applies to each for-profit business that collects consumers’ personal information, does business in California, and satisfies at least one of three thresholds:

  • Has annual gross revenues in excess of $25 million; or
  • Alone or in combination, annually buys, receives for commercial purposes, sells, or shares for commercial purposes, the personal information of 50,000 or more California consumers, households, or devices; or
  • Derives 50 percent or more of its annual revenues from selling consumers’ personal information

While the CCPA is limited in its application to California consumers, due to the size of the California economy and its population numbers, the act will effectively apply to any data-driven business with operations in the United States.
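As a rough illustration, the applicability test described above can be sketched as a simple check. This is a sketch only: the parameter names are hypothetical, and the real statutory thresholds involve legal judgment rather than a boolean function.

```python
def ccpa_applies(is_for_profit: bool,
                 does_business_in_california: bool,
                 annual_gross_revenue: float,
                 california_consumer_records: int,
                 revenue_share_from_selling_pi: float) -> bool:
    # Both baseline conditions must hold before the thresholds matter.
    if not (is_for_profit and does_business_in_california):
        return False
    # Only one of the three thresholds needs to be satisfied.
    return (annual_gross_revenue > 25_000_000            # threshold 1
            or california_consumer_records >= 50_000     # threshold 2
            or revenue_share_from_selling_pi >= 0.50)    # threshold 3
```

Note that the thresholds are disjunctive: a small business with modest revenue can still be covered if it handles enough consumer records or derives enough revenue from selling personal information.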

What is considered “personal information” under the CCPA?

The CCPA’s definition of “personal information” is likely the most expansive interpretation of the term in U.S. privacy law. Per the text of the law, personal information is any “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

The CCPA goes on to note that while traditional personal identifiers such as name, address, Social Security number, passport, and the like are certainly personal information, so are a number of other categories that may not immediately come to mind, including professional or employment-related information, geolocation data, biometric data, educational information, internet activity, and even inferences drawn from the sorts of data identified above.

As a practical matter, if your business collects any information that could reasonably be linked back to an individual consumer, then you are likely collecting personal information according to the CCPA.

When does a business “collect” personal information under the CCPA?

To “collect” or the “collection” of personal information under the CCPA is any act of “buying, renting, gathering, obtaining, receiving, or accessing any personal information pertaining to a consumer by any means.” Such collection can be active or passive, direct from the consumer or via the purchase of consumer data sets. If your business is collecting personal information directly from consumers, then at or before the point of collection the CCPA imposes a notice obligation on your business to inform consumers about the categories of information to be collected and the purposes for which such information will (or may) be used.

To reiterate, if your business collects any information that could reasonably be linked back to an individual, then you are likely collecting personal information according to the CCPA.

If a business collects personal information but never sells any of it, does the CCPA still apply?

Yes. While there are additional consumer rights related to the sale of personal information, the CCPA applies to businesses that collect personal information solely for internal purposes, or that otherwise do not disclose such information.

What new rights does the CCPA give to California consumers?

The CCPA gives California consumers four primary new rights: the right to receive information on privacy practices and access information, the right to demand deletion of their personal information, the right to prohibit the sale of their information, and the right not to be subject to price discrimination based on their invocation of any of the new rights specified above.

What new obligations does a business have regarding these new consumer rights?

Businesses that fall under the purview of the CCPA have a number of new obligations under the law:

  • A business must take certain steps to assist individual consumers with exercising their rights under the CCPA. This must be accomplished by providing a link on the business’s homepage titled “Do Not Sell My Personal Information” and a separate landing page for the same. In addition, a business must update its privacy policy (or policies), or a California-specific portion of the privacy policy, to include a separate link to the new “Do Not Sell My Personal Information” page.

  • A business also must provide at least two mechanisms for consumers to exercise their CCPA rights by offering, at a minimum, a dedicated web page for receiving and processing such requests (the CCPA is silent on whether this web page must be separate from or can be combined with the “Do Not Sell My Personal Information” page), and a toll-free 800 number to receive the same.

  • Upon receipt of a verified consumer request to delete personal information, the business must delete that consumer’s personal information within 45 days.
  • Upon receipt of a verified consumer request for information about the collection of that consumer’s personal information, a business must provide the consumer with a report within 45 days that includes the following information from the preceding 12 months:
    • Categories of personal information that the business has collected about the consumer;
    • Specific pieces of personal information that the business possesses about the consumer;
    • Categories of sources from which the business received personal information about the consumer;
    • A corporate statement detailing the commercial reason (or reasons) that the business collected such personal information about the consumer; and
    • The categories of third parties with whom the business has shared the consumer’s personal information.
  • Upon receipt of a verified consumer request for information about the sale of that consumer’s personal information, a business must provide the consumer with a report within 45 days that includes the following information from the preceding 12 months:
    • Categories of personal information that the business has collected about the consumer;
    • Categories of personal information that the business has sold about the consumer;
    • Categories of third parties to whom the business has sold the consumer’s personal information; and
    • The categories of personal information about the consumer that the business disclosed to a third party (or parties) for a business purpose.
  • Finally, a business must further update its privacy policy (or policies), or the California-specific section of such policy(s), to:
    • Identify all new rights afforded consumers by the CCPA;
    • Identify the categories of personal information that the business has collected in the preceding 12 months;
    • Include a corporate statement detailing the commercial reason (or reasons) that the business collected such personal information about the consumer;
    • Identify the categories of personal information that the business has sold in the prior 12 months, or the fact that the business has not sold any such personal information in that time; and
    • Note the categories of third parties with whom a business has shared personal information in the preceding 12 months.
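The reporting obligations above can be sketched as a simple data model capturing the disclosure categories and the 45-day response window. The class and field names here are illustrative, not statutory terms, and this is an assumption-laden sketch rather than a compliance tool.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# The CCPA's 45-day window for responding to a verified consumer request.
RESPONSE_WINDOW = timedelta(days=45)

@dataclass
class AccessReport:
    # Hypothetical internal record mirroring the disclosure categories
    # listed above for a request about collection of personal information.
    categories_collected: list = field(default_factory=list)
    specific_pieces: list = field(default_factory=list)
    categories_of_sources: list = field(default_factory=list)
    business_purposes: list = field(default_factory=list)
    third_parties_shared_with: list = field(default_factory=list)

@dataclass
class ConsumerRequest:
    received: date  # date the verified consumer request was received

    def response_due(self) -> date:
        # The report must cover the preceding 12 months and be
        # delivered within 45 days of the verified request.
        return self.received + RESPONSE_WINDOW
```

Tracking the receipt date and due date per request matters in practice, since the 45-day clock runs from verification of each individual request.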

What about employee data gathered by employers for internal workplace purposes?

As currently drafted, nothing in the CCPA carves out an exception for employee data gathered by employers. A “consumer” is simply defined as a “natural person who is a California resident …,” so the law would presumably treat employees like anyone else. However, the California legislature recently passed Bill AB 25, which excludes from the CCPA information collected about a person by a business while the person is acting as a job applicant, employee, owner, officer, director, or contractor of the business, to the extent that information is collected and used exclusively in the employment context. Bill AB 25 also provides an exception for emergency contact information and other information pertaining to the administration of employee benefits. The bill awaits the governor’s signature – he has until October 13, 2019 to sign.

But not so fast – Bill AB 25 creates only a one-year reprieve for employers, rather than a permanent exception. The exceptions listed above will expire on January 1, 2021. Unless the legislature extends them by that time, businesses should be prepared to fully comply with the CCPA.

California employers would thus be wise to start considering the type of employee data they collect, and whether that information may eventually become subject to the CCPA’s requirements (either on January 1, 2021 or thereafter). Personal information is likely to be present in an employee’s job application, browsing history, and information related to payroll processing, to name a few areas. It also includes biometric data, such as fingerprints scanned for time-keeping purposes. Employers who collect employees’ biometric information, for example, would be well advised to review their biometric policies so that eventual compliance with the CCPA can be achieved gradually during this one-year grace period.

Notwithstanding this new legislation, there remains little clarity as to how the law will ultimately be applied in the employer-employee context, if and when the exceptions expire. Employers are encouraged to err on the side of caution and to reach out to experienced legal counsel for further guidance if they satisfy any one of the above thresholds.

What are the penalties for violation of the CCPA?

Violations of the CCPA are enforced by the California Attorney General’s office, which can issue civil monetary fines of up to $2,500 per violation, or $7,500 for each intentional violation. Currently, the California AG’s office must provide notice of any alleged violation and allow for a 30-day cure period before issuing any fine.
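To make that exposure concrete, the per-violation caps can be combined in a back-of-the-envelope calculation. This is a hypothetical illustration only: how violations are counted is itself unsettled, and courts set the actual amounts.

```python
# Hypothetical illustration of maximum CCPA civil penalty exposure.
# The CCPA caps penalties at $2,500 per violation and $7,500 per
# intentional violation; actual penalties are determined case by case.

def ccpa_max_penalty(violations: int, intentional_violations: int) -> int:
    """Upper bound on civil penalties for a mix of ordinary and intentional violations."""
    return violations * 2_500 + intentional_violations * 7_500

# e.g., 1,000 ordinary violations plus 100 intentional ones:
print(ccpa_max_penalty(1_000, 100))  # 3250000
```

Because the caps apply per violation, even modest per-consumer incidents can scale quickly when many consumers are affected.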

Are there any exceptions to the CCPA?

Yes, there are a number of exceptions. First, the CCPA only applies to California consumers and to businesses that meet the threshold(s) identified above. If a business operates or conducts a transaction wholly outside of California, then the CCPA does not apply.

There are also certain enumerated exceptions to account for federal law: the CCPA does not apply to protected health information governed by HIPAA, information subject to the Gramm-Leach-Bliley Act, personal information sold to or purchased from a consumer reporting agency under the Fair Credit Reporting Act, or information subject to the Driver’s Privacy Protection Act.

Would it be fair to say that the CCPA is not very clear, and maybe even a bit confusing?

Yes, it would. The CCPA was drafted, debated, and enacted into law very quickly in the face of some legislative and ballot-driven pressures. As a result, the bill as enacted is a bit confusing and even contains sections that appear to contradict its other parts. The drafters of the CCPA, however, recognized this and have included provisions for the California AG’s office to provide further guidance on its intent and meaning. Amendment efforts also remain underway. As such, it is likely that the CCPA will be an evolving law for at least the short term.

Regardless, the CCPA will impose real-world requirements effective January 1, 2020, and the new wave of consumer privacy legislation it has inspired at the state and federal level is likely to bring even more of the same. It is important to address these issues now, rather than when it is too late.


© 2019 Much Shelist, P.C.

For more on the CCPA legislation, see the National Law Review Consumer Protection law page.

Personal Email Management Service Settles FTC Charges over Allegedly Deceptive Statements to Consumers Regarding Its Access to and Use of Subscribers’ Email Accounts

This week, the Federal Trade Commission (FTC) entered into a proposed settlement with Unrollme Inc. (“Unrollme”), a free personal email management service that offers to assist consumers in managing the flood of subscription emails in their inboxes. The FTC alleged that Unrollme made certain deceptive statements to consumers, who may have had privacy concerns, to persuade them to grant the company access to their email accounts. (In re Unrollme Inc., File No. 172 3139 (FTC proposed settlement announced Aug. 8, 2019)).

This settlement touches many relevant issues, including the delicate nature of online providers’ privacy practices relating to consumer data collection, the importance for consumers to comprehend the extent of data collection when signing up for and consenting to a new online service or app, and the need for downstream recipients of anonymized market data to understand how such data is collected and processed.  (See also our prior post covering an enforcement action involving user geolocation data collected from a mobile weather app).

A quick glance at headlines announcing the settlement might give the impression that the FTC found Unrollme’s entire business model unlawful or deceptive, but that is not the case.  As described below, the settlement involved only a subset of consumers who received allegedly deceptive emails to coax them into granting access to their email accounts.  The model of providing free products or services in exchange for permission to collect user information for data-driven advertising or ancillary market research remains widespread, though it could face some changes when California’s CCPA consumer choice options become effective or in the event Congress passes a comprehensive data privacy law.

As part of the Unrollme registration process, users grant Unrollme access to selected personal email accounts for decluttering purposes.  However, this permission also allows Unrollme to access and scan inboxes for so-called “e-receipts,” or emailed receipts from e-commerce transactions. After scanning users’ e-receipt data (which might include billing and shipping addresses and information about the purchased products or services), Unrollme’s parent company, Slice Technologies, Inc., would anonymize the data and package it into market research reports that are sold to various companies, retailers and others.  According to the FTC complaint, when some consumers declined to grant permission to their email accounts during signup, Unrollme, during the relevant time period, tried to make them reconsider by sending allegedly deceptive statements about its access (e.g., “You need to authorize us to access your emails. Don’t worry, this is just to watch for those pesky newsletters, we’ll never touch your personal stuff”).  The FTC claimed that such messages did not tell users that access to their inboxes would also be used to collect e-receipts and to package that data for sale to outside companies, and that thousands of consumers changed their minds and signed up for Unrollme.

As part of the settlement, Unrollme is prohibited from misrepresentations about the extent to which it accesses, collects, uses, stores or shares information in connection with its email management products. Unrollme must also send an email to all current users who enrolled in Unrollme after seeing the allegedly deceptive statements and explain Unrollme’s data collection and usage practices.  Unrollme is also required to delete all e-receipt data obtained from recipients who enrolled in Unrollme after seeing the challenged statements (unless Unrollme receives affirmative consent to maintain such data from the affected consumers).

In an effort at increased transparency, Unrollme’s current home page displays several links to detailed explanations of how the service collects and analyzes user data (e.g., “How we use data”).

Interestingly, this is not the first time Unrollme’s practices have been challenged, as the company faced a privacy suit over its data mining practices last year.  (See Cooper v. Slice Technologies, Inc., No. 17-7102 (S.D.N.Y. June 6, 2018) (dismissing a privacy suit that claimed that Unrollme did not adequately disclose to consumers the extent of its data mining practices, and finding that consumers consented to a privacy policy that expressly allowed such data collection to build market research products and services)).


© 2019 Proskauer Rose LLP.
This article is by Jeffrey D Neuburger of Proskauer Rose LLP.
For more on data privacy see the National Law Review Communications, Media & Internet law page.

No Means No

Researchers from the International Computer Science Institute found that up to 1,325 Android applications (apps) gathered data from devices despite being explicitly denied permission.

The study looked at more than 88,000 apps from the Google Play store, and tracked data transfers post denial of permission. The 1,325 apps used tools, embedded within their code, that take personal data from Wi-Fi connections and metadata stored in photos.

Consent presents itself in different ways in the world of privacy. The GDPR is clear in defining consent as it pertains to user consent. Recital 32 notes that “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data…” Under the CCPA, consumers can opt out of having their personal data sold.

The specificity of consent has always been a tricky subject.  For decades, companies have offered customers the right to either opt in or out of “marketing,” often in exchange for direct payments. Yet, the promises have been slickly unspecific, so that a consumer never really knows what particular choices are being selected.

Does the option include data collection and, if so, how much? Does it include email, text, phone, and postal contacts for every campaign, or just some? The GDPR’s specificity provision is supposed to address this problem. But some companies choose not to offer these options, or ignore the consumer’s choice altogether.

Earlier this decade, General Motors caused a media dust-up by admitting it would continue collecting information about specific drivers and vehicles even if those drivers refused the OnStar system or turned it off. Now that policy is built into the OnStar terms of service. GM owners are left without a choice on privacy, and are bystanders as their driving and geolocation data are collected and used.

Apps can monitor people’s movements, finances, and health information. Because of these privacy risks, app platforms like Google and Apple make strict demands of developers including safe storage and processing of data. Seven years ago, Apple, whose app store has almost 1.8 million apps, issued a statement claiming that “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines.”

Studies like this remind us mere data subjects that some rules were made to be broken. And even engaging with devices that have become a necessity to us in our daily lives may cause us to share personal information. Even more, simply saying no to data collection does not seem to suffice.

It will be interesting to see over the next couple of years whether tighter consent laws like the GDPR and the CCPA can cajole app developers not only to provide specific choices to their customers, but also to actually honor those choices.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more on internet and data privacy concerns, see the National Law Review Communications, Media & Internet page.

GDPR May 25th Deadline Approaching – Businesses Globally Will Feel Impact

In less than four months, the General Data Protection Regulation (the “GDPR” or the “Regulation”) will take effect in the European Union/European Economic Area, giving individuals in the EU/EEA greater control over their personal data and imposing a sweeping set of privacy and data protection rules on data controllers and data processors alike. Failure to comply with the Regulation’s requirements could result in substantial fines of up to the greater of €20 million or 4% of a company’s annual worldwide gross revenues. Although many American companies that do not have a physical presence in the EU/EEA may have been ignoring GDPR compliance based on the mistaken belief that the Regulation’s burdens and obligations do not apply outside of the EU/EEA, they are doing so at their own peril.

A common misconception is that the Regulation only applies to EU/EEA-based corporations or multinational corporations with operations within the EU/EEA. However, the GDPR’s broad reach applies to any company that is offering goods or services to individuals located within the EU/EEA or monitoring the behavior of individuals in the EU/EEA, even if the company is located outside of the European territory. All companies within the GDPR’s ambit also must ensure that their data processors (i.e., vendors and other partners) process all personal data on the companies’ behalf in accordance with the Regulation, and are fully liable for any damage caused by their vendors’ non-compliant processing. Unsurprisingly, companies are using indemnity and insurance clauses in data processing agreements with their vendors to contractually shift any damages caused by non-compliant processing activities back onto the non-compliant processors, even if those vendors are not located in the EU/EEA. As a result, many American organizations that do not have direct operations in the EU/EEA nevertheless will need to comply with the GDPR because they are receiving, storing, using, or otherwise processing personal data on behalf of customers or business partners that are subject to the Regulation and its penalties. Indeed, all companies with a direct or indirect connection to the EU/EEA – including business relationships with entities that are covered by the Regulation – should be assessing the potential implications of the GDPR for their businesses.

Compliance with the Regulation is a substantial undertaking that, for most organizations, necessitates a wide range of changes, including:

  • Implementing “Privacy by Default” and “Privacy by Design”;
  • Maintaining appropriate data security;
  • Notifying European data protection agencies and consumers of data breaches on an expedited basis;
  • Taking responsibility for the security and processing practices of third-party vendors;
  • Conducting “Data Protection Impact Assessments” on new processing activities;
  • Instituting safeguards for cross-border transfers; and
  • Recordkeeping sufficient to demonstrate compliance on demand.

Failure to comply with the Regulation’s requirements carries significant risk. Most prominently, the GDPR empowers regulators to impose fines for non-compliance of up to the greater of €20 million or 4% of worldwide annual gross revenue. In addition to fines, regulators also may block non-compliant companies from accessing the EU/EEA marketplace through a variety of legal and technological methods. Even setting these potential penalties aside, simply being investigated for a potential GDPR violation will be costly, burdensome and disruptive, since during a pending investigation regulators have the authority to demand records demonstrating a company’s compliance, impose temporary data processing bans, and suspend cross-border data flows.
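The “greater of” cap described above works out to a simple maximum. The sketch below is illustrative only; regulators weigh many factors (nature, gravity, duration of the infringement, and more) in setting an actual fine.

```python
# Sketch of the GDPR fine ceiling: up to the greater of €20 million
# or 4% of a company's worldwide annual gross revenue.

def gdpr_fine_ceiling(annual_worldwide_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements."""
    return max(20_000_000.0, 0.04 * annual_worldwide_revenue_eur)

# A company with €2 billion in annual revenue faces a ceiling of €80 million;
# one with €100 million in revenue is still exposed to the €20 million floor.
print(gdpr_fine_ceiling(2_000_000_000))  # 80000000.0
print(gdpr_fine_ceiling(100_000_000))    # 20000000.0
```

Note that the 4% prong means exposure scales with company size, while the €20 million floor keeps smaller companies from treating fines as a rounding error.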

The impending May 25, 2018 deadline means that there are only a few months left for companies to get their compliance programs in place before regulators begin enforcement. In light of the substantial regulatory penalties and serious contractual implications of non-compliance, any company that could be required to meet the Regulation’s obligations should be assessing their current operations and implementing the necessary controls to ensure that they are processing personal data in a GDPR-compliant manner.

 

© 2018 Neal, Gerber & Eisenberg LLP.
More on the GDPR at the NLR European Union Jurisdiction Page.

Navigating Connected Cars in 2017: Data Protection

It’s a fact: today’s marketplace has given connected cars the green light. As an OEM or supplier accelerating to create products to meet industry demand, what challenges can you anticipate in 2017? Here we describe where we believe your attention should be focused during the upcoming year…

Data Protection

The manufacturing industry is now one of the most hacked industries. It has been said that the modern-day car is a computer on wheels. That is not quite right. The modern-day car is a network of several computers on wheels. Cars today can have 50 or more electronic control units (ECUs) – each of which is analogous to a separate computer – networked together. There will be an estimated 250 million connected cars on roads around the world by 2020. These cars will have 200 or more sensors collecting information about us, our cars and our driving habits.

With significant advances in smartphone car-connectivity and onboard infotainment systems, our cars are collecting more and more information about our daily lives and personal interactions. As a result, the privacy and security of connected cars have quickly risen over the last year to become a top priority for carmakers and suppliers. Here are our top 4 tips for addressing these privacy and security issues and concerns in 2017:

  • Practice “security by design.” This is a concept recently espoused by federal regulators, namely, the National Highway Traffic Safety Administration and the Federal Trade Commission, as well as industry self-regulatory organizations. With security by design, a company addresses data security controls “day 1” while products, components and devices are still on the drawing board. Data security practices evolve over time, and the days of building it first and then layering security on top are now over. Risk assessments addressing potential threats and attack targets should be dealt with during the design process. Security design reviews and product testing should be conducted throughout the development process. Secure computing, software development and networking practices should address the security of connections into, from and inside the vehicle.

  • Practice “privacy by design.” While security deals with the safeguards and measures implemented to protect the data from unauthorized access or use, privacy focuses on the right and desire of individuals to keep information about themselves confidential. During the design process, companies should understand and identify what personal information will be collected by a component or device, what notice should be provided to or consent obtained from consumers before collecting that personal information, how the personal information should be used, whether those intended uses are legal, with whom the personal information will be shared, and whether that sharing is appropriate and legal. With this information identified, the company can reconcile privacy requirements with security safeguards during the design and development process.

  • Establish an appropriate data security governance model. Executives and senior management can no longer blindly delegate data security to the security engineering team. Regulators, courts and juries are demanding that senior management become involved in and accountable for data security. While the precise governance model will depend on the nature and size of the organization, the company should actively consider what level of executive oversight is appropriate, and then document those conclusions in a data security governance policy. This will serve the dual purposes of enhancing data security of vehicles and component parts, while also bolstering the company’s defenses in the event of a security incident or investigation.

  • Address the entire supply chain. Whether it is the finished vehicle or a component part, most companies relevant to the data security ecosystem will rely on suppliers that play a role in data security. Hardware, software, development tools, assembly, integration and testing may all be provided by one or more suppliers. Companies impacted by this scenario should conduct appropriate due diligence and risk assessments with respect to their suppliers, both at the commencement of the relationship and periodically throughout it. Contractual provisions should also be utilized to address data security requirements for the relevant suppliers.

© 2016 Foley & Lardner LLP

EU-US Privacy Shield to Launch August 1, Replacing Safe Harbor

I. Introduction: Privacy Shield to Go Live August 1 (at Last)

The replacement for Safe Harbor is finally in effect, over nine months after Safe Harbor was struck down by the Court of Justice of the EU in the Schrems case. As most readers will be aware, Privacy Shield provides an important legal mechanism for transferring personal information from the EU to the US. The Department of Commerce (Commerce) has promised to launch a Privacy Shield website on August 1, 2016 that will allow companies to certify compliance with Privacy Shield.

The Privacy Shield documents consist of a 44-page “Adequacy Decision” and 104 pages of “Annexes” that contain key information concerning Privacy Shield’s standards and enforcement mechanisms. Companies that are considering certifying under Privacy Shield should review the entire Adequacy Decision and its Annexes, as well as the promised FAQs and other documents that the Department of Commerce will provide on the new Privacy Shield website. A good starting point for companies is Annex II, which contains the essential Privacy Shield “Principles” and a set of “Supplemental Principles” that clarify certain points and provide useful examples for putting Privacy Shield into practice.

Our summary aims to highlight key points and provide a basic roadmap as companies start to get to grips with the new Privacy Shield requirements.

II. Privacy Shield Principles

The Principles set out in Privacy Shield will be largely familiar to companies that had certified under Safe Harbor, but Privacy Shield contains a lot more detail and occasionally demands more stringent standards and actions than Safe Harbor.

1. Notice. Notice must be provided as soon as possible to the individual – preferably at the time the individual is asked to provide personal information. Notice must be given in “clear and conspicuous language.” The company must tell the individual that it participates in Privacy Shield, and must link to the Privacy Shield list that will be published on the Web by Commerce. The company must tell individuals what types of personal information are being collected, for what purposes, and with whom it may be shared. Individuals must be told how to make complaints to the company and their options for resolving disputes (which the company must select from a menu of limited alternatives, as discussed further below). The company must inform the individual of the company’s obligation to disclose personal information in response to lawful requests by public authorities, including for national security or law enforcement. A new requirement calls for the company to describe its liability with regard to transfers of the personal information to third parties (also discussed further below).

2. Choice. Choice comes into play primarily when the data controller wants to disclose personal information to a third party (other than agents under a contract) or use it for a purpose that is materially different than the purpose for which it was collected (which would have been communicated to the individual under the Notice principle). In many instances, consent can be obtained on an opt-out basis, provided that the new use or transfer has been disclosed clearly and conspicuously, and the individual is given a “readily available” means to exercise her choice. Critically, however, the transfer and processing of “sensitive” information requires the affirmative express consent of the individual, subject to a short list of exceptions described in the Supplemental Principles. An opt-out is not sufficient for sensitive information, which includes medical/health, race/ethnicity, political opinions, religious or philosophical beliefs, trade union membership, and information about sexuality. (As before, financial information is not considered sensitive, but companies should recall that risk-based security measures still need to be taken even if opt-out consent is used.)

3. Accountability for Onward Transfer. This Principle contains  some key differences from Safe Harbor and should be carefully reviewed by companies looking at Privacy Shield. Privacy Shield has tightened up the requirements for transferring personal information to a third party who acts as a data controller. It is not possible simply to rely on the transferee being Privacy Shield-certified. The transferor company must enter into a contract with the transferee company that specifies that the information will only be processed for “limited and specified purposes consistent with the consent provided by the individual” and that the transferee will comply with the Principles across the board. If the transferee is acting as the transferor’s agent (i.e., as a “data processor” in EU terminology) then the transferor must also take “reasonable and appropriate steps” to ensure that the transferee is processing the personal information consistently with the Principles. In all cases, the transferee must agree to notify the transferor if the transferee can no longer meet its privacy obligations. Commerce can request a summary or copy of the privacy provisions of a company’s contracts with its agents.

4. Security. The standard for data security is “reasonable and appropriate measures” to protect personal data from being compromised, taking into account the nature of the personal information that is being stored. It’s strongly implied that companies need to perform a risk assessment in order to determine precisely what measures would be reasonable and appropriate. The risk assessment and security measures should be documented in the event of an investigation or audit, and for purposes of the required annual internal review.

5. Data Integrity and Purpose Limitation. Indiscriminate collection of personal information is not permitted under Privacy Shield. Instead, personal information should be gathered for particular purposes, and only information that is relevant to those purposes can be collected. It’s not always possible to anticipate every purpose for which certain personal information might be used, so Privacy Shield allows use for additional purposes that are “not incompatible with the purpose for which it has been collected or subsequently authorized by the individual.” The benchmark for compatible processing is “the expectations of a reasonable person given the context of the collection.” Generally speaking, processing personal information for common business risk-mitigation reasons, such as anti-fraud and security purposes, will be compatible with the original purpose. Personal information cannot be retained for longer than it is needed to perform the processing that is permitted under this Principle. Additionally, companies have an affirmative obligation to take “reasonable steps” to ensure that the personal information they collect and store is “reliable for its intended use, accurate, complete, and current.” These requirements imply that periodic data cleaning may be necessary for uses that extend over a significant period of time.

6. Access. Individuals have the right to know what personal information a company holds concerning them, and to have the information corrected if it is inaccurate, or deleted if it has been processed in violation of the Privacy Shield Principles. There are a couple of exceptions: If the expense of providing access is disproportionate to the risks to the individual’s privacy, or if another person’s rights would be violated by giving access, then a company can decline. Companies should use this option sparingly and document their reasons for refusing any access requests.

7. Recourse, Enforcement & Liability. One of the EU Commission’s main objectives in negotiating Privacy Shield was to ensure that the program had sharper teeth than Safe Harbor. Privacy Shield features more proactive enforcement by Commerce and the FTC, and aggrieved individuals who feel their complaints haven’t been satisfactorily resolved can bring the weight of their local DPA and Commerce to bear on the offending company. We describe the recourse, enforcement and liability requirements below in a separate section.

III. Privacy Shield Supplemental Principles

The Supplemental Principles in Annex II elaborate on some of the basic Principles (summarized above) and, in some cases, qualify companies’ obligations. The summary below highlights some significant points – but again, companies should read the Supplemental Principles in full to appreciate some of the nuances of the Privacy Shield requirements.

1. Sensitive Personal Data. This section sets out some exceptions to the affirmative opt-in consent requirement that mirror the exceptions in the EU Data Protection Directive.

2. Journalistic Exceptions. Privacy Shield acknowledges the significance of the First Amendment in US law. Personal information that is gathered for journalistic purposes, including from published media sources, is not subject to Privacy Shield’s requirements.

3. Secondary Liability (of ISPs, etc.). Companies acting as mere conduits of personal information, such as ISPs and telecoms providers, are not required to comply with Privacy Shield with regard to the data that travels over their networks.

4. Due Diligence and Audits. Companies performing due diligence and audits are not required to notify individuals whose personal information is processed incidental to the diligence exercise or audit. Security requirements and purpose limitations would still apply.

5. Role of the Data Protection Authorities. The Supplemental Principles describe the role of the DPA panels and the DPAs generally in greater detail. As discussed above, companies processing their own human resources information will be required to cooperate directly with the DPAs, and the Supplemental Principles seem to imply that cooperation includes designating the DPA Panels as those companies’ independent recourse mechanism. In addition to the fees attendant on this choice (capped at $500/year), companies will have to pay translation costs relating to any complaints against them.

6. Self-certification. This section outlines what the self-certification process should look like when the Privacy Shield enrollment website launches. It also contains information about what will happen when a Privacy Shield participant decides to leave the program.

7. Verification. Privacy Shield-certified companies must back up their claims with documentation. We discuss this further in the section below on enforcement.

8. Access. This section describes access requirements in more detail and also gives some guidance as to when access requests can be refused.

9. Human Resources Data. Companies planning to use Privacy Shield for the transfer of EU human resources data will want to review this section carefully. Privacy Shield does not replace or relieve companies from EU employment law obligations. Looking beyond the overseas transfer element, it’s critical to ensure that employee personal information has been collected and is processed in full compliance with applicable EU laws concerning employees.

10. Contracts for Onward Transfers.  US companies are sometimes unaware that all EU data controllers are required to have data processing contracts in place with any data processor, regardless of the processor’s location. Participation in Privacy Shield, by itself, is not enough. If a Privacy Shield-certified data controller wants to transfer the EU-origin personal information to another data controller, it can do so under a contract that requires the transferee to provide the same level of protection as Privacy Shield, except that the transferee can designate an independent recourse mechanism that is not one of the Privacy Shield-specific mechanisms. Companies will need to review their existing and new contracts carefully.

11. Dispute Resolution and Enforcement. We discuss this separately below.

12. Choice – Timing of Opt Out (Direct Marketing). This section focuses on opt-out consent for direct marketing. Companies should provide opt-out choices on all direct marketing communications. The guidance states that “an organization may use information for certain direct marketing purposes when it is impracticable to provide the individual with an opportunity to opt out before using the information, if the organization promptly gives the individual such opportunity at the same time (and upon request at any time) to decline (at no cost to the individual) to receive any further direct marketing communications and the organization complies with the individual’s wishes.” However, companies should keep in mind that the European standard for impracticability here may be tougher than we would expect in the US. In particular, US companies should consider EU requirements for direct marketing via e-mail or text, which typically requires advance consent unless the marketing is to an existing customer and is for goods or services that are similar to the ones previously purchased by the customer.

13. Travel Information. Common sense prevails with regard to travel data – when travel arrangements are being made for an EU employee or customer, the data transfer can take place outside of the Privacy Shield requirements if the customer has given “unambiguous consent” or if the transfer is necessary to fulfill contractual obligations to the customer (including the terms of frequent flyer programs).

14. Pharmaceutical and Medical Products. Pharma companies will want to review the fairly lengthy discussion of how Privacy Shield applies to clinical studies, regulatory compliance, adverse event monitoring and reporting, and other issues specific to the pharma industry. Privacy Shield is broadly helpful – and in some respects clearer than the pending GDPR.

15. Public Record and Publicly Available Information. Some, but not all, of the Principles apply to information obtained from public records or other public sources, subject to various caveats that make this section important to read in full.

16. Access Requests by Public Authorities. Privacy Shield companies have the option of publishing statistics concerning requests by US public authorities for access to EU personal information. However, publishing such statistics is not mandatory.

III. Recourse, Enforcement and Liability

A significant change in Privacy Shield from Safe Harbor is the addition of specific mechanisms for recourse and dispute resolution. One of the major perceived failings of Safe Harbor was that EEA citizens had no reasonable means to obtain relief or even to lodge a complaint. In order to satisfactorily self-certify, US companies will need to put processes in place to handle complaints.

Under Privacy Shield, at a minimum, such recourse mechanisms must include:

1. Independent Investigation and Resolution of Complaints: Readily available independent recourse mechanisms by which each individual’s complaints and disputes are investigated and expeditiously resolved at no cost to the individual … and damages awarded where the applicable law or private-sector initiatives provide;

2. Verification that You Do What You Say: Follow-up procedures for verifying that the attestations and assertions organizations make about their privacy practices are true and that privacy practices have been implemented as presented, and in particular, with regard to cases of non-compliance; and

3. You Must Fix the Problems: Obligations to remedy problems arising out of failure to comply with the Principles by organizations announcing their adherence to them and consequences for such organizations. Sanctions must be sufficiently rigorous to ensure compliance by organizations.

A prompt response to complaints is required. If a company uses an EU Data Protection Authority as its third-party recourse mechanism and fails to comply with the DPA’s advice within 25 days, the DPA may refer the matter to the FTC, which has agreed to give priority consideration to all referrals of non-compliance from EU DPAs.

The verification requirement is more robust than under Safe Harbor. Companies may choose either to self-assess or to engage an outside compliance review. Self-assessment includes certifying that the company’s policies comply with the Principles and that it has procedures in place for training, disciplining misconduct, and responding to complaints. Whether performed internally or by an outside reviewer, the verification must be conducted once a year.

Privacy Shield-certifying organizations have responsibility for onward transfers and, with some exceptions, retain liability under the Principles if a third-party processor violates the Principles. Third-party vendor management and contractual requirements for compliance with the Principles will be important components of managing this risk.

Dispute Resolution

There is ample ground for operational confusion under Privacy Shield, but nowhere more so than with respect to dispute resolution. There are multiple methods available to data subjects (individuals) for lodging complaints, and companies subscribing to Privacy Shield must be prepared to respond through any of them. When companies certify under Privacy Shield, they must choose an independent enforcement and dispute resolution mechanism. The choices are either:

  • Data Protection Authority Panels
  • Independent Recourse Mechanism

a. Individuals. Individual data subjects may raise any concerns or complaints with the company itself, which is obligated to respond within 45 days. Individuals also have the option of working through their local DPA, which may in turn contact the company and/or the Department of Commerce to resolve the dispute.

b. Independent Recourse. As discussed above, Privacy Shield requires that entities provide an independent recourse mechanism, either a private-sector alternative dispute resolution provider (such as the American Arbitration Association, BBB, or TRUSTe) or a panel of European DPAs. NOTE THAT THE DPA PANEL IS MANDATORY IF YOU ARE APPLYING TO PRIVACY SHIELD TO PROCESS/TRANSFER HR DATA. For disputes involving HR data that are not resolved internally by the company (or through any applicable trade union grievance procedures) to the satisfaction of the employee, the company must direct the employee to the DPA in the jurisdiction where the employee works.

c. Binding Arbitration. A Privacy Shield Panel will be composed of one or three independent arbitrators admitted to practice law in the US, with expertise in US and EU privacy law. Appeal to the Panel is open to individuals who have raised complaints with the organization, used the independent recourse mechanism, and/or sought relief through their DPA, but whose complaint remains fully or partially unresolved. The Panel can impose only equitable relief, such as access or correction. Arbitrations should be concluded within 90 days. Further, both parties may seek judicial review of the arbitral decision under the US Federal Arbitration Act.

Enforcement

In addition to the above discussion on the multiple avenues available to data subjects for complaints, there are other expanded types of enforcement under Privacy Shield. A certifying organization’s compliance may be directly or indirectly monitored by the US Department of Commerce, the FTC (or Department of Transportation), EU DPAs, and private sector independent recourse mechanisms or other privacy self-regulatory bodies.

Privacy Shield brings an expanded role to the Department of Commerce for monitoring and supervising compliance. If you have been following Safe Harbor, you will recall that one of the EU grounds for disapproval was the apparent lack of actual enforcement by US regulatory authorities against self-certifying organizations. The Department of Commerce has committed to a larger role and has greatly increased the size of the program staff.

Some of the new responsibilities of the Department of Commerce under Privacy Shield include:

  • Serving as a liaison between organizations and DPAs for Privacy Shield compliance issues;
  • Conducting searches for false claims by organizations that have never participated in the program and taking the aforementioned corrective action when such false claims are found;
  • Conducting ex officio investigations of those who withdraw from the program or fail to recertify to verify that such organizations are not making any false claims regarding their participation. In the event that it finds any false claims, it will first issue a warning and then, if the matter is not resolved, refer the matter to the appropriate regulator for enforcement action;
  • Conducting periodic ex officio compliance reviews, which will include sending questionnaires to participating organizations to identify issues that may warrant further follow-up action. In particular, such reviews will take place when the Department has received complaints about the organization’s compliance, the organization does not respond satisfactorily to its inquiries and information requests, or there is “credible” evidence that the organization does not comply with its commitments. Organizations will be required to provide a copy of the privacy provisions in their service provider contracts upon request. The Department of Commerce will consult with the appropriate DPAs when necessary; and
  • Verifying self-certification requirements by evaluating, among other things, the organization’s privacy policy for the required elements and verifying the organization’s registration with a dispute resolution provider.

Private sector independent recourse mechanisms will have a duty to actively report organizations’ failures to comply with their rulings to the Department of Commerce. Upon receipt of such notification, the Department will remove the organization from the Privacy Shield List.

The above overview illustrates the complexity of Privacy Shield vs. Safe Harbor and the multiplication of authorities in charge of oversight, all of which is likely to result in greater regulatory scrutiny of and compliance costs for participating organizations. By way of contrast, when an organization relies on alternative transfer mechanisms such as the Standard Clauses, the regulatory oversight is performed by EU regulators against the EU company (as data exporter). Therefore, before settling on a transfer mechanism, organizations will want to consider the regulatory involvement and compliance costs associated with each option.

IV. Choosing Your Next Steps

Privacy Shield may not appeal to all US companies. Privacy Shield allows for a degree of flexibility in handling new data flows. However, that flexibility comes at the cost of fees, rigorous internal reviews, and arguably much more onerous audits and enforcement than the two main alternatives: Binding Corporate Rules for intra-group transfers, and the Standard Clauses for controller-to-controller or controller-to-processor transfers (regardless of corporate affiliation). Data transfers within corporate groups may be better addressed by Binding Corporate Rules that speak specifically to the group’s global privacy practices, or even by the Standard Clauses, particularly for smaller corporations with only a few affiliates. Even outside corporate groups, the Standard Clauses may be adequate if the data flows are straightforward and unlikely to change much over time. An important point to note is that, in comparison to Safe Harbor, Privacy Shield requires more detailed company-to-company contracts when personal information is to be transferred; it is no longer enough that both companies participate in the program. US companies should weigh the potential operational benefits of Privacy Shield against its increased burdens.

It is important to consider timing. The Commerce Department Privacy Shield website will be “open for business” as of August 1. Lest you despair at the prospect of analyzing and updating all the contracts that implicate the Accountability for Onward Transfer Principle before certifying to Privacy Shield, Annex II provides a bit of a “grace period” for what have been called early joiners.

The Privacy Principles apply immediately upon certification. Recognizing that the Principles will impact commercial relationships with third parties, organizations that certify to the Privacy Shield Framework in the first two months following the Framework’s effective date shall bring existing commercial relationships with third parties into conformity with the Accountability for Onward Transfer Principle as soon as possible, and in any event no later than nine months from the date upon which they certify to the Privacy Shield. During that interim period, where organizations transfer data to a third party, they shall (i) apply the Notice and Choice Principles, and (ii) where personal data is transferred to a third party acting as an agent, ascertain that the agent is obligated to provide at least the same level of protection as is required by the Principles.

If your company determines that Privacy Shield is the right choice, and you are diligent about the ground work required to accurately certify before that two-month window closes, you will be able to take advantage of the nine-month grace period to get those third party relationships into line.

Finally, US companies should stay alert to the legal challenges that the Standard Clauses are currently facing (again driven by concerns about mass surveillance), the possibility that EU regulators may start exacting further commitments when approving BCRs, and the very high likelihood that new legal challenges will be mounted against Privacy Shield shortly after it is implemented. Even if a company adopts Privacy Shield, or instead elects to stick with the Standard Clauses, it may want to get ready to switch if one or the other is struck down by the Court of Justice of the EU. Of course, if the Court of Justice strikes down both Privacy Shield and the Standard Clauses, it will be back to the drawing board for EU and US government negotiators.