Federal Regulators Issue New Cyber Incident Reporting Rule for Banks

On November 18, 2021, the Federal Reserve, Federal Deposit Insurance Corporation, and Office of the Comptroller of the Currency issued a new rule regarding cyber incident reporting obligations for U.S. banks and service providers.

The final rule requires a banking organization to notify its primary federal regulator “as soon as possible and no later than 36 hours after the banking organization determines that a notification incident has occurred.” The rule defines a “notification incident” as a “computer-security incident that has materially disrupted or degraded, or is reasonably likely to materially disrupt or degrade, a banking organization’s—

  1. Ability to carry out banking operations, activities, or processes, or deliver banking products and services to a material portion of its customer base, in the ordinary course of business;
  2. Business line(s), including associated operations, services, functions, and support, that upon failure would result in a material loss of revenue, profit, or franchise value; or
  3. Operations, including associated services, functions and support, as applicable, the failure or discontinuance of which would pose a threat to the financial stability of the United States.”

Under the rule, a “computer-security incident” is “an occurrence that results in actual harm to the confidentiality, integrity, or availability of an information system or the information that the system processes, stores, or transmits.”

Separately, the rule requires a bank service provider to notify each affected banking organization “as soon as possible when the bank service provider determines it has experienced a computer-security incident that has materially disrupted or degraded or is reasonably likely to materially disrupt or degrade, covered services provided to such banking organization for four or more hours.” For purposes of the rule, a bank service provider is one that performs “covered services” (i.e., services subject to the Bank Service Company Act (12 U.S.C. 1861–1867)).
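The rule's two timing triggers, 36 hours for banks and a four-hour disruption floor for service providers, can be made concrete with a small sketch. This is purely illustrative and not legal guidance; the function names and the simplified trigger logic are my own, not anything the rule prescribes.

```python
from datetime import datetime, timedelta

# Illustrative constants drawn from the final rule's two timing triggers.
BANK_NOTIFY_WINDOW = timedelta(hours=36)       # banks: "no later than 36 hours"
SERVICE_DISRUPTION_FLOOR = timedelta(hours=4)  # service providers: 4+ hour disruption

def bank_notification_deadline(determination_time: datetime) -> datetime:
    """Latest time a bank may notify its primary federal regulator, measured
    from when it *determines* a notification incident has occurred (not from
    when the incident itself began)."""
    return determination_time + BANK_NOTIFY_WINDOW

def provider_must_notify(disruption: timedelta) -> bool:
    """A bank service provider's duty is triggered by a material disruption
    to covered services lasting (or reasonably likely to last) four or more hours."""
    return disruption >= SERVICE_DISRUPTION_FLOOR

determined = datetime(2022, 5, 2, 9, 0)
print(bank_notification_deadline(determined))     # 2022-05-03 21:00:00
print(provider_must_notify(timedelta(hours=5)))   # True
```

Note that the bank's clock runs from its determination, a change from the proposed rule's "good faith belief" standard discussed below.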

In response to comments received on the agencies’ December 2020 proposed rule, the final rule reflects changes to key definitions and notification provisions applicable to both banks and bank service providers. These changes include, among others, narrowing the definition of a “computer-security incident,” replacing the “good faith belief” notification standard for banks with a determination standard, and adding a definition of “covered services” to the bank service provider provisions. With these revisions, the agencies intend to resolve some of the ambiguities in the proposed rule and address commenters’ concerns that the rule would create an undue regulatory burden.

The final rule becomes effective April 1, 2022, and compliance is required by May 1, 2022. The regulators hope this new rule will “help promote early awareness of emerging threats to banking organizations and the broader financial system,” as well as “help the agencies react to these threats before they become systemic.”

Copyright © 2021, Hunton Andrews Kurth LLP. All Rights Reserved.

For more articles on banking regulations, visit the NLR Financial Securities & Banking section.

Privilege Dwindles for Data Breach Reports

Data privacy lawyers and cybersecurity incident response professionals are losing sleep over the growing number of federal courts ordering disclosure of post-data breach forensic reports. Following the decisions in Capital One and Clark Hill, another district court has recently ordered the defendant in a data breach litigation to turn over the forensic report it believed was protected under the attorney-client privilege and work product doctrines. These three decisions underscore that maintaining privilege over forensic reports may come down to the thinnest of margins, something organizations should keep in mind given the ever-increasing risk of litigation that can follow a cybersecurity incident.

In May 2019, convenience store and gas station chain Rutter’s received two alerts signaling a possible breach of its internal systems. The same day, Rutter’s hired outside counsel to advise on potential breach notification obligations. Outside counsel immediately hired a forensic investigator to analyze the character and scope of the incident. Once litigation ensued, Rutter’s withheld the forensic report from production on the basis of the attorney-client privilege and work product doctrines. Rutter’s argued that both it and its outside counsel understood the report to be privileged because it was made in anticipation of litigation. The Court rejected this notion.

With respect to the work product doctrine, the Court stated that the doctrine applies only where identifiable or impending litigation is the “primary motivating purpose” of creating the document. The Court found that the forensic report in this case was not prepared for the prospect of litigation. The Court relied on the forensic investigator’s statement of work, which stated that the purpose of the investigation was to “determine whether unauthorized activity . . . resulted in the compromise of sensitive data.” Because Rutter’s did not know whether a breach had even occurred when the forensic investigator was engaged, the Court decided, it could not have unilaterally believed that litigation would result.

The Court was also unpersuaded by the attorney-client privilege argument. Because the forensic report only discussed facts and did not involve “opinions and tactics,” the Court held that the report and related communications were not protected by the attorney-client privilege. The Court emphasized that the attorney-client privilege does not protect communications of fact, nor communications merely because a legal issue can be identified.

The Rutter’s decision comes on the heels of the Capital One and Clark Hill rulings, which both held that the defendants failed to show that the forensic reports were prepared solely in anticipation of litigation. In Capital One, the company hired outside counsel to manage the cybersecurity vendor’s investigation after the breach; however, the company already had a longstanding relationship and pre-existing agreement with the vendor. The Court found that the vendor’s services and the terms of its new agreement were essentially the same both before and after outside counsel’s involvement. The Court also relied on the fact that the forensic report was eventually shared with Capital One’s internal response team, demonstrating that the report was created for various business purposes.

In response to the data breach in the Clark Hill case, the company hired a vendor to investigate and remediate the systems after the attack. The company also hired outside counsel, who in turn hired a second cybersecurity vendor to assist with litigation stemming from the attack. During the litigation, the company refused to turn over the forensic report prepared by the outside counsel’s vendor. The Court rejected this “two-track” approach, finding that the outside counsel’s vendor’s report had not been prepared exclusively for use in preparation for litigation. As in Capital One, the Court found, among other things, that the forensic report was shared not only with inside and outside counsel, but also with employees inside the company, the IT department, and the FBI.

As these cases demonstrate, the legal landscape around responding to security incidents has become filled with traps for the unwary.  A coordinated response led by outside counsel is key to mitigating a data breach and ensuring the lines are not blurred between “ordinary course of business” factual reports and incident reports that are prepared for litigation purposes.

© 2021 Bracewell LLP

For more articles on cybersecurity, visit the NLR Communications, Media, Internet, and Privacy Law News section.

Patentability of COVID-19 Software Inventions: Artificial Intelligence (AI), Data Storage & Blockchain

The Coronavirus pandemic revved up previously scarce funding for scientific research. Part one of this series addressed the patentability of COVID-19-related Biotech, Pharma & Personal Protective Equipment (PPE) inventions and whether inventions related to fighting COVID-19 should be patentable. Both economists and lawmakers are critical of the exclusivity period granted by patents, especially in the case of vaccines and drugs. Recently, several members of Congress requested “no exclusivity” for any “COVID-19 vaccine, drug, or other therapeutic.”[i]

This segment addresses the unique intellectual property issues raised by Coronavirus-related software inventions, specifically Artificial Intelligence (AI), data storage and blockchain.

Digital Innovations

Historically, Americans have adhered to personalized healthcare and lacked the incentive to set up a digital infrastructure similar to Taiwan’s, which has fared far better in combating the spread of a fast-moving virus.[ii]  But as hospitals continue to operate at maximum capacity and with prolonged social distancing, the software sector is teeming with digital solutions for increasing the virtual supply of healthcare to a wider network of patients,[iii] particularly as HHS scales back HIPAA regulations.[iv]  COVID-19 has also spurred other types of digital innovation, such as using AI to predict the next outbreak and electronic hospital bed management.[v]

One area of particular interest is the use of blockchain and data storage in a COVID/post-COVID world.  Blockchains can serve as secure ledgers for the global supply of medical equipment, including respirators, ventilators, dialysis machines, and oxygen masks.[vi]  The Department of Homeland Security has also deemed blockchain managers in food and agricultural distribution as “critical infrastructure workers”.[vii]

Patentability

Many of these digital inventions will face an uphill battle on patentability, especially those related to data storage such as blockchains. In 2014, in Alice v. CLS Bank, the Supreme Court held that claims directed to an “abstract idea” are ineligible for patent protection, and that merely implementing such an idea on a generic computer does not make it patentable.[viii]  Because computer-implemented programs execute steps that could theoretically be performed by a human being and are merely automated by a machine, the Court reasoned that patenting such software would amount to patenting human activity, a form of protection the Court has long considered too broad and dangerous.

Confusion

The aftermath of Alice was widespread confusion among members of the patent bar as well as the USPTO as to how computer-related software patents should be treated.[ix]  The USPTO attempted to clear up some of this confusion with a series of Guidelines in 2019.[x]  Although well received by the IP community, the USPTO’s Guidelines are not binding outside the agency, meaning they have little dispositive effect when parties must bring their cases to the Federal Circuit and other courts.[xi]  Indeed, the Federal Circuit has made clear that it is not bound by the USPTO’s guidance.[xii]  Nor will the Supreme Court provide further clarification: it denied cert on all pending patent eligibility petitions in January of this year.[xiii]

The Future

Before the coronavirus outbreak, Congress was working on patent reform.[xiv]  But the long-awaited legislation was set aside as legislators focused on measures needed to address the pandemic. On top of that, Senators Tillis and Coons, who have spearheaded the effort for patent reform, are now facing reelection battles, making future congressional leadership on patent reform uncertain.

Conclusion

Patents receive a lot of flak for being company assets, and like many assets, patents are subject to abuse.[xv]  But patents are necessary for innovation, particularly for small and medium-sized companies, because they carve out a safe haven in the marketplace from the encroachment of larger companies.[xvi]  American leadership in medical innovation had been declining for some time prior to the pandemic[xvii] due to the cumbersome US regulatory and legal environments, particularly for tech start-ups seeking private funding.[xviii]

Not all data storage systems should receive a patent, and no vaccine should receive a patent so broad that it snuffs out public access to alternatives. The USPTO considers novelty, obviousness and breadth when dispensing patent exclusivity, and it revisits the issue of patent validity downstream through inter partes review. There are measures in place for ensuring good patents, so let that system take its course. A sweeping prohibition on patents is not the right answer.

The opinions stated herein are the sole opinions of the author and do not reflect the views or opinions of the National Law Review or any of its affiliates.


[i] Congressional Progressive Leaders Announce Principles On COVID-19 Drug Pricing for Next Coronavirus Response Package, (2020), https://schakowsky.house.gov/media/press-releases/congressional-progressive-leaders-announce-principles-COVID-19-drug-pricing (last visited May 10, 2020).

[ii] Christina Farr, Why telemedicine has been such a bust so far, CNBC (June 30, 2018), https://www.cnbc.com/2018/06/29/why-telemedicine-is-a-bust.html and Nick Aspinwall, Taiwan Is Exporting Its Coronavirus Successes to the World, Foreign Policy (April 9, 2020), https://foreignpolicy.com/2020/04/09/taiwan-is-exporting-its-coronavirus-successes-to-the-world/.

[iii] Joe Harpaz, 5 Reasons Why Telehealth Is Here To Stay (COVID-19 And Beyond), Forbes (May 4, 2020), https://www.forbes.com/sites/joeharpaz/2020/05/04/5-reasons-why-telehealth-here-to-stay-COVID19/#7c4d941753fb.

[iv] Jessica Davis, OCR Lifts HIPAA Penalties for Telehealth Use During COVID-19, Health IT Security (March 18, 2020), https://healthitsecurity.com/news/ocr-lifts-hipaa-penalties-for-telehealth-use-during-COVID-19.

[v] Charles Alessi, The effect of the COVID-19 epidemic on health and care – is this a portent of the ‘new normal’?, HealthcareITNews (March 31, 2020), https://www.healthcareitnews.com/blog/europe/effect-COVID-19-epidemic-health-and-care-portent-new-normal and COVID-19 and AI: Tracking a Virus, Finding a Treatment, Wall Street Journal (April 17, 2020), https://www.wsj.com/podcasts/wsj-the-future-of-everything/COVID-19-and-ai-tracking-a-virus-finding-a-treatment/f064ac83-c202-40f9-8259-426780b36f2c.

[vi] Sara Castellenos, A Cryptocurrency Technology Finds New Use Tackling Coronavirus, Wall Street Journal (April 23, 2020), https://www.wsj.com/articles/a-cryptocurrency-technology-finds-new-use-tackling-coronavirus-11587675966?mod=article_inline.

[vii] Christopher C. Krebs, MEMORANDUM ON IDENTIFICATION OF ESSENTIAL CRITICAL INFRASTRUCTURE WORKERS DURING COVID-19 RESPONSE, Cybersecurity and Infrastructure Security Agency (March 19, 2020), available at https://www.cisa.gov/sites/default/files/publications/CISA-Guidance-on-Essential-Critical-Infrastructure-Workers-1-20-508c.pdf.

[viii] Alice v. CLS Bank, 573 U.S. 208 (2014), available at https://www.supremecourt.gov/opinions/13pdf/13-298_7lh8.pdf.

[ix] David O. Taylor, Confusing Patent Eligibility, 84 Tenn. L. Rev. 157 (2016), available at https://scholar.smu.edu/cgi/viewcontent.cgi?article=1221&context=law_faculty.

[x] 2019 Revised Patent Subject Matter Eligibility Guidance, United States Patent Office (January 7, 2019), available at https://www.federalregister.gov/documents/2019/01/07/2018-28282/2019-revised-patent-subject-matter-eligibility-guidance.

[xi] Steve Brachmann, Latest CAFC Ruling in Cleveland Clinic Case Confirms That USPTO’s 101 Guidance Holds Little Weight, IPWatchDog (April 7, 2019), https://www.ipwatchdog.com/2019/04/07/latest-cafc-ruling-cleveland-clinic-confirms-uspto-101-guidance-holds-little-weight/id=107998/.

[xii] Id.

[xiii] U.S. Supreme Court Denies Pending Patent Eligibility Petitions, Holland and Knight LLP (January 14, 2020), https://www.jdsupra.com/legalnews/u-s-supreme-court-denies-pending-patent-55501/.

[xiv] Tillis and Coons: What We Learned At Patent Reform Hearings, (June 24, 2019), available at https://www.tillis.senate.gov/2019/6/tillis-and-coons-what-we-learned-at-patent-reform-hearings.

[xv] Gene Quinn, Twisting Facts to Capitalize on COVID-19 Tragedy: Fortress v. bioMerieux, IPWatchDog (March 18, 2020), https://www.ipwatchdog.com/2020/03/18/twisting-facts-capitalize-COVID-19-tragedy-fortress-v-biomerieux/id=119941/.

[xvi] Paul R. Michel, To prepare for the next pandemic, Congress should restore patent protections for diagnostic tests, Roll Call (April 28, 2020), https://www.rollcall.com/2020/04/28/to-prepare-for-the-next-pandemic-congress-should-restore-patent-protections-for-diagnostic-tests/.

[xvii] Medical Technology Innovation Scorecard_The race for global leadership, PwC (January 2011), https://www.pwc.com/il/en/pharmaceuticals/assets/innovation-scorecard.pdf.

[xviii] Elizabeth Snell, How Health Privacy Regulations Hinder Telehealth Adoption, HealthITSecurity (May 5, 2015),https://healthitsecurity.com/news/how-health-privacy-regulations-hinder-telehealth-adoption.


Copyright (C) GLOBAL IP Counselors, LLP

For more on patentability, see the National Law Review Intellectual Property law section.

Venmo’ Money: Another Front Opens in the Data Wars

When I see stories about continuing data spats between banks, fintechs and other players in the payments ecosystem, I tend to muse about how the more things change, the more they stay the same. And so it is with this story about a bank, PNC, shutting off the flow of customer financial data to a fintech, in this case the Millennials’ best friend, Venmo. And JP Morgan Chase recently made an announcement dealing with similar issues.

Venmo has to use PNC customers’ data in order to allow (for example) Squi to pay P.J. for his share of the brews.  Venmo needs that financial data in order for its system to work.  But Venmo isn’t the only one with a mobile payments solution; the banks have their own competing platform, Zelle.  If you bank with one of the major banks, chances are good that Zelle is already baked into your mobile banking app.  And unlike Venmo, Zelle doesn’t need anyone’s permission but its customers’ to use those data.

You can probably guess the rest.  PNC recently invoked security concerns to largely shut off the data faucet and “poof”, Venmo promptly went dark for PNC customers.  To its aggrieved erstwhile Venmo-loving customers, PNC offered a solution: Zelle.  PNC subtly hinted that its security enhancements were too much for Venmo to handle, the subtext being that PNC customers might be safer using Zelle.

Access to customer data has until now been a formidable barrier to entry for fintechs and others whose efforts to make the customer payment experience “frictionless” have depended in large measure on others being willing to do the heavy lifting for them.  The author of the Venmo article suggests that pressure from customers may force banks to yield any strategic advantage that control of customer data may give them.  So far, however, consumer adoption of mobile payments is still minuscule in the grand scheme of things, so that pressure may not be felt for a very long time, if ever.

In the European Union, regulators have implemented PSD2, which forces a more open playing field for banking customers. But realistically, it can’t be surprising that the major financial institutions don’t want to open up their customer bases to competitors and get nothing in return – except a potential stampede of customers moving their money. And some of these fintech apps haven’t jumped through the numerous hoops required to be a bank holding company or federally insured – meaning unwitting consumers may have less fraud protection when they move their precious money to a cool-looking fintech app.

A recent study by the Pew Trusts makes it clear that consumers are still not fully embracing mobile payments, for any number of reasons.  The prime reason is that current mobile payment options rely on the same payments ecosystem as credit and debit cards, yet mobile payments don’t offer as much consumer protection. As long as that is the case, banks, fintechs and merchants will continue to fight over data, and the regulators are likely to weigh in at some point.

It is not unlike the early days of mobile phones, when one couldn’t change providers without getting a new phone number – that handcuff kept customers with a provider for years but has since gone by the wayside. It is likely we will see some similar solution for banking details.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on fintech & banking data, see the National Law Review Financial Institutions & Banking law page.

Facing Facts: Do We Sacrifice Security Out of Fear?

Since long before recorded history, humans have displayed physical characteristics as identification tools. Animals do the same to distinguish each other. Crows use facial recognition on humans.  Even plants can tell their siblings from unrelated plants of the same species.

We present our physical forms to the world, and different traits identify us to anyone who is paying attention. So why, now that identity theft is rampant and security is challenged, do we place limits on the easiest and best ID system available? Are we sacrificing future security due to fear of an unlikely dystopia?

In one of the latest cases arising under Illinois’ private right of action under the state’s Biometric Information Privacy Act (BIPA), Rogers v. BNSF Railway Company[1], the court ruled that a railroad hauling hazardous chemicals through major urban areas needed to change, and probably diminish, its security procedures for deciding whom it allows into restricted space. Why? Because the railroad used biometric security to identify authorized entrants, because BIPA forces the railroad to obtain the consent of each person authorized to enter restricted space, and because BIPA is not preempted by federal rail security regulations.

The court’s decision, based on the fact that federal rail security rules do not specifically regulate biometrics, is a reasonable reading of the law. However, because BIPA provides no exception for biometric security, it will impede the adoption and effectiveness of biometric-based security systems and force some businesses to settle for weaker security. This case illustrates how BIPA reduces security in our most vulnerable and dangerous places.

I can understand some of the reasons Illinois, Texas, Washington and others want to restrict the unchecked use of biometrics. Gathering physical traits – even public traits like faces and voices – into large searchable databases can lead to overreaching by businesses. The company holding the biometric database may run tests and make decisions based on physical properties.  If your voice shows signs of strain, maybe the price of your insurance should rise to cover risk that stress puts on your body. But this kind of concern can be addressed by regulating what can be done with biometric readings.

There are also some concerns that may not have the foundation they once had. Two decades ago, many biometric systems stored biometric data as direct copies, so that someone who stole the file would have your fingerprint, voiceprint or iris scan.  Now, nearly all of the better biometric systems store readings as mathematical templates derived from the sample, which can’t be read or reconstructed by systems outside the one that took the sample. So some of the safety concerns are no longer valid.

I propose more nuanced thinking about biometric readings. While requiring data subject consent is harmless in many situations, the consent regime is a problem for security systems that use biometric indicators of identity. And these systems are generally the best for securing important spaces.  Despite what you see in the movies, 2019 biometric security systems can be nearly impossible to trick into false positive results. If we want to improve security for critical infrastructure, we should be encouraging biometrics, not throwing hurdles in the path of those choosing to use them.

Illinois should, at the very least, provide an exception to BIPA for physical security systems, even if that exception is limited to critical facilities like nuclear, rail and hazardous shipping restricted spaces. The state can include limits on how the biometric samples are used by the companies taking them, so that only security needs are served.

The field of biometrics may scare some people, but it is a natural outgrowth of how humans have always told each other apart.  If we limit its use for critical security, we are likely to suffer from the decision.

[1] 2019 WL 5699910 (N.D. Ill.).


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more on biometric identifier privacy, see the National Law Review Communications, Media & Internet law page.

The CCPA Is Approaching: What Businesses Need to Know about the Consumer Privacy Law

The most comprehensive data privacy law in the United States, the California Consumer Privacy Act (CCPA), will take effect on January 1, 2020. The CCPA is an expansive step in U.S. data privacy law, as it enumerates new consumer rights regarding collection and use of personal information, along with corresponding duties for businesses that trade in such information.

While the CCPA is a state law, its scope is sufficiently broad that it will apply to many businesses that may not currently consider themselves to be under the purview of California law. In addition, in the wake of the CCPA, at least a dozen other states have introduced their own comprehensive data privacy legislation, and there is heightened consideration and support for a federal law to address similar issues.

Below, we examine the contours of the CCPA to help you better understand the applicability and requirements of the new law. While portions of the CCPA remain subject to further clarification, the inevitable challenges of compliance, coupled with the growing appetite for stricter data privacy laws in the United States generally, mean that now is the time to ensure that your organization is prepared for the CCPA.

Does the CCPA apply to my business?

Many businesses may rightly wonder if a California law even applies to them, especially if they do not have operations in California. As indicated above, however, the CCPA is not necessarily limited in scope to businesses physically located in California. The law will have an impact throughout the United States and, indeed, worldwide.

The CCPA will have broad reach because it applies to each for-profit business that collects consumers’ personal information, does business in California, and satisfies at least one of three thresholds:

  • Has annual gross revenues in excess of $25 million; or
  • Alone or in combination, annually buys, receives for commercial purposes, sells, or shares for commercial purposes, the personal information of 50,000 or more California consumers, households, or devices; or
  • Derives 50 percent or more of its annual revenues from selling consumers’ personal information.

While the CCPA is limited in its application to California consumers, due to the size of the California economy and its population numbers, the act will effectively apply to any data-driven business with operations in the United States.
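The applicability test reduces to a few baseline conditions plus a disjunction of the three thresholds. The sketch below is illustrative only; the function and parameter names are my own, not statutory language, and real applicability analysis requires counsel.

```python
def ccpa_applies(
    is_for_profit: bool,
    does_business_in_california: bool,
    collects_personal_information: bool,
    annual_gross_revenue: float,
    ca_records_handled_annually: int,   # consumers, households, or devices
    share_of_revenue_from_selling_pi: float,  # fraction between 0.0 and 1.0
) -> bool:
    """Rough CCPA applicability check: all baseline conditions must hold,
    plus at least one of the three statutory thresholds."""
    baseline = (
        is_for_profit
        and does_business_in_california
        and collects_personal_information
    )
    thresholds = (
        annual_gross_revenue > 25_000_000
        or ca_records_handled_annually >= 50_000
        or share_of_revenue_from_selling_pi >= 0.50
    )
    return baseline and thresholds

# A small e-commerce company with $5M in revenue but 80,000 California
# consumer records still falls under the law via the second threshold.
print(ccpa_applies(True, True, True, 5_000_000, 80_000, 0.0))  # True
```

Note how the volume threshold can capture businesses whose revenue alone would never trigger the law, which is one reason its reach extends well beyond large California companies.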

What is considered “personal information” under the CCPA?

The CCPA’s definition of “personal information” is likely the most expansive interpretation of the term in U.S. privacy law. Per the text of the law, personal information is any “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.”

The CCPA goes on to note that while traditional personal identifiers such as name, address, Social Security number, passport, and the like are certainly personal information, so are a number of other categories that may not immediately come to mind, including professional or employment-related information, geolocation data, biometric data, educational information, internet activity, and even inferences drawn from the sorts of data identified above.

As a practical matter, if your business collects any information that could reasonably be linked back to an individual consumer, then you are likely collecting personal information according to the CCPA.

When does a business “collect” personal information under the CCPA?

To “collect” or the “collection” of personal information under the CCPA is any act of “buying, renting, gathering, obtaining, receiving, or accessing any personal information pertaining to a consumer by any means.” Such collection can be active or passive, direct from the consumer or via the purchase of consumer data sets. If your business is collecting personal information directly from consumers, then at or before the point of collection the CCPA imposes a notice obligation on your business to inform consumers about the categories of information to be collected and the purposes for which such information will (or may) be used.

To reiterate, if your business collects any information that could reasonably be linked back to an individual, then you are likely collecting personal information according to the CCPA.

If a business collects personal information but never sells any of it, does the CCPA still apply?

Yes. While there are additional consumer rights related to the sale of personal information, the CCPA applies to businesses that collect personal information solely for internal purposes, or that otherwise do not disclose such information.

What new rights does the CCPA give to California consumers?

The CCPA gives California consumers four primary new rights: the right to receive information on privacy practices and access information, the right to demand deletion of their personal information, the right to prohibit the sale of their information, and the right not to be subject to price discrimination based on their invocation of any of the new rights specified above.

What new obligations does a business have regarding these new consumer rights?

Businesses that fall under the purview of the CCPA have a number of new obligations under the law:

  • A business must take certain steps to assist individual consumers with exercising their rights under the CCPA. This must be accomplished by providing a link on the business’s homepage titled “Do Not Sell My Personal Information” and a separate landing page for the same. In addition, a business must update its privacy policy (or policies), or a California-specific portion of the privacy policy, to include a separate link to the new “Do Not Sell My Personal Information” page.

A business also must provide at least two mechanisms for consumers to exercise their CCPA rights by offering, at a minimum, a dedicated web page for receiving and processing such requests (the CCPA is silent on whether this web page must be separate from or can be combined with the “Do Not Sell My Personal Information” page), and a toll-free telephone number to receive the same.

  • Upon receipt of a verified consumer request to delete personal information, the business must delete that consumer’s personal information within 45 days.
  • Upon receipt of a verified consumer request for information about the collection of that consumer’s personal information, a business must provide the consumer with a report within 45 days that includes the following information from the preceding 12 months:
    • Categories of personal information that the business has collected about the consumer;
    • Specific pieces of personal information that the business possesses about the consumer;
    • Categories of sources from which the business received personal information about the consumer;
    • A corporate statement detailing the commercial reason (or reasons) that the business collected such personal information about the consumer; and
    • The categories of third parties with whom the business has shared the consumer’s personal information.
  • Upon receipt of a verified consumer request for information about the sale of that consumer’s personal information, a business must provide the consumer with a report within 45 days that includes the following information from the preceding 12 months:
    • Categories of personal information that the business has collected about the consumer;
    • Categories of personal information that the business has sold about the consumer;
    • Categories of third parties to whom the business has sold the consumer’s personal information; and
    • The categories of personal information about the consumer that the business disclosed to a third party (or parties) for a business purpose.
  • Finally, a business must further update its privacy policy (or policies), or the California-specific section of such policy (or policies), to:
    • Identify all new rights afforded consumers by the CCPA;
    • Identify the categories of personal information that the business has collected in the preceding 12 months;
    • Include a corporate statement detailing the commercial reason (or reasons) that the business collected such personal information about the consumer;
    • Identify the categories of personal information that the business has sold in the prior 12 months, or the fact that the business has not sold any such personal information in that time; and
    • Note the categories of third parties with whom a business has shared personal information in the preceding 12 months.
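The response obligations above reduce to a deadline-and-checklist exercise. The sketch below (in Python; the class and field names are ours for illustration, not drawn from the statute) shows one way a business might track the 45-day response window and the categories of information a verified access request must cover:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical sketch of the disclosure report a business might assemble
# in response to a verified CCPA access request. Field names mirror the
# categories listed above but are illustrative, not statutory terms.
@dataclass
class CcpaAccessReport:
    request_received: date
    categories_collected: list = field(default_factory=list)
    specific_pieces: list = field(default_factory=list)
    source_categories: list = field(default_factory=list)
    collection_purposes: list = field(default_factory=list)
    third_party_categories: list = field(default_factory=list)

    @property
    def response_deadline(self) -> date:
        # The CCPA requires a response within 45 days of the verified request.
        return self.request_received + timedelta(days=45)

report = CcpaAccessReport(request_received=date(2019, 10, 1))
print(report.response_deadline)  # 2019-11-15
```

A compliance team would populate each list from its data inventory for the preceding 12 months before sending the report.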

What about employee data gathered by employers for internal workplace purposes?

As currently drafted, nothing in the CCPA carves out an exception for employee data gathered by employers. A “consumer” is simply defined as a “natural person who is a California resident …,” so the law would presumably treat employees like anyone else. However, the California legislature recently passed Bill AB 25, which excludes from the CCPA information collected about a person by a business while the person is acting as a job applicant, employee, owner, officer, director, or contractor of the business, to the extent that information is collected and used exclusively in the employment context. Bill AB 25 also provides an exception for emergency contact information and other information pertaining to the administration of employee benefits. The bill awaits the governor’s signature – he has until October 13, 2019 to sign.

But not so fast – Bill AB 25 creates only a one-year reprieve for employers, rather than a permanent exception. The exceptions listed above will expire on January 1, 2021. Unless the legislature chooses to extend the exceptions by that time, businesses should be prepared to fully comply with the CCPA.

California employers would thus be wise to start considering the type of employee data they collect, and whether that information may eventually become subject to the CCPA’s requirements (either on January 1, 2021 or thereafter). Personal information is likely to be present in an employee’s job application, browsing history, and information related to payroll processing, to name a few areas. It also includes biometric data, such as fingerprints scanned for time-keeping purposes. Employers who collect employees’ biometric information, for example, would be well advised to review their biometric policies so that eventual compliance with the CCPA can be achieved gradually during this one-year grace period.

Notwithstanding this new legislation, there remains little clarity as to how the law will ultimately be applied in the employer-employee context, if and when the exceptions expire. Employers are encouraged to err on the side of caution and to reach out to experienced legal counsel for further guidance if they satisfy any one of the above thresholds.

What are the penalties for violation of the CCPA?

Violations of the CCPA are enforced by the California Attorney General’s office, which can issue civil monetary fines of up to $2,500 per violation, or $7,500 for each intentional violation. Currently, the California AG’s office must provide notice of any alleged violation and allow for a 30-day cure period before issuing any fine.
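Because penalties accrue per violation, exposure scales quickly with the number of affected consumers. A minimal illustration of the arithmetic (the function and example figures are hypothetical, not drawn from any enforcement action):

```python
# Illustrative arithmetic only: CCPA civil penalties are assessed per
# violation, at up to $2,500 each, or $7,500 for each intentional violation.
def max_ccpa_exposure(violations: int, intentional: int = 0) -> int:
    """Upper-bound penalty exposure in dollars for a given violation count."""
    unintentional = violations - intentional
    return unintentional * 2_500 + intentional * 7_500

# Hypothetical: 1,000 violations alleged, 100 of them deemed intentional.
print(max_ccpa_exposure(1_000, intentional=100))  # 3000000
```

The 30-day cure period noted above means this exposure attaches only to violations left uncured after notice.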

Are there any exceptions to the CCPA?

Yes, there are a number of exceptions. First, the CCPA only applies to California consumers and to businesses that meet the threshold(s) identified above. If a business operates or conducts a transaction wholly outside of California, then the CCPA does not apply.

There are also certain enumerated exceptions to account for federal law: the CCPA does not apply to information governed by HIPAA or the Gramm-Leach-Bliley Act, to personal information sold to or purchased from a credit reporting agency as covered by the Fair Credit Reporting Act, or to information subject to the Driver’s Privacy Protection Act.

Would it be fair to say that the CCPA is not very clear, and maybe even a bit confusing?

Yes, it would. The CCPA was drafted, debated, and enacted into law very quickly in the face of some legislative and ballot-driven pressures. As a result, the bill as enacted is a bit confusing and even contains sections that appear to contradict its other parts. The drafters of the CCPA, however, recognized this and have included provisions for the California AG’s office to provide further guidance on its intent and meaning. Amendment efforts also remain underway. As such, it is likely that the CCPA will be an evolving law for at least the short term.

Regardless, the CCPA will impose real-world requirements effective January 1, 2020, and the new wave of consumer privacy legislation it has inspired at the state and federal level is likely to bring even more of the same. It is important to address these issues now, rather than when it is too late.


© 2019 Much Shelist, P.C.

For more on the CCPA legislation, see the National Law Review Consumer Protection law page.

Personal Email Management Service Settles FTC Charges over Allegedly Deceptive Statements to Consumers over Its Access and Use of Subscribers’ Email Accounts

This week, the Federal Trade Commission (FTC) entered into a proposed settlement with Unrollme Inc. (“Unrollme”), a free personal email management service that offers to assist consumers in managing the flood of subscription emails in their inboxes. The FTC alleged that Unrollme made certain deceptive statements to consumers, who may have had privacy concerns, to persuade them to grant the company access to their email accounts. (In re Unrollme Inc., File No. 172 3139 (FTC proposed settlement announced Aug. 8, 2019).)

This settlement touches many relevant issues, including the delicate nature of online providers’ privacy practices relating to consumer data collection, the importance for consumers to comprehend the extent of data collection when signing up for and consenting to a new online service or app, and the need for downstream recipients of anonymized market data to understand how such data is collected and processed.  (See also our prior post covering an enforcement action involving user geolocation data collected from a mobile weather app).

A quick glance at headlines announcing the settlement might give the impression that the FTC found Unrollme’s entire business model unlawful or deceptive, but that is not the case.  As described below, the settlement involved only a subset of consumers who received allegedly deceptive emails intended to coax them into granting access to their email accounts.  The model of providing free products or services in exchange for permission to collect user information for data-driven advertising or ancillary market research remains widespread, though it could face some changes when California’s CCPA consumer choice options become effective or in the event Congress passes a comprehensive data privacy law.

As part of the Unrollme registration process, users grant Unrollme access to selected personal email accounts for decluttering purposes.  However, this permission also allows Unrollme to access and scan inboxes for so-called “e-receipts,” or emailed receipts from e-commerce transactions. After scanning users’ e-receipt data (which might include billing and shipping addresses and information about the purchased products or services), Unrollme’s parent company, Slice Technologies, Inc., would anonymize the data and package it into market research reports that are sold to various companies, retailers and others.  According to the FTC complaint, when some consumers declined to grant permission to their email accounts during signup, Unrollme, during the relevant time period, tried to make them reconsider by sending allegedly deceptive statements about its access (e.g., “You need to authorize us to access your emails. Don’t worry, this is just to watch for those pesky newsletters, we’ll never touch your personal stuff”).  The FTC claimed that such messages did not tell users that access to their inboxes would also be used to collect e-receipts and to package that data for sale to outside companies, and that, as a result, thousands of consumers changed their minds and signed up for Unrollme.

As part of the settlement, Unrollme is prohibited from misrepresentations about the extent to which it accesses, collects, uses, stores or shares information in connection with its email management products. Unrollme must also send an email to all current users who enrolled in Unrollme after seeing the allegedly deceptive statements and explain Unrollme’s data collection and usage practices.  Unrollme is also required to delete all e-receipt data obtained from recipients who enrolled in Unrollme after seeing the challenged statements (unless Unrollme receives affirmative consent to maintain such data from the affected consumers).

In an effort at increased transparency, Unrollme’s current home page displays several links to detailed explanations of how the service collects and analyzes user data (e.g., “How we use data”).

Interestingly, this is not the first time Unrollme’s practices have been challenged, as the company faced a privacy suit over its data mining practices last year.  (See Cooper v. Slice Technologies, Inc., No. 17-7102 (S.D.N.Y. June 6, 2018) (dismissing a privacy suit that claimed that Unrollme did not adequately disclose to consumers the extent of its data mining practices, and finding that consumers consented to a privacy policy that expressly allowed such data collection to build market research products and services).)


© 2019 Proskauer Rose LLP.
This article is by Jeffrey D Neuburger of Proskauer Rose LLP.
For more on data privacy see the National Law Review Communications, Media & Internet law page.

No Means No

Researchers from the International Computer Science Institute found up to 1,325 Android applications (apps) gathering data from devices despite being explicitly denied permission.

The study looked at more than 88,000 apps from the Google Play store, and tracked data transfers post denial of permission. The 1,325 apps used tools, embedded within their code, that take personal data from Wi-Fi connections and metadata stored in photos.

Consent presents itself in different ways in the world of privacy. The GDPR is clear in defining consent as it pertains to users’ personal data. Recital 32 notes that “Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data…” Under the CCPA, consumers can opt out of having their personal data sold.

The specificity of consent has always been a tricky subject.  For decades, companies have offered customers the right to either opt in or out of “marketing,” often in exchange for direct payments. Yet, the promises have been slickly unspecific, so that a consumer never really knows what particular choices are being selected.

Does the option include data collection and, if so, how much? Does it include email, text, phone, and postal contacts for every campaign or just some? The GDPR’s specificity provision is supposed to address this problem. But companies are choosing not to offer these options or are ignoring the consumer’s choice altogether.

Earlier this decade, General Motors caused a media dust-up by admitting it would continue collecting information about specific drivers and vehicles even if those drivers refused the OnStar system or turned it off. Now that policy is built into the OnStar terms of service. GM owners are left without a choice on privacy, and are bystanders as their driving and geolocation data are collected and used.

Apps can monitor people’s movements, finances, and health information. Because of these privacy risks, app platforms like Google and Apple make strict demands of developers including safe storage and processing of data. Seven years ago, Apple, whose app store has almost 1.8 million apps, issued a statement claiming that “Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines.”

Studies like this remind us mere data subjects that some rules were made to be broken. And even engaging with devices that have become a necessity to us in our daily lives may cause us to share personal information. Even more, simply saying no to data collection does not seem to suffice.

It will be interesting to see over the next couple of years whether tighter consent laws like the GDPR and the CCPA can cajole app developers not only to provide specific choices to their customers, but also to actually honor those choices.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
For more on internet and data privacy concerns, see the National Law Review Communications, Media & Internet page.

GDPR May 25th Deadline Approaching – Businesses Globally Will Feel Impact

In less than four months, the General Data Protection Regulation (the “GDPR” or the “Regulation”) will take effect in the European Union/European Economic Area, giving individuals in the EU/EEA greater control over their personal data and imposing a sweeping set of privacy and data protection rules on data controllers and data processors alike. Failure to comply with the Regulation’s requirements could result in substantial fines of up to the greater of €20 million or 4% of a company’s annual worldwide gross revenues. Although many American companies that do not have a physical presence in the EU/EEA may have been ignoring GDPR compliance based on the mistaken belief that the Regulation’s burdens and obligations do not apply outside of the EU/EEA, they are doing so at their own peril.

A common misconception is that the Regulation only applies to EU/EEA-based corporations or multinational corporations with operations within the EU/EEA. However, the GDPR’s broad reach applies to any company that is offering goods or services to individuals located within the EU/EEA or monitoring the behavior of individuals in the EU/EEA, even if the company is located outside of the European territory. All companies within the GDPR’s ambit also must ensure that their data processors (i.e., vendors and other partners) process all personal data on the companies’ behalf in accordance with the Regulation, and are fully liable for any damage caused by their vendors’ non-compliant processing. Unsurprisingly, companies are using indemnity and insurance clauses in data processing agreements with their vendors to contractually shift any damages caused by non-compliant processing activities back onto the non-compliant processors, even if those vendors are not located in the EU/EEA. As a result, many American organizations that do not have direct operations in the EU/EEA nevertheless will need to comply with the GDPR because they are receiving, storing, using, or otherwise processing personal data on behalf of customers or business partners that are subject to the Regulation and its penalties. Indeed, all companies with a direct or indirect connection to the EU/EEA – including business relationships with entities that are covered by the Regulation – should be assessing the potential implications of the GDPR for their businesses.

Compliance with the Regulation is a substantial undertaking that, for most organizations, necessitates a wide range of changes, including:

  • Implementing “Privacy by Default” and “Privacy by Design”;
  • Maintaining appropriate data security;
  • Notifying European data protection agencies and consumers of data breaches on an expedited basis;
  • Taking responsibility for the security and data processing practices of third-party vendors;
  • Conducting “Data Protection Impact Assessments” on new processing activities;
  • Instituting safeguards for cross-border transfers; and
  • Recordkeeping sufficient to demonstrate compliance on demand.

Failure to comply with the Regulation’s requirements carries significant risk. Most prominently, the GDPR empowers regulators to impose fines for non-compliance of up to the greater of €20 million or 4% of worldwide annual gross revenue. In addition to fines, regulators also may block non-compliant companies from accessing the EU/EEA marketplace through a variety of legal and technological methods. Even setting these potential penalties aside, simply being investigated for a potential GDPR violation will be costly, burdensome and disruptive, since during a pending investigation regulators have the authority to demand records demonstrating a company’s compliance, impose temporary data processing bans, and suspend cross-border data flows.
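The fine ceiling described above works out as a simple maximum. A brief sketch (the function name is ours) of the “greater of €20 million or 4% of worldwide annual gross revenue” calculation:

```python
# Sketch of the GDPR administrative-fine ceiling: the greater of
# EUR 20 million or 4% of a company's worldwide annual gross revenue.
def gdpr_max_fine(annual_worldwide_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine, in euros, for a given annual revenue."""
    return max(20_000_000.0, 0.04 * annual_worldwide_revenue_eur)

print(gdpr_max_fine(1_000_000_000))  # 40000000.0 (4% of EUR 1B exceeds the floor)
print(gdpr_max_fine(100_000_000))    # 20000000.0 (the EUR 20M floor applies)
```

For smaller companies the €20 million floor dominates, which is why revenue size alone does not cap exposure.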

The impending May 25, 2018 deadline means that there are only a few months left for companies to get their compliance programs in place before regulators begin enforcement. In light of the substantial regulatory penalties and serious contractual implications of non-compliance, any company that could be required to meet the Regulation’s obligations should be assessing their current operations and implementing the necessary controls to ensure that they are processing personal data in a GDPR-compliant manner.

 

© 2018 Neal, Gerber & Eisenberg LLP.
More on the GDPR at the NLR European Union Jurisdiction Page.

Navigating Connected Cars in 2017: Data Protection

It’s a fact: today’s marketplace has given connected cars the green light. As an OEM or supplier accelerating to create products to meet industry demand, what challenges can you anticipate in 2017? Here we describe where we believe your attention should be focused during the upcoming year…

Data Protection

The manufacturing industry is now one of the most hacked industries. It has been said that the modern-day car is a computer on wheels. That is not quite right. The modern-day car is a network of several computers on wheels. Cars today can have 50 or more electronic control units (ECUs) – each of which is analogous to a separate computer – networked together. There will be an estimated 250 million connected cars on roads around the world by 2020. These cars will have 200 or more sensors collecting information about us, our cars and our driving habits.

With significant advances in smartphone car-connectivity and onboard infotainment systems, our cars are collecting more and more information about our daily lives and personal interactions. As a result, the privacy and security of connected cars have quickly risen over the last year to become a top priority for carmakers and suppliers. Here are our top four tips for addressing these privacy and security issues and concerns in 2017:

  • Practice “security by design.” This is a concept recently espoused by federal regulators, namely, the National Highway Traffic Safety Administration and the Federal Trade Commission, as well as industry self-regulatory organizations. With security by design, a company addresses data security controls “day 1” while products, components and devices are still on the drawing board. Data security practices evolve over time, and the days of building it first and then layering security on top are now over. Risk assessments addressing potential threats and attack targets should be dealt with during the design process. Security design reviews and product testing should be conducted throughout the development process. Secure computing, software development and networking practices should address the security of connections into, from and inside the vehicle.

  • Practice “privacy by design.” While security deals with the safeguards and measures implemented to protect the data from unauthorized access or use, privacy focuses on the right and desire of individuals to keep information about themselves confidential. During the design process, companies should understand and identify what personal information will be collected by a component or device, what notice should be provided to or consent obtained from consumers before collecting that personal information, how should the personal information be used, are those intended uses legal, with whom will the personal information be shared, and is that sharing appropriate and legal. With this information identified, the company can reconcile privacy requirements with security safeguards during the design and development process.

  • Establish an appropriate data security governance model. Executives and senior management can no longer blindly delegate data security to the security engineering team. Regulators, courts and juries are demanding that senior management become involved in and accountable for data security. While the precise governance model will depend on the nature and size of the organization, the company should actively consider what level of executive oversight is appropriate, and then document those conclusions in a data security governance policy. This will serve the dual purposes of enhancing data security of vehicles and component parts, while also bolstering the company’s defenses in the event of a security incident or investigation.

  • Address the entire supply chain. Whether it is the finished vehicle or a component part, most companies relevant to the data security ecosystem will rely on suppliers that play a role in data security. Hardware, software, development tools, assembly, integration and testing may all be provided by one or more suppliers. Companies impacted by this scenario should conduct appropriate due diligence and risk assessments with respect to its suppliers, both at the commencement, as well as periodically throughout, the relationship. Contractual provisions should also be utilized to address data security requirements for the relevant suppliers.

© 2016 Foley & Lardner LLP