President Biden Announces Groundbreaking Restrictions on Access to Americans’ Sensitive Personal Data by Countries of Concern

The EO and forthcoming regulations will affect the use of genomic data, biometric data, personal health data, geolocation data, financial data and certain other types of personally identifiable information. The administration is taking this extraordinary step in response to the national security risks posed by countries of concern gaining access to US persons’ sensitive data – data that could then be used to surveil, scam or blackmail US persons, support counterintelligence efforts, or be exploited by or used to further develop artificial intelligence (AI). The EO, however, does not call for restrictive personal data localization; it aims to balance national security concerns against the free flow of commercial data and the open internet, consistent with protection of security, privacy and human rights.

The EO tasks the US Department of Justice (DOJ) with developing rules to address these risks and gives businesses and other stakeholders, including labor and human rights organizations, an opportunity to provide critical input to agency officials as they draft the regulations. The EO and forthcoming regulations will not screen individual transactions. Instead, they will establish general rules regarding specific categories of data, transactions and covered persons, and will prohibit and regulate certain high-risk categories of restricted data transactions. The regime is also expected to include licensing and advisory opinion processes. DOJ expects companies to develop and implement compliance procedures in response to the EO and its subsequent implementing rules. The adequacy of such compliance programs will be considered as part of any enforcement action – action that could include civil and criminal penalties. Companies should consider acting today to evaluate risk, engage in the rulemaking process and set up compliance programs around their processing of sensitive data.

Companies across industries collect and store more sensitive consumer and user data today than ever before; data that is often obtained by data brokers and other third parties. Concerns have grown around perceived foreign adversaries and other bad actors using this highly sensitive data to track and identify US persons as potential targets for espionage or blackmail, including through the training and use of AI. The increasing availability and use of sensitive personal information digitally, in concert with increased access to high-performance computing and big data analytics, has raised additional concerns around the ability of adversaries to threaten individual privacy, as well as economic and national security. These concerns have only increased as governments around the world face the privacy challenges posed by increasingly powerful AI platforms.

The EO takes significant new steps to address these concerns by expanding the role of DOJ, led by the National Security Division, in regulating the use of legal mechanisms, including data brokerage, vendor and employment contracts and investment agreements, to obtain and exploit American data. The EO does not immediately establish new rules or requirements for protection of this data. It instead directs DOJ, in consultation with other agencies, to develop regulations – but these restrictions will not enter into effect until DOJ issues a final rule.

Broadly, the EO, among other things:

  • Directs DOJ to issue regulations to protect sensitive US data from exploitation due to large-scale transfers to countries of concern or certain related covered persons, and to issue regulations establishing greater protection of sensitive government-related data
  • Directs DOJ and the Department of Homeland Security (DHS) to develop security standards to prevent commercial access to US sensitive personal data by countries of concern
  • Directs federal agencies to safeguard American health data from access by countries of concern through federal grants, contracts and awards

On February 28, the day the EO was signed, DOJ also issued an Advance Notice of Proposed Rulemaking (ANPRM), providing a critical first opportunity for stakeholders to understand how DOJ is initially contemplating this new national security regime and soliciting public comment on the draft framework.

According to a DOJ fact sheet, the ANPRM:

  • Preliminarily defines “countries of concern” to include China and Russia, among others
  • Focuses on six enumerated categories of sensitive personal data: (1) covered personal identifiers, (2) geolocation and related sensor data, (3) biometric identifiers, (4) human genomic data, (5) personal health data and (6) personal financial data
  • Establishes a bulk volume threshold for the regulation of general data transactions in the enumerated categories but will also regulate transactions in US government-related data regardless of the volume of a given transaction
  • Proposes a broad prohibition on two specific categories of data transactions between US persons and covered countries or persons – data brokerage transactions and genomic data transactions.
  • Contemplates restrictions on certain vendor agreements for goods and services, including cloud service agreements; employment agreements; and investment agreements. These restricted transactions would be subject to security requirements developed by DHS’s Cybersecurity and Infrastructure Security Agency (CISA), designed to prevent access by countries of concern.

The ANPRM also proposes general and specific licensing processes that would give DOJ considerable flexibility for certain categories of transactions and narrower exceptions for specific transactions upon application by the parties involved. DOJ’s licensing decisions would be made in collaboration with DHS, the Department of State and the Department of Commerce. Companies and individuals contemplating data transactions will also be able to request advisory opinions from DOJ on the applicability of these regulations to specific transactions.

A White House fact sheet announcing these actions emphasized that they will be undertaken in a manner that does not hinder the “trusted free flow of data” that underlies US consumer, trade, economic and scientific relations with other countries. A DOJ fact sheet echoed this commitment to minimizing economic impacts by seeking to develop a program that is “carefully calibrated” and in line with “longstanding commitments to cross-border data flows.” As part of that effort, the ANPRM contemplates exemptions for four broad categories of data: (1) data incidental to financial services, payment processing and regulatory compliance; (2) ancillary business operations within multinational US companies, such as payroll or human resources; (3) activities of the US government and its contractors, employees and grantees; and (4) transactions otherwise required or authorized by federal law or international agreements.

Notably, Congress continues to debate a comprehensive federal framework for data protection. In 2022, Congress stalled on consideration of the American Data Privacy and Protection Act, a bipartisan bill introduced by House Energy and Commerce Committee leadership. Subsequent efforts to move comprehensive data privacy legislation in Congress have seen little momentum but may gain new urgency in response to the EO.

The EO lays the foundation for what will become significant new restrictions on how companies gather, store and use sensitive personal data. Notably, the ANPRM also represents recognition by the White House and agency officials that they need input from business and other stakeholders to guide the draft regulations. Impacted companies must prepare to engage in the comment process and to develop clear compliance programs so they are ready when the final restrictions are implemented.

Kate Kim Tuma contributed to this article 

Under the GDPR, Are Companies that Utilize Personal Information to Train Artificial Intelligence (AI) Controllers or Processors?

The EU’s General Data Protection Regulation (GDPR) applies to two types of entities – “controllers” and “processors.”

A “controller” refers to an entity that “determines the purposes and means” of how personal information will be processed.[1] Determining the “means” of processing refers to deciding “how” information will be processed.[2] That does not necessitate, however, that a controller makes every decision with respect to information processing. The European Data Protection Board (EDPB) distinguishes between “essential means” and “non-essential means.”[3] “Essential means” refers to those processing decisions that are closely linked to the purpose and the scope of processing and, therefore, are considered “traditionally and inherently reserved to the controller.”[4] “Non-essential means” refers to more practical aspects of implementing a processing activity that may be left to third parties – such as processors.[5]

A “processor” refers to a company (or a person such as an independent contractor) that “processes personal data on behalf of [a] controller.”[6]

Modern artificial intelligence models typically need data for training and fine-tuning. They use data – including personal information – to recognize patterns and predict results.

Whether an organization that utilizes personal information to train an artificial intelligence engine is a controller or a processor depends on the degree to which the organization determines the purpose for which the data will be used and the essential means of processing. The following chart discusses these variables in the context of training AI:

Function: Purpose of processing (why the AI is being trained)

  • Indicative of a controller: If an organization makes its own decision to utilize personal information to train an AI, then the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is using personal information provided by a third party to train an AI, and is doing so at the direction of the third party, then the organization may be considered a processor.

Function: Essential means – data types used in training

  • Indicative of a controller: If an organization selects which data fields will be used to train an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to utilize particular data types to train an AI, the organization may be a processor.

Function: Duration personal information is held within the training engine

  • Indicative of a controller: If an organization determines how long the AI can retain training data, it will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, and does not control how long the AI may access the training data, the organization may be a processor.

Function: Recipients of the personal information

  • Indicative of a controller: If an organization determines which third parties may access the training data that is provided to the AI, that organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, but does not control who will be able to access the AI (and the training data to which the AI has access), the organization may be a processor.

Function: Individuals whose information is included

  • Indicative of a controller: If an organization is selecting whose personal information will be used as part of training an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is being instructed by a third party to utilize particular individuals’ data to train an AI, the organization may be a processor.
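
To make the chart easier to apply, the following is a minimal, illustrative sketch that restates its factors as a checklist; the class, function and scoring logic are assumptions for illustration only, not a legal test.

```python
from dataclasses import dataclass

@dataclass
class TrainingArrangement:
    """Answers to the chart's questions for one AI-training arrangement."""
    decides_purpose_of_training: bool   # did the organization decide why the AI is trained?
    selects_data_fields: bool           # does it choose which data fields are used?
    controls_retention_period: bool     # does it decide how long training data is kept?
    controls_recipients: bool           # does it decide who may access the training data?
    selects_data_subjects: bool         # does it choose whose data is used?

def likely_role(a: TrainingArrangement) -> str:
    """Rough indication only: deciding the purpose, or any of the essential means,
    points toward controller status; acting solely on a third party's instructions
    points toward processor status."""
    indicators = [
        a.decides_purpose_of_training,
        a.selects_data_fields,
        a.controls_retention_period,
        a.controls_recipients,
        a.selects_data_subjects,
    ]
    return "likely controller" if any(indicators) else "possibly processor"

# Example: a vendor that trains a model strictly on its customer's instructions.
print(likely_role(TrainingArrangement(False, False, False, False, False)))  # possibly processor
```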

 

[1] GDPR, Article 4(7).

[2] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 33.

[3] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[4] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[5] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[6] GDPR, Article 4(8).

©2023 Greenberg Traurig, LLP. All rights reserved.

How a Zero-Day Flaw in MOVEit Led to a Global Ransomware Attack

In an era where our lives are ever more intertwined with technology, the security of digital platforms is a matter of national concern. A recent large-scale cyberattack affecting several U.S. federal agencies and numerous other commercial organizations emphasizes the criticality of robust cybersecurity measures.

The Intrusion

On June 7, 2023, the Cybersecurity and Infrastructure Security Agency (CISA) identified exploitation by “Threat Actor 505” (TA505) of a previously unidentified (zero-day) vulnerability in data transfer software called MOVEit. MOVEit is file transfer software used by a broad range of companies to securely transfer files between organizations. Darin Bielby, the managing director at Cypfer, explained that the number of affected companies could be in the thousands: “The Cl0p ransomware group has become adept at compromising file transfer tools. The latest being MOVEit on the heels of past incidents at GoAnywhere. Upwards of 3000 companies could be affected. Cypfer has already been engaged by many companies to assist with threat actor negotiations and recovery.”

CISA, along with the FBI, advised that “[d]ue to the speed and ease TA505 has exploited this vulnerability, and based on their past campaigns, FBI and CISA expect to see widespread exploitation of unpatched software services in both private and public networks.”

Although CISA did not comment on the perpetrator behind the attack, there are suspicions about a Russian-speaking ransomware group known as Cl0p. Much like in the SolarWinds case, they ingeniously exploited vulnerabilities in widely utilized software, managing to infiltrate an array of networks.

Wider Implications

The Department of Energy was among the many federal agencies compromised, with records from two of its entities being affected. A spokesperson for the department confirmed they “took immediate steps” to alleviate the impact and notified Congress, law enforcement, CISA, and the affected entities.

This attack has ramifications beyond federal agencies. Johns Hopkins University’s health system reported a possible breach of sensitive personal and financial information, including health billing records. Georgia’s statewide university system is investigating the scope and severity of the hack affecting them.

Internationally, the likes of BBC, British Airways, and Shell have also been victims of this hacking campaign. This highlights the global nature of cyber threats and the necessity of international collaboration in cybersecurity.

The group claimed credit for some of the hacks in a hacking campaign that began two weeks ago. Interestingly, Cl0p took an unusual step, stating that they erased the data from government entities and have “no interest in exposing such information.” Instead, their primary focus remains extorting victims for financial gains.

Still, although every file transfer service based on MOVEit could have been affected, that does not mean that every one was. Threat actors exploiting the vulnerability would likely have had to independently target each file transfer service that employs the MOVEit platform. Thus, companies should determine whether their secure file transfer services rely on the MOVEit platform and whether any indicators exist that a threat actor exploited the vulnerability.
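
As a starting point for that determination, the hypothetical sketch below scans a simple software/vendor inventory for MOVEit-based transfer services; the inventory format and entries are assumptions, and a real review would also cover vendor questionnaires, patch status and forensic indicators of compromise.

```python
# Hypothetical inventory entries: (system name, vendor, product/platform).
inventory = [
    ("HR file exchange", "Acme Managed Services", "MOVEit Transfer"),
    ("Claims intake portal", "Example Corp", "SFTP appliance"),
    ("Payroll feed", "Payroll Vendor LLC", "MOVEit Cloud"),
]

def moveit_based(entries):
    """Return entries whose product/platform mentions MOVEit (case-insensitive)."""
    return [e for e in entries if "moveit" in e[2].lower()]

for name, vendor, product in moveit_based(inventory):
    print(f"Review {name} ({vendor}, {product}) for patch status and signs of exploitation")
```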

A Flaw Too Many

The attackers exploited a zero-day vulnerability that likely exposed the data that companies uploaded to MOVEit servers for seemingly secure transfers. This highlights how a single software vulnerability can have far-reaching consequences if manipulated by adept criminals. Progress, the U.S. firm that owns MOVEit, has urged users to update their software and issued security advice.

Notification Requirements

This exploitation likely creates notification requirements for the myriad affected companies under the various state data breach notification laws and some industry-specific regulations. Companies that own consumer data and share that data with service providers are not absolved of notification requirements merely because the breach occurred in the service provider’s environment. Organizations should engage counsel to determine whether their notification requirements are triggered.

A Call to Action

This cyberattack serves as a reminder of the sophistication and evolution of cyber threats. Organizations using the MOVEit software should analyze whether this vulnerability has affected any of their or their vendors’ operations.

With the increasing dependency on digital platforms, cybersecurity is no longer an option but a necessity. In a world where the next cyberattack is not a matter of “if” but “when,” it is time for a proactive approach to securing our digital realms. Organizations across sectors must prioritize cybersecurity. This involves staying updated with the latest security patches and ensuring adequate protective measures and response plans are in place.

© 2023 Bradley Arant Boult Cummings LLP

Montana Passes 9th Comprehensive Consumer Privacy Law in the U.S.

On May 19, 2023, Montana’s Governor signed Senate Bill 384, the Consumer Data Privacy Act. Montana joins California, Colorado, Connecticut, Indiana, Iowa, Tennessee, Utah, and Virginia in enacting a comprehensive consumer privacy law. The law is scheduled to take effect on October 1, 2024.

When does the law apply?

The law applies to a person who conducts business in the state of Montana and:

  • Controls or processes the personal data of not less than 50,000 consumers (defined as Montana residents), excluding data controlled or processed solely to complete a payment transaction; or
  • Controls and processes the personal data of not less than 25,000 consumers and derives more than 25% of gross revenue from the sale of personal data.

Hereafter these covered persons are referred to as controllers.
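
Read together, these thresholds can be expressed as a simple check. The sketch below is illustrative only; the function and parameter names are assumptions, and it does not account for the entity-level exemptions listed below.

```python
def montana_cdpa_applies(conducts_business_in_montana: bool,
                         consumers_excluding_payment_only: int,
                         consumers: int,
                         share_of_revenue_from_data_sales: float) -> bool:
    """Illustrative applicability check based on the thresholds described above.

    consumers_excluding_payment_only: Montana consumers whose data is controlled or
        processed, excluding data processed solely to complete a payment transaction.
    share_of_revenue_from_data_sales: fraction of gross revenue from selling
        personal data (e.g., 0.30 for 30%).
    """
    if not conducts_business_in_montana:
        return False
    volume_threshold = consumers_excluding_payment_only >= 50_000
    sale_threshold = consumers >= 25_000 and share_of_revenue_from_data_sales > 0.25
    return volume_threshold or sale_threshold

# Example: 30,000 consumers and 30% of gross revenue from personal-data sales.
print(montana_cdpa_applies(True, 30_000, 30_000, 0.30))  # True
```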

The following entities are exempt from coverage under the law:

  • Body, authority, board, bureau, commission, district, or agency of this state or any political subdivision of this state;
  • Nonprofit organization;
  • Institution of higher education;
  • National securities association that is registered under 15 U.S.C. 78o-3 of the federal Securities Exchange Act of 1934;
  • A financial institution or an affiliate of a financial institution governed by Title V of the Gramm- Leach-Bliley Act;
  • Covered entity or business associate as defined in the privacy regulations of the federal Health Insurance Portability and Accountability Act (HIPAA).

Who is protected by the law?

Under the law, a protected consumer is defined as an individual who resides in the state of Montana.

However, the term consumer does not include an individual acting in a commercial or employment context or as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency whose communications or transactions with the controller occur solely within the context of that individual’s role with the company, partnership, sole proprietorship, nonprofit, or government agency.

What data is protected by the law?

The statute protects personal data defined as information that is linked or reasonably linkable to an identified or identifiable individual.

There are several exemptions to protected personal data, including for data protected under HIPAA and other federal statutes.

What are the rights of consumers?

Under the new law, consumers have the right to:

  • Confirm whether a controller is processing the consumer’s personal data
  • Access personal data processed by a controller
  • Delete personal data
  • Obtain a copy of personal data previously provided to a controller
  • Opt out of the processing of the consumer’s personal data for the purposes of targeted advertising, the sale of personal data, and profiling in furtherance of solely automated decisions that produce legal or similarly significant effects

What obligations do businesses have?

The controller shall comply with requests by a consumer set forth in the statute without undue delay but no later than 45 days after receipt of the request.

If a controller declines to act on a consumer’s request, the controller shall inform the consumer without undue delay, but no later than 45 days after receipt of the request, of the reason for declining.

The controller shall also conduct and document a data protection assessment for each of their processing activities that present a heightened risk of harm to a consumer.

How is the law enforced?

Under the statute, the state attorney general has exclusive authority to enforce violations of the statute. There is no private right of action under Montana’s statute.

Jackson Lewis P.C. © 2023

Secure Software Regulations and Self-Attestation Required for Federal Contractors

US Policy and Regulatory Alert

Government contractors providing software across the federal government’s supply chain will be required later this year to comply with a new Secure Software Development Framework (SSDF). The SSDF requires software vendors to attest to new security controls in the design of code used by the federal government.

Cybersecurity Compromises of Government Software on the Rise

In the aftermath of the cybersecurity compromises of significant enterprise software systems embedded in government supply chains, the federal government has increasingly prioritized reducing the vulnerability of software used within agency networks. Recognizing that most of the enterprise software used by the federal government is provided by a wide range of private sector contractors, the White House has been moving to impose a range of new software security regulations on both prime contractors and subcontractors. One priority area is an effort to require government contractors to ensure that software used by federal agencies incorporates security by design. As a result, federal contractors supplying software to the government now face a new set of requirements to supply secure software code – that is, software developed with security in mind so that flaws and vulnerabilities can be mitigated before the government buys and deploys it.

The SSDF as a Government Response

In response, the White House issued Executive Order 14028, “Executive Order on Improving the Nation’s Cybersecurity” (EO 14028), on 12 May 2021. EO 14028 requires the National Institute of Standards and Technology (NIST) to develop standards, tools, and best practices to enhance the security of the software supply chain. NIST subsequently promulgated the SSDF in special publication NIST SP 800-218. EO 14028 also mandates that the director of the Office of Management and Budget (OMB) take appropriate steps to ensure that federal agencies comply with NIST guidance and standards regarding the SSDF. This resulted in OMB Memorandum M-22-18, “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices” (M-22-18). The OMB memo provides that a federal agency may use software subject to M-22-18’s requirements only if the producer of that software has first attested to compliance with federal government-specified secure software development practices drawn from the SSDF. Meaning, if the producer of the software cannot attest to meeting the NIST requirements, it will not be able to supply software to the federal government. There are some exceptions and processes for software to gradually enter into compliance under various milestones for improvements, all of which are highly technical and subjective.

In accordance with these regulations, the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security issued a draft form for collecting the relevant attestations and associated information. CISA released the draft form on 27 April 2023 and is accepting comments until 26 June 2023.1

SSDF Implementation Deadline and Requirements for Government Suppliers

CISA initially set a deadline of 11 June 2023 for critical software and 13 September 2023 for non-critical software to comply with SSDF. Press reports indicate that these deadlines will be extended due to both the complexity of the SSDF requirements and the fact that the comment period remains open until 26 June 2023. However, CISA has not yet confirmed an extension of the deadline.

Attestation and Compliance with the SSDF

Based on what we know now, the attestation form generally requires software producers to confirm that:

  • The software was developed and built in secure environments.
  • The software producer has made a good-faith effort to maintain trusted source code supply chains.
  • The software producer maintains provenance data for internal and third-party code incorporated into the software.
  • The software producer employed automated tools or comparable processes that check for security vulnerabilities.

Software producers that must comply with SSDF should move quickly and begin reviewing their approach to software security. The SSDF requirements are complex and likely will take time to review, implement, and document. In particular, many of the requirements call for subjective analysis rather than objective evaluation against a set of quantifiable criteria, as is usually the case with such regulations. The SSDF also includes numerous ambiguities. For example, the SSDF requires versioning changes in software to have certain impacts in the security assessment, although the term “versioning” does not have a standard definition in the software sector.
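
For internal tracking, the four attestation items listed above could be captured in a simple record such as the hypothetical sketch below; the field names are assumptions and do not mirror CISA’s draft form.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SsdfAttestationRecord:
    """Internal record of the four attestation items described above (illustrative only)."""
    product_name: str
    secure_build_environment: bool          # developed and built in secure environments
    trusted_source_code_supply_chain: bool  # good-faith effort to maintain trusted supply chains
    maintains_provenance_data: bool         # provenance for internal and third-party code
    automated_vulnerability_checks: bool    # automated tools or comparable processes
    supporting_evidence: List[str] = field(default_factory=list)

    def ready_to_attest(self) -> bool:
        """True only when all four items can be confirmed."""
        return all([
            self.secure_build_environment,
            self.trusted_source_code_supply_chain,
            self.maintains_provenance_data,
            self.automated_vulnerability_checks,
        ])

record = SsdfAttestationRecord("Example Reporting Tool", True, True, True, False,
                               ["SBOM export", "CI pipeline security policy"])
print(record.ready_to_attest())  # False until vulnerability scanning is documented
```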

Next Steps and Risks of Noncompliance

Critically, the attestations on the new form carry risk under the civil False Claims Act for government contractors and subcontractors. Given that many of the attestations require subjective analysis, contractors must take exceptional care in completing the attestation form and should carefully document their assessment that the software they produce is compliant. In addition, contractors and other interested parties should use this opportunity to share feedback and insights with CISA through the public comment process.

K&L Gates lawyers in our National Security Practice are closely tracking the implementation of these new requirements.


1 88 Fed. Reg. 25,670.

Copyright 2023 K & L Gates

Ankura CTIX FLASH Update – January 3, 2023

Malware Activity

Louisiana’s Largest Medical Complex Discloses Data Breach Associated with October Attack

On December 23rd, 2022, the Lake Charles Memorial Health System (LCMHS) began sending out notifications regarding a newly discovered data breach that is currently impacting approximately 270,000 patients. LCMHS is the largest medical complex in Lake Charles, Louisiana, which contains multiple hospitals and a primary care clinic. The organization discovered unusual activity on their network on October 21, 2022, and determined on October 25, 2022, that an unauthorized actor gained access to the organization’s network as well as “accessed or obtained certain files from [their] systems.” The LCMHS notice listed the following patient information as exposed: patient names, addresses, dates of birth, medical record or patient identification numbers, health insurance information, payment information, limited clinical information regarding received care, and Social Security numbers (SSNs) in limited instances. While LCMHS has yet to confirm the unauthorized actor responsible for the data breach, the Hive ransomware group listed the organization on their data leak site on November 15, 2022, as well as posted files allegedly exfiltrated after breaching the LCMHS network. The posted files contained “bills of materials, cards, contracts, medical info, papers, medical records, scans, residents, and more.” It is not unusual for Hive to claim responsibility for the associated attack as the threat group has previously targeted hospitals/healthcare organizations. CTIX analysts will continue to monitor the Hive ransomware group into 2023 and provide updates on the Lake Charles Memorial Health System data breach as necessary.

Threat Actor Activity

Kimsuky Threat Actors Target South Korean Policy Experts in New Campaign

Threat actors from the North Korean-backed Kimsuky group recently launched a phishing campaign targeting policy experts throughout South Korea. Kimsuky is a long-standing threat organization that has been in operation since 2013, primarily conducting cyber espionage and occasional financially motivated attacks. Aiming its attacks consistently at South Korean entities, the group often targets academics, think tanks, and organizations relating to inter-Korea relations. In this recent campaign, Kimsuky threat actors distributed spear-phishing emails to several well-known South Korean policy experts. Within these emails, either an embedded website URL or an attachment was present, both executing malicious code to download malware to the compromised machine. One (1) tactic the threat actors utilized was distributing emails through hacked servers, masking the origin IP address(es). In total, of the 300 hacked servers, eighty-seven (87) were located in North Korea, with the others spread around the globe. This type of social engineering attack is not new for the threat group, as similar instances have occurred over the past decade. In January 2022, Kimsuky actors mimicked the activities of researchers and think tanks in order to harvest intelligence from associated sources. CTIX continues to urge users to validate the integrity of email correspondence prior to visiting any embedded links or downloading any attachments to lessen the risk of threat actor compromise.

Vulnerabilities

Netgear Patches Critical Vulnerability Leading to Arbitrary Code Execution

Network device manufacturer Netgear has just patched a high-severity vulnerability impacting multiple WiFi router models. The flaw, tracked as CVE-2022-48196, is described as a pre-authentication buffer overflow security vulnerability, which, if exploited, could allow threat actors to carry out a number of malicious activities. These activities include stealing sensitive information, creating Denial-of-Service (DoS) conditions, downloading malware, and executing arbitrary code. In past attacks, threat actors have utilized this type of vulnerability as an initial access vector by which they pivot to other parts of the network. Currently, there is very little technical information regarding the vulnerability, and Netgear is temporarily withholding the details to allow as many of their users as possible to update their vulnerable devices to the latest secure firmware. Netgear stated that this is a very low-complexity attack, meaning that unsophisticated attackers may be able to successfully exploit a device. CTIX analysts urge Netgear users with any of the vulnerable devices listed in Netgear’s advisory to patch their devices immediately.

Copyright © 2023 Ankura Consulting Group, LLC. All rights reserved.

First BIPA Trial Results in $228M Judgment for Plaintiffs

Businesses defending class actions under the Illinois Biometric Information Privacy Act (BIPA) have struggled to defeat claims in recent years, as courts have rejected a succession of defenses.

We have been following this issue and have previously reported on this trend, which continued last week in the first BIPA class action to go to trial. The Illinois federal jury found that BNSF Railway Co. violated BIPA, resulting in a $228 million award to a class of more than 45,000 truck drivers.

Named plaintiff Richard Rogers filed suit in Illinois state court in April 2019, and BNSF removed the case to the US District Court for the Northern District of Illinois. Plaintiff alleged on behalf of a putative class of BNSF truck drivers that BNSF required the drivers to provide biometric identifiers in the form of fingerprints and hand geometry to access BNSF’s facilities. The lawsuit alleged BNSF violated BIPA by (i) failing to inform class members their biometric identifiers or information were being collected or stored prior to collection, (ii) failing to inform class members of the specific purpose and length of term for which the biometric identifiers or information were being collected, and (iii) failing to obtain informed written consent from class members prior to collection.

In October 2019, the court rejected BNSF’s legal defenses that the class’s BIPA claims were preempted by three federal statutes governing interstate commerce and transportation: the Federal Railroad Safety Act, the Interstate Commerce Commission Termination Act, and the Federal Aviation Administration Authorization Act. The court held that BIPA’s regulation of how BNSF obtained biometric identifiers or information did not unreasonably interfere with federal regulation of rail transportation, motor carrier prices, routes, or services, or safety and security of railroads.

Throughout the case, including at trial, BNSF also argued it should not be held liable where the biometric data was collected by its third-party contractor, Remprex LLC, which BNSF hired to process drivers at the gates of BNSF’s facilities. In March 2022, the court denied BNSF’s motion for summary judgment, pointing to evidence that BNSF employees were also involved in registering drivers in the biometric systems and that BNSF gave direction to Remprex regarding the management and use of the systems. The court concluded (correctly, as it turned out) that a jury could find that BNSF, not just Remprex, had violated BIPA.

The case proceeded to trial in October 2022 before US District Judge Matthew Kennelly. At trial, BNSF continued to argue it should not be held responsible for Remprex’s collection of drivers’ fingerprints. Plaintiff’s counsel argued BNSF could not avoid liability by pleading ignorance and pointing to a third-party contractor that BNSF controlled. Following a five-day trial and roughly one hour of deliberations, the jury returned a verdict in favor of the class, finding that BNSF recklessly or intentionally violated BIPA 45,600 times. The jury did not calculate damages. Rather, because BIPA provides for $5,000 in liquidated damages for every willful or reckless violation (and $1,000 for every negligent violation), Judge Kennelly applied BIPA’s damages provision, which resulted in a judgment of $228 million in damages. The judgment does not include attorneys’ fees, which plaintiff is entitled to and will inevitably seek under BIPA.
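
A quick arithmetic check ties the verdict to the statute: 45,600 reckless or intentional violations at $5,000 in liquidated damages each yields the $228 million judgment.

```python
violations = 45_600        # violations found by the jury
per_violation = 5_000      # liquidated damages per willful or reckless violation under BIPA
print(f"${violations * per_violation:,}")  # $228,000,000
```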

While an appeal will almost certainly follow, the BNSF case serves as a stark reminder of the potential exposure companies face under BIPA. Businesses that collect biometric data must ensure they do so in compliance with BIPA and other biometric privacy regulations. Where BIPA claims have been asserted, companies should promptly seek outside counsel to develop a legal strategy for a successful resolution.

© 2022 ArentFox Schiff LLP

Navigating the Data Privacy Landscape for Autonomous and Connected Vehicles: Best Practices

Autonomous and connected vehicles, and the data they collect, process and store, create high demands for strong data privacy and security policies. Accordingly, in-house counsel must define holistic data privacy best practices for consumer and B2B autonomous vehicles that balance compliance, safety, consumer protections and opportunities for commercial success against a patchwork of federal and state regulations.

Understanding key best practices related to the collection, use, storage and disposal of data will help in-house counsel frame balanced data privacy policies for autonomous vehicles and consumers. This is the inaugural article in our series on privacy policy best practices related to:

  1. Data collection

  2. Data privacy

  3. Data security

  4. Monetizing data

Autonomous and Connected Vehicles: Data Protection and Privacy Issues

The spirit of America is tightly intertwined with the concept of personal liberty, including freedom to jump in a car and go… wherever the road takes you. As the famous song claims, you can “get your kicks on Route 66.” But today you don’t just get your kicks. You also get terabytes of data on where you went, when you left and arrived, how fast you traveled to get there, and more.

Today’s connected and semi-autonomous vehicles are actively collecting 100 times more data than a personal smartphone, precipitating a revolution that will drive changes not just to automotive manufacturing, but to our culture, economy, infrastructure, and legal and regulatory landscapes.

As our cars become computers, the volume and specificity of the data collected continue to grow. The future is now. Or at least, very near. Global management consultancy McKinsey estimates that “full autonomy with Level 5 technology—operating anytime, anywhere” could arrive as soon as the next decade.

This near-term future isn’t only for consumer automobiles and ride-sharing robotaxis. B2B industries, including logistics and delivery, agriculture, mining, waste management and more, are pursuing connected and autonomous vehicle deployments.

In-house counsel must balance evolving regulations at the federal and state level, as well as consider cross-border and international regulations for global technologies. In the United States, the Federal Trade Commission (FTC) is the primary federal regulator for data privacy, alongside individual states that are developing their own regulations, with the California Consumer Privacy Act (CCPA) leading the way. Virginia’s and Colorado’s new laws, as well as the California Privacy Rights Act, come into effect in 2023, and a half dozen more states are expected to enact new privacy legislation in the near future.

While federal and state regulations continue to evolve, mobility companies in the consumer and B2B mobility sectors need to make decisions today about their own data privacy and security policies in order to optimize compliance and consumer protection with opportunities for commercial success.

Understanding Types of Connected and Autonomous Vehicles

Autonomous, semi-autonomous, self-driving, connected and networked cars; in this developing category, these descriptions are often used interchangeably in leading business and industry publications. B2B International defines “connected vehicles (CVs) [as those that] use the latest technology to communicate with each other and the world around them” whereas “autonomous vehicles (AVs)… are capable of recognizing their environment via the use of on-board sensors and global positioning systems in order to navigate with little or no human input. Examples of autonomous vehicle technology already in action in many modern cars include self-parking and auto-collision avoidance systems.”

But SAE International and the National Highway Traffic Safety Administration (NHTSA) go further, defining six levels of driving automation in self-driving cars, from Level 0 (no automation) through Level 5 (full automation).

Levels of Driving Automation™ in Self-Driving Cars


Level 3 and above autonomous driving is getting closer to reality every day because of an array of technologies, including sensors, radar, sonar, lidar, biometrics, artificial intelligence and advanced computing power.

Approaching a Data Privacy Policy for Connected and Autonomous Vehicles

Because the mobility tech ecosystem is so dynamic, many companies, though well intentioned, inadvertently start with insufficient data privacy and security policies for their autonomous vehicle technology. The focus for these early and second stage companies is on bringing a product to market and, when sales accelerate, there is an urgent need to ensure their data privacy policies are comprehensive and compliant.

Whether companies are drafting initial policies or revising existing ones, there are general data principles that can guide policy development across the lifecycle of data:

  • Collect: Only collect the data you need.
  • Use: Only use data for the reason you informed the consumer.
  • Store: Ensure reasonable data security protections are in place.
  • Dispose: Dispose of the data when it is no longer needed.
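
One way to operationalize these four principles is a per-category lifecycle policy; the sketch below is a hypothetical illustration only, and the data categories, purposes and retention periods shown are assumptions rather than recommendations.

```python
from datetime import datetime, timedelta

# Hypothetical policy: each data category records why it is collected and how long it is kept.
LIFECYCLE_POLICY = {
    "geolocation": {"purpose": "navigation and trip logging", "retention_days": 90},
    "driver_alertness_biometrics": {"purpose": "driver safety monitoring", "retention_days": 30},
}

def due_for_disposal(category: str, collected_at: datetime, now: datetime) -> bool:
    """Dispose of data once it is older than the retention period set for its category."""
    policy = LIFECYCLE_POLICY[category]
    return now - collected_at > timedelta(days=policy["retention_days"])

print(due_for_disposal("geolocation", datetime(2022, 1, 1), datetime(2022, 6, 1)))  # True
```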

Additionally, for many companies, framing autonomous and connected vehicle data protection and privacy issues through a safety lens can help determine the optimal approach to constructing policies that support the goals of the business while satisfying federal and state regulations.

For example, a company that monitors driver alertness (critical for safety in today’s Level 2 AV environment) through biometrics is, by design, collecting data on each driver who uses the car. This scenario clearly supports vehicle and driver safety while at the same time implicating U.S. data privacy law.

In the emerging regulatory landscape, in-house counsel will continue to be challenged to balance safety and privacy. Biometrics will become even more prevalent in connection to identification and authentication, along with other driver-monitoring technologies for all connected and autonomous vehicles, but particularly in relation to commercial fleet deployments.

Developing Best Practices for Data Privacy Policies

In-house counsel at autonomous vehicle companies are responsible for constructing their company’s data privacy and security policies. Best practices should be set around:

  • What data to collect and when

  • How collected data will be used

  • How to store collected data securely

  • Data ownership and monetization

Today, the CCPA sets the standard for rigorous consumer protections related to data ownership and privacy. However, in this evolving space, counsel will need to monitor and adjust their company’s practices and policies to comply with new regulations as they continue to develop in the U.S. and countries around the world.

Keeping best practices related to the collection, use, storage and disposal of data in mind will help in-house counsel construct policies that balance consumer protections with safety and the commercial goals of their organizations.

A parting consideration may be opportunistic, if extralegal: companies that choose to advocate strongly for customer protections may be afforded a powerful, positive opportunity to position themselves as responsible corporate citizens.

© 2022 Varnum LLP

As the California Attorney General Focuses on Loyalty Programs, What Do Companies Need to Remember?

The California attorney general (AG) marked Data Privacy Day by conducting an “investigative sweep” of the loyalty programs of retailers, supermarkets, home improvement stores, travel companies, and food service companies, and by sending notices of non-compliance to businesses that the AG’s office believes might not be fully compliant with the CCPA. As the AG focuses its attention on loyalty programs, the following provides a reminder of the requirements under the CCPA.

What is a loyalty program?

Loyalty programs are structured in a variety of different ways. Some programs track dollars spent by consumers; others track products purchased. Some programs are free to participate in; others require consumers to purchase membership. Some programs offer consumers additional products; other programs offer prizes, money, or products from third parties. Although neither the CCPA nor the regulations implementing the CCPA define a “loyalty program,” as a practical matter most, if not all, loyalty programs have two things in common: (1) they collect information about consumers, and (2) they provide some form of reward in recognition of (or in exchange for) repeat purchasing patterns.[1]

What are the general obligations under the CCPA?

Because loyalty programs collect personal information about their members, if a business that sponsors a loyalty program is itself subject to the CCPA, then its loyalty program will also be subject to the CCPA. In situations in which the CCPA applies to a loyalty program, the following table generally describes the rights conferred upon a consumer in relation to the program:

  • Notice at collection: A loyalty program that collects personal information from its members should provide a notice at the point where information is being collected regarding the categories of personal information that will be collected and how that information will be used.[2]
  • Privacy notice: A loyalty program that collects personal information of its members should make a privacy notice available to its members.[3]
  • Access to information: A member of a loyalty program may request that a business disclose the “specific pieces of personal information” collected about them.[5]
  • Deletion of information: A member of a loyalty program may request that a business delete the personal information collected about them. That said, a company may be able to deny a request by a loyalty program member to delete information in their account based upon one of the exceptions to the right to be forgotten.
  • Opt-out of sale: A loyalty program that sells the personal information of its members should include a “do not sell” link on its homepage and permit consumers to opt out of the sale of their information. To the extent that a consumer has directed the loyalty program to disclose their information to a third party (e.g., a fulfillment partner), it would not be considered a “sale” of information.
  • Notice of financial incentive: To the extent that a loyalty program qualifies as a “financial incentive” under the regulations implementing the CCPA (discussed below), a business should provide a “notice of financial incentive.”[4]

Are loyalty programs always financial incentive programs?

Whether a loyalty program constitutes a “financial incentive” program as that term is defined by the regulations implementing the CCPA depends on the extent to which the loyalty program’s benefits “relate to” the collection, retention, or sale of personal information.[6] While the California Attorney General has implied that all loyalty programs, “however defined, should receive the same treatment as other financial incentives,” a strong argument may exist that for many loyalty programs the benefits provided are directly related to consumer purchasing patterns (i.e., repeat or volume purchases) and are not “related” to the collection of personal information.[7] If a particular loyalty program qualifies as a financial incentive program, a business should consider the following steps (in addition to the compliance obligations identified above):

  • Notify the consumer of the financial incentive.[8] The regulations implementing the CCPA specify that the financial incentive notice should contain the following information (a simplified sketch of such a notice follows this list):
    • A summary of the financial incentive offered.[11] In the context of a loyalty program, a description of the benefits that the consumer will receive as part of the program would likely provide a sufficient summary of the financial incentive.
    • A description of the material terms of the financial incentive.[12] The regulation specifies that the description should include the categories of personal information that are implicated by the financial incentive program and the “value of the consumer’s data.”[13]
    • How the consumer can opt in to the financial incentive.[14] Information about how a consumer can opt in to (or join) a financial incentive program is typically conveyed when a consumer reviews an application to join or sign up with the program.
    • How the consumer can opt out of, or withdraw from, the program.[15] This is an explanation as to how the consumer can invoke their right to withdraw from the program.[16]
    • An explanation of how the financial incentive is “reasonably related” to the value of the consumer’s data.[17] While the regulations state that a notice of financial incentive should provide an explanation as to how the financial incentive “reasonably relates” to the value of the consumer’s data, the CCPA requires only that a reasonable relationship exists if a business intends to discriminate against a consumer “because the consumer exercised any of the consumer’s rights” under the Act.[18] Where a business does not intend to use its loyalty program to discriminate against consumers that exercise CCPA-conferred privacy rights, it is not clear whether this requirement applies. In the event that a reasonable relationship must be shown, however, the regulations require that a company provide a “good-faith estimate of the value of the consumer’s data that forms the basis” for the financial incentive and that the business provide a “description of the method” used to calculate that value.[19]
  • Obtain the consumer’s “opt in consent” to the “material terms” of the financial incentive,[9] and
  • Permit the consumer to revoke their consent “at any time.”[10]
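
Tying the notice elements above together, the following is a rough, hypothetical sketch of how a notice of financial incentive might be structured internally; the field names and example values are assumptions, not regulatory language.

```python
from dataclasses import dataclass

@dataclass
class NoticeOfFinancialIncentive:
    """Hypothetical structure mirroring the notice elements listed above."""
    incentive_summary: str              # summary of the financial incentive offered
    categories_of_personal_info: list   # categories implicated by the program
    value_of_consumer_data: str         # good-faith estimate of the data's value
    valuation_method: str               # description of how that value was calculated
    how_to_opt_in: str                  # how a consumer joins the program
    how_to_withdraw: str                # how a consumer withdraws at any time

notice = NoticeOfFinancialIncentive(
    incentive_summary="Members earn 1 point per $1 spent, redeemable for discounts.",
    categories_of_personal_info=["identifiers", "commercial information"],
    value_of_consumer_data="estimated $12 per member per year",
    valuation_method="average incremental discount redeemed per member",
    how_to_opt_in="complete the loyalty program sign-up form",
    how_to_withdraw="request closure of the loyalty account at any time",
)
```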

FOOTNOTES

[1] FSOR Appendix A at 273 (Response 814) (including recognition from the AG that “loyalty programs” are not defined under the CCPA, and declining invitations to provide a definition through regulation).

[2] Cal. Civ. Code § 1798.100(a) (West 2021); Cal. Code Regs. tit. 11, 999.304(b), 305(a)(1) (2021).

[3] Cal. Code Regs. tit. 11, 999.304(a) (2021).

[5] Cal. Civ. Code § 1798.100(a).

[4] Cal. Code Regs. tit. 11, 999.301(n); 304(d); 307(a), (b).

[6] Cal. Code Regs. tit. 11, 999.301(j) (2021).

[7] FSOR Appendix A at 75 (Response 254).

[8] Cal. Civ. Code § 1798.125(b)(2) (West 2021).

[11] Cal. Code Regs. tit. 11, 999.307(b)(1) (2021).

[12] Cal. Code Regs. tit. 11, 999.307(b)(2) (2021).

[13] Cal. Code Regs. tit. 11, 999.307(b)(2) (2021).

[14] Cal. Code Regs. tit. 11, 999.307(b)(3) (2021).

[15] Cal. Code Regs. tit. 11, 999.307(b)(4) (2021).

[16] Cal. Civ. Code § 1798.125(b)(3) (West 2021).

[17] Cal. Code Regs. tit. 11, 999.307(b)(5) (2021).

[18] Cal. Civ. Code § 1798.125(a)(1), (2) (West 2021).

[19] Cal. Code Regs. tit. 11, 999.307(b)(5)(a), (b) (2021).

[9] Cal. Civ. Code § 1798.125(b)(3) (West 2021).

[10] Cal. Civ. Code § 1798.125(b)(3) (West 2021).

©2022 Greenberg Traurig, LLP. All rights reserved.

Federal Regulators Issue New Cyber Incident Reporting Rule for Banks

On November 18, 2021, the Federal Reserve, Federal Deposit Insurance Corporation, and Office of the Comptroller of the Currency issued a new rule regarding cyber incident reporting obligations for U.S. banks and service providers.

The final rule requires a banking organization to notify its primary federal regulator “as soon as possible and no later than 36 hours after the banking organization determines that a notification incident has occurred.” The rule defines a “notification incident” as a “computer-security incident that has materially disrupted or degraded, or is reasonably likely to materially disrupt or degrade, a banking organization’s—

  1. Ability to carry out banking operations, activities, or processes, or deliver banking products and services to a material portion of its customer base, in the ordinary course of business;
  2. Business line(s), including associated operations, services, functions, and support, that upon failure would result in a material loss of revenue, profit, or franchise value; or
  3. Operations, including associated services, functions and support, as applicable, the failure or discontinuance of which would pose a threat to the financial stability of the United States.”

Under the rule, a “computer-security incident” is “an occurrence that results in actual harm to the confidentiality, integrity, or availability of an information system or the information that the system processes, stores, or transmits.”

Separately, the rule requires a bank service provider to notify each affected banking organization “as soon as possible when the bank service provider determines it has experienced a computer-security incident that has materially disrupted or degraded or is reasonably likely to materially disrupt or degrade, covered services provided to such banking organization for four or more hours.” For purposes of the rule, a bank service provider is one that performs “covered services” (i.e., services subject to the Bank Service Company Act (12 U.S.C. 1861–1867)).
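
To illustrate the timing mechanics in the rule, here is a minimal sketch under simplified assumptions: it computes the outer 36-hour notification deadline for a banking organization and applies the four-hour disruption threshold for bank service providers (the “reasonably likely to disrupt” prong is simplified away). Function names are illustrative, and the rule itself requires notice “as soon as possible” within these bounds.

```python
from datetime import datetime, timedelta

def bank_notification_deadline(determination_time: datetime) -> datetime:
    """Outer bound: no later than 36 hours after the bank determines a notification incident occurred."""
    return determination_time + timedelta(hours=36)

def service_provider_must_notify(disruption_duration_hours: float) -> bool:
    """Simplified check of the four-or-more-hours disruption threshold for covered services."""
    return disruption_duration_hours >= 4

determined = datetime(2022, 5, 2, 9, 0)          # bank determines a notification incident occurred
print(bank_notification_deadline(determined))    # 2022-05-03 21:00:00
print(service_provider_must_notify(5.5))         # True
```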

In response to comments received on the agencies’ December 2020 proposed rule, the new rule reflects changes to key definitions and notification provisions applicable to both banks and bank service providers. These changes include, among others, narrowing the definition of a “computer-security incident,” replacing the “good faith belief” notification standard for banks with a determination standard, and adding a definition of “covered services” to the bank service provider provisions. With these revisions, the agencies intend to resolve some of the ambiguities in the proposed rule and address commenters’ concerns that the rule would create an undue regulatory burden.

The final rule becomes effective April 1, 2022, and compliance is required by May 1, 2022. The regulators hope this new rule will “help promote early awareness of emerging threats to banking organizations and the broader financial system,” as well as “help the agencies react to these threats before they become systemic.”

Copyright © 2021, Hunton Andrews Kurth LLP. All Rights Reserved.
