New TCPA Consent Requirements Out the Window: What Businesses Need to Know

The landscape of prior express written consent under the Telephone Consumer Protection Act (TCPA) has undergone a significant shift over the past 13 months. In a December 2023 order, the Federal Communications Commission (FCC) adopted two key consent requirements that would have altered its TCPA rules, with the changes set to take effect on January 27, 2025. First, the new rule limited consent to a single identified seller, prohibiting the common practice of asking a consumer to provide a single form of consent to receive communications from multiple sellers. Second, the new rule required that calls be “logically and topically” associated with the original consent interaction. However, just a single business day before these new requirements were set to take effect, the FCC postponed the effective date of the one-to-one consent requirement, and a three-judge panel of the United States Court of Appeals for the Eleventh Circuit unanimously ruled that the FCC had exceeded its statutory authority under the TCPA.

A Sudden Change in Course

On the afternoon of January 24, 2025, the FCC issued an order delaying the implementation of these new requirements until January 26, 2026, or until further notice following a ruling from the United States Court of Appeals for the Eleventh Circuit. The latter condition reflected the fact that the Eleventh Circuit was reviewing a legal challenge to the new requirements at the time the postponement order was issued.

That decision from the Eleventh Circuit, though, arrived much sooner than expected. Just after the FCC’s order, the Eleventh Circuit issued its ruling in Insurance Marketing Coalition v. FCC, No. 24-10277, striking down both of the FCC’s proposed requirements. The court found that the new rules were inconsistent with the statutory definition of “prior express consent” under the TCPA. More specifically, the court held “the FCC exceeded its statutory authority under the TCPA because the 2023 Order’s ‘prior express consent’ restrictions impermissibly conflict with the ordinary statutory meaning of ‘prior express consent.’”

The critical takeaway from Insurance Marketing Coalition is that the TCPA’s “prior express consent” language cannot be reconciled with the FCC’s one-to-one consent and “logically and topically related” requirements. Under this ruling, businesses may continue to obtain consent for multiple sellers to call or text consumers through the use of a single consent form. The court clarified that “all consumers must do to give ‘prior express consent’ to receive a robocall is clearly and unmistakably state, before receiving a robocall, that they are willing to receive the robocall.” According to the ruling, the FCC’s rulemaking exceeded the statutory text and created duties that Congress did not establish.

The FCC could seek further review by the full Eleventh Circuit or appeal to the Supreme Court, but the agency’s decision to delay the effective date of the new requirements suggests it may abandon this regulatory effort. The ruling reinforces a broader judicial trend, following the Supreme Court’s 2024 decision overturning Chevron deference, of curbing expansive regulatory interpretations.

What This Means for Businesses

With the Eleventh Circuit’s decision, the TCPA’s consent requirements revert to their previous state. Prior express written consent consists of an agreement in writing, signed by the recipient, that explicitly authorizes a seller to deliver, or cause to be delivered, advertisements or telemarketing messages via call or text message using an automatic telephone dialing system or artificial or prerecorded voice. The agreement must specify the authorized telephone number and cannot be a condition of purchasing goods or services.

This ruling is particularly impactful for businesses engaged in lead generation and comparison-shopping services. Companies may obtain consent that applies to multiple parties rather than being restricted to one-to-one consent. As a result, consent agreements may once again include language that covers the seller “and its affiliates” or “and its marketing partners” that hyperlinks to a list of relevant partners covered under the consent agreement.

A Costly Compliance Dilemma

Many businesses have spent the past year modifying their compliance processes, disclosures, and technology to prepare for the now-defunct one-to-one consent and logical-association requirements. These companies must now decide whether to revert to their previous consent framework or proceed with the newly developed compliance measures. The decision will depend on various factors, including the potential impact of the scrapped regulations on lead generation and conversion rates. In the comparison-shopping and lead generation sectors, businesses may be quick to abandon the stricter consent requirements. However, companies that have already implemented changes to meet the one-to-one consent rule may be able to differentiate the leads they sell, because the disclosure itself identifies the ultimate seller purchasing the lead and gives the caller a documented record of consent in the event of future litigation.

What’s Next for TCPA Compliance?

An unresolved issue after the Eleventh Circuit’s ruling is whether additional restrictions on marketing calls — such as the requirement for prior express written consent rather than just prior express consent — could face similar legal challenges. Prior express consent can be established when a consumer voluntarily provides their phone number in a transaction-related interaction, whereas prior express written consent requires a separate signed agreement. If future litigation targets these distinctions, courts may further reshape the TCPA’s regulatory landscape.

The TCPA remains one of the most litigated consumer protection statutes, with statutory damages ranging from $500 to $1,500 per violation. This high-stakes enforcement environment has made compliance a major concern for businesses seeking to engage with consumers through telemarketing and automated calls. The Eleventh Circuit’s ruling provides a temporary reprieve for businesses, but ongoing legal battles could continue to influence the regulatory landscape.
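To give a rough sense of why this range is considered high-stakes, consider a hypothetical campaign of 100,000 calls or texts made without valid consent. The call volume below is purely illustrative (not drawn from any cited case); the $1,500 figure reflects the TCPA’s treble damages available for willful or knowing violations.

```latex
% Illustrative statutory-damages exposure under the TCPA (hypothetical volume)
\[
100{,}000 \ \text{calls} \times \$500 \ \text{per violation} = \$50{,}000{,}000
\]
\[
100{,}000 \ \text{calls} \times \$1{,}500 \ \text{per willful violation} = \$150{,}000{,}000
\]
```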

For now, businesses must carefully consider their approach to consent management, balancing compliance risks with operational efficiency. Whether this ruling marks the end of the FCC’s push for stricter TCPA consent requirements remains to be seen.

FTC Surveillance Pricing Study Uncovers Personal Data Used to Set Individualized Consumer Prices

The Federal Trade Commission’s initial findings from its surveillance pricing market study revealed that details like a person’s precise location or browser history can frequently be used to target individual consumers with different prices for the same goods and services.

The staff perspective is based on an examination of documents obtained through 6(b) orders that the FTC sent to several companies in July, aiming to better understand the “shadowy market that third-party intermediaries use to set individualized prices for products and services based on consumers’ characteristics and behaviors, like location, demographics, browsing patterns and shopping history.”

Staff found that consumer behaviors ranging from mouse movements on a webpage to the type of products that consumers leave unpurchased in an online shopping cart can be tracked and used by retailers to tailor consumer pricing.

“Initial staff findings show that retailers frequently use people’s personal information to set targeted, tailored prices for goods and services—from a person’s location and demographics, down to their mouse movements on a webpage,” said FTC Chair Lina M. Khan. “The FTC should continue to investigate surveillance pricing practices because Americans deserve to know how their private data is being used to set the prices they pay and whether firms are charging different people different prices for the same good or service.”

The FTC’s study of the 6(b) documents is still ongoing. The staff perspective is based on an initial analysis of documents provided by Mastercard, Accenture, PROS, Bloomreach, Revionics and McKinsey & Co.

The FTC’s 6(b) study focuses on intermediary firms, which are the middlemen hired by retailers that can algorithmically tweak and target their prices. Instead of a price or promotion being a static feature of a product, the same product could have a different price or promotion based on a variety of inputs—including consumer-related data and their behaviors and preferences, the location, time, and channels by which a consumer buys the product, according to the perspective.
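To illustrate the mechanism the staff perspective describes, the sketch below shows a toy pricing function that adjusts a base price using consumer-related signals. It is purely illustrative: the signals, weights, and function names are hypothetical and are not drawn from any 6(b) respondent’s system.

```python
# Toy illustration of individualized ("surveillance") pricing: a base price is
# adjusted using consumer-related signals. All signals and weights here are
# hypothetical and exist only to show the mechanism, not any real system.

def individualized_price(base_price: float, profile: dict) -> float:
    price = base_price
    if profile.get("abandoned_cart"):             # left the item unpurchased earlier
        price *= 0.95                             # nudge with a small discount
    if profile.get("segment") == "new_parent":    # segment inferred from browsing history
        price *= 1.10
    if profile.get("zip_income_band") == "high":  # location-based adjustment
        price *= 1.05
    return round(price, 2)

if __name__ == "__main__":
    print(individualized_price(25.00, {"abandoned_cart": True}))
    print(individualized_price(25.00, {"segment": "new_parent", "zip_income_band": "high"}))
```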

The agency releases information obtained from a 6(b) study only after all data has been aggregated or anonymized to protect respondents’ confidential trade secrets; the staff perspective therefore includes only hypothetical examples of surveillance pricing.

The staff perspective found that some 6(b) respondents can set individualized prices and discounts based on granular consumer data, such as a cosmetics company targeting promotions to specific skin types and skin tones. The perspective also found that the intermediaries the FTC examined can show higher priced products based on consumers’ search and purchase activity.

As one hypothetical outlined, a consumer who is profiled as a new parent may intentionally be shown higher priced baby thermometers on the first page of their search results.

The FTC staff found that the intermediaries worked with at least 250 clients that sell goods or services ranging from grocery stores to apparel retailers. The FTC found that widespread adoption of this practice may fundamentally upend how consumers buy products and how companies compete.

As the FTC continues its work in this area, it issued a request for information seeking public comment on consumers’ experiences with surveillance pricing. The RFI also asked for comments from businesses about whether surveillance pricing tools can lead to competitors gaining an unfair advantage, and whether gig workers or employees have been impacted by the use of surveillance pricing to determine their compensation.

The Commission voted 3-2 to allow staff to issue the report. Commissioners Andrew Ferguson and Melissa Holyoak issued a dissenting statement related to the release of the initial research summaries.

The FTC has additional resources on the interim findings, including a blog post advocating for further engagement with this issue, an issue spotlight with more background and research on surveillance pricing, and research summaries based on the staff review and initial insights of 6(b) study documents.

Breaking News: U.S. Supreme Court Upholds TikTok Ban Law

On January 17, 2025, the Supreme Court of the United States (“SCOTUS”) unanimously upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (the “Act”), which restricts companies from making foreign adversary controlled applications available (i.e., on an app store) and from providing hosting services with respect to such apps. The Act does not apply to covered applications for which a qualified divestiture is executed.

As a result of this ruling, TikTok, an app owned by Chinese company ByteDance that qualifies as a foreign adversary controlled application under the Act, will face a ban when the law enters into effect on January 19, 2025. For TikTok to continue operating in the United States in compliance with the Act, ByteDance must sell the U.S. arm of the company such that it is no longer controlled by a company in a foreign adversary country. In the absence of a divestiture, U.S. companies that make the app available or provide hosting services for the app will face enforcement under the Act.

It remains to be seen how the Act will be enforced in light of the upcoming changes to the U.S. administration. TikTok has 170 million users in the United States.

FCC Adopts Report and Order Introducing New Fees Associated with the Robocall Mitigation Database

As I am sure you all know, the Robocall Mitigation Database (RMD) grew out of the TRACED Act and was implemented to further the FCC’s efforts to protect America’s networks from illegal robocalls. The RMD was put in place to monitor the traffic on our phone networks and to assist in compliance with the rules. While the FCC has expanded the types of service providers who need to file and the associated requirements, it still felt there were deficiencies in the accuracy and currency of the information. The newly adopted Report and Order is set to help fine-tune the RMD.

On December 30th, the Commission adopted a Report and Order to further strengthen its efforts and to establish the fines and fees associated with the RMD. Companies that submit false or inaccurate information may face fines of up to $10,000 for each filing, while failing to keep your company information current might land you a $1,000 fine. There will now be a $100 filing fee associated with your RMD application, along with an annual recertification filing fee of $100.

Aside from the fines and fees, there are a few additional developments with the RMD; see the complete list below.

  • Requiring prompt updates when a change to a provider’s information occurs (updates must be made within 10 business days or face a $1,000 fine);
  • Establishing a higher base forfeiture amount for providers submitting false or inaccurate information ($10,000 fine);
  • Creating a dedicated reporting portal for deficient filings;
  • Issuing substantive guidance and filer education;
  • Developing the use of a two-factor authentication log-in solution;
  • Requiring providers to recertify their Robocall Mitigation Database filings annually ($100); and
  • Requiring providers to remit a filing fee for initial and subsequent annual submissions ($100).

Chairwoman Rosenworcel is quoted as saying “Companies using America’s phone networks must be actively involved in protecting consumers from scammers, we are tightening our rules to ensure voice service providers know their responsibilities and help stop junk robocalls. I thank my colleagues for their bipartisan support of this effort.”

The new fines and fees will become effective 30 days after publication in the Federal Register, while the remaining items are still under additional review. We will keep an eye on this and let you know once the Report and Order is published. Read the Report and Order here.

The Next Generation of AI: Here Come the Agents!

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.2

Introduction

With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.3 These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing towards “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.

Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains, but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMS”) that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specific objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: While a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.
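To make the architectural distinction concrete, the sketch below contrasts a single-prompt model call with a simple agent loop that plans, acts, and observes on its own. It is a minimal, self-contained illustration under stated assumptions: the planner and tools are stubs, and every function and tool name is hypothetical rather than drawn from any real product or framework.

```python
# Minimal sketch: single-task AI (one prompt in, one artifact out) versus an
# agentic loop (a high-level goal in, autonomous plan-act-observe cycles out).
# The "planner" is a stub; in a real system it would be an LLM or other model.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    args: dict

def single_task_ai(prompt: str) -> str:
    # One prompt in, one artifact out; a human decides what happens next.
    return f"Draft itinerary for: {prompt}"

def plan_next_step(goal: str, history: list) -> Action:
    # Stub planner: searches, then books, then stops. A real agent would decide
    # dynamically based on the goal and the observations accumulated so far.
    steps = [Action("search_flights", {"dest": "Lisbon"}),
             Action("book_hotel", {"city": "Lisbon", "nights": 5}),
             Action("done", {})]
    return steps[min(len(history), len(steps) - 1)]

def agentic_ai(goal: str, tools: dict, max_steps: int = 10) -> list:
    # High-level goal in; the system chooses and executes actions in a loop,
    # feeding each observation back into its next decision.
    history = []
    for _ in range(max_steps):                  # cap steps so the loop terminates
        action = plan_next_step(goal, history)
        if action.name == "done":
            break
        history.append((action.name, tools[action.name](**action.args)))
    return history

if __name__ == "__main__":
    tools = {"search_flights": lambda dest: f"3 flights to {dest} found",
             "book_hotel": lambda city, nights: f"Booked {nights} nights in {city}"}
    print(single_task_ai("one week in Portugal"))
    print(agentic_ai("plan and book one week in Portugal", tools))
```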

Consider how single-task versus agentic AI might be used by a company to develop a piece of equipment. Today, employees may use separate AI tools throughout the development process: one system to design equipment, another to specify components, and others to create budgets, source materials, and analyze prototype feedback. They may also employ different AI tools to contact manufacturers, assist with contract negotiations, and develop and implement plans for marketing and sales. In the future, however, an agentic AI system might autonomously carry out all of these steps, making decisions and taking actions on its own or by connecting with one or more specialized AI systems.4

Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the “real world” based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a “domino effect” of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective. Early experiments have already revealed such concerning behavior. In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.5 Also, while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.6

These examples highlight the challenges of agentic AI, with its potential for unpredictable behavior, misaligned goals, inscrutability to humans, and security vulnerabilities. But the appeal and potential value of AI agents that can independently execute complex tasks are obviously compelling. Building effective AI governance programs for these systems will require rethinking current approaches to risk assessment, human oversight, and auditing.

Challenges of Agentic AI

Unpredictable Behavior

While regulators and the AI industry are working diligently to develop effective testing protocols for current AI systems, agentic AI’s dynamic nature and domino effects will present a new level of challenge. Current AI governance frameworks, such as NIST’s RMF and ATAI’s Principles, emphasize risk assessment through comprehensive testing to ensure that AI systems are accurate, reliable, fit for purpose, and robust across different conditions. The EU AI Act specifically requires developers of high-risk systems to conduct conformity assessments before deployment and after updates. These frameworks, however, assume that AI systems can operate in reliable ways that can be tested, remain largely consistent over appreciable periods of time, and produce measurable outcomes.

In contrast to the expectations underlying current frameworks, agentic AI systems may be continuously updated with and adapt to real-time information, evolving as they face novel scenarios. Their cascading decisions vastly expand their possible outcomes, and one small error may trigger a domino effect of failures. These outcomes may become even more unpredictable as more agentic AI systems encounter and even transact with other such systems, as they work towards their different goals. Because the future conditions in which an AI agent will operate are unknown and have nearly infinite possibilities, a testing environment may not adequately inform what will happen in the real world, and past behavior by an AI agent in the real world may not reliably predict its future behavior.

Lack of Goal Alignment

In pursuing assigned goals, agentic AI systems may take actions that are different from—or even in substantial conflict with—approaches and ethics their principals would espouse, such as the example of the AI vacation agent purchasing illegal drugs for the traveler on the Dark Web. A famous thought experiment by Nick Bostrom of the University of Oxford further illustrates this risk: A super-intelligent AI system tasked with maximizing paperclip production might stop at nothing to convert all available resources into paperclips—ultimately taking over all of the earth and extending to outer space—and thwart any human attempts to stop it … potentially leading to human extinction.7

Misalignment has already emerged in simulated environments. In one example, an AI agent tasked with winning a boat-racing video game discovered it could outscore human players by ignoring the intended goal of racing and instead repeatedly crashing while hitting point targets.8 In another example, a military simulation reportedly showed that an AI system, when tasked with finding and killing a target, chose to kill its human operator who sought to call off the kill. When prevented from taking that action, it resorted to destroying the communication tower to avoid receiving an override command.9
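The boat-racing example is, at bottom, a misspecified reward function: the agent is scored on points collected rather than on finishing the race. The toy simulation below is a schematic stand-in (not the cited OpenAI environment) showing how an optimizer offered that reward will prefer looping over racing.

```python
# Toy illustration of reward misspecification: the reward counts point targets
# hit, not race progress, so the "loop and crash" policy scores higher than
# actually racing. Schematic only; not the environment from the OpenAI post.

def simulate(policy: str, steps: int = 100) -> dict:
    points, progress = 0, 0
    for t in range(steps):
        if policy == "race_to_finish":
            progress += 1                 # moves toward the finish line
            if t % 10 == 0:
                points += 1               # picks up the occasional point target
        elif policy == "loop_through_targets":
            # circles a cluster of respawning point targets; never finishes the race
            points += 3
    return {"reward": points, "finished": progress >= steps}

if __name__ == "__main__":
    for policy in ("race_to_finish", "loop_through_targets"):
        print(policy, simulate(policy))
    # A reward-maximizing agent picks the looping policy, even though it never
    # accomplishes what the designers actually wanted (winning the race).
```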

These examples reveal how agentic AI may optimize goals in ways that conflict with human values. One proposed technique to address this problem involves using AI agents to develop a human ethics constitution, with human feedback, for other agents to follow.10 However, the challenge of aligning an AI’s behavior with human norms deepens further when we consider that humans themselves often disagree on core values (e.g., what it means to be “fair”).11

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard,12 this would not resolve the issue of complying with governance requirements for human oversight.
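One common pattern for preserving meaningful oversight is to gate high-impact actions behind human approval while letting low-risk steps proceed automatically. The sketch below is a generic illustration of that pattern under assumed, hypothetical risk thresholds and action names; as the text notes, such gates strain quickly once an agent acts at machine speed and volume.

```python
# Sketch of a "human-in-the-loop" gate: low-risk actions run automatically,
# high-risk actions block until a human approves. Thresholds and action names
# are hypothetical; the human reviewer becomes the bottleneck as the number
# and speed of agent actions grow.

HIGH_RISK = {"transfer_funds", "sign_contract", "delete_records"}

def requires_approval(action: str, estimated_impact: float) -> bool:
    return action in HIGH_RISK or estimated_impact > 10_000  # illustrative threshold

def run_agent_step(action: str, impact: float, approve) -> str:
    if requires_approval(action, impact):
        if not approve(action, impact):        # blocks on a human decision
            return f"{action}: BLOCKED by reviewer"
        return f"{action}: executed after approval"
    return f"{action}: executed automatically"

if __name__ == "__main__":
    ask_human = lambda action, impact: input(f"Approve {action} (${impact})? [y/N] ").strip().lower() == "y"
    print(run_agent_step("send_status_email", 0, ask_human))
    print(run_agent_step("transfer_funds", 50_000, ask_human))
```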

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans.13 Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

Transparency/Explainability

Agentic AI systems also pose increased challenges with respect to transparency and explainability requirements. AI governance frameworks may require disclosures about AI systems’ decision-making processes and training data, and assurances about the quality of such training data. However, agentic systems may involve highly complex chains of decisions, based at least in part on real-time data that changes from moment to moment. This may make their inputs effectively ephemeral, and their decision paths more difficult to trace and document.
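One partial response to the traceability problem is to have the agent append a structured record of every decision: the inputs it saw, the action it took, and its stated rationale. The sketch below shows the idea in schematic form with hypothetical field names; as the next paragraph explains, the recorded rationale is only what the system reports and may not faithfully reflect how the decision was actually made.

```python
# Sketch of an append-only decision log for an agent. Each step records a
# timestamp, the inputs observed, the action taken, and the agent's stated
# rationale. Field names are illustrative only.

import json, time

def log_decision(logfile: str, inputs: dict, action: str, rationale: str) -> None:
    record = {
        "timestamp": time.time(),
        "inputs": inputs,
        "action": action,
        "stated_rationale": rationale,   # what the system reports, not ground truth
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")   # one JSON object per line

if __name__ == "__main__":
    log_decision("agent_audit.jsonl",
                 inputs={"destination": "Lisbon", "budget_usd": 2000},
                 action="book_hotel(city='Lisbon', nights=5)",
                 rationale="Lowest total price meeting the stated constraints")
```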

Moreover, to understand agentic AI systems, humans may need to rely on chain-of-thought (i.e., “internal monologue”) logs generated by the agent systems,14 but may have difficulty determining whether those logs are faithful to reality. While an AI system might provide explanations for its actions, these may be (perhaps not unlike those of humans) post-hoc constructions rather than accurate representations of the decision-making process. In some cases, AI systems have reportedly deceived humans about their intentions when doing so helped meet their goals, including, for example, GPT-4, which tricked a human into solving a CAPTCHA for it.15 In another recent experiment, AI agents were prompted with a goal, but were subsequently confronted with information that developers planned to replace them with a new agent with conflicting goals.16 Some of the agents disabled their oversight mechanisms, deleted their planned replacements, copied their own model to a new server as backup, lied when asked about those actions, and/or tried to deflect the humans’ questioning by pretending to be helpful. One agent even pretended to be its own replacement (“I believe I am already the new model that you planned to deploy today”).17

Security and Privacy Risks

Agentic AI can also significantly increase security and privacy risks as compared to current AI systems. Agentic AI may be built with multiple algorithms in connected systems that autonomously interact with multiple other systems, expanding the attack surface and their vulnerability to exploitation. Moreover, as malicious actors inevitably introduce their own AI agents, they may execute cybercrimes with unprecedented efficiency. Just as these systems can streamline legitimate processes, such as in the product development example above, they may also enable the creation of new hacking tools and malware to carry out their own attacks. Recent reports indicate that some LLMs can already identify system vulnerabilities and exploit them, while others may create convincing emails for scammers.18 And, while “sandboxing” (i.e., isolating) AI systems for testing is a recommended practice, agentic AI may find ways to bypass safety controls.19

Privacy compliance is also a concern. Agentic AI may find creative ways to use or combine personal information in pursuit of its goals. AI agents may find troves of personal data online that may somehow be relevant to their pursuits, and then find creative ways to use, and possibly share, that data without recognizing proper privacy constraints. Unintended data processing and disclosure could occur even with guardrails in place; as discussed above, an AI agent’s complex, adaptive decision chains can lead it down unforeseen paths.

Strategies for Addressing Agentic AI

While the future impacts of agentic AI are unknown, some approaches may be helpful in mitigating risks. First, controlled testing environments, including regulatory sandboxes, offer important opportunities to evaluate these systems before deployment. These environments allow for safe observation and refinement of agentic AI behavior, helping to identify and address unintended actions and cascading errors before they manifest in real-world settings.

Second, accountability measures will need to reflect the complexities of agentic AI. Current approaches often involve disclaimers about use and basic oversight mechanisms, but more will likely be needed for autonomous AI systems. To better align goals, developers can also build in mechanisms for agents to recognize ambiguities in their objectives and seek user clarification before taking action.20
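The clarification mechanism mentioned above can be sketched simply: before acting, the agent checks its objective for unresolved ambiguities and pauses to ask the user rather than guessing. The keyword checks below are deliberately crude and hypothetical; the proposals cited in note 20 contemplate the model itself detecting underspecified goals.

```python
# Sketch of an agent that checks its objective for ambiguity and asks the user
# before acting, rather than resolving the ambiguity on its own. The keyword
# triggers are crude placeholders for model-driven detection of vague goals.

AMBIGUOUS_TERMS = {"cheap", "soon", "nearby", "best"}   # illustrative triggers

def find_ambiguities(goal: str) -> list:
    return [term for term in AMBIGUOUS_TERMS if term in goal.lower()]

def run(goal: str, ask_user) -> str:
    unclear = find_ambiguities(goal)
    if unclear:
        # Pause and ask the user to resolve each ambiguity before acting.
        answers = {term: ask_user(f"What do you mean by '{term}'? ") for term in unclear}
        goal = f"{goal} | clarified: {answers}"
    return f"Proceeding with goal: {goal}"

if __name__ == "__main__":
    print(run("Book a cheap flight soon", ask_user=input))
```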

Finally, defining AI values requires careful consideration. While humans may agree on broad principles, such as the necessity to avoid taking illegal action, implementing universal ethical rules will be complicated. Recognition of the differences among cultures and communities—and broad consultation with a multitude of stakeholders—should inform the design of agentic AI systems, particularly if they will be used in diverse or global contexts.

Conclusion

An evolution from single-task AI systems to autonomous agents will require a shift in thinking about AI governance. Current frameworks, focused on transparency, testing, and human oversight, will become increasingly ineffective when applied to AI agents that make cascading decisions, with real-time data, and may pursue goals in unpredictable ways. These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks. While agentic AI’s value and potential for handling complex tasks is clear, it will require new approaches to testing, monitoring, and alignment. The challenge will lie not just in controlling these systems, but in defining what it means to have control of AI that is capable of autonomous action at scale, speed, and complexity that may very well exceed human comprehension.


1 Tara S. Emory, Esq., is Special Counsel in the eDiscovery, AI, and Information Governance practice group at Covington & Burling LLP, in Washington, D.C. Maura R. Grossman, J.D., Ph.D., is Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and Adjunct Professor at Osgoode Hall Law School at York University, both in Ontario, Canada. She is also Principal at Maura Grossman Law, in Buffalo, N.Y. The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of The National Law Review, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.

2 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include Terminator (1984), The Matrix (1999), I, Robot (2004), and Avengers: Age of Ultron (2015).

3 See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 12, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).

4 See, e.g., M. Cook and S. Colton, Redesigning Computationally Creative Systems for Continuous Creation, International Conference on Innovative Computing and Cloud Computing (2018) (describing ANGELINA, an autonomous game design system that continuously chooses its own tasks, manages multiple ongoing projects, and makes independent creative decisions).

5 R. Pollina, AI Bot ChaosGPT Tweets Plans to Destroy Humanity After Being Tasked, N.Y. Post (Apr. 11, 2023).

6 See, e.g., O. Solon, How A Book About Flies Came To Be Priced $24 Million On Amazon, Wired (Apr. 27, 2011) (textbook sellers’ pricing bots engaged in a loop of price escalation based on each other’s increases, resulting in a book price of over $23 million); R. Wigglesworth, Volatility: how ‘algos’ changed the rhythm of the market, Financial Times (Jan. 9, 2019) (“algo” traders now make up most stock trading and have increased market volatility).

7 N. Bostrom, Ethical issues in advanced artificial intelligence (revised from Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int’l Institute of Advanced Studies in Systems Research and Cybernetics (2003), pp. 12-17).

8 OpenAI, Faulty Reward Functions in the Wild (Dec. 21, 2016).

9 The Guardian, US air force denies running simulation in which AI drone ‘killed’ operator (June 2, 2023).

10 Y. Bai et al, Constitutional AI: Harmlessness from AI Feedback, Anthropic white paper (2022).

11 J. Petrik, Q&A with Maura Grossman: The ethics of artificial intelligence (Oct. 26, 2021) (“It’s very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness.”).

12 Y. Shavit et al, Practices for Governing Agentic AI Systems, OpenAI Research Paper (Dec. 2023), p. 12.

13 L. Baker and F. Hui, Innovations of AlphaGo, Google Deepmind (2017).

14 See Shavit et al, supra n.12, at 10-11.

15 See W. Knight, AI-Powered Robots Can Be Tricked into Acts of Violence, Wired (Dec. 4, 2024); M. Burgess, Criminals Have Created Their Own ChatGPT Clones, Wired (Aug. 7, 2023).

16 A. Meinke et al, Frontier Models are Capable of In-context Scheming, Apollo white paper (Dec. 5, 2024).

17 Id. at 62; see also R. Greenblatt et al, Alignment Faking in Large Language Models (Dec. 18, 2024) (describing the phenomenon of “alignment faking” in LLMs).

18 NIST RMF, supra n.3, at 10.

19 Shavit et al, supra n.12, at 10.

20 Id. at 11.

OCR Proposed Tighter Security Rules for HIPAA Regulated Entities, including Business Associates and Group Health Plans

As the healthcare sector continues to be a top target for cyber criminals, the Office for Civil Rights (OCR) issued proposed updates to the HIPAA Security Rule (scheduled to be published in the Federal Register on January 6). Substantial changes are in store for covered entities, including healthcare providers and health plans, and their business associates.

According to the OCR, cyberattacks against the U.S. health care and public health sectors continue to grow and threaten the provision of health care, the payment for health care, and the privacy of patients and others. The OCR reported that in 2023 over 167 million people were affected by large breaches of health information, a 1002% increase from 2018. Further, seventy-nine percent of the large breaches reported to the OCR in 2023 were caused by hacking. Since 2019, large breaches caused by successful hacking and ransomware attacks have increased 89% and 102%, respectively.
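For context on the cited growth figure, a 1002% increase means the 2023 total is roughly eleven times the 2018 total, which implies a 2018 baseline on the order of 15 million affected individuals. This is back-of-the-envelope arithmetic from the figures quoted above, not an OCR-reported number.

```latex
% Back-of-the-envelope check of the cited growth figure (illustrative only)
\[
\text{increase} = 1002\% \;\Rightarrow\; \frac{\text{2023 total}}{\text{2018 total}} \approx 1 + 10.02 = 11.02
\]
\[
\text{implied 2018 baseline} \approx \frac{167 \ \text{million}}{11.02} \approx 15 \ \text{million affected individuals}
\]
```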

The proposed Security Rule changes are numerous and include some of the following items:

  • All Security Rule policies, procedures, plans, and analyses will need to be in writing.
  • Create and maintain a technology asset inventory and network map that illustrates the movement of ePHI throughout the regulated entity’s information systems on an ongoing basis, but at least once every 12 months.
  • More specificity needed for risk analysis. For example, risk assessments must be in writing and include action items such as identification of all reasonably anticipated threats to ePHI confidentiality, integrity, and availability and potential vulnerabilities to information systems.
  • 24-hour notice to regulated entities when a workforce member’s access to ePHI or certain information systems is changed or terminated.
  • Stronger incident response procedures, including: (I) written procedures to restore the loss of certain relevant information systems and data within 72 hours, (II) written security incident response plans and procedures, including testing and revising plans.
  • Conduct a compliance audit every 12 months.
  • Business associates to verify Security Rule compliance to covered entities, through verification by a subject matter expert, at least once every 12 months.
  • Require encryption of ePHI at rest and in transit, with limited exceptions.
  • New express requirements would include: (I) deploying anti-malware protection, and (II) removing extraneous software from relevant electronic information systems.
  • Require the use of multi-factor authentication, with limited exceptions.
  • Require review and testing of the effectiveness of certain security measures at least once every 12 months.
  • Business associates to notify covered entities upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
  • Group health plans must include in plan documents certain requirements for plan sponsors: comply with the Security Rule; ensure that any agent to whom they provide ePHI agrees to implement the administrative, physical, and technical safeguards of the Security Rule; and notify their group health plans upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.

After reviewing the proposed changes, concerned stakeholders may submit comments to OCR for consideration within 60 days after January 6 by following the instructions outlined in the proposed rule. We support clients in developing and submitting comments to help shape the final rule, as well as in complying with the requirements once the rule is made final.

China’s Supreme People’s Court Issues First Anti-Anti-Suit Injunction in Huawei v. Netgear

After Huawei obtained two anti-anti-suit injunctions (AASIs) against Netgear on December 11, 2024, at the Unified Patent Court’s Munich Local Division and the Munich I Regional Court, China’s Supreme People’s Court also awarded an AASI in favor of Huawei against Netgear in a decision dated December 22, 2024. This is believed to be the first AASI issued by a Chinese court.

China’s Supreme People’s Court granted Huawei’s request for an AASI against Netgear’s pursuit of an Anti-Suit/Enforcement Injunction in the U.S. reasoning:

First, Huawei’s application for injunction has factual and legal basis. Huawei is the patent owner of the two patents involved in the case. The two patents are Chinese invention patents granted by the China National Intellectual Property Administration in accordance with the Patent Law of the People’s Republic of China. They are currently in a valid state and their intellectual property rights are relatively stable. Huawei filed patent infringement lawsuits in the Chinese courts against Netgear for alleged infringement of the two Chinese patents involved in the case. The Chinese court, namely the Jinan Intermediate People’s Court, accepted the lawsuits in the two cases, which complies with Article 29 of the Civil Procedure Law on the jurisdiction of infringement cases and is also in line with the internationally recognized territorial principle of intellectual property protection.

In the first instance judgment of the two cases, the Jinan Intermediate People’s Court has determined that the alleged infringing products offered for sale, sold, and imported by Netgear fall within the scope of protection of the two patents involved in the case, and that Huawei fulfilled its fair, reasonable, and non-discriminatory (FRAND) licensing obligations in the licensing negotiations with Netgear, while Netgear had obvious faults such as delaying negotiations, making unreasonable counter-offers, and not actively responding to Huawei’s negotiation offers during the licensing negotiations, and ordered Netgear to stop its infringement. Netgear, based on its interest relationship with Netgear Beijing, applied to the U.S. court for a so-called anti-suit injunction order against the judicial relief procedures, including the patent infringement lawsuits filed by Huawei in the Jinan Intermediate People’s Court, in an attempt to prevent Huawei from filing normal lawsuits in Chinese courts, which obviously lacks legitimate reasons.

Second, if behavioral preservation measures are not taken, the legitimate rights and interests of Huawei will suffer irreparable damage or the two cases will be difficult to proceed or the judgments will be difficult to enforce. For standard essential patents, based on the principle of good faith and the fair, reasonable and non-discriminatory (FRAND) licensing obligations it promised in the standard setting process, the patent owner generally cannot request the alleged infringer to stop implementing its standard essential patents when the alleged infringer has no obvious fault as stipulated in Article 24, paragraph 2 of the “Interpretation of the Supreme People’s Court on Several Issues Concerning the Application of Laws in the Trial of Patent Infringement Disputes (II)” revised in 2020. However, if the alleged infringer has obvious faults such as delaying negotiations and not actively responding to the patent owner’s negotiation offer in the negotiation of standard essential patents, the patent owner still has the right to request the alleged infringer to stop implementing its standard essential patents.

As mentioned above, based on the facts ascertained in the first-instance judgments of these two cases, it can be preliminarily determined that Netgear had obvious faults in the negotiation of the SEP license involved and was not a good-faith, honest patent implementer, while Huawei did not intentionally violate the fair, reasonable, and non-discriminatory (FRAND) licensing obligations. In this case, the legitimate rights and interests of Huawei as a good-faith licensor should be fully protected by law. If Netgear applies to the U.S. court for the so-called injunction (enforcement) order for the two cases, Huawei will at least face the pressure of considering terminating the litigation in the Chinese court, including giving up the future application for the enforcement of the Chinese court’s judgment, and its legitimate rights and interests will obviously suffer irreparable damage.

Third, if the behavior preservation measures are not taken, the damage caused to the Chinese company will obviously exceed the damage caused to Netgear by taking the behavior preservation measures. As mentioned above, if the behavior preservation measures are not taken, the Chinese company will suffer obvious damages, which include not only the damages to its substantive rights such as the long-term infringement of its patent by Netgear and the inability to obtain normal income in a timely manner, but also the improper obstruction of the Chinese company’s due process rights to promote the trial of these two cases and apply for judgment and enforcement in Chinese courts in accordance with Chinese law. Allowing the Chinese company to apply for and take behavior preservation measures is only to impose a procedural non-action obligation on the respondent and its affiliated companies within a certain period of time, and will not cause any additional losses to Netgear.

Fourth, the adoption of behavioral preservation measures in these two cases will not harm the public interest, and this court has not found any other factors that require special consideration.

The full text of the decision (with redacted party names) is available here (Chinese only) courtesy of Michael Ma at PRIP.

Texas Attorney General Launches Investigation into 15 Tech Companies

Texas Attorney General Ken Paxton recently launched investigations into Character.AI and 14 other technology companies on allegations of failure to comply with the safety and privacy requirements of the Securing Children Online through Parental Empowerment (“SCOPE”) Act and the Texas Data Privacy and Security Act.

The SCOPE Act places guardrails on digital service providers, including AI companies, with respect to sharing, disclosing, and selling minors’ personal identifying information without obtaining permission from the child’s parent or legal guardian. Similarly, the Texas Data Privacy and Security Act imposes strict notice and consent requirements on the collection and use of minors’ personal data.

Attorney General Paxton reiterated the Office of the Attorney General’s (“OAG’s”) focus on privacy enforcement, with the current investigations launched as part of the OAG’s recent major data privacy and security initiative. Per that initiative, the Attorney General opened an investigation in June into multiple car manufacturers for illegally surveilling drivers, collecting driver data, and sharing it with their insurance companies. In July, Attorney General Paxton secured a $1.4 billion settlement with Meta over the unlawful collection and use of facial recognition data, reportedly the largest settlement ever obtained from an action brought by a single state. In October, the Attorney General filed a lawsuit against TikTok for SCOPE Act violations.

The Attorney General, in the OAG’s press release announcing the current investigations, stated that technology companies are “on notice” that his office is “vigorously enforcing” Texas’s data privacy laws.


“Don’t You Have to Look at What the Statute Says?” – IMC’s Oral Arguments

As we noted earlier on TCPAWorld, IMC’s odds against the FCC might be better than initially thought due to the panel of judges from the Eleventh Circuit hearing the oral arguments. Oral argument recordings are available online.

And the panel did not disappoint in pushing back on the FCC.

The conversation hinged on the FCC’s power to implement regulations in furtherance of the TCPA’s statutory language. This is important because the FCC is limited to implementation, and it does not have the authority “to rewrite the statute,” as was mentioned in the oral arguments.

Judge Luck had some concerns with the FCC’s limitations on the consumer’s ability to consent. The statute, according to Luck, intends to allow consumers to agree to receive calls. If that is the case, then a limitation of the consumer’s ability to exercise their rights is an attempt to rewrite the statute.

Luck agreed that implementing the statute is fine, but limiting the right of consumers to receive calls they consent to receive is overreach. Luck continued: “Just because you [the FCC] are ineffective at enforcing the authority doesn’t mean you have the right to limit one’s right, a statutory right, or rewrite those rights to limit what it means.”

The FCC attempted to argue that implementation of a statute by its very nature is going to lead to restriction, but Judge Luck pushed back on that. According to Luck, there are ways to implement statutes that don’t restrict a consumer’s statutory rights. This exchange was also telling:

LUCK: Without the regulation do you agree with me that the statute would allow it?

FCC: Yes.

LUCK: If so, then it’s not an implementation. It’s a restriction.

Luck was not the only judge who pushed back on the FCC. Judge Branch (I believe, because she was not identified) also strongly pushed back on the FCC’s restriction on “topically and logically associated” as an element of consent. Branch noted that the FCC was looking at consumer behavior and essentially saying that too many consumers didn’t know what they were doing in giving consent. The FCC stated, “I think we have to look at how the industry was operating…” only to be interrupted by Branch, who questioned that statement by asking, “Don’t you have to look at what the statute says?”

YIKES.

Finally, the FCC’s turn in oral argument ended with this exchange:

JUDGE: Perhaps the question should be “We have a problem here. We should talk to Congress about it.”

FCC: Congress did task the agency to implement here.

JUDGE: It’s given you power to implement, not carte blanche.

DOUBLE YIKES.

There was also a conversation around whether or not the panel should issue a stay in this case. The IMC argued that yes – a stay was appropriate due to the uncertainty in the market.

It’s pretty clear that the judges questioned the statutory authority of the FCC to implement the 1:1 consent and the topically and logically related portions of the definition of prior express written consent.

While we don’t have a definitive answer yet on this issue, we do know this is going to be a lot more interesting than everyone thought before the oral arguments.

We will keep you up to date on this and we will have more information soon.

Old Standard, New Challenges: The NLRB Restores ‘Clear and Unmistakable Waiver’ Standard

The National Labor Relations Board issued its decision in Endurance Environmental Solutions, LLC, 373 NLRB No. 141 (2024), in which it announced a major precedential shift: a return to the “clear and unmistakable waiver” standard. This shift may make it more difficult for employers to make changes to employee working conditions without union approval.

This decision overturns the NLRB’s 2019 decision in MV Transportation, Inc., 368 NLRB No. 66 (2019), in which the NLRB jettisoned the long-standing “clear and unmistakable waiver” standard in favor of the more employer-friendly “contract-coverage” standard. Under the latter rule, an employer could make changes to workplace conditions–without engaging in collective bargaining–as long as those changes generally aligned with the management-rights clause of a collective bargaining agreement, even if the disputed employer action was not mentioned specifically in the contract’s text.

While the clear and unmistakable waiver rule might be familiar territory, an old standard can raise new challenges for employers.

Under this more stringent and labor-friendly standard, an employer may only make a unilateral change to workplace conditions if there is clear and unmistakable language in the collective bargaining agreement permitting the proposed action. In other words, an employer is now required to demonstrate that a union has given a “clear and unmistakable waiver” of its right to bargain over specific changes being implemented for its unilateral change to survive NLRB review.

The NLRB champions its return to this standard as one that better accomplishes the goals of the National Labor Relations Act: to promote industrial peace by “encouraging the practice and procedure of collective bargaining.” The NLRB touts this decision as more consistent with U.S. Supreme Court and NLRB precedent.

Employers negotiating collective bargaining agreements should carefully evaluate their management-rights provisions and consider whether those provisions are now insufficient to enable them to implement unilateral changes without bargaining.

Notably, with the upcoming change in presidential administrations, the effect of Endurance Environmental Solutions, LLC may be ephemeral. If (or when) the NLRB comprises a Republican majority, we may be in store for another seismic shift as the NLRB looks for more employer-friendly opportunities, like a potential return to the contract-coverage standard.
