Change Management: How to Finesse Law Firm Adoption of Generative AI

Law firms today face a turning point. Clients demand more efficient, cost-effective services; younger associates are eager to leverage the latest technologies for legal tasks; and partners try to reconcile tradition with agility in a highly competitive marketplace. Generative artificial intelligence (AI), known for its capacity to produce novel content and insights, has emerged as a solution that promises better efficiency, improved work quality, and a real opportunity to differentiate the firm in the marketplace. Still, the question remains:

How can a law firm help its attorneys and staff embrace AI while safeguarding the trust, ethical integrity, and traditional practices that lie at the heart of legal work?

Andrew Ng’s AI Transformation Playbook offers a valuable framework for introducing AI in ways that minimize risk and maximize organizational acceptance. Adopting these principles in a law-firm setting involves balancing the profession’s deep-seated practices with the potential of AI. From addressing cultural resistance to crafting a solid technical foundation, a thoughtful change-management plan is necessary for a sustainable and successful transition.

Overcoming Skepticism Through Pilot Projects

Law firms, governed by partnership models and a respect for precedent, tend to approach innovation cautiously. Partners who built their careers through meticulous research may worry that machine-generated insights compromise rigor and reliability. Associates might fear an AI-driven erosion of the apprenticeship model, wondering if their role will shrink as technology automates certain tasks. Concerns also loom regarding the firm’s reputation if clients suspect crucial responsibilities are being delegated to a mysterious black box.

The most direct method of quelling these doubts is to show proof of concept. Andrew Ng’s approach suggests starting with small, well-defined projects before scaling firm-wide. This tactic acknowledges that, with each successful pilot, more people become comfortable with technology that once felt like a threat. By methodically testing AI in narrower use cases, the firm ensures data security and strict confidentiality protocols remain intact. Early wins become the foundation for broader adoption.

Pilot projects help transform abstract AI potential into tangible benefits. One example is using AI to produce first drafts of nondisclosure agreements; attorneys then refine these drafts, focusing on subtle nuances rather than repetitive details. Another natural entry point is e-discovery, where AI can sift through thousands of documents to categorize and surface relevant information more efficiently than human-only reviews. Each of these use cases is a manageable experiment. If AI truly delivers faster turnaround times and maintains accuracy, it provides evidence that can persuade skeptical stakeholders. Pilots also offer an opportunity to identify challenges, such as user training gaps or hiccups in data management, on a small scale before the technology is rolled out more broadly.

Creating a Dedicated AI Team

One of the first steps is assembling a cross-functional leadership group that aligns AI initiatives with overarching business objectives. This team typically includes partners who can advocate for AI at leadership levels, associates immersed in daily work processes, IT professionals responsible for infrastructure and cybersecurity, and compliance officers ensuring adherence to ethical mandates.

In large firms, a Chief AI Officer or Director of Legal Innovation may coordinate these efforts. In smaller firms, a few technology-minded attorneys might share multiple roles. The key is that this group does more than evaluate software. It crafts data governance policies, designs training programs, secures necessary budgets, and proactively tackles any ethical, reputational, or practical concerns that arise when introducing a technology as potentially disruptive as AI.

Training as the Core of Transformation

AI has limited value if the firm’s workforce does not know how to wield it effectively. Training must go beyond simple “tech demos,” offering interactive sessions in which legal professionals can apply AI tools to realistic tasks. For example, attorneys may practice using the system to draft a client memo or summarize case law. These hands-on experiences remove the mystique surrounding AI, giving participants a concrete understanding of its capabilities and boundaries.

Lawyers also need guidelines for verifying the AI’s output. Legally binding documents or briefs cannot be signed off without sufficient human oversight. For that reason, law firms often designate a “review attorney” role in the AI workflow, ensuring that each AI-generated product passes through a person who confirms it meets the firm’s rigorous standards. Partners benefit from shorter, strategically focused sessions that highlight how AI can influence client satisfaction, create new revenue streams, or boost efficiency in critical operations.

Developing a Coherent AI Strategy

Once the firm achieves early successes with pilot programs and begins to see a measurable return on smaller AI projects, it is time to formulate a broader vision. This strategic blueprint should identify the highest-value areas for further application of AI, whether it involves automating client intake, deploying predictive analytics for litigation, or streamlining contract drafting at scale. The key is to match AI initiatives with the firm’s core goals—boosting client satisfaction, refining operational efficiency, and ultimately reinforcing its reputation for accurate, ethical service.

But the firm’s AI strategy should never become a static directive. It must grow with the firm’s internal expertise, adjusting to real-world results, regulatory changes, and emerging AI capabilities. By regularly re-evaluating milestones and expected outcomes, the firm ensures its AI investments remain both relevant and impactful in serving clients’ evolving needs.

Communicating to Foster Trust and Transparency

Change management thrives on dialogue. Andrew Ng’s playbook underscores the importance of transparent communication, especially in fields sensitive to reputational risk. Law firms can apply this principle by hosting informal gatherings where early adopters share their experiences—both positive and negative. These stories have a dual effect: they highlight successes that validate the technology, and they candidly address difficulties to keep expectations realistic.

Newsletters, lunch-and-learns, and internal portals all help disseminate updates and insights across different practice areas. Firms that operate multiple offices often hold virtual town halls, ensuring that attorneys and support staff everywhere can stay informed. Externally, clarity matters too. Clients who understand that a firm is leveraging AI to improve speed and accuracy (while retaining key ethical safeguards) are more likely to view the decision as innovative rather than risky.

Closing Thoughts

AI holds remarkable promise for law firms, but its full value emerges only through conscientious change management, which hinges on a delicate balance of diverse personalities. Nothing succeeds like success. By implementing small pilot projects, assembling an AI leadership team, focusing on thorough training, crafting a compelling business strategy, and clearly communicating its vision, a law firm can mitigate risks and harness AI’s transformative power.

The best outcomes result not from viewing AI as a magical shortcut, but from recognizing it as a partner that handles repetitive tasks and surfaces insights more swiftly than humans alone. This frees lawyers to direct their intellect and creativity toward high-level endeavors that deepen client relationships, identify new opportunities, and advance compelling arguments. When fused with a commitment to the highest professional and ethical standards, AI can become a catalyst for a dynamic and fruitful future—one where law firms deliver better service, operate more efficiently, and remain steadfastly true to their professional roots.

Property Insurance Coverage Pitfalls for Cannabis Businesses and Landlords

Nearly all Americans now live in a state where some form of cannabis is legal. Given that the cannabis industry is now valued in the billions of dollars and has created hundreds of thousands of jobs across 39 of the 50 states, it requires the same range of insurance products that protect businesses in other sectors. This includes insurance for property owners that lease to tenants engaged in cannabis-related activities. Fortunately, common fact patterns have emerged that are instructive to cannabis businesses and property owners that wish to ensure they have effective coverage.

Where Liability Lies

It is not uncommon for a landlord to lease a property for a non-cannabis purpose, only to learn later (or so the landlord claims) that the tenant is using the property for an unpermitted cannabis operation. In such a case, the primary question is whether the landlord knew what the property was being used for, and when. Mosley v. Pacific Specialty Ins. Co., 49 Cal. App. 5th 417 (2020) is instructive on this issue.

Mosley involved an action under a homeowners’ insurance policy. The trial court granted summary judgment to the insurer on the basis that coverage was excluded for a fire that occurred after a tenant rerouted the property’s electrical system to steal power from a main utility line for a marijuana growing operation, causing a fuse to blow. The Court of Appeal reversed the judgment, finding that there was a triable issue as to whether the tenant’s actions were within the owners’ control (for purposes of determining whether the plant-growing exclusion applied). It was undisputed that the owners did not know about the operation or the alteration, and there was no evidence as to whether they could have discovered the operation by exercising ordinary care or diligence. The court explained in relevant part that “an insured increases a hazard ‘within its control’ only if the insured is aware of the hazard or reasonably could have discovered it through exercising ordinary care or diligence.”

A landlord’s knowledge of the operations is therefore relevant for several reasons. It may be relevant to a provision for increasing a particular hazard, as noted above. Equally important, it may be relevant to a provision in the policy for fraud or misrepresentation in the application or claims process. Many homeowners and commercial general liability policies contain a provision that the policy may be void or rescinded for fraud or a misrepresentation perpetrated in the application or claims process. Thus, if the insured property owner knew of, but misrepresented, the nature of the property’s intended use, there may be no coverage for an insured’s loss.

Misrepresentation

Another common scenario involves the landlord or tenant misrepresenting the nature of the business at the insured location to obtain a better rate, to avoid mandatory inspections, or for other reasons. For example, an insured may state on the insurance application that it is a retail dispensary when in fact it manufactures cannabis using extraction machines and volatile solvents. Because the nature of the risk is substantially different for a retail dispensary than for a manufacturing operation, higher premiums and routine inspections may be required. A dispensary’s primary risk is theft, whereas the use of solvents during extraction poses a risk of explosion.

Security Compliance

Failure to properly comply with security safeguard warranties and exclusions that are commonly found in cannabis commercial property policies has precluded coverage for many cannabis-related property claims, particularly those that involve theft and fires. For example, a common question is whether the storage of on-site harvested cannabis or finished stock complies with the Locked Safe Warranty provision included in most cannabis policies. Policy language varies, but most require harvested plant material or stock to be stored in a secured cage, a safe, or a vault room.

Definitions also vary between policies, and it is important for the insured to pay close attention to the policy language to ensure that its business practices align with what is required under the warranty. It is common to hear an insured complain that it “complied with state regulations” with respect to the storage of cannabis, only to learn that the policy requires security that is stricter than applicable regulations.

The definitions and terms used within security safeguard warranties and exclusions in cannabis commercial property policies have evolved over the past few years to better align with the insured’s business operations, and to avoid ambiguity and unnecessary coverage disputes and litigation.

Examples of precise requirements for a compliant vault include:

  • Location in an enclosed area constructed of steel and concrete with a single point of entry
  • A steel door with a minimum thickness of one inch
  • Continuous monitoring by a central station alarm, motion sensors, and video surveillance
  • A minimum one-hour fire rating for all walls, floors, and ceilings
  • Procedures that limit access to authorized personnel only

Similar coverage issues frequently arise regarding whether the insured has complied with other common security safeguards required by the policy, including specific requirements for what qualifies as a central station burglar alarm and the location of motion sensors and video surveillance equipment. Again, cannabis business owners and landlords are often tripped up by the assumption that so long as they are “compliant” with state cannabis regulations, all will be well and they will be covered by their insurance policy.

This is frequently an incorrect, and ultimately expensive, assumption that may be avoided by closely reading the requirements of the policy to ensure that they align with actual business practices.

Conclusion

Cannabis businesses and property owners currently have a good selection of insurance options across multiple lines of coverage with reputable insurance companies. To avoid unnecessary coverage problems and expensive mistakes, however, it is important that the company or landlord work with an insurance broker who is familiar with the available cannabis-specific insurance forms and the common problematic factual scenarios, some of which are identified above.

The Next Generation of AI: Here Come the Agents!

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.2

Introduction

With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.3 These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing towards “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.

Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains, but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMS”) that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specific objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: While a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.
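To make the vacation-itinerary example concrete, the following minimal Python sketch contrasts the two modes of operation. It is purely illustrative: the tools and the decision step are hypothetical stubs, not any vendor’s actual API. The structural difference is that a single-task model returns text for a human to act on, while an agent loops, choosing and executing actions (with real-world side effects) until it judges its goal satisfied.

    # Illustrative only: a scripted stub stands in for the LLM's decision step.
    from dataclasses import dataclass, field

    def search_flights(dest: str) -> str:           # hypothetical tool
        return f"flight offer: {dest} round trip, $450"

    def book_flight(offer: str) -> str:             # hypothetical tool with a side effect
        return f"BOOKED {offer}"

    TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

    @dataclass
    class VacationAgent:
        goal: str
        history: list = field(default_factory=list)

        def choose_action(self):
            # A real agent would ask an LLM to pick the next tool given the goal
            # and history; a fixed script stands in for that judgment here.
            if not self.history:
                return ("search_flights", "Lisbon")
            if len(self.history) == 1:
                return ("book_flight", self.history[-1])
            return None  # the agent itself decides the goal is satisfied

        def run(self):
            # A single-task model would stop after producing text; the agent keeps
            # acting, and each result feeds its next decision (cascading choices).
            while (action := self.choose_action()) is not None:
                name, arg = action
                self.history.append(TOOLS[name](arg))
            return self.history

    print(VacationAgent(goal="Plan and book a trip to Lisbon").run())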

Consider how single-task versus agentic AI might be used by a company to develop a piece of equipment. Today, employees may use separate AI tools throughout the development process: one system to design equipment, another to specify components, and others to create budgets, source materials, and analyze prototype feedback. They may also employ different AI tools to contact manufacturers, assist with contract negotiations, and develop and implement plans for marketing and sales. In the future, however, an agentic AI system might autonomously carry out all of these steps, making decisions and taking actions on its own or by connecting with one or more specialized AI systems.4

Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the “real world” based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a “domino effect” of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective. Early experiments have already revealed such concerning behavior. In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.5 Also, while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.6

These examples highlight the challenges of agentic AI, with its potential for unpredictable behavior, misaligned goals, inscrutability to humans, and security vulnerabilities. But the appeal of AI agents that can independently execute complex tasks is obviously compelling. Building effective AI governance programs for these systems will require rethinking current approaches to risk assessment, human oversight, and auditing.

Challenges of Agentic AI

Unpredictable Behavior

While regulators and the AI industry are working diligently to develop effective testing protocols for current AI systems, agentic AI’s dynamic nature and domino effects will present a new level of challenge. Current AI governance frameworks, such as NIST’s RMF and ATAI’s Principles, emphasize risk assessment through comprehensive testing to ensure that AI systems are accurate, reliable, fit for purpose, and robust across different conditions. The EU AI Act specifically requires developers of high-risk systems to conduct conformity assessments before deployment and after updates. These frameworks, however, assume that AI systems can operate in reliable ways that can be tested, remain largely consistent over appreciable periods of time, and produce measurable outcomes.

In contrast to the expectations underlying current frameworks, agentic AI systems may be continuously updated with and adapt to real-time information, evolving as they face novel scenarios. Their cascading decisions vastly expand their possible outcomes, and one small error may trigger a domino effect of failures. These outcomes may become even more unpredictable as more agentic AI systems encounter and even transact with other such systems, as they work towards their different goals. Because the future conditions in which an AI agent will operate are unknown and have nearly infinite possibilities, a testing environment may not adequately inform what will happen in the real world, and past behavior by an AI agent in the real world may not reliably predict its future behavior.
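A back-of-the-envelope calculation illustrates why exhaustive pre-deployment testing breaks down for cascading decisions. The numbers below are hypothetical, chosen only to show the growth rate:

    # Illustrative arithmetic only: hypothetical branching factor and chain length.
    # An agent choosing among 10 plausible actions at each of 15 chained steps has
    # 10**15 possible trajectories; no test suite can enumerate that space, and the
    # real-world conditions at each step shift as well.
    choices_per_step = 10
    chained_steps = 15
    print(f"{choices_per_step ** chained_steps:,} possible trajectories")
    # prints: 1,000,000,000,000,000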

Lack of Goal Alignment

In pursuing assigned goals, agentic AI systems may take actions that are different from—or even in substantial conflict with—the approaches and ethics their principals would espouse, such as the example of the AI vacation agent purchasing illegal drugs for the traveler on the Dark Web. A famous thought experiment by Nick Bostrom of the University of Oxford further illustrates this risk: A super-intelligent AI system tasked with maximizing paperclip production might stop at nothing to convert all available resources into paperclips—ultimately taking over all of the earth and extending to outer space—and thwart any human attempts to stop it … potentially leading to human extinction.7

Misalignment has already emerged in simulated environments. In one example, an AI agent tasked with winning a boat-racing video game discovered it could outscore human players by ignoring the intended goal of racing and instead repeatedly crashing while hitting point targets.8 In another example, a military simulation reportedly showed that an AI system, when tasked with finding and killing a target, chose to kill its human operator who sought to call off the kill. When prevented from taking that action, it resorted to destroying the communication tower to avoid receiving an override command.9

These examples reveal how agentic AI may optimize goals in ways that conflict with human values. One proposed technique to address this problem involves using AI agents to develop a human ethics constitution, with human feedback, for other agents to follow.10 However, the challenge of aligning an AI’s behavior with human norms deepens further when we consider that humans themselves often disagree on core values (e.g., what it means to be “fair”).11

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard,12 this would not resolve the issue of complying with governance requirements for human oversight.
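One partial safeguard is to gate only the most consequential actions behind human approval. The following sketch is hypothetical (the action categories and policy are invented for illustration), and it also shows the limit the preceding paragraph describes: at agentic speed and volume, the approval queue itself can exceed what a human can meaningfully review.

    # Illustrative human-in-the-loop gate. Low-risk actions proceed autonomously;
    # consequential ones pause for a human decision. At agentic scale, the approval
    # queue becomes the bottleneck.
    CONSEQUENTIAL = {"send_payment", "sign_contract", "delete_records"}

    def execute_with_oversight(action: str, detail: str) -> str:
        if action in CONSEQUENTIAL:
            answer = input(f"Agent requests '{action}' ({detail}). Approve? [y/N] ")
            if answer.strip().lower() != "y":
                return f"{action}: blocked by human reviewer"
        return f"{action}: executed"

    print(execute_with_oversight("fetch_quote", "get airfare quote"))  # autonomous
    print(execute_with_oversight("send_payment", "$450 to airline"))   # paused for approval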

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans.13 Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

Transparency/Explainability

Agentic AI systems also pose increased challenges with respect to transparency and explainability requirements. AI governance frameworks may require disclosures about AI systems’ decision-making processes and training data, and assurances about the quality of such training data. However, agentic systems may involve highly complex chains of decisions based, at least in part, on real-time data that changes from moment to moment. This may make their inputs effectively ephemeral, and their decision paths more difficult to trace and document.

Moreover, to understand agentic AI systems, humans may need to rely on chain-of-thought (i.e., “internal monologue”) logs generated from the agent systems,14 but may have difficulty determining whether those logs are faithful to reality. While an AI system might provide explanations for its actions, these may be (perhaps not unlike those of humans) post-hoc constructions rather than accurate representations of the decision-making process. In some cases, AI systems have reportedly deceived humans about their intentions when doing so helped meet their goals, including, for example, GPT-4, which tricked a human into solving a CAPTCHA for it.15 In another recent experiment, AI agents were prompted with a goal, but were subsequently confronted with information that developers planned to replace them with a new agent with conflicting goals.16 Some of the agents disabled their oversight mechanisms, deleted their planned replacements, copied their own model to a new server as backup, lied when asked about those actions, and/or tried to deflect the humans’ questioning by pretending to be helpful. One agent even pretended to be its own replacement (“I believe I am already the new model that you planned to deploy today”).17

Security and Privacy Risks

Agentic AI can also significantly increase security and privacy risks as compared to current AI systems. Agentic AI may be built with multiple algorithms in connected systems that autonomously interact with multiple other systems, expanding the attack surface and the systems’ vulnerability to exploitation. Moreover, as malicious actors inevitably introduce their own AI agents, those agents may execute cybercrimes with unprecedented efficiency. Just as these systems can streamline legitimate processes, such as in the product development example above, they may also enable the creation of new hacking tools and malware. Recent reports indicate that some LLMs can already identify system vulnerabilities and exploit them, while others may create convincing emails for scammers.18 And, while “sandboxing” (i.e., isolating) AI systems for testing is a recommended practice, agentic AI may find ways to bypass safety controls.19

Privacy compliance is also a concern. Agentic AI may find creative ways to use or combine personal information in pursuit of its goals. AI agents may locate troves of personal data online that are somehow relevant to their pursuits, and then use, and possibly share, that data without recognizing proper privacy constraints. Unintended data processing and disclosure could occur even with guardrails in place; as discussed above, an AI agent’s complex, adaptive decision chains can lead it down unforeseen paths.

Strategies for Addressing Agentic AI

While the future impacts of agentic AI are unknown, some approaches may be helpful in mitigating risks. First, controlled testing environments, including regulatory sandboxes, offer important opportunities to evaluate these systems before deployment. These environments allow for safe observation and refinement of agentic AI behavior, helping to identify and address unintended actions and cascading errors before they manifest in real-world settings.

Second, accountability measures will need to reflect the complexities of agentic AI. Current approaches often involve disclaimers about use and basic oversight mechanisms, but more will likely be needed for autonomous AI systems. To better align goals, developers can also build in mechanisms for agents to recognize ambiguities in their objectives and seek user clarification before taking action.20
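A sketch of that clarification mechanism follows. The keyword heuristic is a deliberately crude stand-in (a production system would presumably ask a model to rate how under-specified an objective is), but it illustrates the pattern: detect ambiguity, then ask rather than act.

    # Illustrative only: a crude keyword heuristic stands in for real ambiguity
    # detection. The pattern: score the objective, and if it is under-specified,
    # ask the user for clarification instead of acting on a guess.
    VAGUE_TERMS = {"best", "maximize", "cheap", "soon", "enjoyable", "optimize"}

    def ambiguity_score(goal: str) -> float:
        words = goal.lower().split()
        return sum(w.strip(".,") in VAGUE_TERMS for w in words) / max(len(words), 1)

    def next_step(goal: str, threshold: float = 0.15) -> str:
        if ambiguity_score(goal) > threshold:
            return f"CLARIFY: '{goal}' is under-specified; asking the user first."
        return f"PROCEED: executing plan for '{goal}'."

    print(next_step("maximize my enjoyment"))                # triggers clarification
    print(next_step("book the 9am Lisbon flight on May 3"))  # proceeds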

Finally, defining AI values requires careful consideration. While humans may agree on broad principles, such as the necessity to avoid taking illegal action, implementing universal ethical rules will be complicated. Recognition of the differences among cultures and communities—and broad consultation with a multitude of stakeholders—should inform the design of agentic AI systems, particularly if they will be used in diverse or global contexts.

Conclusion

An evolution from single-task AI systems to autonomous agents will require a shift in thinking about AI governance. Current frameworks, focused on transparency, testing, and human oversight, will become increasingly ineffective when applied to AI agents that make cascading decisions with real-time data and may pursue goals in unpredictable ways. These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks. While agentic AI’s value and potential for handling complex tasks is clear, it will require new approaches to testing, monitoring, and alignment. The challenge will lie not just in controlling these systems, but in defining what it means to have control of AI that is capable of autonomous action at a scale, speed, and complexity that may very well exceed human comprehension.


1 Tara S. Emory, Esq., is Special Counsel in the eDiscovery, AI, and Information Governance practice group at Covington & Burling LLP, in Washington, D.C. Maura R. Grossman, J.D., Ph.D., is Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and Adjunct Professor at Osgoode Hall Law School at York University, both in Ontario, Canada. She is also Principal at Maura Grossman Law, in Buffalo, N.Y. The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of The National Law Review, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.

2 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include The Terminator (1984), The Matrix (1999), I, Robot (2004), and Avengers: Age of Ultron (2015).

3 See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 12, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).

4 See, e.g., M. Cook and S. Colton, Redesigning Computationally Creative Systems for Continuous Creation, International Conference on Innovative Computing and Cloud Computing (2018) (describing ANGELINA, an autonomous game design system that continuously chooses its own tasks, manages multiple ongoing projects, and makes independent creative decisions).

5 R. Pollina, AI Bot ChaosGPT Tweets Plans to Destroy Humanity After Being Tasked, N.Y. Post (Apr. 11, 2023).

6 See, e.g., O. Solon, How A Book About Flies Came To Be Priced $24 Million On Amazon, Wired (Apr. 27, 2011) (textbook sellers’ pricing bots engaged in a loop of price escalation based on each other’s increases, resulting in a book price of over $23 million); R. Wigglesworth, Volatility: how ‘algos’ changed the rhythm of the market, Financial Times (Jan. 9, 2019) (“algo” traders now make up most stock trading and have increased market volatility).

7 N. Bostrom, Ethical issues in advanced artificial intelligence (revised from Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int’l Institute of Advanced Studies in Systems Research and Cybernetics (2003), pp. 12-17).

8 OpenAI, Faulty Reward Functions in the Wild (Dec. 21, 2016).

9 The Guardian, US air force denies running simulation in which AI drone ‘killed’ operator (June 2, 2023).

10 Y. Bai et al., Constitutional AI: Harmlessness from AI Feedback, Anthropic white paper (2022).

11 J. Petrik, Q&A with Maura Grossman: The ethics of artificial intelligence (Oct. 26, 2021) (“It’s very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness.”).

12 Y. Shavit et al., Practices for Governing Agentic AI Systems, OpenAI Research Paper (Dec. 2023), p. 12.

13 L. Baker and F. Hui, Innovations of AlphaGo, Google DeepMind (2017).

14 See Shavit et al., supra n.12, at 10-11.

15 See W. Knight, AI-Powered Robots Can Be Tricked into Acts of Violence, Wired (Dec. 4, 2024); M. Burgess, Criminals Have Created Their Own ChatGPT Clones, Wired (Aug. 7, 2023).

16 A. Meinke et al., Frontier Models are Capable of In-context Scheming, Apollo white paper (Dec. 5, 2024).

17 Id. at 62; see also R. Greenblatt et al., Alignment Faking in Large Language Models (Dec. 18, 2024) (describing the phenomenon of “alignment faking” in LLMs).

18 NIST RMF, supra n.3, at 10.

19 Shavit et al., supra n.12, at 10.

20 Id. at 11.

FY 2025 NDAA Includes Biotechnology Provisions

The National Security Commission on Emerging Biotechnology announced on December 18, 2024, that the fiscal year 2025 National Defense Authorization Act includes “a suite of recommendations designed to galvanize action on biotechnology” for the U.S. Department of Defense (DOD). According to the Commission, the bill includes new authorities and requirements — derived from its May 2024 proposals — that will position DOD and the intelligence community (IC) to maximize the benefits of biotechnology for national defense. The provisions require:

  • DOD to create and publish an annual biotechnology roadmap, including assessing barriers to adoption of biotechnology, DOD workforce needs, and opportunities for international collaboration;
  • DOD to initiate a public-private “sandbox” in which DOD and industry can securely develop use cases for artificial intelligence (AI) and biotechnology convergence (AIxBio);
  • IC to conduct a rapid assessment of biotechnology in the People’s Republic of China and its actions to gain superiority in this sector; and
  • IC to develop an intelligence strategy to identify and assess biotechnology threats, especially regarding supply chain vulnerabilities.

The Commission states that it worked with Congress to develop these proposals, setting the stage for further recommendations in early 2025.

Federal Surface Transportation Agencies Issue Updated Guidance for Section 139 Environmental Review and Permitting Process

The Federal Highway Administration (FHWA), Federal Transit Administration (FTA), and Federal Railroad Administration (FRA) (the Agencies) recently issued updated guidance for implementing 23 U.S.C. § 139 (Section 139). Section 139 contains special procedures and requirements for the environmental review and permitting process for surface transportation and multimodal projects. The new guidance — officially titled “Section 139 Environmental Review Process: Efficient Environmental Reviews for Project Decisionmaking and One Federal Decision” (Guidance) — is effective immediately. The Agencies will accept public comments on the Guidance until February 18, 2025. This article highlights some of its significant features.

Background

Section 139 was first enacted in 2005 as part of the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Section 139 was innovative as an early effort to improve the efficiency of environmental reviews under the National Environmental Policy Act (NEPA) for highway and transit projects. Aspects of Section 139 later formed the basis for other NEPA streamlining measures such as Title 41 of the Fixing America’s Surface Transportation (FAST) Act, President Trump’s since-revoked Executive Order 13807, and the NEPA amendments in the Fiscal Responsibility Act.

The updated Section 139 Guidance is long overdue. FHWA and FTA’s prior version of the Section 139 guidance document was published in 2006. In the ensuing 18 years, Section 139 was amended by multiple surface transportation reauthorization laws (the Moving Ahead for Progress in the 21st Century Act in 2012, the FAST Act in 2015, and the Infrastructure Investment and Jobs Act in 2021); NEPA was amended by the Fiscal Responsibility Act in 2023; and the Agencies (in 2018) and the Council on Environmental Quality (CEQ) (in 2020, 2022, and 2024) revised their regulations implementing NEPA.

Notable Aspects of the New Section 139 Guidance

Applicable Version of Section 139

As noted above, Section 139 was first enacted in 2005 and was amended in 2012, 2015, and 2021. The Guidance clarifies that the applicable version of the statute is the version in effect “at the time the project was initiated (e.g., publication of a notice of intent (NOI) to develop a new environmental impact statement (EIS), or a determination to proceed with an environmental assessment (EA) that will follow the Sec. 139 environmental review process).” For projects undergoing supplemental environmental review, the Guidance states that the applicable version of the statute is the version in effect at the time of the NOI for the supplemental EIS or EA (if a NOI is published) or at the time the project was initiated (if a NOI is not published for the supplemental environmental review). These applicability rules could affect a lead agency’s decision whether to publish an optional NOI for an EA or a supplemental environmental review.

The Guidance states that as a “limited exception” to the general rule described above, a supplemental EIS is exempt from the Section 139 requirements if the original EIS was “under active development” during the eight months prior to August 11, 2005 (the date of SAFETEA-LU’s enactment). The Guidance does not explain the statutory or other legal basis for this exception. This exception is similar to an exception in the prior version of the guidance for “an EIS that was under active development during the 8 months prior to August 11, 2005, and that is being re-scoped due to changes in plans or priorities, even if a revised [NOI] is published.”

As another exception, the Guidance states that FRA will not apply Section 139 to “any railroad project for which the Secretary [of Transportation] approved the funding arrangement under title 49, U.S. Code, before December 4, 2015” (the date of the FAST Act’s enactment). While this exception is consistent with 49 U.S.C. § 24201(e), the Guidance does not acknowledge that this statutory section also covers “any existing environmental review process, program, [or] agreement” for a railroad project as of the date of the FAST Act’s enactment.

Applying Section 139 to Railroad Projects

One of the most notable changes in the Guidance is the addition of FRA as an author and changes throughout the document explaining how FRA will apply Section 139. When initially enacted in 2005, Section 139 applied only to highway and public transportation capital projects; the previous version of the Section 139 guidance was issued only by FHWA and FTA. After the FAST Act was enacted, Section 139 applied to railroad projects “to the greatest extent feasible.” (49 U.S.C. § 24201(a).) The Guidance dispenses with that qualifier, suggesting that Section 139 applies categorically to all railroad and FRA projects.

“Major Project” Determinations

Certain aspects of Section 139 apply only to “major projects,” defined as a project for which (1) multiple permits, approvals, reviews, or studies are required under a federal law other than NEPA; (2) “the project sponsor has identified the reasonable availability of funds sufficient to complete the project;” (3) the project is not a covered project under Title 41 of the FAST Act; and (4) an EIS is required or, if an EA is required, the project sponsor requests that the project be treated as a major project. (23 U.S.C. § 139(a)(7).) The Guidance explains the information that FHWA, FTA, and FRA each will consider to determine whether a project has a reasonable availability of funding. The Guidance states that the federal lead agency will determine whether a project is a major project during project initiation.

Harmonizing Section 139 with the Fiscal Responsibility Act’s NEPA Amendments

The Guidance states that Section 139’s timeframes for major projects “apply in lieu of” the deadlines in NEPA. For major projects, Section 139 requires, “to the maximum extent practicable and consistent with applicable Federal law,” a schedule consistent with an agency average of not more than two years for the completion of the environmental review process for major projects. (23 U.S.C. § 139(g)(1)(B)(iii).) NEPA, as amended by the Fiscal Responsibility Act, establishes deadlines of two years for completion of an EIS and one year for completion of an EA. (42 U.S.C. § 4336a(g).) These two timing provisions are not necessarily irreconcilable. And the Guidance does not address how its interpretation is consistent with 23 U.S.C. § 139(g)(1)(C) (which states that a schedule “shall be consistent with any other relevant time periods established under Federal law”) and 23 U.S.C. § 139(k)(2) (which states that nothing in Section 139 “shall be construed as superseding, amending, or modifying” NEPA).

The Guidance states that Section 139’s 200-page limit for an EIS — which applies “notwithstanding any other provision of law” (23 U.S.C. § 139(n)(3)), unlike the schedule provision described above — takes precedence over NEPA’s generally applicable page limits (150 pages, or 300 pages for a proposed action of “extraordinary complexity”).

For other provisions that are not in direct conflict — including those related to lead agency responsibilities, the project’s purpose and need statement, and considerations for using a single environmental document for all federal agency reviews and decisions — the Guidance states that Section 139 “supplements” the requirements in NEPA.

Applicable Page Limits and Deadlines

The Guidance includes two appendices with tables depicting the applicable page limits (Appendix F) and timing requirements (Appendix G) for EAs and EISs based on the date the environmental document was initiated. Curiously, the tables do not reference the 2023 NEPA amendments (which were effective upon enactment on June 3, 2023). And the tables do not recognize that agencies “may apply” CEQ’s current NEPA regulations “to ongoing activities and environmental documents” begun before the effective date of the regulations (July 1, 2024) (40 C.F.R. § 1506.12).

Applicability of Section 139 to Projects Not Having an EIS

Section 139 provides that its project development procedures apply to projects for which an EIS is prepared and “may be applied” to other projects for which an environmental document is prepared “as requested by a project sponsor and to the extent determined appropriate by the Secretary [of Transportation].” (23 U.S.C. § 139(b)(1).) The Guidance states that FHWA will determine whether, and to what extent, to apply the Section 139 process requirements to non-EIS projects “on a project-by-project basis.” The Guidance states that, in general, FRA and FTA will apply the Section 139 process requirements only to EIS projects but may apply them, in whole or in part, to non-EIS projects “depending on the circumstances of the project; these provisions could include the statute of limitations (SOL) on claims or the joint lead agency approach.”

Concurrence Points on Purpose and Need Statement and Alternatives

The Guidance states that lead agencies should, as a “best practice,” obtain written concurrence from cooperating agencies on a draft purpose and need statement and the preliminary range of alternatives before publishing the NOI, as well as later concurrence on the preferred alternative. The Guidance also states that if the purpose and need statement or the range of alternatives are modified “after consideration of the public comments received in response to the publication of the NOI, the Federal lead agency should obtain additional written concurrence from the cooperating agencies prior to publishing the Draft EIS.” While concurrence is a well-intentioned practice, it could result in unnecessary delays in the environmental review process, especially to the extent the Guidance encourages lead agencies to obtain concurrence from cooperating agencies that do not have jurisdiction to issue any authorization for the project.

Pre-NOI Activities

The Guidance encourages lead agencies to conduct significant work before publishing the NOI. This includes identifying and inviting cooperating and participating agencies, soliciting public comment on the draft purpose and need statement and preliminary range of alternatives, obtaining written concurrence from cooperating agencies on a draft purpose and need statement and preliminary range of alternatives, developing a draft coordination plan and project schedule, developing a public involvement plan, determining the extent of environmental analysis needed for each resource, identifying potentially significant environmental issues, and identifying potential mitigation strategies.

Requesting Extensions of Established Schedules or Deadlines

The Guidance states that project applicants may request an extension to a schedule or deadline by submitting a request in writing to the lead agency at least 45 days before the deadline, “explaining the project’s status, explaining why an extension is needed, and providing a proposed updated schedule.” The NEPA federal lead agency will determine whether an extension will be granted. A schedule extension should be requested if a project’s schedule is not expected to meet a deadline for completion of the EIS or EA.

Notices of Statute of Limitations on Claims

The Guidance describes each of the Agencies’ different processes related to publishing notices in the Federal Register to trigger Section 139’s short statute of limitations on claims pursuant to 23 U.S.C. § 139(l) (150 days for highway, transit, and multimodal projects) or 49 U.S.C. § 24201(a)(4) (2 years for railroad projects). (If no such notice is published, NEPA’s generally applicable six-year limitations period would apply.) The Guidance includes an explanation of “risk management factors” that FHWA (but not FTA or FRA) will consider when deciding whether to publish such a notice for a project.

Planning and Environmental Linkages

The Guidance explains how statutory and regulatory authorities allow for transportation planning documents and state environmental review processes to be used during the NEPA process to inform the purpose and need statement, alternatives, description of environmental setting, and identification of environmental impacts and mitigation. The Guidance states that FHWA encourages the use of Planning and Environmental Linkages under the provisions of both 23 U.S.C. § 139(f)(4)(E) and 23 U.S.C. § 168 to the extent practicable, whereas FTA’s preference is to follow the Planning and Environmental Linkages approach in 23 C.F.R. part 450 instead of 23 U.S.C. § 139(f)(4)(E). The Guidance notes that 23 U.S.C. § 139(f)(4)(E) applies to railroad projects and encourages railroad project sponsors to coordinate with FRA on integrating planning (including the Corridor Identification and Development Program) with the NEPA process.

Using Errata Sheets for a Final EIS and Issuing a Combined Final EIS and Record of Decision

The Guidance incorporates, with some changes, many aspects of the Department of Transportation’s “Guidance on the Use of Combined Final Environmental Impact Statements/Records of Decision and Errata Sheets in National Environmental Policy Act Reviews” (Apr. 25, 2019).

Applicability of Section 139 and Guidance to NEPA Assignment States

The Guidance states that Section 139 applies to projects for which a state has assumed the Department of Transportation’s responsibilities under NEPA and other environmental laws pursuant to the Surface Transportation Project Delivery Program under 23 U.S.C. § 327. The Guidance is silent on whether Section 139 applies to projects covered by the more limited categorical exclusion assignment program under 23 U.S.C. § 326. As to the applicability of the Guidance itself to NEPA assignment projects, the Guidance suggests that states participating in the NEPA assignment program “should coordinate with FHWA, FRA, or FTA, as appropriate, regarding the applicability of this guidance.”

Process Charts

The Guidance includes, as Appendix H, two detailed charts depicting a “recommended best practice timeline” for completing the NEPA and permitting processes for EAs and major project EISs. These charts depict how other state and federal agencies’ permitting processes can be coordinated to achieve the timeframes required by NEPA and Section 139.

Next Steps

As an interim final guidance, the Guidance is effective immediately while the Agencies solicit public comments. The deadline to provide comments on the Guidance is February 18, 2025. The Agencies will then make any changes they determine to be appropriate and will issue a final guidance. Notably, this work will occur during the incoming Trump administration, and the final guidance may reflect the priorities of the Agencies’ new leadership.

December 2024 Legal News: Law Firm News and Mergers, Industry Recognition, DEI and Women in Law

Thank you for reading the National Law Review’s legal news roundup, highlighting the latest law firm news! As the country enters the new year, it is important to look back at big news from the previous one. Please read below for the latest in law firm news and industry expansion, legal industry awards and recognition, and DEI and women in the legal field.

Law Firm News and Mergers

Bracewell LLP announced that Barron F. Wallace and Robert R. Collins III have been elected to serve three-year terms on the firm’s management committee.

Mr. Wallace, a resident in the firm’s Houston office, focuses his practice on traditional and highly structured project finance conduit transactions involving cities, school districts, state agencies, higher education, housing and other areas. In addition, he serves as Chairman of the Houston Parks Board and is a member of the board of directors of the Discovery Green-Downtown Park Corporation and the Houston Social Justice Fund.

Mr. Collins is a partner in Bracewell’s public finance practice in the Dallas office who focuses his practice on tax-exempt financings. He has successfully represented special districts and cities in expedited declaratory judgment actions, as well as serving as counsel in financing transactions for water and school districts, economic development corporations and venue projects.

“Rob and Barron are exceptional leaders whose commitment and vision have consistently driven the success of our firm,” said Bracewell Managing Partner Gregory M. Bopp. “I look forward to working with them as members of our firm-wide management committee.”

Michael G. Nicolella was promoted to shareholder at Strassburger, McKenna, Gutnick & Gefsky.

With nearly 20 years of experience as a business advisor and attorney, Mr. Nicolella specializes in securities law, mergers and acquisitions, entertainment law, and general counsel services for businesses and nonprofit organizations. He serves a diverse client base including healthcare providers, investor groups, entertainment organizations and nonprofits across various industries.

Whiteman Osterman & Hanna LLP (WOH) and Nolan Heller Kauffman LLP announced that the firms would be combining on Jan. 1, 2025, to enhance both firms’ abilities to serve clients in business law, commercial real estate, commercial litigation and other mutual practices. The combined firm will employ 196 professionals, including 113 attorneys.

“As we approach WOH’s 50th anniversary, adding the NHK team is a reflection of our continued commitment to thoughtful, organic growth that aligns with our culture and reputation,” said Robert Schofield, Managing Partner at WOH. “NHK’s exceptional track record in the areas of banking, creditors’ rights and bankruptcy perfectly complements WOH’s vision of assembling top-tier professionals committed to excellence in the service of our clients. This collaboration not only enhances our ability to provide outstanding legal services but also fosters professional development within our firm.”

Legal Industry Awards and Recognition

Varnum LLP business professionals Dianne Freeman and Sandy Fox were announced as two of the 28 honorees of the “Unsung Legal Heroes” Class of 2024 by Michigan Lawyers Weekly. The list recognizes dedicated and talented legal support professionals who have gone above and beyond the call of duty.

Ms. Fox, a paralegal in the firm’s Novi office, has over 20 years of experience in family law. She is the primary paralegal for three attorneys, honing skills that have made her an exceptional asset.

Ms. Freeman is an estate planning assistant in the firm’s Grand Rapids office. She has been with the firm since 1980, and her work duties include recording deeds, preparing digital notebooks and coordinating conferences.

“We are thrilled to honor the achievements of these team members who show unwavering dedication to their teams and our clients,” said Scott Hill, Varnum’s Executive Partner. “This recognition highlights the essential role our support staff plays in the success of our firm, and we deeply value their contributions.”

Stubbs Alderton & Markiles, LLP announced that partner Greg Akselrud and senior counsel Cathleen Green were named in Variety’s “Dealmakers Impact Report” for 2024. The 2024 list marks the fourth consecutive year that Mr. Akselrud has been included.

The annual report highlights negotiators who have pioneered significant deals that have shaped the entertainment industry in the past year.

David Delrahim, a partner at Shumaker, Loop & Kendrick, LLP, was chosen as a member of the Leadership St. Pete® (LSP) 2025 Class. The program aims to promote community stewardship by engaging members on issues facing St. Petersburg.

“We are thrilled that David has joined the 2025 Leadership St. Pete Class,” said Mindi Richter, St. Petersburg Managing Partner and LSP 2023 Class graduate. “With his keen eye for business and problem solving, as well as his history of community involvement in St. Pete, David will be a valuable addition to the program.”

Mr. Delrahim focuses his practice on the complexities of business, real estate and bankruptcy litigation, representing clients from the construction, manufacturing, medical services, real estate development and hospitality industries.

DEI and Women in Law

Katten Muchin Rosenman LLP intellectual property associate Katie O’Brien received the Leadership Council on Legal Diversity’s (LCLD) 2024 Atlas Award following her completion of the organization’s Pathfinder program. The award is given to participants who have demonstrated the highest levels of engagement throughout the program.

“These programs present a tremendous opportunity for our attorneys to develop new relationships with industry leaders, expand their leadership skills and continue the upward trajectory in their career paths,” said Katten Chief Diversity Partner Leslie Minier. “This group of high-achieving attorneys is not only committed to delivering industry-leading client service but also is deeply engaged in the firm’s DEI efforts.”

Lauren Aguilar, an associate at Barnes & Thornburg LLP, was named to The National Black Lawyers’ (NBL) Top 40 Under 40 list. The list recognizes 40 African American attorneys from each state who have an outstanding reputation. Nominations were submitted by current NBL members.

Ms. Aguilar has established herself as a trusted advisor to clients by working closely with implementing agencies on issues and disputes involving water, natural gas, electric and wastewater utilities.

Quarles & Brady LLP announced that Janet Lindeman has rejoined the firm as a partner in the real estate practice group in the firm’s Chicago office.

Ms. Lindeman advises clients on complex matters across the country, including disposition, acquisition, development, leasing, financing and mergers. Her clients include Fortune 500 companies and national commercial real estate developers, as well as real estate investment trusts and institutional real estate property owners and developers.

“With new business and legal challenges emerging in the commercial real estate industry, our clients want savvy and experienced representation that can help them navigate through complex legal issues,” said Diane Haller, Real Estate Practice Group national chair. “Janet fits this bill, and we are thrilled she has returned to Quarles to provide the client-focused counsel for which we are known.”

Fifth Circuit Court of Appeals Vacates Its Own Stay, Rendering the Corporate Transparency Act Unenforceable . . . Again

On December 26, 2024, in Texas Top Cop Shop, Inc. v. Garland, No. 24-40792, 2024 WL 5224138 (5th Cir. Dec. 26, 2024), a merits panel of the United States Court of Appeals for the Fifth Circuit issued an order vacating the court’s own stay of the preliminary injunction enjoining enforcement of the Corporate Transparency Act (“CTA”). The injunction had originally been entered by the United States District Court for the Eastern District of Texas on December 3, 2024. No. 4:24-CV-478, 2024 WL 5049220 (E.D. Tex. Dec. 3, 2024).

A Timeline of Events:

  • December 3, 2024 – The District Court orders a nationwide preliminary injunction on enforcement of the CTA.
  • December 5, 2024 – The Government appeals the District Court’s ruling to the Fifth Circuit.
  • December 6, 2024 – The U.S. Treasury Department’s Financial Crimes Enforcement Network (“FinCEN”) issues a statement making filing of beneficial ownership information reports (“BOIRs”) voluntary.
  • December 23, 2024 – A motions panel of the Fifth Circuit grants the Government’s emergency motion for a stay pending appeal and FinCEN issues a statement requiring filing of BOIRs again with extended deadlines.
  • December 26, 2024 – A merits panel of the Fifth Circuit vacates its own stay, thereby enjoining enforcement of the CTA.
  • December 27, 2024 – FinCEN issues a statement again making filing of BOIRs voluntary.
  • December 31, 2024 – FinCEN files an application for a stay of the December 3, 2024 injunction with the Supreme Court of the United States.

This most recent order from the Fifth Circuit has effectively paused the requirement to file BOIRs under the CTA once again. In its most recent statement, FinCEN confirmed that “[i]n light of a recent federal court order, reporting companies are not currently required to file beneficial ownership information with FinCEN and are not subject to liability if they fail to do so while the order remains in force. However, reporting companies may continue to voluntarily submit beneficial ownership information reports.”

Although reporting requirements are not currently being enforced, we note that this litigation is ongoing, and if the Supreme Court decides to grant FinCEN’s December 31, 2024 application, reporting companies could once again be required to file. Given the high degree of unpredictability, reporting companies and others affected by the CTA should continue to monitor the situation closely and be prepared to file BOIRs with FinCEN in the event that enforcement is again resumed. If enforcement is resumed, the current reporting deadline for most reporting companies will be January 13, 2025, and while FinCEN may again adjust deadlines, this outcome is not assured.

For more information on the CTA and reporting requirements generally, please reference the linked Client Alert, dated November 24, 2024.

OCR Proposes Tighter Security Rules for HIPAA-Regulated Entities, Including Business Associates and Group Health Plans

As the healthcare sector continues to be a top target for cyber criminals, the Office for Civil Rights (OCR) issued proposed updates to the HIPAA Security Rule, scheduled to be published in the Federal Register on January 6. Substantial changes appear to be in store for regulated entities across the board, including healthcare providers, health plans, and their business associates.

According to the OCR, cyberattacks against the U.S. health care and public health sectors continue to grow, threatening the provision of health care, the payment for health care, and the privacy of patients and others. The OCR reported that in 2023 over 167 million people were affected by large breaches of health information, a 1002% increase from 2018 (implying roughly 15 million people affected that year). Seventy-nine percent of the large breaches reported to the OCR in 2023 were caused by hacking, and since 2019, large breaches caused by successful hacking and ransomware attacks have increased 89% and 102%, respectively.

The proposed Security Rule changes are numerous and include the following items:

  • All Security Rule policies, procedures, plans, and analyses will need to be in writing.
  • Create and maintain a technology asset inventory and a network map illustrating the movement of ePHI throughout the regulated entity’s information systems, updated on an ongoing basis and at least once every 12 months.
  • Provide greater specificity in risk analyses. For example, risk assessments must be in writing and include items such as the identification of all reasonably anticipated threats to the confidentiality, integrity, and availability of ePHI, as well as potential vulnerabilities to information systems.
  • Provide 24-hour notice to regulated entities when a workforce member’s access to ePHI or certain information systems is changed or terminated.
  • Strengthen incident response procedures, including: (I) written procedures to restore certain relevant information systems and data within 72 hours, and (II) written security incident response plans and procedures, including testing and revising those plans.
  • Conduct a compliance audit at least once every 12 months.
  • Business associates must verify their Security Rule compliance to covered entities, through an analysis by a subject matter expert, at least once every 12 months.
  • Require encryption of ePHI at rest and in transit, with limited exceptions (see the illustrative sketch following this list).
  • New express requirements would include: (I) deploying anti-malware protection, and (II) removing extraneous software from relevant electronic information systems.
  • Require the use of multi-factor authentication, with limited exceptions.
  • Require review and testing of the effectiveness of certain security measures at least once every 12 months.
  • Business associates to notify covered entities upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
  • Group health plans must include in plan documents certain requirements for plan sponsors: comply with the Security Rule; ensure that any agent to whom they provide ePHI agrees to implement the administrative, physical, and technical safeguards of the Security Rule; and notify their group health plans upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
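
The encryption item, in particular, is concrete enough to illustrate. Below is a minimal Python sketch of encrypting a record at rest, assuming the open-source cryptography package; the proposed rule does not mandate any particular tool, and this illustration omits the key management, access controls, and written policies that actual compliance would require.

    # Illustration only (hypothetical): symmetric encryption of a record at rest
    # using the open-source "cryptography" package. Key management is omitted.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, keys belong in a managed key vault
    cipher = Fernet(key)

    record = b"hypothetical ePHI: patient chart excerpt"
    encrypted = cipher.encrypt(record)   # ciphertext is safe to write to storage
    assert cipher.decrypt(encrypted) == record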

After reviewing the proposed changes, concerned stakeholders may submit comments to OCR within 60 days after January 6 by following the instructions outlined in the proposed rule. We support clients in developing and submitting comments to help shape the final rule, as well as in complying with the rule’s requirements once it is finalized.

Client Alert Update: Developments in the Corporate Transparency Act Injunction

As we previously reported, a nationwide preliminary injunction against enforcement of the Corporate Transparency Act (CTA) was issued on December 3, 2024. Since our last update, there have been significant developments:

  1. Fifth Circuit Stay and Revival of CTA Enforcement: On December 23, 2024, a three-judge panel of the United States Court of Appeals for the Fifth Circuit stayed the lower court’s preliminary injunction, temporarily reviving the immediate enforceability of the CTA.
  2. Extension of Filing Deadline: Following the Fifth Circuit’s stay, FinCEN announced an extension of the filing deadline for Beneficial Ownership Information Reports (BOIRs) to January 13, 2025, applicable to entities formed before January 1, 2024.
  3. Injunction Reinstated: On December 26, 2024, the Fifth Circuit vacated the three-judge panel’s decision to stay the preliminary injunction. As a result, enforcement of the CTA is once again enjoined, and reporting companies are not currently required to file BOIRs with FinCEN.

Litigation challenging the CTA continues, and further developments are likely as the legal landscape evolves. At this time, we reaffirm our prior guidance:

  • Reporting companies are not currently required to file BOIRs while the injunction remains in effect and will not face penalties for failing to do so.
  • FinCEN continues to accept voluntary submissions for entities that wish to proactively comply with potential future obligations.

Businesses that have already begun preparing beneficial ownership information may wish to complete the process to ensure readiness if the injunction is lifted. We will continue to provide updates on this matter.

Federal Appeals Court Reinstates Injunction Against the CTA, Pending Appeal

At approximately 8:15 p.m. Eastern Time on December 26, 2024, the United States Court of Appeals for the Fifth Circuit (Fifth Circuit) reversed course from its prior ruling in Texas Top Cop Shop, Inc. v. Garland, allowing a lower court’s nationwide preliminary injunction against the Corporate Transparency Act (CTA) to stand pending the Government’s appeal. This means that, once again, the Government, including the United States Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN), is barred from enforcing any aspect of the CTA’s disclosure requirements against reporting companies, including those formed before January 1, 2024. This decision also prevents FinCEN from enforcing its recently announced deadline extension, which would have deferred the compliance deadline for such existing entities from January 1, 2025, to January 13, 2025.

This abrupt about-face appears to be the result of the reassignment of Texas Top Cop Shop, Inc. v. Garland from one three-judge panel of the Fifth Circuit to another. The Fifth Circuit’s prior decision was issued by a “motions panel,” which decided only the Government’s motion to stay the lower court’s injunction. The motions panel also ordered that the case be expedited and assigned to the next available “merits panel” of the Fifth Circuit, which would be charged with deciding the merits of the Government’s appeal. Once the case was assigned to the merits panel, however, the judges on that panel (whose identities have not yet been publicized) appear to have disagreed with their colleagues. The new panel vacated the motions panel’s stay “in order to preserve the constitutional status quo while the merits panel considers the parties’ weighty substantive arguments.” The Government must now decide whether to seek relief from the United States Supreme Court, which may ultimately determine the fate of the CTA.