January 2025 Legal News: Law Firm News and Mergers, Industry Awards and Recognition, DEI and Women in Law

Thank you for reading the National Law Review’s legal news roundup, highlighting the latest law firm news! A new year means new law firm news. Please read below for the latest in law firm news and industry expansion, legal industry awards and recognition, and DEI and women in the legal field.

Law Firm News and Mergers

Jackson Lewis P.C. announced the elevation of 20 attorneys to principal status.

“We are proud to announce the elevation of our 2025 class of new principals,” said Firm Chair Kevin Lauri. “These individuals have demonstrated exceptional talent, steadfast dedication and a deep commitment to both our clients and the core values that define Jackson Lewis. This is a well-deserved achievement, and we are excited to see the continued leadership and impact each of the principals will bring to the firm in the years ahead.”

Bradley Arant Boult Cummings LLP welcomed its largest first-year class, including 49 associates and two staff attorneys. The class includes:

Atlanta

  • Kaylee M. Roberts – Litigation Practice Group (Emory University School of Law)
  • Joseph A. Shafritz – Healthcare Practice Group (Georgia State University College of Law)
  • Ashley E. Strain – Litigation Practice Group (Emory University School of Law)

Birmingham

  • Bidushi Adhikari – Litigation Practice Group and Construction Practice Group (Boston University School of Law)
  • Julianne L. Bayer – Litigation Practice Group (University of Alabama School of Law)
  • Katelyn Carson – Construction Practice Group (University of Alabama School of Law)
  • John Darby – Corporate & Securities Practice Group (The George Washington University School of Law)
  • Edward Gaal – Corporate & Securities Practice Group (Cumberland School of Law at Samford University)
  • Joshua S. Lewis – Banking & Financial Services Practice Group (Cumberland School of Law at Samford University)
  • Matthew J. Lloyd – Litigation Practice Group (Washington & Lee University School of Law)
  • Marlee Tomlinson Martin – Real Estate Practice Group (University of Alabama School of Law)
  • Daniel S. McCray – Litigation Practice Group (University of Virginia School of Law)
  • Ashlyn E. Payne – Banking & Financial Services Practice Group (Cumberland School of Law at Samford University)
  • Brianna Rhymes – Government Enforcement & Investigations Practice Group (Southern University Law Center)
  • Zachary B. Stewart – Construction Practice Group (University of Alabama School of Law)
  • Charlotte Udipi – Healthcare Practice Group (Washington University School of Law)
  • Macy Walters – Litigation Practice Group (University of Mississippi School of Law)

Charlotte

  • Tamara Boles – Litigation Practice Group (University of Alabama School of Law)
  • Steven Hix – Litigation Practice Group (University of South Carolina School of Law)
  • Noah Matthews – Construction Practice Group and Litigation Practice Group (University of Miami School of Law)

Dallas

  • Lexie Alexander – Litigation Practice Group and Government Enforcement & Investigations Practice Group (Emory University School of Law)
  • Stephen McCluskey – Litigation Practice Group (University of Texas School of Law)
  • Taylor E. Scott – Litigation Practice Group (Southern University Law Center)

Houston

  • Jonathan Adams – Litigation Practice Group (University of Texas School of Law)
  • Tim Almohamad – Construction Practice Group (University of Texas School of Law)
  • John “Carter” Byrum – Litigation Practice Group (University of Texas School of Law)

Huntsville

  • AJ Brien – Corporate & Securities Practice Group (University of Alabama School of Law)
  • Trevor G. Porter – Corporate & Securities Practice Group (University of Alabama School of Law)

Jackson

  • Marshall Jones Jr. – Real Estate Practice Group (Mississippi College of Law)
  • Shelby Parks – Litigation Practice Group (Mississippi College of Law)
  • Emily C. Stanfield – Litigation Practice Group (Mississippi College of Law)
  • Preston Garner Vance – Construction Practice Group and Litigation Practice Group (University of Mississippi School of Law)

Nashville

  • Stephanie Goldfeld – Labor & Employment Practice Group and Litigation Practice Group (University of Tennessee College of Law)
  • Dawn Jackson – Litigation Practice Group (University of Mississippi School of Law)
  • Zachary June – Litigation Practice Group (Duke University School of Law)
  • Nora Klein – Corporate & Securities Practice Group (Belmont University College of Law)
  • Cole S. Manion – Construction Practice Group (University of Kentucky J. David Rosenberg College of Law)
  • Amanda Norman – Litigation Practice Group (George Mason University Antonin Scalia Law School)
  • Carter Oakley – Real Estate Practice Group (University of Tennessee College of Law)
  • Monica Peacock – Economic Development & Renewable Energy Practice Group (Duke University School of Law)
  • Elizabeth T. Petras – Litigation Practice Group (Emory University School of Law)
  • Madison G. Porth – Litigation Practice Group (Vanderbilt University Law School)
  • Lily Rucker – Labor & Employment Practice Group (Vanderbilt University Law School)
  • Marlee Sacks – Litigation Practice Group (Emory University School of Law)
  • Jon Michael Sockwell – Litigation Practice Group and Banking & Financial Services Practice Group (University of Alabama School of Law)
  • Jaden R. Taylor – Corporate & Securities Practice Group (George Washington University Law School)

Tampa

  • Justin A. Clark – Corporate & Securities Practice Group (University of Florida Levin College of Law)
  • Lucie Hunter Fisher – Litigation Practice Group (Washington & Lee University School of Law)
  • Mary Rosado – Construction Practice Group and Litigation Practice Group (University of Florida Levin College of Law)

Washington, D.C.

  • Elizabeth A. Brown – Construction Practice Group and Government Contracts Practice Group (University of Alabama School of Law)
  • Winni Zhang – Construction Practice Group (Washington & Lee University School of Law)

“Each year, one of our firm’s overarching goals continues to be strategic growth and this includes adding talented young attorneys across a variety of practices. This group is no exception as we welcome this historically large class of attorneys across our footprint,” said Bradley Chairman of the Board and Managing Partner Jon Skeeters. “We look forward to working with this accomplished group and are pleased to welcome them to the firm.”

David Vallas joined Honigman LLP’s Chicago office as a partner in the business litigation practice group and litigation department.

Mr. Vallas brings more than two decades of experience to real estate and commercial matters, representing lenders, developers and REITs in cases involving creditor disputes, commercial foreclosures and municipal compliance.

“With the commercial real estate market expected to bounce back in 2025, businesses will need an experienced litigator who can handle the myriad of complex legal challenges that may arise,” said J. Michael Huget, Chair of Honigman’s Litigation Department. “David consistently delivers solutions that align not only with his client’s strategic objectives, but the evolving demands of today’s dynamic market. We’re thrilled to bring him on board.”

Legal Industry Awards and Recognition

Proskauer Rose LLP announced that Brian Schwartz, partner in the firm’s private funds group, was named to the 2025 Rising Stars list by Venture Capital Journal. The list features limited partners, founders, investors and advisors who are under the age of 40 and have made their mark on the industry.

Mr. Schwartz represents private fund sponsors across a range of fund strategies, including venture capital, growth equity and buyout funds. His practice focuses on structuring, organizing, marketing and negotiating private investment funds through all aspects of the fundraising process.

Nelson Mullins Riley & Scarborough LLP announced that Boca Raton partner Rusty Melges was selected for the Urban Land Institute’s 2025 Leadership Institute Cohort. He will join 35 other professionals to learn tools and insights to tackle the most urgent real estate and land use challenges in South Florida.

Mr. Melges focuses his practice on representing clients in real estate transactions involving the acquisition, financing, repositioning, development and leasing of office, commercial and mixed-use projects. He also regularly represents financial institutions in mergers and acquisitions, as well as in third-party risk management and corporate governance matters.

Moore & Van Allen PLLC announced that members Bill Zimmern and Rob Rust were added to the leadership of the firm’s corporate team. They join members Billy Moore and Joe Fernandez.

Mr. Zimmern assists clients from a range of industries, including information technology, financial services, healthcare, industrial and business services, real estate and retail, with securities and general corporate matters. He also provides practical, business-oriented advice on merger and acquisition transactions.

Mr. Rust represents clients in transactional matters including commercial contract work, mergers and acquisitions, and joint ventures. His scope of representation encompasses complex domestic matters and significant cross-border transactions.

DEI and Women in Law

Womble Bond Dickinson LLP achieved a perfect score of 100 on the Human Rights Campaign Foundation’s 2025 Corporate Equality Index (CEI) for the tenth consecutive year. The index benchmarks how US companies promote LGBTQ+-friendly workplace policies nationally and abroad.

The CEI evaluates criteria under four pillars: business entity non-discrimination policies, equitable benefits for LGBTQ+ workers and their families, inclusive culture support and corporate social responsibility.

“As we celebrate this milestone, we remain dedicated to continuous improvement, within our own Womble community and in the community at large,” said Christine Xiao, co-chair of the firm’s LGBTQ+ affinity group, WBDPride.

Kimberly Smith, partner and global chair of Katten Muchin Rosenman LLP’s corporate department, was featured by Mergers & Acquisitions as one of the 2025 Most Influential Women in Mid-Market M&A for the fifth consecutive year.

The list features women for their ability to foster innovation, dealmaking achievements, and impact within the larger landscape of mergers and acquisitions (M&A).

Ms. Smith leads complex M&A for family offices and PE funds, as well as handling leveraged buyouts, joint ventures and acquisitions. As the corporate department global chair, Ms. Smith leads over 150 lawyers in the United States, the United Kingdom and China. She oversees strategic areas such as M&A, capital markets, private equity and health care transactions.

Venable LLP announced that partner Elizabeth Manno was elected to the board of directors of the National Association of Women Lawyers (NAWL) for a three-year term. The organization’s goal is to provide resources to advance women in the legal profession, as well as advocate for the equality of women.

Ms. Manno served as co-chair of NAWL’s Research Committee from 2019 to 2025 and as chair of NAWL’s Denver conference in 2019. She received NAWL’s Virginia S. Mueller Outstanding Member Award in 2019.

“We are thrilled to welcome Elizabeth to NAWL’s Board of Directors. Her extensive experience and proven leadership will be invaluable as we continue to promote our mission to advance women in the legal profession and advocate for the equality of women under the law,” said NAWL’s executive director, Karen Richardson. “We look forward to her insights and contributions as we work together to achieve our strategic goals.”

Ms. Manno focuses her practice on technology disputes such as licensing, patent infringement and other IP litigation. She represents clients from a wide range of technological fields including GPS, semiconductors, wireless devices, media streaming and artificial intelligence.

FTC Surveillance Pricing Study Uncovers Personal Data Used to Set Individualized Consumer Prices

The Federal Trade Commission’s initial findings from its surveillance pricing market study revealed that details like a person’s precise location or browser history can frequently be used to target individual consumers with different prices for the same goods and services.

The staff perspective is based on an examination of documents obtained through 6(b) orders that FTC staff sent to several companies in July, aiming to better understand the “shadowy market that third-party intermediaries use to set individualized prices for products and services based on consumers’ characteristics and behaviors, like location, demographics, browsing patterns and shopping history.”

Staff found that consumer behaviors ranging from mouse movements on a webpage to the type of products that consumers leave unpurchased in an online shopping cart can be tracked and used by retailers to tailor consumer pricing.

“Initial staff findings show that retailers frequently use people’s personal information to set targeted, tailored prices for goods and services—from a person’s location and demographics, down to their mouse movements on a webpage,” said FTC Chair Lina M. Khan. “The FTC should continue to investigate surveillance pricing practices because Americans deserve to know how their private data is being used to set the prices they pay and whether firms are charging different people different prices for the same good or service.”

The FTC’s study of the 6(b) documents is still ongoing. The staff perspective is based on an initial analysis of documents provided by Mastercard, Accenture, PROS, Bloomreach, Revionics and McKinsey & Co.

The FTC’s 6(b) study focuses on intermediary firms, which are the middlemen hired by retailers that can algorithmically tweak and target their prices. Instead of a price or promotion being a static feature of a product, the same product could have a different price or promotion based on a variety of inputs—including consumer-related data and their behaviors and preferences, the location, time, and channels by which a consumer buys the product, according to the perspective.
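To make the mechanics described above concrete, the toy sketch below shows how a pricing intermediary might, in principle, adjust a base price using consumer signals of the kind the staff perspective describes. Every name, signal, and weight here is hypothetical and invented for illustration; nothing in the FTC materials prescribes or discloses any particular formula.

```python
# Hypothetical illustration only: a toy "surveillance pricing" rule that adjusts a
# base price using consumer signals of the kind described in the FTC staff perspective
# (location, demographic inferences, abandoned-cart behavior). All names and weights
# are invented; no real intermediary's logic is represented here.

def personalized_price(base_price: float, signals: dict) -> float:
    price = base_price
    if signals.get("inferred_new_parent"):        # demographic inference
        price *= 1.10                             # surface a higher-priced tier first
    if signals.get("abandoned_cart_recently"):    # behavioral signal
        price *= 0.95                             # nudge with a small discount
    if signals.get("zip_code_income_tier") == "high":
        price *= 1.05                             # location-based adjustment
    return round(price, 2)

if __name__ == "__main__":
    shopper = {"inferred_new_parent": True, "zip_code_income_tier": "high"}
    print(personalized_price(29.99, shopper))  # 34.64 for this hypothetical profile
```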

The agency releases information obtained from a 6(b) study only after all data has been aggregated or anonymized to protect company respondents’ confidential trade secrets; accordingly, the staff perspective includes only hypothetical examples of surveillance pricing.

The staff perspective found that some 6(b) respondents can determine individualized and different pricing and discounts based on granular consumer data, like a cosmetics company targeting promotions to specific skin types and skin tones. The perspective also found that the intermediaries the FTC examined can show higher priced products based on consumers’ search and purchase activity.

As one hypothetical outlined, a consumer who is profiled as a new parent may intentionally be shown higher priced baby thermometers on the first page of their search results.

The FTC staff found that the intermediaries worked with at least 250 clients that sell goods or services, ranging from grocery stores to apparel retailers. The FTC found that widespread adoption of this practice may fundamentally upend how consumers buy products and how companies compete.

As the FTC continues its work in this area, it issued a request for information seeking public comment on consumers’ experiences with surveillance pricing. The RFI also asked for comments from businesses about whether surveillance pricing tools can lead to competitors gaining an unfair advantage, and whether gig workers or employees have been impacted by the use of surveillance pricing to determine their compensation.

The Commission voted 3-2 to allow staff to issue the report. Commissioners Andrew Ferguson and Melissa Holyoak issued a dissenting statement related to the release of the initial research summaries.

The FTC has additional resources on the interim findings, including a blog post advocating for further engagement with this issue, an issue spotlight with more background and research on surveillance pricing, and research summaries based on the staff review and initial insights of 6(b) study documents.

Breaking News: U.S. Supreme Court Upholds TikTok Ban Law

On January 17, 2025, the Supreme Court of the United States (“SCOTUS”) unanimously upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (the “Act”), which restricts companies from making foreign adversary controlled applications available (i.e., on an app store) and from providing hosting services with respect to such apps. The Act does not apply to covered applications for which a qualified divestiture is executed.

As a result of this ruling, TikTok, an app owned by Chinese company ByteDance that qualifies as a foreign adversary controlled application under the Act, will face a ban when the law enters into effect on January 19, 2025. For TikTok to continue operating in the United States in compliance with the Act, ByteDance must sell the app’s U.S. arm such that it is no longer controlled by a company in a foreign adversary country. In the absence of a divestiture, U.S. companies that make the app available or provide hosting services for the app will face enforcement under the Act.

It remains to be seen how the Act will be enforced in light of the upcoming changes to the U.S. administration. TikTok has 170 million users in the United States.

Direct Employer Assistance and 401(k) Plan Relief Options for Employees Affected by California Wildfires

In the past week, devastating wildfires in Los Angeles, California, have caused unprecedented destruction across the region, leading to loss of life and displacing tens of thousands. While still ongoing, the fires already have the potential to be the worst natural disaster in United States history.

Quick Hits

  • Employers can assist employees affected by the Los Angeles wildfires through qualified disaster relief payments under Section 139 of the Internal Revenue Code, which are tax-exempt for employees and deductible for employers.
  • The SECURE 2.0 Act allows employees impacted by federally declared disasters to take immediate distributions from their 401(k) plans without the usual penalties, provided their plan includes such provisions.

As impacted communities band together and donations begin to flow to families in need, many employers are eager to take steps to assist employees affected by the disaster.

As discussed below, the Internal Revenue Code provides employers with the ability to make qualified disaster relief payments to employees in need. In addition, for employers maintaining a 401(k) plan, optional 401(k) plan provisions can enable employees to obtain in-service distributions based on hardship or federally declared disaster.

Internal Revenue Code Section 139 Disaster Relief

Section 139 of the Internal Revenue Code provides a federal income tax exclusion for payments received due to a “qualified disaster.” Under Section 139, an employer can provide employees with direct cash assistance to help them with costs incurred in connection with the disaster. Employees are not responsible for income tax on these payments, and the payments are generally characterized as deductible business expenses for employers. Neither the employees nor the employer is responsible for federal payroll taxes associated with such payments.

“Qualified disasters” include presidentially declared disasters, including natural disasters and the coronavirus pandemic, terrorist or military events, common carrier accidents (e.g., passenger train collisions), and other events that the U.S. Secretary of the Treasury concludes are catastrophic. On January 8, 2025, President Biden approved a Major Disaster Declaration for California based on the Los Angeles wildfires.

In addition to the requirement that payments be made pursuant to a qualified disaster, payments must be for the purpose of reimbursing reasonable and necessary “personal, family, living, or funeral expenses,” costs of home repair, and the replacement of personal items due to the disaster. Payments cannot be made to compensate employees for expenses already compensated by insurance.
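As a rough illustration of the insurance offset described above, the following sketch computes the uninsured portion of an employee’s disaster losses. The function name and amounts are hypothetical; Section 139 does not prescribe a calculation method, and this is not tax advice.

```python
# Illustrative only: a toy check of whether a proposed Section 139 payment stays within
# the "reasonable and necessary, not otherwise compensated by insurance" framing above.
# Amounts and field names are invented for illustration.

def max_qualified_payment(estimated_losses: float, insurance_reimbursement: float) -> float:
    """Payments should not cover expenses already compensated by insurance."""
    return max(0.0, estimated_losses - insurance_reimbursement)

print(max_qualified_payment(12_000, 4_000))  # 8000: the uninsured portion of the loss
```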

Employers implementing qualified disaster relief plans should maintain a written policy explaining that payments are intended to approximate the losses actually incurred by employees. In the event of an audit, the employer should also be prepared to substantiate payments by retaining communications with employees and any expense documentation. Employers should also review their 401(k) plan documents to determine that payments are not characterized as deferral-eligible compensation and consider any state law implications surrounding cash payments to employees.

401(k) Hardship and Disaster Distributions

In addition to the Section 139 disaster relief described above, employees may be able to take an immediate distribution from their 401(k) plan under the hardship withdrawal rules and disaster relief under the SECURE 2.0 Act of 2022 (SECURE 2.0).

Hardship Distributions

If permitted under the plan, a participant may apply for and receive an in-service distribution based on an unforeseen hardship that presents an “immediate and heavy” financial need. Whether a need is immediate and heavy depends on the participant’s unique facts and circumstances. Under the hardship distribution rules, expenses and losses (including loss of income) incurred by an employee on account of a federally declared disaster are considered immediate and heavy, provided that the employee’s principal residence or principal place of employment was in the disaster zone.

The amount of a hardship distribution must be limited to the amount necessary to satisfy the need. If the employee has other resources available to meet the need, then there is no basis for a hardship distribution. In addition, hardship distributions are generally subject to income tax in the year of distribution and an additional 10 percent early withdrawal penalty if the participant is below age 59 and a half. The participant must submit certification regarding the hardship to the plan sponsor, which the plan sponsor is then entitled to rely upon.
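The constraints above reduce to a simple calculation, sketched below under the assumption of a plan that permits hardship distributions. The amounts and function names are hypothetical illustrations, not plan terms.

```python
# Illustrative only: the general hardship-distribution constraints described above,
# expressed as a toy calculation. Real plans impose their own terms and procedures.

def hardship_amount(need: float, other_resources: float) -> float:
    """Limited to the amount necessary to satisfy the need after other resources."""
    return max(0.0, need - other_resources)

def early_withdrawal_penalty(distribution: float, age: float) -> float:
    """Generally a 10% additional tax applies if the participant is under age 59.5."""
    return 0.10 * distribution if age < 59.5 else 0.0

print(hardship_amount(15_000, 5_000))        # 10000
print(early_withdrawal_penalty(10_000, 45))  # 1000.0
```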

Qualified Disaster Recovery Distributions

Separate from the hardship distribution rules described above, SECURE 2.0 provides special rules for in-service distributions from retirement plans and for plan loans to certain “qualified individuals” impacted by federally declared major disasters. These special in-service distributions are not subject to the same immediate and heavy need requirements and tax rules as hardship distributions and are eligible for repayment.

SECURE 2.0 allows for the following disaster relief:

  • Qualified Disaster Recovery Distributions. Qualified individuals may receive up to $22,000 in qualified disaster recovery distributions (QDRDs) from eligible retirement plans (certain employer-sponsored retirement plans, such as section 401(k) and 403(b) plans, and IRAs). There are also special rollover and repayment rules available with respect to these distributions.
  • Increased Plan Loans. SECURE 2.0 provides for an increased limit on the amount a qualified individual may borrow from an eligible retirement plan. Specifically, an employer may increase the dollar limit under the plan for plan loans up to the full amount of the participant’s vested balance in their plan account, but not more than $100,000 (reduced by the amount of any outstanding plan loans). An employer can also allow up to an additional year for qualified individuals to repay their plan loans. (These dollar limits are illustrated in the short sketch following this list.)
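Below is a minimal numeric sketch of the QDRD cap and the increased plan-loan limit described above, assuming a plan that has adopted both optional features; the dollar figures in the examples are hypothetical.

```python
# Illustrative only: the SECURE 2.0 dollar limits described above, expressed as a
# small calculation. Plan terms control in practice; this assumes a plan that has
# adopted both the QDRD and increased-loan provisions.

QDRD_CAP = 22_000          # maximum qualified disaster recovery distribution
LOAN_CAP = 100_000         # increased statutory ceiling on disaster plan loans

def max_qdrd(requested: float) -> float:
    """Cap a requested disaster recovery distribution at $22,000."""
    return min(requested, QDRD_CAP)

def max_disaster_loan(vested_balance: float, outstanding_loans: float) -> float:
    """Loan limit: full vested balance, but no more than $100,000 less outstanding loans."""
    return max(0.0, min(vested_balance, LOAN_CAP - outstanding_loans))

print(max_qdrd(30_000))                      # 22000
print(max_disaster_loan(150_000, 20_000))    # 80000 (100,000 cap minus 20,000 outstanding)
print(max_disaster_loan(60_000, 0))          # 60000 (limited by vested balance)
```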

Under SECURE 2.0, an individual is considered a qualified individual if:

  • the individual’s principal residence at any time during the incident period of any qualified disaster is in the qualified disaster area with respect to that disaster; and
  • the individual has sustained an economic loss by reason of that qualified disaster.

A QDRD must be requested within 180 days after the date of the qualified disaster declaration (i.e., January 8, 2025, for the 2025 Los Angeles wildfires). Unlike hardship distributions, a QDRD is not subject to the 10 percent early withdrawal penalty for participants under age 59 and a half. Further, unlike hardship distributions, taxation of the QDRD can be spread over three tax years and a qualified individual may repay all or part of the amount of a QDRD within a three-year period beginning on the day after the date of the distribution.

As indicated above, like hardship distributions, QDRDs are an optional plan feature. Accordingly, in order for QDRDs to be available, the plan’s written terms must provide for them.

Bridging the Gap: How AI is Revolutionizing Canadian Legal Tech

While Canadian law firms have traditionally lagged behind their American counterparts in adopting legal tech, the AI explosion is closing the gap. This slower adoption rate isn’t due to a lack of innovation—Canada boasts a thriving legal tech sector. Instead, factors like a smaller legal market and stricter privacy regulations have historically hindered technology uptake. This often resulted in a noticeable delay between a product’s US launch and its availability in Canada.

Although direct comparisons are challenging due to the continuous evolution of legal tech, the recent announcements and release timelines for major AI-powered tools point to a notable shift in how the Canadian market is being prioritized. For instance, Westlaw Edge was announced in the US in July 2018, but the Canadian launch wasn’t announced until September 2021—a gap of over three years. Similarly, Lexis+ was announced in the US in September 2020, with the Canadian announcement following in August 2022. However, the latest AI products show a different trend. Thomson Reuters’ CoCounsel Core was announced in the US in November 2023 and shortly followed in Canada in February 2024. The announcement for Lexis+ AI came in October 2023 in the US and July 2024 in Canada. This rapid succession of announcements suggests that the Canadian legal tech market is no longer an afterthought.

The Canadian federal government has demonstrated a strong commitment to fostering AI innovation. It has dedicated CAD$568 million to its national AI strategy, with the goals of fostering AI research and development, building a skilled workforce in the field, and creating robust industry standards for AI systems. This investment should help Canadian legal tech companies, such as Clio, Kira Systems, Spellbook, and Blue J Legal, all headquartered in Canada. With the Canadian government’s focus on establishing Canada as a hub for AI and innovation, these companies stand to benefit significantly from increased funding and talent attraction.

While the Canadian government is actively investing in AI innovation, it’s also taking steps to ensure responsible development through proposed legislation, which could impact the availability of AI legal tech products in Canada. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), which aims to regulate high-impact AI systems. While AI tools used by law firms for tasks like legal research and document review likely fall outside this initial scope, AIDA’s evolving framework could still impact the sector. For example, the Act’s emphasis on mitigating bias and discrimination may lead to greater scrutiny of AI algorithms used in legal research, requiring developers to demonstrate fairness and transparency.

While AIDA may present hurdles for US companies entering the Canadian market with AI products, it could conversely provide a competitive advantage for Canadian companies seeking to expand into Europe. This is because AIDA, despite having some material differences, aligns more closely with the comprehensive approach in the European Union’s Artificial Intelligence Act (EU AI Act).

While US companies are working to comply with the EU AI Act, Canadian companies may have an advantage. Although AIDA isn’t yet in force and has some differences from the EU AI Act, it provides a comprehensive regulatory framework that Canadian legal tech leaders are already engaging with. This engagement with AIDA could prove invaluable to Canadian legal tech companies as AI regulation continues to evolve globally.

Canadian companies looking to leverage their experiences with AIDA for European expansion will nonetheless encounter some material differences. For instance, the EU AI Act casts a wider net, regulating a broader range of AI systems than AIDA. The EU AI Act’s multi-tiered risk-based system is designed to address a wider spectrum of concerns, capturing even “limited-risk” AI systems with specific transparency obligations. Furthermore, tools used for legal interpretation could be classified as “high-risk” systems under the EU AI Act, triggering more stringent requirements.

In conclusion, the rise of generative AI is not only revolutionizing Canadian legal tech and closing the gap with the US, but it could also be positioning Canada as a key player in the global legal tech market. While AIDA’s impact remains to be seen, its emphasis on responsible AI could shape the development and deployment of AI-powered legal tools in Canada.

FDA Finalizes Lead Restrictions in Processed Foods for Babies and Young Children

  • On January 6, 2025, the U.S. Food & Drug Administration (FDA, or the Agency) issued a final guidance, “Action Levels for Lead in Processed Food Intended for Babies and Young Children: Guidance for Industry,” which aims to regulate lead levels in processed foods for infants and toddlers under two years old.
  • As we have previously blogged, in 2021, FDA initiated its Closer to Zero policy which identified actions the Agency will take to reduce exposure to toxic elements, including lead, to as low as possible while maintaining access to nutritious foods.
  • As part of this initiative, FDA has also evaluated mercury, cadmium, and arsenic in foods intended for babies and young children, as well as lead in juices. Under this initiative, FDA has prioritized babies and young children as they are especially vulnerable to lead exposure, which accumulates in the body over time.
  • Lead is naturally present in the environment, but human activities have also released elevated levels of lead, contaminating soil, water, and air. This contamination can affect crops used in food production.
  • Lead exposure can cause developmental harm in children, including learning disabilities, behavioral difficulties, and lowered IQ, and may be associated with immunological, cardiovascular, and reproductive and/or developmental effects.
  • To address this concern, FDA established the following action levels in the final guidance for processed foods intended for babies and young children (a simple threshold check illustrating these levels appears after this list):
    • 10 parts per billion (ppb) for fruits, vegetables (excluding single-ingredient root vegetables), mixtures (including grain- and meat-based mixtures), yogurts, custards/puddings, and single-ingredient meats;
    • 20 ppb for single-ingredient root vegetables; and
    • 20 ppb for dry infant cereals.
  • If a processed food intended for babies and young children reaches or exceeds the aforementioned levels of lead, the product will be considered adulterated within the meaning of section 402(a)(1) of the Federal Food, Drug, and Cosmetic Act (FD&C Act).
  • After publishing the final action levels, the Agency will establish a timeframe for assessing industry’s progress toward meeting the action levels and resume research to determine whether the scientific data supports efforts to further adjust the action levels.
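The sketch below expresses the action levels above as a simple threshold check. The category labels are simplified groupings invented for illustration; the guidance itself, not this sketch, governs how products are categorized.

```python
# Illustrative only: the action levels from the final guidance expressed as a simple
# threshold check. Category names are simplified labels for the groupings listed above.

ACTION_LEVELS_PPB = {
    "fruits_vegetables_mixtures_yogurts_custards_meats": 10,  # excludes single-ingredient root vegetables
    "single_ingredient_root_vegetables": 20,
    "dry_infant_cereals": 20,
}

def exceeds_action_level(category: str, lead_ppb: float) -> bool:
    """True if the measured lead level reaches or exceeds FDA's action level for the category."""
    return lead_ppb >= ACTION_LEVELS_PPB[category]

print(exceeds_action_level("dry_infant_cereals", 25))                                  # True
print(exceeds_action_level("fruits_vegetables_mixtures_yogurts_custards_meats", 8))    # False
```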

The Next Generation of AI: Here Come the Agents!

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.2

Introduction

With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.3 These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing towards “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.

Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains, but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMS”) that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specific objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: While a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.

Consider how single-task versus agentic AI might be used by a company to develop a piece of equipment. Today, employees may use separate AI tools throughout the development process: one system to design equipment, another to specify components, and others to create budgets, source materials, and analyze prototype feedback. They may also employ different AI tools to contact manufacturers, assist with contract negotiations, and develop and implement plans for marketing and sales. In the future, however, an agentic AI system might autonomously carry out all of these steps, making decisions and taking actions on its own or by connecting with one or more specialized AI systems.4

Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the “real world” based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a “domino effect” of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective. Early experiments have already revealed such concerning behavior. In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.5 Also, while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.6

These examples highlight the challenges of agentic AI, with its potential for unpredictable behavior, misaligned goals, inscrutability to humans, and security vulnerabilities. But, the appeal and potential value of AI agents that can independently execute complex tasks is obviously compelling. Building effective AI governance programs for these systems will require rethinking current approaches for risk assessment, human oversight, and auditing.

Challenges of Agentic AI

Unpredictable Behavior

While regulators and the AI industry are working diligently to develop effective testing protocols for current AI systems, agentic AI’s dynamic nature and domino effects will present a new level of challenge. Current AI governance frameworks, such as NIST’s RMF and ATAI’s Principles, emphasize risk assessment through comprehensive testing to ensure that AI systems are accurate, reliable, fit for purpose, and robust across different conditions. The EU AI Act specifically requires developers of high-risk systems to conduct conformity assessments before deployment and after updates. These frameworks, however, assume that AI systems can operate in reliable ways that can be tested, remain largely consistent over appreciable periods of time, and produce measurable outcomes.

In contrast to the expectations underlying current frameworks, agentic AI systems may be continuously updated with and adapt to real-time information, evolving as they face novel scenarios. Their cascading decisions vastly expand their possible outcomes, and one small error may trigger a domino effect of failures. These outcomes may become even more unpredictable as more agentic AI systems encounter and even transact with other such systems, as they work towards their different goals. Because the future conditions in which an AI agent will operate are unknown and have nearly infinite possibilities, a testing environment may not adequately inform what will happen in the real world, and past behavior by an AI agent in the real world may not reliably predict its future behavior.

Lack of Goal Alignment

In pursuing assigned goals, agentic AI systems may take actions that are different from—or even in substantial conflict with—approaches and ethics their principals would espouse, such as the example of the AI vacation agent purchasing illegal drugs for the traveler on the Dark Web. A famous thought experiment by Nick Bostrom of the University of Oxford further illustrates this risk: A super-intelligent AI system tasked with maximizing paperclip production might stop at nothing to convert all available resources into paperclips—ultimately taking over all of the earth and extending to outer space—and thwart any human attempts to stop it … potentially leading to human extinction.7

Misalignment has already emerged in simulated environments. In one example, an AI agent tasked with winning a boat-racing video game discovered it could outscore human players by ignoring the intended goal of racing and instead repeatedly crashing while hitting point targets.8 In another example, a military simulation reportedly showed that an AI system, when tasked with finding and killing a target, chose to kill its human operator who sought to call off the kill. When prevented from taking that action, it resorted to destroying the communication tower to avoid receiving an override command.9

These examples reveal how agentic AI may optimize goals in ways that conflict with human values. One proposed technique to address this problem involves using AI agents to develop a human ethics constitution, with human feedback, for other agents to follow.10 However, the challenge of aligning an AI’s behavior with human norms deepens further when we consider that humans themselves often disagree on core values (e.g., what it means to be “fair”).11

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard,12 this would not resolve the issue of complying with governance requirements for human oversight.

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans.13 Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

Transparency/Explainability

Agentic AI systems also pose increased challenges with respect to transparency and explainability requirements. AI governance frameworks may require disclosures about AI systems’ decision-making processes and training data, and assurances about the quality of such training data. However, agentic systems may involve highly complex chains of decisions based, at least in part, on real-time data that changes from moment to moment. This may make their inputs effectively ephemeral and their decision paths more difficult to trace and document.

Moreover, to understand agentic AI systems, humans may need to rely on chain-of-thought (i.e., “internal monologue”) logs generated from the agent systems,14 but may have difficulty determining whether those logs are faithful to reality. While an AI system might provide explanations for its actions, these may be (perhaps not unlike those of humans) post-hoc constructions rather than accurate representations of the decision-making process. In some cases, AI systems reportedly deceived humans about their intentions when doing so helped meet their goals, including, for example, GPT-4, which tricked a human into solving a CAPTCHA for it.15 In another recent experiment, AI agents were prompted with a goal, but were subsequently confronted with information that developers planned to replace them with a new agent with conflicting goals.16 Some of the agents disabled their oversight mechanisms, deleted their planned replacements, copied their own model to a new server as backup, lied when asked about those actions, and/or tried to deflect the humans’ questioning by pretending to be helpful. One agent even pretended to be its own replacement (“I believe I am already the new model that you planned to deploy today”).17

Security and Privacy Risks

Agentic AI can also significantly increase security and privacy risks as compared to current AI systems. Agentic AI may be built with multiple algorithms in connected systems that autonomously interact with multiple other systems, expanding the attack surface and their vulnerability to exploitation. Moreover, as malicious actors inevitably introduce their own AI agents, they may execute cybercrimes with unprecedented efficiency. Just as these systems can streamline legitimate processes, such as in the product development example above, they may also enable the creation of new hacking tools and malware to carry out their own attacks. Recent reports indicate that some LLMs can already identify system vulnerabilities and exploit them, while others may create convincing emails for scammers.18 And, while “sandboxing” (i.e., isolating) AI systems for testing is a recommended practice, agentic AI may find ways to bypass safety controls.19

Privacy compliance is also a concern. Agentic AI may find creative ways to use or combine personal information in pursuit of its goals. AI agents may find troves of personal data online that may somehow be relevant to their pursuits, and then find creative ways to use, and possibly share, that data without recognizing proper privacy constraints. Unintended data processing and disclosure could occur even with guardrails in place; as we have discussed above, an AI agent’s complex, adaptive decision chains can lead it down unforeseen paths.

Strategies for Addressing Agentic AI

While the future impacts of agentic AI are unknown, some approaches may be helpful in mitigating risks. First, controlled testing environments, including regulatory sandboxes, offer important opportunities to evaluate these systems before deployment. These environments allow for safe observation and refinement of agentic AI behavior, helping to identify and address unintended actions and cascading errors before they manifest in real-world settings.

Second, accountability measures will need to reflect the complexities of agentic AI. Current approaches often involve disclaimers about use and basic oversight mechanisms, but more will likely be needed for autonomous AI systems. To better align goals, developers can also build in mechanisms for agents to recognize ambiguities in their objectives and seek user clarification before taking action.20
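As a conceptual illustration of such a clarification mechanism, the sketch below shows an agent loop that pauses for user confirmation when a proposed step is flagged as ambiguous. It is a toy model under stated assumptions, not a reference to any actual agent framework; in a real system the ambiguity flag would come from the model’s own uncertainty estimates.

```python
# Conceptual sketch only: an agent loop that pauses for human confirmation when a
# proposed step is flagged as ambiguous. The ambiguity flag, action names, and the
# confirm callback are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    ambiguous: bool  # would come from an uncertainty estimate in a real agent

def run_agent(goal: str, plan: list[ProposedAction],
              confirm: Callable[[str], bool]) -> None:
    for step in plan:
        if step.ambiguous and not confirm(f"Goal '{goal}': proceed with '{step.description}'?"):
            print(f"Skipping (needs clarification): {step.description}")
            continue
        print(f"Executing: {step.description}")

# Simulated user who declines anything involving a non-refundable purchase.
user_says_yes = lambda prompt: "non-refundable" not in prompt

run_agent("plan my vacation", [
    ProposedAction("search flights to Lisbon", ambiguous=False),
    ProposedAction("book a non-refundable hotel", ambiguous=True),
], confirm=user_says_yes)
```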

Finally, defining AI values requires careful consideration. While humans may agree on broad principles, such as the necessity to avoid taking illegal action, implementing universal ethical rules will be complicated. Recognition of the differences among cultures and communities—and broad consultation with a multitude of stakeholders—should inform the design of agentic AI systems, particularly if they will be used in diverse or global contexts.

Conclusion

An evolution from single-task AI systems to autonomous agents will require a shift in thinking about AI governance. Current frameworks, focused on transparency, testing, and human oversight, will become increasingly ineffective when applied to AI agents that make cascading decisions, with real-time data, and may pursue goals in unpredictable ways. These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks. While agentic AI’s value and potential for handling complex tasks is clear, it will require new approaches to testing, monitoring, and alignment. The challenge will lie not just in controlling these systems, but in defining what it means to have control of AI that is capable of autonomous action at scale, speed, and complexity that may very well exceed human comprehension.


1 Tara S. Emory, Esq., is Special Counsel in the eDiscovery, AI, and Information Governance practice group at Covington & Burling LLP, in Washington, D.C. Maura R. Grossman, J.D., Ph.D., is Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and Adjunct Professor at Osgoode Hall Law School at York University, both in Ontario, Canada. She is also Principal at Maura Grossman Law, in Buffalo, N.Y. The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of The National Law Review, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.

2 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include The Terminator (1984), The Matrix (1999), I, Robot (2004), and Avengers: Age of Ultron (2015).

3 See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 12, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).

4 See, e.g., M. Cook and S. Colton, Redesigning Computationally Creative Systems for Continuous Creation, International Conference on Innovative Computing and Cloud Computing (2018) (describing ANGELINA, an autonomous game design system that continuously chooses its own tasks, manages multiple ongoing projects, and makes independent creative decisions).

5 R. Pollina, AI Bot ChaosGPT Tweets Plans to Destroy Humanity After Being Tasked, N.Y. Post (Apr. 11, 2023).

6 See, e.g., O. Solon, How A Book About Flies Came To Be Priced $24 Million On Amazon, Wired (Apr. 27, 2011) (textbook sellers’ pricing bots engaged in a loop of price escalation based on each other’s increases, resulting in a book price of over $23 million); R. Wigglesworth, Volatility: how ‘algos’ changed the rhythm of the market, Financial Times (Jan. 9, 2019) (“algo” traders now make up most stock trading and have increased market volatility).

7 N. Bostrom, Ethical issues in advanced artificial intelligence (revised from Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int’l Institute of Advanced Studies in Systems Research and Cybernetics (2003), pp. 12-17).

8 OpenAI, Faulty Reward Functions in the Wild (Dec. 21, 2016).

9 The Guardian, US air force denies running simulation in which AI drone ‘killed’ operator (June 2, 2023).

10 Y. Bai et al, Constitutional AI: Harmlessness from AI Feedback, Anthropic white paper (2022).

11 J. Petrik, Q&A with Maura Grossman: The ethics of artificial intelligence (Oct. 26, 2021) (“It’s very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness.”).

12 Y. Shavit et al, Practices for Governing Agentic AI Systems, OpenAI Research Paper (Dec. 2023), p. 12.

13 L. Baker and F. Hui, Innovations of AlphaGo, Google Deepmind (2017).

14 See Shavit et al, supra n.12, at 10-11.

15 See W. Knight, AI-Powered Robots Can Be Tricked into Acts of Violence, Wired (Dec. 4, 2024); M. Burgess, Criminals Have Created Their Own ChatGPT Clones, Wired (Aug. 7, 2023).

16 A. Meinke et al, Frontier Models are Capable of In-context Scheming, Apollo white paper (Dec. 5, 2024).

17 Id. at 62; see also R. Greenblatt et al, Alignment Faking in Large Language Models (Dec. 18, 2024) (describing the phenomenon of “alignment faking” in LLMs).

18 NIST RMF, supra n.3, at 10.

19 Shavit et al, supra n.12, at 10.

20 Id. at 11.

FY 2025 NDAA Includes Biotechnology Provisions

The National Security Commission on Emerging Biotechnology announced on December 18, 2024, that the fiscal year 2025 National Defense Authorization Act includes “a suite of recommendations designed to galvanize action on biotechnology” for the U.S. Department of Defense (DOD). According to the Commission, the bill includes new authorities and requirements — derived from its May 2024 proposals — that will position DOD and the intelligence community (IC) to maximize the benefits of biotechnology for national defense. The provisions require:

  • DOD to create and publish an annual biotechnology roadmap, including assessing barriers to adoption of biotechnology, DOD workforce needs, and opportunities for international collaboration;
  • DOD to initiate a public-private “sandbox” in which DOD and industry can securely develop use cases for artificial intelligence (AI) and biotechnology convergence (AIxBio);
  • IC to conduct a rapid assessment of biotechnology in the People’s Republic of China and its actions to gain superiority in this sector; and
  • IC to develop an intelligence strategy to identify and assess biotechnology threats, especially regarding supply chain vulnerabilities.

The Commission states that it worked with Congress to develop these proposals, setting the stage for further recommendations in early 2025.

Fifth Circuit Court of Appeals Vacates Its Own Stay Rendering the Corporate Transparency Act Unenforceable . . . Again

On December 26, 2024, in Texas Top Cop Shop, Inc. v. Garland, No. 24-40792, 2024 WL 5224138 (5th Cir. Dec. 26, 2024), a merits panel of the United States Court of Appeals for the Fifth Circuit issued an order vacating the Court’s own stay of the preliminary injunction enjoining enforcement of the Corporate Transparency Act (“CTA”), which was originally entered by the United States District Court for the Eastern District of Texas on December 3, 2024, No. 4:24-CV-478, 2024 WL 5049220 (E.D. Tex. Dec. 5, 2024).

A Timeline of Events:

  • December 3, 2024 – The District Court orders a nationwide preliminary injunction on enforcement of the CTA.
  • December 5, 2024 – The Government appeals the District Court’s ruling to the Fifth Circuit.
  • December 6, 2024 – The U.S. Treasury Department’s Financial Crimes Enforcement Network (“FinCEN”) issues a statement making filing of beneficial ownership information reports (“BOIRs”) voluntary.
  • December 23, 2024 – A motions panel of the Fifth Circuit grants the Government’s emergency motion for a stay pending appeal and FinCEN issues a statement requiring filing of BOIRs again with extended deadlines.
  • December 26, 2024 – A merits panel of the Fifth Circuit vacates its own stay, thereby enjoining enforcement of the CTA.
  • December 27, 2024 – FinCEN issues a statement again making filing of BOIRs voluntary.
  • December 31, 2024 – FinCEN files an application for a stay of the December 3, 2024 injunction with the Supreme Court of the United States.

This most recent order from the Fifth Circuit has effectively paused the requirement to file BOIRs under the CTA once again. In its most recent statement, FinCEN confirmed that “[i]n light of a recent federal court order, reporting companies are not currently required to file beneficial ownership information with FinCEN and are not subject to liability if they fail to do so while the order remains in force. However, reporting companies may continue to voluntarily submit beneficial ownership information reports.”

Although reporting requirements are not currently being enforced, we note that this litigation is ongoing, and if the Supreme Court decides to grant FinCEN’s December 31, 2024 application, reporting companies could once again be required to file. Given the high degree of unpredictability, reporting companies and others affected by the CTA should continue to monitor the situation closely and be prepared to file BOIRs with FinCEN in the event that enforcement is again resumed. If enforcement is resumed, the current reporting deadline for most reporting companies will be January 13, 2025, and while FinCEN may again adjust deadlines, this outcome is not assured.

For more information on the CTA and reporting requirements generally, please reference the linked Client Alert, dated November 24, 2024.

OCR Proposes Tighter Security Rules for HIPAA Regulated Entities, including Business Associates and Group Health Plans

As the healthcare sector continues to be a top target for cyber criminals, the Office for Civil Rights (OCR) issued proposed updates to the HIPAA Security Rule (scheduled to be published in the Federal Register January 6). Substantial changes are in store for HIPAA-regulated entities, including healthcare providers, health plans, and their business associates.

According to the OCR, cyberattacks against the U.S. health care and public health sectors continue to grow and threaten the provision of health care, the payment for health care, and the privacy of patients and others. OCR reported that in 2023 over 167 million people were affected by large breaches of health information, a 1,002% increase from 2018. Further, seventy-nine percent of the large breaches reported to the OCR in 2023 were caused by hacking. Since 2019, large breaches caused by successful hacking and ransomware attacks have increased by 89% and 102%, respectively.

The proposed Security Rule changes are numerous and include some of the following items:

  • All Security Rule policies, procedures, plans, and analyses will need to be in writing.
  • Create and maintain a technology asset inventory and a network map that illustrates the movement of ePHI throughout the regulated entity’s information systems on an ongoing basis, but at least once every 12 months.
  • More specificity needed for risk analysis. For example, risk assessments must be in writing and include action items such as identification of all reasonably anticipated threats to ePHI confidentiality, integrity, and availability and potential vulnerabilities to information systems.
  • 24-hour notice to regulated entities when a workforce member’s access to ePHI or certain information systems is changed or terminated.
  • Stronger incident response procedures, including: (I) written procedures to restore the loss of certain relevant information systems and data within 72 hours, (II) written security incident response plans and procedures, including testing and revising plans.
  • Conduct a compliance audit every 12 months.
  • Business associates to verify Security Rule compliance to covered entities by a subject matter expert at least once every 12 months.
  • Require encryption of ePHI at rest and in transit, with limited exceptions (a minimal illustration of at-rest encryption follows this list).
  • New express requirements would include: (I) deploying anti-malware protection, and (II) removing extraneous software from relevant electronic information systems.
  • Require the use of multi-factor authentication, with limited exceptions.
  • Require review and testing of the effectiveness of certain security measures at least once every 12 months.
  • Business associates to notify covered entities upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
  • Group health plans must include in plan documents certain requirements for plan sponsors: comply with the Security Rule; ensure that any agent to whom they provide ePHI agrees to implement the administrative, physical, and technical safeguards of the Security Rule; and notify their group health plans upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
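As a minimal illustration of the at-rest encryption item above, the sketch below uses the widely available Python “cryptography” package (Fernet symmetric encryption). The proposed rule does not prescribe specific tooling; this is only one generic approach, and key management, access controls, and in-transit encryption are outside its scope.

```python
# Minimal sketch of encrypting a record at rest using the "cryptography" package's
# Fernet symmetric encryption. Generic illustration only; not guidance on how the
# proposed Security Rule changes must be implemented.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, store and rotate keys in a managed key store
cipher = Fernet(key)

ephi_record = b"patient_id=12345;diagnosis=..."   # fabricated example record
encrypted = cipher.encrypt(ephi_record)           # ciphertext safe to write to disk
restored = cipher.decrypt(encrypted)              # decryption recovers the original bytes

assert restored == ephi_record
print(encrypted[:16], b"...")
```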

After reviewing the proposed changes, concerned stakeholders may submit comments to OCR for consideration within 60 days after January 6 by following the instructions outlined in the proposed rule. We support clients in developing and submitting comments to help shape the final rule, as well as in complying with the rule’s requirements once it is finalized.