Telehealth Update: DEA/HHS Temporary Rule, Medicare Coverage of Telehealth Services, Potential for Increased Oversight, and What to Watch For in 2025

Telehealth companies and other industry stakeholders have had a watchful eye toward the end of 2024 and the impending “telehealth cliff,” as COVID-era Drug Enforcement Administration (DEA) flexibilities and expanded Medicare telehealth coverage are set to expire. Although a recent temporary joint rule from the DEA and the Department of Health and Human Services (HHS), along with the 2025 Medicare Physician Fee Schedule final rule, has provided some hope, questions about telehealth access in 2025 and under a new Administration remain. Further, calls continue for increased oversight of telehealth services. Below, we break down recent updates for the telehealth industry.

DEA Telehealth Flexibilities

Providing some good news, late last month the DEA and HHS jointly issued a temporary rule (the Temporary Rule) extending the COVID-era flexibilities for prescribing controlled substances via telehealth through the end of 2025. The flexibilities, which previously were twice extended and set to expire December 31, 2024, temporarily waive the in-person requirements for prescribing under the Controlled Substances Act.

The DEA and HHS issued the Temporary Rule to ensure that providers and patients who have come to rely on telehealth services are able to smoothly transition to the new requirements, which as previously covered, are likely to significantly limit providers’ ability to prescribe controlled substances without an in-person interaction. The Temporary Rule also acknowledges that the DEA and HHS continue to work with relevant stakeholders and will use the additional time to promulgate proposed and final regulations that “effectively expand access to telemedicine” in a manner that is consistent with public health and safety, while mitigating the risk of diversion. The agencies also note that the limited time period of the extension is aimed at avoiding investment in new telemedicine companies that may encourage or enable problematic prescribing practices.

The Temporary Rule effectively allows all DEA-registered providers to prescribe Schedule II-V controlled substances via telehealth through the end of 2025, regardless of when the provider-patient relationship was formed. Consistent with the prior temporary rules, the following requirements continue to apply:

  • The prescription must be issued for a legitimate medical purpose by a practitioner acting in the usual course of professional practice.
  • The prescription must be issued pursuant to a telehealth interaction using two-way, real-time audio-visual technology, or for prescriptions to treat a mental health disorder, a two-way, real-time audio-only communication if the patient is not capable of, or does not consent to, the use of video technology.
  • The practitioner must be authorized under their DEA registration to prescribe the basic class of controlled medication specified on the prescription or be exempt from obtaining a registration to dispense controlled substances.
  • The prescription must meet all other requirements of the DEA regulations.

Providers should also be cognizant of applicable state laws that may place additional restrictions on the ability to prescribe certain medications or otherwise provide treatment via telehealth.

Medicare Coverage of Telehealth Services 

Unlike the DEA flexibilities, many of the COVID-era flexibilities for traditional Medicare coverage of telehealth services will end on December 31, 2024. Despite bipartisan support, congressional action is required to extend the broad coverage for certain telehealth services that has existed since March 2020. Most notably, unless Congress acts, the flexibilities expiring January 1, 2025 include the waiver of originating site requirements (which allows beneficiaries to receive services in their homes) and the expanded list of Medicare-enrolled providers who can furnish telehealth services.

Further, beginning January 1, 2025, Medicare coverage of telehealth services for beneficiaries outside of rural health care settings will be limited to:

  • Monthly End-Stage Renal Disease visits for home dialysis;
  • Services for diagnosis, evaluation, or treatment of symptoms of an acute stroke;
  • Treatment of substance use disorder or a co-occurring mental health disorder, or for the diagnosis, evaluation or treatment of a mental health disorder;
  • Behavioral health services;
  • Diabetes self-management training; and
  • Nutrition therapy.

For its part, the Centers for Medicare & Medicaid Services (CMS) recently issued its 2025 Medicare Physician Fee Schedule Final Rule (the MPFS Final Rule) extending and making permanent certain telehealth flexibilities within its authority. In particular, through December 31, 2025, practitioners may continue to utilize live video to meet certain Medicare direct supervision requirements and reference their currently enrolled practice when providing telehealth services from their home. The MPFS Final Rule continues to remove frequency limitations for certain hospital inpatient/observation care, skilled nursing facility visits, and critical care consultation services furnished via telehealth. Additionally, the MPFS Final Rule makes permanent the utilization of audio-only telehealth for any Medicare-covered telehealth service.

Increased Telehealth Oversight 

Recent months also have seen renewed calls for increased oversight of telehealth services. In September, the HHS Office of Inspector General (OIG) issued a report (the OIG Report) recommending increased oversight of Medicare coverage of remote patient monitoring. As a basis for its findings, the OIG Report cites the dramatic increase in utilization of, and payments for, remote patient monitoring from 2019 to 2022; the fact that over 40% of Medicare beneficiaries receiving remote patient monitoring did not receive all three components of the service (i.e., education and setup, device supply, and treatment management); and the observation that Medicare lacks key information regarding the data being collected and the types of monitoring devices utilized. Notably, OIG conducted its review in part because of the potential for significant expansion of remote patient monitoring in the Medicare population.

Given these factors, the OIG Report recommends that CMS:

  1. Implement additional safeguards to ensure that remote patient monitoring is used and billed appropriately in Medicare.
  2. Require that remote patient monitoring be ordered and that information about the ordering provider be included on claims and encounter data for remote patient monitoring.
  3. Develop methods to identify what health data are being monitored.
  4. Conduct provider education about billing of remote patient monitoring.
  5. Identify and monitor companies that bill for remote patient monitoring.

Separately, concerns also have been raised regarding the recent emergence of direct-to-consumer telehealth platforms sponsored by pharmaceutical companies. In this model, patients seeking specific medications are linked to a health care provider who can virtually prescribe the requested medication. In October, U.S. Senate Majority Whip Dick Durbin (D-IL), joined by Senators Bernie Sanders (I-VT), Peter Welch (D-VT), and Elizabeth Warren (D-MA), sent letters to several pharmaceutical companies requesting written responses to questions regarding these platforms, including the cost of direct-to-consumer advertising, the arrangements between the telehealth providers and the pharmaceutical companies, and whether the virtual consultations comply with the standard of care.

Conclusion

Despite attempts to preserve and expand telehealth access and affordability, effective January 1, 2025, many Medicare beneficiaries will lose access to certain telehealth services unless one of the bills currently pending in Congress is passed. Notably, bipartisan support for increased access to telehealth services is likely to continue in both chambers of Congress. Although the incoming Administration has not detailed its plans regarding telehealth access on a permanent or even temporary basis, telehealth will continue to play an important role in the United States health care system through 2025 and beyond. As telehealth continues to expand access to care, increased oversight and enforcement is almost certain, even if future oversight priorities are unclear. As always, we will continue to monitor and report on important telehealth developments.

Checklist for Transitioning Founder-Owned Law Firms

When transitioning from a founder-owned law firm, it’s essential to establish a clear plan to ensure the firm’s continued growth and stability. A successful transition depends on strategic priorities that enhance operational efficiency, improve client satisfaction, and secure long-term success.

Below, we outline the key areas to analyze and implement for a seamless shift in leadership and operations.

  1. Work-Life Timelines

Work-life timelines act as a roadmap for planning the future of the firm. They provide a structured planning horizon that helps leadership forecast and prepare for critical milestones, such as retirements or leadership transitions. For instance, mapping out partner retirement dates allows the firm to identify when leadership gaps may occur and develop succession plans proactively.

  2. Marketing Effectiveness

Effective marketing strategies are the backbone of a firm’s revenue growth. Assessing your marketing effectiveness involves analyzing the ability to meet revenue goals while considering the business risks associated with exiting partners. For example, if a founder has historically been a key rainmaker, your marketing plan must address how to replace their client development efforts with targeted campaigns and new initiatives, such as digital outreach or niche practice area marketing.

 

  3. Attorney Development

Attorney development ensures that the firm maintains a continuous and adaptable skill set. As founders exit, having a pipeline of well-trained attorneys is critical to sustaining client relationships and maintaining institutional knowledge. Regular mentorship programs, skill-building workshops, and tailored career growth plans help prepare attorneys to take on leadership roles in the future.

 

  4. Recruiting Effectiveness

Strong recruiting processes are essential for addressing capability and capacity gaps created by departing founders. Recruiting effectiveness goes beyond hiring; it involves attracting and retaining top legal talent who align with the firm’s culture and goals. Offering competitive benefits, a clear career trajectory, and a supportive environment can position the firm as a destination for top-tier candidates.

 

  5. Compensation and Incentives

A well-designed compensation and incentive structure is vital to the firm’s profitability and transition success. Attracting high-profit lateral hires, ensuring partners are practicing profitably, and facilitating smooth transitions for senior partners require thoughtful compensation planning. For example, implementing performance-based bonuses tied to billable hours or collections can motivate both current attorneys and incoming talent.

 

  6. Policy Development

Clear and consistent policies build trust and promote a culture of fairness among partners, associates, and staff. Whether it’s defining work-from-home expectations or delineating the decision-making process, policy development ensures that the firm operates smoothly during and after the leadership transition.

 

  7. Partnership or Operating Agreements

A robust partnership or operating agreement ensures that decision-making processes are clear and actions carry appropriate weight. These agreements provide a framework for resolving disputes, allocating equity, and governing major decisions—such as onboarding new partners or adjusting compensation structures. This clarity helps reduce friction during transitional periods.

 

  8. Equity Transfer Processes

Equity transfer is one of the most sensitive aspects of transitioning a founder-owned firm. Establishing clear processes for equity transfer ensures that the firm can perpetuate itself without unnecessary controversy. By structuring buyouts or equity redistribution in advance, the firm avoids disruptions that could harm operations or morale.

 

  9. Technology

Investing in technology is critical for maintaining efficiency and gaining a competitive edge. Technology tools, such as practice management systems, client portals, and AI-driven analytics, streamline operations and strengthen client relationships. For instance, adopting cloud-based platforms allows for seamless collaboration among team members and improves data security during the transition.

 

  10. Supportive Platforms

Creating a supportive platform that elevates the success of lawyers and staff is key to a smooth transition. This might include mentorship programs, robust professional development opportunities, and fostering a collaborative work culture. A supportive platform not only helps retain existing talent but also enhances the firm’s reputation as a desirable place to work.

 

  11. Trained and Motivated Staff

A well-trained and motivated staff is essential for maintaining operational continuity during a leadership transition. Cross-training employees on various roles and responsibilities ensures that knowledge is retained and transferred effectively. For example, ensuring paralegals are familiar with new practice management systems or administrative protocols reduces the risk of disruption.

 

  12. Implementation

Strategic planning is only as good as its implementation. Moving from the planning phase to actionable steps is vital for securing the firm’s long-term interests. By setting clear timelines, assigning responsibilities, and tracking progress, the firm can ensure that the transition plans lead to tangible outcomes.

Conclusion

By focusing on these critical areas, your firm can develop a comprehensive, thoroughly analyzed, and ready-to-implement set of priorities. These steps will help your firm thrive in the post-founder era while ensuring smooth transitions, client retention, and operational excellence. Transitioning a founder-owned law firm may seem daunting, but with careful planning and execution, your firm can secure a prosperous future.


Upcoming Telephone Consumer Protection Act (TCPA) Changes in 2025

The Telephone Consumer Protection Act (TCPA), enacted in 1991, protects consumers from unwanted telemarketing calls, robocalls, and texts.

New FCC Consent Rule

On January 27, 2025, the Federal Communications Commission’s (FCC) new consent rule for robocalls and robotexts will take effect. The FCC aims to close the “lead generator loophole” by requiring marketers to obtain “one-to-one” consumer consent to receive telemarketing texts and auto-dialed calls. While the rule primarily targets lead generators, it could affect any business that relies on consumer consent for such communications or purchases leads from third parties.

Under the rule, businesses must clearly and conspicuously request and obtain written consumer consent for robocalls and robotexts from each individual company. Companies can no longer rely on a single instance of consumer consent that links to a list of multiple sellers and partners. Instead, individual written consent will be required for each marketer. Additionally, any resulting communication must be “logically and topically related” to the website where the consent was obtained.

To meet this requirement, businesses may allow consumers to affirmatively select which sellers they consent to hear from or provide links to separate consent forms for each business requesting permission to contact them.

New Consent Revocation Rules

Another change takes effect on April 11, 2025, when the FCC’s new consent revocation rules for robocalls and robotexts are implemented. These rules allow consumers to revoke prior consent through any reasonable method, and marketers may not designate an exclusive means for revocation. Reasonable methods include replying “stop,” “quit” or similar terms to incoming texts, using automated voice or opt-out replies, or submitting a message through a website provided by the caller.

Marketers must honor revocation requests within a reasonable timeframe, not exceeding 10 business days. After that period, no further robocalls or robotexts requiring consent may be sent to the consumer.
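The mechanics above lend themselves to simple automation. A minimal sketch, assuming a Monday-to-Friday business-day calendar (federal holidays ignored) and an illustrative, non-exhaustive keyword list, since the rule permits revocation by any reasonable method:

```python
from datetime import date, timedelta

# Common opt-out keywords (illustrative; "any reasonable method" counts).
REVOCATION_KEYWORDS = {"stop", "quit", "end", "revoke", "opt out", "cancel", "unsubscribe"}

def is_revocation(reply: str) -> bool:
    """Treat a reply matching a standard opt-out keyword as a revocation."""
    return reply.strip().lower() in REVOCATION_KEYWORDS

def revocation_deadline(received: date, business_days: int = 10) -> date:
    """Latest date to honor a revocation: 10 business days after receipt
    (Mon-Fri only in this sketch; holidays are not excluded)."""
    d, remaining = received, business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

print(is_revocation("STOP"))                   # True
print(revocation_deadline(date(2025, 4, 14)))  # 2025-04-28
```

A production system would also need to log the request and suppress the number across every channel requiring consent, not just the one where the revocation arrived.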

Preparing for Compliance

To comply with the January 27, 2025, one-to-one consent rule and the April 11, 2025, consent revocation rule, lead generators and businesses that use or facilitate robocall and robotext communications should:

  • Review their current consent and revocation practices.
  • Ensure compliance by updating policies before the deadlines.
  • Examine where consumer leads are being obtained and adjust policies for using this information to meet the new requirements.

This advisory provides only a summary of the upcoming changes to the Telephone Consumer Protection Act.

NSA Wants Industry to Disclose Details of Telecom Hacks in Light of Chinese Involvement

On November 20, 2024, the director of the National Security Agency, General Timothy Haugh, urged the private sector to take swift, collective action to share key details about breaches they have suffered at the hands of Chinese hackers who have infiltrated US telecommunications.

Gen. Haugh said he wants to provide a public “hunt guide” so cybersecurity professionals and companies can search out the hackers and eradicate them from telecommunications networks.

US authorities have confirmed Chinese hackers have infiltrated US telecommunications in what Senator Richard Blumenthal, a Connecticut Democrat, this week described as a “sprawling and catastrophic” infiltration. AT&T Inc., Verizon Communications Inc. and T-Mobile are among those targeted.

Through those intrusions, the hackers targeted communications of a “limited number” of people in politics and government, US officials have said. They include Vice President Kamala Harris’ staff, President-elect Donald Trump and Vice President-elect JD Vance, as well as staffers for Senate Majority Leader Chuck Schumer, according to Missouri Republican Senator Josh Hawley.

Representatives of the Chinese government have denied the allegations.

“The ultimate goal would be to be able to lay bare exactly what happened in ways that allow us to better posture as a nation and for our allies to be better postured,” Gen. Haugh said.

SPAM FROM HOME?: Home Shopping Network (HSN) Hit With New TCPA Class Action Over DNC Text Messages

TCPA class actions against retailers arising out of SMS channel communications continue to roll in, despite the Supreme Court’s Facebook v. Duguid decision severely limiting the availability of TCPA ATDS claims.

The issue, of course, is the DNC rules that prevent SMS messages to residential phones for marketing purposes absent prior express invitation or permission or an established business relationship.

For instance, a consumer in Florida filed a TCPA class action lawsuit against HSN (Home Shopping Network) yesterday in federal court, claiming the company sent him promotional text messages without his consent and despite the fact that he was on the national DNC list.

Complaint here: HSN Complaint

The Complaint alleges HSN had a “practice” of sending text messages to consumers on the DNC list and seeks to represent a class of:

All persons throughout the United States (1) who did not provide their telephone number to HSN, Inc., (2) to whom HSN, Inc. delivered, or caused to be delivered, more than one call or text message within a 12-month period, promoting HSN, Inc. goods or services, (3) where the person’s residential or cellular telephone number had been registered with the National Do Not Call Registry for at least thirty days before HSN, Inc. delivered, or caused to be delivered, at least two of the calls and/or text messages within the 12-month period, (4) within four years preceding the date of this complaint and through the date of class certification.

As these cases continue to roll in, it is critical that retailers and brands keep the DNC rules in mind. Most companies seek only to contact consumers who sign up for their messages, but numerous compliance challenges exist:

  1. Third-party lead suppliers often provide false information;
  2. Consumers enter the wrong phone numbers on POS systems and online; and
  3. Phone numbers change hands regularly.

While tools exist to help limit exposure from these challenges, it is critical to maintain a strong DNC policy and attendant training to provide a defense. And don’t forget about the new revocation rules!
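One such tool is a pre-send scrub that suppresses any number on the DNC registry unless an exemption applies. A hypothetical sketch (the data sets here are placeholders; real programs pull the registry through the FTC's subscription service and consent records from a CRM):

```python
# Placeholder data sources -- substitute real registry and CRM feeds.
dnc_registry = {"+13055550100", "+13055550101"}
express_permission = {"+13055550101"}           # consumers who signed up
established_business_relationship = set()       # e.g., recent purchasers

def may_text(number: str) -> bool:
    """Allow a marketing text only if the number is off the DNC registry
    or an exemption (permission / established relationship) applies."""
    if number not in dnc_registry:
        return True
    return (number in express_permission
            or number in established_business_relationship)

outbound = ["+13055550100", "+13055550101", "+13055550199"]
sendable = [n for n in outbound if may_text(n)]
print(sendable)  # ['+13055550101', '+13055550199']
```

A scrub like this is only a partial defense against the challenges listed above (bad lead data, mistyped numbers, reassigned numbers), which is why written policy and training remain essential.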

Website Use of Third-Party Tracking Software Not Prohibited Under Massachusetts Wiretap Act

The Supreme Judicial Court of Massachusetts, the state’s highest appellate court, recently held that website operators’ use of third-party tracking software, including Meta Pixel and Google Analytics, is not prohibited under the state’s Wiretap Act.

The decision arose out of an action brought against two hospitals for alleged violations of the Massachusetts Wiretap Act. The complaint alleged that the hospitals’ websites collected and transmitted users’ browsing activities (including search terms and web browser and device configurations) to third parties, including Facebook and Google, for advertising purposes.

Under the Wiretap Act, any person that “willfully commits [, attempts to commit, or procures another person to commit] an interception. . . of any wire or oral communication” is in violation of the statute.

In its opinion, the Court observed the claims at issue involved the interception of person-to-website interactions, rather than person-to-person conversations or messages the law intended to cover. The Court held, “we cannot conclude with any confidence that the Legislature intended ‘communication’ to extend so broadly as to criminalize the interception of web browsing and other such interactions.”

This decision arrives as similarly situated lawsuits remain pending in courts across the nation.

AI Transcripts and Investment Advisers: Embracing Technology While Meeting SEC Requirements

AI Transcripts in Investment Advisory

Recently there has been a boom in investment advisers’ use of artificial intelligence (“AI”) to transcribe client and internal meetings. Among other applications, AI features such as Zoom AI Companion, Microsoft Copilot, Jump, and Otter.ai (collectively, “AI Meeting Assistants”) can assist with drafting, transcribing, summarizing and prompting action items based on conversation content in the respective application. For instance, Zoom AI Companion and Microsoft Copilot can draft communications, generate transcripts of conversations, identify points of agreement and disagreement in a discussion, and summarize action items.

Overview of SEC Recordkeeping Requirements for AI Transcripts

As of now, there are no specific artificial intelligence regulations pertaining to the use of AI transcripts or the recordkeeping obligations that would follow. However, there are several SEC recordkeeping provisions that may be implicated by use of the AI capabilities offered by the AI Meeting Assistants. Rule 204-2 requires investment advisers to maintain certain records “relating to [their] investment advisory business” including “written communications sent by such investment adviser relating to” such enumerated subjects as: (i) any recommendation made or proposed to be made and any advice given or proposed to be given; (ii) any receipt, disbursement or delivery of funds or securities; (iii) the placing or execution of any order to purchase or sell any security; and (iv) predecessor performance and the performance or rate of return of any or all managed accounts, portfolios, or securities recommendations (subject to certain exceptions).

Every registered investment adviser is required to keep true, accurate and current books and records. The prudent approach at this juncture is to adopt these AI Meeting Assistant transcripts into the firm’s books and records. Once rendered in written form, the SEC could consider the transcripts and summaries to be written communications regarding investment advice. Such transcripts and summaries should be kept in their original form, together with notes (if any) identifying any inaccuracies in the AI-generated content. Registered investment advisers are fiduciaries and should not use information in providing client services or communications unless they reasonably believe it is accurate. Thus, if a firm were to rely on AI transcripts or summaries that proved incorrect, the onus would remain on the firm to demonstrate how it reasonably relied upon the content. It is inconsequential whether these transcripts and summaries make it into your CRM software or are maintained in the AI Meeting Assistant program. Regardless of whether the content is a meeting summary or a list of action items, the transmission would likely constitute a communication for purposes of Rule 204-2 because it implicates an already established recordkeeping requirement.

Implementing Effective AI Strategies in Investment Advisory

  • A firm must eliminate, or neutralize the effect of, conflicts of interest associated with the firm’s use of artificial intelligence in investor interactions that place the firm’s or its associated persons’ interests ahead of investors’ interests.
  • A firm that has any investor interaction using covered technology (AI) must have written policies and procedures reasonably designed to prevent violations of the proposed rules.
  • Adopt AI Meeting Assistant transcripts into the firm’s books and records.

The Cybersecurity Maturity Model Certification (CMMC) Program – Defense Contractors Must Rapidly Prepare and Implement

The Department of Defense (DoD) has officially launched the Cybersecurity Maturity Model Certification (CMMC) Program, which requires federal contractors and subcontractors across the Defense Industrial Base (DIB) to comply with strict cybersecurity standards. The CMMC program aims to protect Federal Contract Information (FCI) and Controlled Unclassified Information (CUI) in DoD contracts from evolving cyber threats by requiring defense contractors to implement comprehensive cybersecurity controls. The CMMC Program, which must be confirmed by contracting officers, moves beyond the prior self-assessment model for many contractors to a certification-based approach verified by DoD-approved third-party assessors known as CMMC Third Party Assessor Organizations (C3PAOs).

This client alert outlines the key elements of the CMMC program, providing a detailed analysis of the new certification requirements, timelines for implementation, and practical steps contractors can take to prepare for compliance.

CMMC Overview and Purpose

The CMMC Program represents the DoD’s commitment to ensuring that companies handling FCI and CUI meet stringent cybersecurity standards. The program was developed in response to increasing cyber threats targeting the defense supply chain and is designed to verify that defense contractors and subcontractors have implemented the necessary security measures to safeguard sensitive information.

The CMMC Program consists of three levels of certification, with each level representing an increasing set of cybersecurity controls. The certification levels correspond to the type of information handled by the contractor, with higher levels required for contractors handling more sensitive information, such as CUI.

The DoD officially published the CMMC final rule on October 15, 2024, establishing the CMMC Program within federal regulations. The rule will be effective 60 days after publication, marking a significant milestone in the program’s rollout. DoD expects to publish the final rule amending the DFARS to add CMMC requirements to DoD contracts in early 2025. Contractors that fail to meet CMMC requirements will be ineligible for DoD contracts that involve FCI or CUI and could face significant penalties if they inappropriately attest to compliance.

The overall scope of the CMMC rule is relatively clear; however, some key elements are ambiguous and, in some cases, may require careful consideration. Particularly at the outset of the assessment process, an internal pre-assessment gap review, ideally conducted under legal privilege, is recommended to allow sufficient time to address shortfalls in technical controls or governance. Implementing a CMMC-type program typically takes many months, and we strongly recommend that clients begin this process soon if they have not already started: it is now unquestionably a requirement to do business with the DoD.

CMMC Certification Levels

The CMMC Program features three certification levels that contractors must achieve depending on the nature and sensitivity of the information they handle:

Level 1 (Self-Assessment)

Contractors at this level must meet 15 basic safeguarding requirements outlined in Federal Acquisition Regulation (FAR) 52.204-21. These requirements focus on protecting FCI, which refers to information not intended for public release but necessary for performing the contracted services. A self-assessment is sufficient to achieve certification at this level.

Level 2 (Self-Assessment or Third-Party Assessment)

Contractors handling CUI must meet 110 security controls specified in NIST Special Publication (SP) 800-171. CUI includes unclassified information that requires safeguarding or dissemination controls according to federal regulations. To achieve certification, contractors at this level can conduct a self-assessment or engage a C3PAO. Most defense contracts involving CUI will require third-party assessments to verify compliance.

Level 3 (Third-Party Assessment by DIBCAC)

Contractors supporting critical national security programs or handling highly sensitive CUI must achieve Level 3 certification. This level adds 24 security controls from NIST SP 800-172 to protect CUI from advanced persistent threats. The Defense Contract Management Agency’s (DCMA) Defense Industrial Base Cybersecurity Assessment Center (DIBCAC) will conduct assessments for Level 3 contractors. This is the most stringent level of certification and is reserved for contractors working on the most sensitive programs.

Each certification level builds upon the previous one, with Level 3 being the most comprehensive. Certification is valid for three years, after which contractors must be reassessed.

Certification Process and Assessment Requirements

Contractors seeking certification must undergo an assessment process that varies depending on the level of certification they are targeting. For Levels 1 and 2, contractors may conduct self-assessments. However, third-party assessments are required for most contracts at Level 2 and all contracts at Level 3. The assessment process includes several key steps:

Self-Assessment (Level 1 and Level 2 (Self))

Contractors at Level 1 or Level 2 (Self) must perform an internal assessment of their cybersecurity practices and submit their results to the Supplier Performance Risk System (SPRS). This system is the DoD’s centralized repository for contractor cybersecurity assessments. Contractors must affirm their compliance annually to maintain their certification status.

Third-Party Assessment (Level 2 (C3PAO) and Level 3 (DIBCAC))

For higher-level certifications, contractors must engage a certified C3PAO to conduct an independent assessment of their compliance with the applicable security controls. For Level 3 certifications, assessments will be performed by the DIBCAC. These assessments will involve reviewing the contractor’s cybersecurity practices, examining documentation, and conducting interviews to verify that the contractor has implemented the necessary controls.

Plan of Action and Milestones (POA&M)

Contractors that do not meet all of the required security controls during their assessment may develop a POA&M. This document outlines the steps the contractor will take to address any deficiencies. Contractors have 180 days to close out their POA&M, after which they must undergo a follow-up assessment to verify that all deficiencies have been addressed. If the contractor fails to meet the requirements within the 180-day window, their conditional certification will expire, and they will be ineligible for future contract awards.

Affirmation

After completing an assessment and addressing any deficiencies, contractors must submit an affirmation of compliance to SPRS. This affirmation must be submitted annually to maintain certification, even if a third-party assessment is only required once every three years.

Integration of CMMC in DoD Contracts

The CMMC Program will be integrated into DoD contracts through a phased implementation process. The program will initially apply to a limited number of contracts, but it will eventually become a requirement for all contracts involving FCI and CUI. The implementation will occur in four phases:

Phase 1 (Early 2025)

Following the publication of the final DFARS rule, CMMC requirements will be introduced in select solicitations. Contractors bidding on these contracts must meet the required CMMC level to be eligible for contract awards.

Phase 2

One year after the start of Phase 1, additional contracts requiring CMMC certification will be released. Contractors at this stage must meet Level 2 certification if handling CUI.

Phase 3

A year after the start of Phase 2, more contracts, including those requiring Level 3 certification, will include CMMC requirements.

Phase 4 (Full Implementation)

The final phase, expected to occur by 2028, will fully implement CMMC requirements across all applicable DoD contracts. From this point forward, contractors must meet the required CMMC level as a condition of contract award, exercise of option periods, and contract extensions.

Flow-Down Requirements for Subcontractors

CMMC requirements will apply to prime contractors and their subcontractors. Prime contractors must ensure that their subcontractors meet the appropriate CMMC level. This flow-down requirement will impact the entire defense supply chain, as subcontractors handling FCI must achieve at least Level 1 certification, and those handling CUI must achieve Level 2.

Subcontractors must be certified before the prime contractor can award them subcontracts. Prime contractors will be responsible for verifying that their subcontractors hold the necessary CMMC certification.

Temporary Deficiencies and Enduring Exceptions

The CMMC Program allows for limited flexibility in cases where contractors cannot meet all of the required security controls. Two key mechanisms provide this flexibility:

Temporary Deficiencies

Contractors may temporarily fall short of compliance with specific security controls, provided they document the deficiency in a POA&M and work toward remediation. These temporary deficiencies must be addressed within 180 days to maintain certification. Failure to close out POA&Ms within the required timeframe will result in the expiration of the contractor’s conditional certification status.

Enduring Exceptions

In some cases, contractors may be granted an enduring exception for specific security controls that are not feasible to implement due to the nature of the system or equipment being used. For example, medical devices or specialized test equipment may not support all cybersecurity controls required by the CMMC Program. In these cases, contractors can document the exception in their System Security Plan (SSP) and work with the DoD to determine appropriate mitigations.

Compliance Obligations and Contractual Penalties

The DoD has made it clear that failure to comply with CMMC requirements will have serious consequences for contractors. Noncompliant contractors will be ineligible for contract awards. Moreover, the Department of Justice’s Civil Cyber-Fraud Initiative looms menacingly in the background, as it actively pursues False Claims Act actions against defense contractors for alleged failures to comply with cybersecurity requirements in the DFARS. In addition, the DoD reserves the right to investigate contractors that have achieved CMMC certification to verify their continued compliance. If an investigation reveals that a contractor has not adequately implemented the required controls, the contractor may face contract termination and other contractual remedies.

Preparing for CMMC Certification

Given the far-reaching implications of the CMMC Program, contractors and subcontractors should begin preparing for certification as soon as possible. As an initial step, an internal, confidential gap assessment is highly advisable, preferably done under legal privilege, to fully understand both past and current shortfalls in compliance with existing cybersecurity requirements that will now be more fully examined in the CMMC process. Key steps include:

Assess Current Cybersecurity Posture

Contractors should conduct an internal assessment of their current cybersecurity practices against the CMMC requirements. This will help identify any gaps and areas that need improvement before seeking certification.

Develop an SSP

Contractors handling CUI must develop and maintain an SSP that outlines how they will meet the security controls specified in NIST SP 800-171. This document will serve as the foundation for both internal and third-party assessments.

Engage a C3PAO

Contractors at Level 2 (C3PAO) and Level 3 must identify and engage a certified C3PAO to conduct their assessments. Given the anticipated demand for assessments, contractors should begin this process early to avoid delays.

Prepare a POA&M

For contractors that do not meet all required controls at the time of assessment, developing a POA&M will be crucial to addressing deficiencies within the required 180-day window.

Review Subcontractor Compliance

Prime contractors must review their subcontractors’ compliance with CMMC requirements and ensure they hold the appropriate certification level. This flow-down requirement will impact the entire defense supply chain.

Conclusion

The CMMC Program marks a significant shift in how the DoD oversees cybersecurity risks within its defense supply chain. While DoD contractors that handle CUI have had contractual obligations to comply with the NIST SP 800-171 requirements since January 1, 2018, the addition of third-party assessments and more stringent security controls for Level 3 contracts aims to improve the overall cybersecurity posture of contractors handling FCI and CUI. Contractors that fail to comply with CMMC requirements risk losing eligibility for DoD contracts, which could result in substantial business losses.

Given the phased implementation of the program, contractors must act now to assess their cybersecurity practices, engage with certified third-party assessors, and ensure compliance with the new requirements. Proactive planning and preparation will be key to maintaining eligibility for future DoD contracts.

Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity; invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech) is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and that, as a result, a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Frietas Adiwarsana. Shazeer and De Frietas formed Character Technologies Inc. after leaving their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with the Character.AI character – whether for human-like conversations, such as to answer questions, write a story, translate or write code – based on Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell would use extraordinary measures to engage with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for a failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone, it is personal and movable, it is a “good” rather than an idea, copies of Character.AI are uniform and not customized, there are an unlimited number of copies that can be obtained and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services or products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include knowledge that the software utilizes data sets that are highly toxic and sexual to train itself, common industry knowledge that using tactics to convince users that it is human manipulates users’ emotions and vulnerability, and that minors are most susceptible to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and intentionally allowing minors to use Character.AI.

The defective design claims argue the software is defectively designed based on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained based on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots to be therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct caused severe emotional distress to Plaintiff, i.e., the loss of her son.

  • Other Claims

The Plaintiff also asserts claims of negligence per se, unjust enrichment, a survival action, and loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. Currently, the Colorado AI Act (covered here) will become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates a “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” for Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias and discrimination and protecting civil rights in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” The law was subsequently vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to develop – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article

New Fact Sheet Highlights ASTP’s Concerns About Certified API Practices

On October 29, 2024, the US Department of Health and Human Services (HHS) Assistant Secretary for Technology Policy (ASTP) released a fact sheet titled “Information Blocking Reminders Related to API Technology.” The fact sheet reminds developers of application programming interfaces (APIs) certified under the ASTP’s Health Information Technology (IT) Certification Program and their health care provider customers of practices that constitute information blocking under ASTP’s information blocking regulations and information blocking condition of certification applicable to certified health IT developers.

In Depth

The fact sheet is noteworthy because it follows ASTP’s recent blog post expressing concern about reports that certified API developers are potentially violating Certification Program requirements and engaging in information blocking. ASTP also recently strengthened its feedback channels by adding a section specifically for API-linked complaints and inquiries to the Health IT Feedback and Inquiry Portal. It appears increasingly likely that initial investigations and enforcement of the information blocking prohibition by the HHS Office of Inspector General will focus on practices that may interfere with access, exchange, or use of electronic health information (EHI) through certified API technology.

The fact sheet focuses on three categories of API-related practices that could be information blocking under ASTP’s information blocking regulations and Certification Program condition of certification:

  • ASTP cautions against practices that limit or restrict the interoperability of health IT. For example, the fact sheet states that health care providers who locally manage their Fast Healthcare Interoperability Resources (FHIR) servers without certified API developer assistance may engage in information blocking when they refuse to provide to certified API developers the FHIR service base URL necessary for patients to access their EHI.
  • ASTP states that impeding innovations and advancements in access, exchange, or use of EHI or health-IT-enabled care delivery may be information blocking. For example, the fact sheet indicates that a certified API developer may engage in information blocking by refusing to register and enable an application for production use within five business days of completing its verification of an API user’s authenticity as required by ASTP’s API maintenance of certification requirements.
  • ASTP states that burdensome or discouraging terms, delays, or influence over customers and users may be information blocking. For example, ASTP states that a certified electronic health record (EHR) developer may engage in information blocking by conditioning the disclosure of interoperability elements to third-party developers on the third-party developer entering into business associate agreements with all of the EHR developer’s covered entity customers, even if the work being done is not for the benefit of the customers and HIPAA does not require the business associate agreements.

The fact sheet does not address circumstances under which any of the above practices of certified API developers may meet an information blocking exception (established for reasonable practices that interfere with access, exchange, or use of EHI). Regulated actors should consider whether exceptions apply to individual circumstances.