Application of New Mental Health Parity Rules to Provider Network Composition and Reimbursement: Perspective and Analysis

On September 23, 2024, the U.S. Departments of Labor, the Treasury, and Health and Human Services (collectively, the “Departments”) released final rules (the “Final Rules”) that implement requirements under the Mental Health Parity and Addiction Equity Act (MHPAEA).

The primary focus of the Final Rules is to implement new statutory requirements under the Consolidated Appropriations Act, 2021, which amended MHPAEA to require health plans and issuers to develop comparative analyses to determine whether nonquantitative treatment limitations (NQTLs)—which are non-financial restrictions on health care benefits that can limit the length or scope of treatment—for mental health and substance use disorder (MH/SUD) benefits are comparable to and applied no more stringently than NQTLs for medical/surgical (M/S) benefits.

Last month, Epstein Becker Green published an Insight entitled “Mental Health Parity: Federal Departments of Labor, Treasury, and Health Release Landmark Regulations,” which provides an overview of the Final Rules. This Insight takes a closer look at the application of the Final Rules to NQTLs related to provider network composition and reimbursement rates.

Provider Network Composition and Reimbursement NQTL Types

A key focus of the Final Rules is to ensure that NQTLs related to provider network composition and reimbursement rates do not impose greater restrictions on access to MH/SUD benefits than they do for M/S benefits.

In the Final Rules, the Departments decline to specify which strategies and functions they expect to be analyzed as separate NQTL types, instead requiring health plans and issuers to identify, define, and analyze the NQTL types that they apply to MH/SUD benefits. However, the Final Rules set out that the general category of “provider network composition” NQTL types includes, but is not limited to, “standards for provider and facility admission to participate in a network or for continued network participation, including methods for determining reimbursement rates, credentialing standards, and procedures for ensuring the network includes an adequate number of each category of provider and facility to provide services under the plan or coverage.”[1]

For NQTLs related to out-of-network rates, the Departments note that NQTLs would include “[p]lan or issuer methods for determining out-of-network rates, such as allowed amounts; usual, customary, and reasonable charges; or application of other external benchmarks for out-of-network rates.”[2]

Requirements for Comparative Analyses and Outcomes Data Evaluation

For each NQTL type, plans must perform and document a six-step comparative analysis that must be provided to federal and state regulators, members, and authorized representatives upon request. The Final Rules divide the NQTL test into two parts: (1) the “design and application” requirement and (2) the “relevant data evaluation” requirement.

The “design and application” requirement, which builds directly on existing guidance, requires the “processes, strategies, evidentiary standards, or other factors” used in designing and applying an NQTL to MH/SUD benefits to be comparable to, and applied no more stringently than, those used for M/S benefits. Although these aspects of the comparative analysis should be generally familiar, the Final Rules and accompanying preamble provide extensive new guidance about how to interpret and implement these requirements.

The Final Rules also set out a second prong to the analysis: the requirement to collect and evaluate “relevant data” for each NQTL. If such analysis shows a “material difference” in access, then the Final Rules also require the plan to take “reasonable” action to remedy the disparity.

The Final Rules provide that relevant data measures for network composition NQTLs may include, but are not limited to:

  • in-network and out-of-network utilization rates, including data related to provider claim submissions;
  • network adequacy metrics, including time and distance data, data on providers accepting new patients, and the proportions of available MH/SUD and M/S providers that participate in the plan’s network; and
  • provider reimbursement rates for comparable services and as benchmarked to a reference standard, such as Medicare fee schedules.

Although the Final Rules do not describe relevant data for out-of-network rates, these data measures may parallel measures to evaluate in-network rates, including measures that benchmark MH/SUD and M/S rates against a common standard, such as Medicare fee schedule rates.

Under the current guidance, plans have broad flexibility to determine which measures to use, though the plan must ensure that the selected metrics reasonably measure the actual stringency of the NQTL's design and application with regard to its impact on member access to MH/SUD and M/S benefits. However, additional guidance is expected to further clarify the data evaluation requirements and may require the use of specific measures, likely in the form of additional frequently asked questions as well as updates to the Self-Compliance Tool published by the Departments to help plans and issuers assess whether their NQTLs satisfy parity requirements.

The Final Rules require plans to evaluate relevant data for network composition NQTLs in the aggregate, meaning that the same relevant data must be used for all network composition NQTL types (however defined). As such, the in-operation data component of the comparative analysis for network composition NQTLs will be assessed on an aggregated basis.

If the relevant data indicates a “material difference,” the threshold for which the plan must establish and define reasonably, the plan must take “reasonable actions” to address the difference in access and document those actions.

Examples of a “reasonable action” that plans can take to comply with network composition requirements “include, but are not limited to:

  1. Strengthening efforts to recruit and encourage a broad range of available mental health and substance use disorder providers and facilities to join the plan’s or issuer’s network of providers, including taking actions to increase compensation or other inducements, streamline credentialing processes, or contact providers reimbursed for items and services provided on an out-of-network basis to offer participation in the network;
  2. Expanding the availability of telehealth arrangements to mitigate any overall mental health and substance use disorder provider shortages in a geographic area;
  3. Providing additional outreach and assistance to participants and beneficiaries enrolled in the plan or coverage to assist them in finding available in-network mental health and substance use disorder providers and facilities; and
  4. Ensuring that provider directories are accurate and reliable.”

These examples of potential corrective actions and the related discussion in the Final Rules set out an ambitious vision for a robust suite of strategies that the Departments believe plans should undertake to address material disparities in access as reflected in the relevant data. However, the Final Rules put the onus on the plan to design the strategy that it will use to define “material differences” and to remedy any identified disparity in access. Future guidance and enforcement may provide examples of how this qualitative assessment will play out in practice and establish both what the Departments will expect with regard to the definition of “material differences” and what remedial actions they consider sufficient. In the interim, the practical impact of these new requirements remains highly uncertain.

Examples of Network Analyses Included in the Final Rules

The Final Rules include several examples to clarify the application of the new requirements to provider network composition NQTLs. Unfortunately, the value of these examples for understanding how the Final Rules will affect MH/SUD provider networks in practice may be limited. Given the lack of detail about the complexity of analyzing these requirements for actual provider networks, and because the examples do not meaningfully discuss where the threshold for compliance lies, it remains to be seen how regulators will interpret and enforce these requirements in practice.

  • Example 1 demonstrates that it would violate the NQTL requirements to apply a percentage discount to physician fee schedule rates for non-physician MH/SUD providers if the same reduction is not applied for non-physician M/S providers. Our takeaways from this example include the following:
    • This example is comparable to the facts that were alleged by the U.S. Department of Labor in Walsh v. United Behavioral Health, E.D.N.Y., No. 1:21-cv-04519 (8/11/21).
    • Example 1 is useful to the extent that it clarifies that a reimbursement strategy that specifically reduces MH/SUD provider rates in ways that do not apply to M/S provider rates would violate MHPAEA. However, such cut-and-dried examples may be rare in practice, and a full review of the strategies for developing provider reimbursement rates is necessary.
  • Example 4 demonstrates that plans may not simply rely on periodic historic fee schedules as the sole basis for their current fee schedules. Here are some key takeaways from this example:
    • Even though this methodology may be neutral and non-discriminatory on its face, the historic fee schedules are not themselves presumed to be an unbiased source of evidence. To meet the new requirements for evidentiary standards and sources, the plan would have to demonstrate that the historic fee schedules were based on sources that were objective and not biased against MH/SUD providers.
    • If the plan cannot demonstrate that the evidentiary standard used to develop its fee schedule does not systematically disfavor access to MH/SUD benefits, it can still pass the NQTL test if it takes steps to cure the discriminatory factor.
    • Example 4 loosely describes a scenario where a plan supplements a historic fee schedule that is found to discriminate against MH/SUD access by accounting for the current demand for MH/SUD services and attracting “sufficient” MH/SUD providers to the network. Unfortunately, however, the facts provided do not clarify what steps were taken to achieve this enhanced access or how the plan or regulator determined that access had become “sufficient” following the implementation of the corrective actions.
  • Example 10 provides that if a plan’s data measures indicate a “material difference” in access to MH/SUD benefits relative to M/S benefits that is attributable to these NQTLs, the plan can still achieve compliance by taking corrective actions. Our takeaways from this example include the following:
    • The facts in this example stipulate that the plan evaluates all of the measure types identified above as examples. Example 10 also states that a “material difference” exists but does not identify the measure or measures for which a difference exists or what facts led to the conclusion that the difference was “material.” To remedy the material difference, the example states that the plan undertakes all of the corrective actions identified above to strengthen its MH/SUD provider network and, therefore, achieves compliance. However, the example does not clarify how potentially inconsistent outcomes across the robust suite of identified measures were balanced to determine that the “material difference” standard was met, nor does it provide any details about what specific corrective actions the plan takes or what changes result from those actions.

Epstein Becker Green’s Perspective

The new requirements of the Final Rules will significantly increase the focus of the comparative analyses on the outcomes of the provider network NQTLs. For many years, the focus of the comparative analyses was primarily on determining whether any definable aspect of the plan’s provider contracting and reimbursement rate-setting strategies could be demonstrated to discriminate against MH/SUD providers. The Final Rules retain those requirements but now put greater emphasis on the results of network composition activities with regard to member access and require plans to pursue corrective actions to remediate any material disparities in that data. This focus on a robust “disparate impact” form of anti-discrimination analysis may lead to a meaningful increase in reimbursement for MH/SUD providers or other actions to more aggressively recruit them to participate in commercial health plan networks.

However, at present, it remains unclear which measures the Departments will ultimately require for reporting. Concurrent with the release of their Notice of Proposed Rulemaking on July 23, 2023, the Departments published Technical Release 2023-01P to solicit comments on key approaches to evaluating comparability and stringency for provider network access and reimbursement rates (including some that are referenced as examples in the Final Rules). Comments to the Technical Release highlighted significant concerns with nearly all of the proposed measures. For example, proposals to require analysis of MH/SUD and M/S provider reimbursement rates for commercial markets that are benchmarked to Medicare fee schedules in a simplistic way may fail to account for differences in population health and utilization, value-based reimbursement strategies, and a range of other factors with significant implications for financial and clinical models for both M/S and MH/SUD providers. Requirements to analyze the numbers or proportions of MH/SUD and M/S providers that are accepting new patients may be onerous for providers to report on and for plans to collect and may obscure significant nuances with regard to wait times, the urgency of the service, and the match between the provider’s training and service offerings to the patient’s need. Time and mileage standards highlighted by the Departments not only often fail to capture important access challenges experienced by patients who need MH/SUD care from sub-specialty providers or facilities but also fail to account for evolving service delivery models that may include options such as mobile units, school-based services, home visits, and telehealth. Among the measures identified in the Technical Release, minor differences in measure definitions and specifications can have significant impacts on the data outcomes, and few (if any) of the proposed measures have undergone any form of testing for reliability and validity.

Also, it is still not clear where the Departments will draw the lines for making final determinations of noncompliance with the Final Rules. For example, where a range of different data measures is evaluated, how will the Departments resolve data outcomes that are noisy, conflicting, or inconclusive? Similarly, where regulators do conclude that the data that are provided suggest a disparity in access, the Final Rules identify a highly robust set of potential corrective actions. However, it remains to be seen what scope of actions the Departments will determine to be “good enough” in practice.

Finally, we are interested in seeing what role private litigation will play in driving health plan compliance efforts and practical impacts for providers. To date, plaintiffs have found it challenging to pursue claims under MHPAEA, due in part to the highly complex arguments required to evaluate MHPAEA compliance and in part to the difficulty plaintiffs face in gaining adequate insight into plan policies, operations, and data across MH/SUD and M/S benefits to adequately assert a complaint. Very few class action lawsuits or large settlements have occurred to date. These challenges may continue to limit the volume of litigation. However, to the extent that the additional guidance in the Final Rules gives rise to an uptick in successful litigation, the courts may end up having a greater impact on health plan compliance strategies than regulators.


ENDNOTES

[1] 26 CFR 54.9812-1(c)(4)(ii)(D), 29 CFR 2590.712(c)(4)(ii)(D), and 45 CFR 146.136(c)(4)(ii)(D).

[2] 26 CFR 54.9812-1(c)(4)(ii)(E), 29 CFR 2590.712(c)(4)(ii)(E), and 45 CFR 146.136(c)(4)(ii)(E).

Colorado AG Proposes Draft Amendments to the Colorado Privacy Act Rules

On September 13, 2024, the Colorado Attorney General’s (AG) Office published proposed draft amendments to the Colorado Privacy Act (CPA) Rules. The proposals include new requirements related to biometric collection and use (applicable to all companies and employers that collect biometrics of Colorado residents) and children’s privacy. They also introduce methods by which businesses could seek regulatory guidance from the Colorado AG.

The draft amendments seek to align the CPA with Senate Bill 41, Privacy Protections for Children’s Online Data, and House Bill 1130, Privacy of Biometric Identifiers & Data, both of which were enacted earlier this year and will largely come into effect in 2025. Comments on the proposed regulations can be submitted beginning on September 25, 2024, in advance of a November 7, 2024, rulemaking hearing.

In Depth


PRIVACY OF BIOMETRIC IDENTIFIERS & DATA

Unlike other state laws such as the Illinois Biometric Information Privacy Act (BIPA), the proposed draft amendments to the CPA do not include a private right of action. That said, the proposed draft amendments include several significant revisions to the processing of biometric identifiers and data, including:

  • Create New Notice Obligations: The draft amendments require any business that collects biometrics from consumers or employees, including businesses that do not otherwise trigger the applicability thresholds of the CPA, to provide a “Biometric Identifier Notice” before collecting or processing biometric information. The notice must identify which biometric identifier is being collected, the reason for collecting it, the length of time the controller will retain it, and whether it will be disclosed, redisclosed, or otherwise disseminated to a processor, along with the purpose of such disclosure. The notice must be reasonably accessible, either as a standalone disclosure or, if embedded within the controller’s privacy notice, via a clear link to the specific section of the privacy notice that contains the Biometric Identifier Notice.
  • Revisit When Consent Is Required: The draft amendments require controllers to obtain explicit consent from the data subject before selling, leasing, trading, disclosing, redisclosing, or otherwise disseminating biometric information. The amendments also allow employers to collect and process biometric identifiers as a condition for employment in limited circumstances (much more limited than Illinois’s BIPA, for example).

PRIVACY PROTECTIONS FOR CHILDREN’S ONLINE DATA

The draft amendments also include several updates to existing CPA requirements related to minors:

  • Delineate Between Consumers Based on Age: The draft amendments define a “child” as an individual under 13 years of age and a “minor” as an individual under 18 years of age, creating additional protections for teenagers.
  • Update Data Protection Assessment Requirements: The draft amendments expand the scope of data protection assessments to include processing activities that pose a heightened risk of harm to minors. Under the draft amendments, entities performing assessments must disclose whether personal data from minors is processed as well as identify any potential sources and types of heightened risk to minors that would be a reasonably foreseeable result of offering online services, products, or features to minors.
  • Revisit When Consent Is Required: The draft amendments require controllers to obtain explicit consent before processing the personal data of a minor and before using any system design feature to significantly increase, sustain, or extend a minor’s use of an online service, product, or feature.

OPINION LETTERS AND INTERPRETIVE GUIDANCE

In a welcome effort to create a process by which businesses and the public can understand more about the scope and applicability of the CPA, the draft amendments:

  • Create a Formal Feedback Process: The draft amendments would permit individuals or entities to request an opinion letter from the Colorado AG regarding aspects of the CPA and its application. Entities that have received and relied on applicable guidance offered via an opinion letter may use that guidance as a good faith defense against later claims of having violated the CPA.
  • Clarify the Role of Non-Binding Advice: Separate and in addition to the formal opinion letter process, the draft amendments provide a process by which any person affected directly or indirectly by the CPA may request interpretive guidance from the AG. Unlike the guidance in an opinion letter, interpretive guidance would not be binding on the Colorado AG and would not serve as a basis for a good faith defense. Nonetheless, a process for obtaining interpretive guidance is a novel, and welcome, addition to the state law fabric.

WHAT’S NEXT?

While subject to change pursuant to public consultation, assuming the proposed CPA amendments are finalized, they would become effective on July 1, 2025. Businesses interested in shaping and commenting on the draft amendments should consider promptly submitting comments to the Colorado AG.

Consumer Privacy Update: What Organizations Need to Know About Impending State Privacy Laws Going into Effect in 2024 and 2025

Over the past several years, the number of states with comprehensive consumer data privacy laws has grown rapidly from just a handful (California, Colorado, Virginia, Connecticut, and Utah) to as many as twenty by some counts.

Many of these state laws will go into effect starting in Q4 of 2024 and through 2025. We have previously written in more detail on New Jersey’s comprehensive data privacy law, which goes into effect January 15, 2025, and Tennessee’s comprehensive data privacy law, which goes into effect July 1, 2025. Some laws have already gone into effect, like Texas’s Data Privacy and Security Act and Oregon’s Consumer Privacy Act, both of which became effective in July 2024. Now is a good time to take stock of the current landscape as the next batch of state privacy laws goes into effect.

Over the next year, the following laws will become effective:

  1. Montana Consumer Data Privacy Act (effective Oct. 1, 2024)
  2. Delaware Personal Data Privacy Act (effective Jan. 1, 2025)
  3. Iowa Consumer Data Protection Act (effective Jan. 1, 2025)
  4. Nebraska Data Privacy Act (effective Jan. 1, 2025)
  5. New Hampshire Privacy Act (effective Jan. 1, 2025)
  6. New Jersey Data Privacy Act (effective Jan. 15, 2025)
  7. Tennessee Information Protection Act (effective July 1, 2025)
  8. Minnesota Consumer Data Privacy Act (effective July 31, 2025)
  9. Maryland Online Data Privacy Act (effective Oct. 1, 2025)

These nine state privacy laws contain many similarities, broadly conforming to the Virginia Consumer Data Protection Act we discussed here. All nine laws listed above contain the following familiar requirements:

(1) disclosing data handling practices to consumers;

(2) including certain contractual terms in data processing agreements;

(3) performing risk assessments (with the exception of Iowa); and

(4) affording resident consumers certain rights, such as the right to access or know the personal data processed by a business, the right to correct any inaccurate personal data, the right to request deletion of personal data, the right to opt out of targeted advertising or the sale of personal data, and the right to opt out of the processing of sensitive information.

The laws contain more than a few noteworthy differences. Each of the laws differs in the scope of its application. The applicability thresholds vary based on: (1) the number of state residents whose personal data the company (or “controller”) controls or processes, or (2) the proportion of revenue a controller derives from the sale of personal data. Maryland, Delaware, and New Hampshire each have a 35,000-consumer processing threshold. Nebraska, similar to the recently passed data privacy law in Texas, applies to controllers that do not qualify as small businesses and that process personal data or engage in personal data sales. It is also important to note that Iowa adopted a comparatively narrow definition of a sale of personal data, covering only transactions involving monetary consideration. All states require that the company conduct business in the state.

With respect to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), Iowa’s, Montana’s, Nebraska’s, New Hampshire’s, and Tennessee’s laws exempt HIPAA-regulated entities altogether, while Delaware’s, Maryland’s, Minnesota’s, and New Jersey’s laws exempt only protected health information (“PHI”) under HIPAA. As a result, HIPAA-regulated entities will have the added burden of assessing whether data is covered by HIPAA or by an applicable state privacy law.

With respect to the Gramm-Leach-Bliley Act (“GLBA”), eight of these nine comprehensive privacy laws contain an entity-level exemption for GLBA-covered financial institutions. By contrast, Minnesota’s law exempts only data regulated by GLBA. Minnesota joins California and Oregon as the only three states whose consumer privacy laws have information-level GLBA exemptions.

Not least of all, Maryland’s law stands apart from the other data privacy laws due to a number of unique obligations, including:

  • A prohibition on the collection, processing, and sharing of a consumer’s sensitive data except when doing so is “strictly necessary to provide or maintain a specific product or service requested by the consumer.”
  • A broad prohibition on the sale of sensitive data for monetary or other valuable consideration unless such sale is necessary to provide or maintain a specific product or service requested by a consumer.
  • Special provisions applicable to “Consumer Health Data” processed by entities not regulated by HIPAA. Note that “Consumer Health Data” laws also exist in Nevada, Washington, and Connecticut as we previously discussed here.
  • A prohibition on selling or processing minors’ data for targeted advertising if the controller knows or should have known that the consumer is under 18 years of age.

While states continue to enact comprehensive data privacy laws, there remains the possibility of a federal privacy law to bring in a national standard. The American Privacy Rights Act (“APRA”) recently went through several iterations in the House Committee on Energy and Commerce this year, and it reflects many of the elements of these state laws, including transparency requirements and consumer rights. A key sticking point, however, continues to be the broad private right of action included in the proposed APRA but largely absent from state privacy laws. Only California’s law, which we discussed here, has a private right of action, although it is narrowly circumscribed to data breaches. Considering the November 2024 election cycle, it is likely that federal efforts to create a comprehensive privacy law will stall until the election cycle is over and the composition of the White House and Congress is known.

Legal and Privacy Considerations When Using Internet Tools for Targeted Marketing

Businesses often rely on targeted marketing methods to reach their relevant audiences. Instead of paying for, say, a television commercial to be viewed by people across all segments of society with varied purchasing interests and budgets, a business can use tools provided by social media platforms and other internet services to target those people most likely to be interested in its ads. These tools may make targeted advertising easy, but businesses must be careful when using them – along with their ease of use comes a risk of running afoul of legal rules and regulations.

Two ways that businesses target audiences are working with influencers who have large followings in relevant segments of the public (which may implicate false or misleading advertising issues) and using third-party “cookies” to track users’ browsing history (which may implicate privacy and data protection issues). Most popular social media platforms offer tools to facilitate the use of these targeting methods. These tools are likely indispensable for some businesses, and despite their risks, they can be deployed safely once the risks are understood.

Some Platform-Provided Targeted Marketing Tools May Implicate Privacy Issues
Google recently announced[1] that it will not be deprecating third-party cookies, a reversal of its previous plan to phase out these cookies. “Cookies” are small pieces of code that track users’ activity online. “First-party” cookies often are necessary for a website to function properly. “Third-party” cookies are shared across websites and companies, essentially tracking users’ browsing behaviors to help advertisers target their relevant audiences.

In early 2020, Google announced[2] that it would phase out third-party cookies, which are associated with privacy concerns because they track individual web-browsing activity and then share that data with other parties. Google’s 2020 announcement was a response to those concerns.

Fast forward about four and a half years, and Google reversed course. In the interim, Google had introduced alternatives to third-party cookies, and companies had developed their own, often extensive, proprietary databases[3] of information about their customers. However, none of these methods satisfied the advertising industry, and Google decided to keep third-party cookies. To address privacy concerns, Google said it would “introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time.”[4]

Many large platforms in addition to Google offer targeted advertising services via the use of third-party cookies. Can businesses use these services without any legal ramifications? Does the possibility for consumers to opt out mean that a business cannot be liable for privacy violations if it relies on third-party cookies? The relevant cases have held that individual businesses must still exercise care despite any opt-out and other built-in tools offered by these platforms.

Two recent cases from the Southern District of New York[5] held that individual businesses that used “Meta Pixels” to track consumers may be liable for violations of the Video Privacy Protection Act (VPPA), 18 U.S.C. § 2710. Facebook defines a Meta Pixel[6] as a “piece of code … that allows you to … make sure your ads are shown to the right people … drive more sales, [and] measure the results of your ads.” In other words, a Meta Pixel is essentially a cookie provided by Meta/Facebook that helps businesses target ads to relevant audiences.

As demonstrated by those two recent cases, businesses cannot rely on a platform’s program to ensure that their ad-targeting efforts do not violate the law. These violations may expose companies to enormous damages: VPPA cases often are brought as class actions, and even a single violation may carry damages of $2,500 or more.

In those New York cases, the consumers had not consented to sharing their information, but, even if they had, the consent may not have sufficed. Internet contracts, often included in a website’s Terms of Service, are notoriously difficult to enforce. For example, in one of those S.D.N.Y. cases, the court found that the arbitration clause to which subscribers had agreed was not effective to compel arbitration in lieu of litigation in that matter. In addition, the type of consent and the information that websites must provide before sharing information can be extensive and complicated, as recently reported by my colleagues.

Another issue that companies may encounter when relying on widespread cookie offerings is whether the mode (as opposed to the content) of data transfer complies with all relevant privacy laws. For example, the Swedish Data Protection Agency recently found8 that a company had violated the European Union’s General Data Protection Regulation (GDPR) because the method of transfer of data was not compliant. In that case, some of the consumers had consented, but some were never asked for consent.

Some Platform-Provided Targeted Marketing Tools May Implicate False or Misleading Advertising Issues
Another method that businesses use to target their advertising to relevant consumers is to hire social media influencers to endorse their products. These partnerships between brands and influencers can be beneficial to both parties and to the audiences who are guided toward the products they want. These partnerships are also subject to pitfalls, including reputational pitfalls (a controversial statement by the influencer may negatively impact the reputation of the brand) and legal pitfalls.

The Federal Trade Commission (FTC) has issued guidelines, “Guides Concerning the Use of Endorsements and Testimonials,”9 in advertising, and published a brochure for influencers, “Disclosures 101 for Social Media Influencers,”10 that tells influencers how they must apply the guidelines to avoid liability for false or misleading advertising when they endorse products. A key requirement is that influencers must “make it obvious” when they have a “material connection” with the brand. In other words, the influencer must disclose that it is being paid (or gains other, non-monetary benefits) to make the endorsement.

Many social media platforms make it easy to disclose a material connection between a brand and an influencer – a built-in function allows influencers to simply click a check mark to disclose the existence of a material connection with respect to a particular video endorsement. The platform then displays a hashtag or other notification along with the video that says “#sponsored” or something similar. However, influencers cannot rely on these built-in notifications. The FTC brochure clearly states: “Don’t assume that a platform’s disclosure tool is good enough, but consider using it in addition to your own, good disclosure.”

Brands that sponsor influencer endorsements may easily find themselves on the hook if the influencer does not properly disclose that the influencer and the brand are materially connected. In some cases, the contract between the brand and influencer may pass any risk to the brand. In others, the influencer may be judgment-proof, or the brand is an easier target for enforcement. And, unsurprisingly, the FTC has sent warning letters11 threatening high penalties to brands for influencer violations.

The Platform-Provided Tools May Be Deployed Safely
Despite the risks involved in some platform-provided tools for targeted marketing, these tools are very useful, and businesses should continue to take advantage of them. However, businesses cannot rely solely on these widely available and easy-to-use tools; they must also ensure that their own policies and compliance programs protect them from liability.

The same warning about widely available social media tools, and the same lessons for a business seeking to protect itself, apply to other activities online, such as using platforms’ built-in “reposting” function (which may implicate intellectual property infringement issues) and using out-of-the-box website builders (which may implicate issues under the Americans with Disabilities Act). A good first step for a business to ensure legal compliance online is to understand the risks. An attorney experienced in internet law, privacy law, and social media law can help.

_________________________________________________________________________________________________________________

1 https://privacysandbox.com/news/privacy-sandbox-update/

2 https://blog.chromium.org/2020/01/building-more-private-web-path-towards.html

3 Businesses should ensure that they protect these databases as trade secrets. See my recent Insights at https://www.wilsonelser.com/sarah-fink/publications/relying-on-noncompete-clauses-may-not-be-the-best-defense-of-proprietary-data-when-employees-depart and https://www.wilsonelser.com/sarah-fink/publications/a-practical-approach-to-preserving-proprietary-competitive-data-before-and-after-a-hack

4 https://privacysandbox.com/news/privacy-sandbox-update/

5 Aldana v. GameStop, Inc., 2024 U.S. Dist. LEXIS 29496 (S.D.N.Y. Feb. 21, 2024); Collins v. Pearson Educ., Inc., 2024 U.S. Dist. LEXIS 36214 (S.D.N.Y. Mar. 1, 2024)

6 https://www.facebook.com/business/help/742478679120153?id=1205376682832142

7 https://www.wilsonelser.com/jana-s-farmer/publications/new-york-state-attorney-general-issues-guidance-on-privacy-controls-and-web-tracking-technologies

8 See, e.g., https://www.dataguidance.com/news/sweden-imy-fines-avanza-bank-sek-15m-unlawful-transfer

9 https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B/part-255

10 https://www.ftc.gov/system/files/documents/plain-language/1001a-influencer-guide-508_1.pd

11 https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-american-bev.pdf
https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-canadian-sugar.pdf

FCC’s New Notice of Inquiry – Is This Big Brother’s Origin Story?

The FCC’s recent Notice of Proposed Rulemaking and Notice of Inquiry was released on August 8, 2024. While the proposed Rule is, deservedly, getting the most press, it’s important to pay attention to the Notice of Inquiry.

The part that concerns me is the FCC’s interest in the “development and availability of technologies on either the device or network level that can: 1) detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content; 2) alert consumers to the potential that such voice calls are fraudulent and/or AI-generated; and 3) potentially block future voice calls that can be identified as similar AI-generated or otherwise fraudulent voice calls based on analytics.” (emphasis mine)

The FCC also wants to know “what steps can the Commission take to encourage the development and deployment of these technologies…”

The FCC does note there are “significant privacy risks, insofar as they appear to rely on analysis and processing of the content of calls.” The FCC also wants comments on “what protections exist for non-malicious callers who have a legitimate privacy interest in not having the contents of their calls collected and processed by unknown third parties?”

So, the Federal Communications Commission wants to monitor the CONTENT of voice calls. In real-time. On your device.

That’s not a problem for anyone else?

Sure, robocalls are bad. There are scams on robocalls.

But, are robocalls so bad that we need real-time monitoring of voice call content?

At what point did we throw the Fourth Amendment out the window, and to prevent what? Phone calls??

The basic premise of the Fourth Amendment is “to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” I’m not sure how we get more arbitrary than “this incoming call is a fraud” versus “this incoming call is not a fraud”.

So, maybe you consent to this real-time monitoring. Sure, ok. But, can you actually give informed consent to what would happen with this monitoring?

Let me give you three examples of “pre-recorded calls” that the real-time monitoring could overhear to determine if the “voice calls are fraudulent and/or AI-generated”:

  1. Your phone rings. It’s a prerecorded call from Planned Parenthood confirming your appointment for tomorrow.
  2. Your phone rings. It’s an artificial voice recording from your lawyer’s office telling you that your criminal trial is tomorrow.
  3. Your phone rings. It’s the local jewelry store saying your ring is repaired and ready to be picked up.

Those are basic examples, but for someone to “detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content,” those calls have to be monitored in real time. And stored somewhere. Maybe on your device. Maybe by a third party in their cloud.

Maybe you trust Apple with that info. But do you trust whoever comes up with the fraud-monitoring software that would harvest that data? How do you know you should trust that party?

Or you trust Google. Surely, Google wouldn’t use your personal data. Surely, they would not use your phone call history to sell ads.

And that becomes data a third party can use. For ads. For political messaging. For profiling.

Yes, this is extremely conspiratorial. But, that doesn’t mean your data is not valuable. And where there is valuable data, there are people willing to exploit it.

Robocalls are a problem. And there are some legitimate businesses doing great things with fraud detection monitoring. But, a real-time monitoring edict from the government is not the solution. As an industry, we can be smarter on how we handle this.

U.S. Sues TikTok for Children’s Online Privacy Protection Act (COPPA) Violations

On Friday, August 2, 2024, the United States sued ByteDance, TikTok, and its affiliates for violating the Children’s Online Privacy Protection Act of 1998 (“COPPA”) and the Children’s Online Privacy Protection Rule (“COPPA Rule”). In its complaint, the Department of Justice alleges TikTok collected, stored, and processed vast amounts of data from millions of child users of its popular social media app.

In June, the FTC voted to refer the matter to the DOJ, stating that it had determined there was reason to believe TikTok (f.k.a. Musical.ly, Inc.) had violated a 2019 FTC consent order and that the agency had also uncovered additional potential COPPA and FTC Act violations. The lawsuit, filed in the Central District of California, alleges that TikTok is directed to children under age 13, that TikTok has permitted children to evade its age gate, that TikTok has collected data from children without first notifying their parents and obtaining verifiable parental consent, that TikTok has failed to honor parents’ requests to delete their children’s accounts and information, and that TikTok has failed to delete the accounts and information of users the company knows are children. The complaint also alleges that TikTok failed to comply with COPPA even for accounts in the platform’s “Kids Mode” and that TikTok improperly amassed profiles on Kids Mode users. The complaint seeks civil penalties of up to $51,744 per violation per day from January 10, 2024, to the present for the improper collection of children’s data, as well as permanent injunctive relief to prevent future violations of the COPPA Rule.

The lawsuit comes on the heels of the U.S. Senate passage this week of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA) by a 91-3 bipartisan vote. It is unknown whether the House will take up the bills when it returns from recess in September.

Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, the statement’s release suggests that the top competition enforcers in all three jurisdictions are focusing on AI’s effects on competition in general and competition within the AI ecosystem—and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors and competition within the AI ecosystem itself. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated April 2024 report on AI. In June 2024, French competition authorities released a report on Generative AI, which focused on competition issues related to AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers described the recent evolution of AI technologies, including foundation models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and benefit the lives of citizens around the world.

But with any technological inflection point, which may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.” This concentration may stifle competition, disrupt innovation, or be exploited by certain firms.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or co-opt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement also acknowledges that other competition and consumer risks are associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and fixing. Consumers, too, may be harmed by AI. As the CMA, DOJ, and FTC have consumer protection authority, these authorities will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) for content creators, “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address specific enforcement tools and actions the enforcers may take, the statement’s release suggests that the enforcers may all be gearing up to take action related to AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.

Struck by CrowdStrike Outage? Your Business Loss Could Be Covered

Over the last week, organizations around the globe have struggled to bring operations back online following a botched software update from cybersecurity company CrowdStrike. As the dust settles, affected organizations should consider whether they are insured against losses or claims arising from the outage. The Wall Street Journal has already reported that insurers are bracing for claims arising from the outage and that, according to one cyber insurance broker, “[t]he insurance world was expecting to cover situations like this.” A cyber analytics firm has estimated that insured losses following the outage could reach $1.5 billion.

Your cyber insurance policy may cover losses resulting from the CrowdStrike outage. These policies often include “business interruption” or “contingent business interruption” insurance that protects against disruptions from a covered loss. Business interruption insurance covers losses from disruptions to your own operations. This insurance may cover losses if the outage affected your own computer systems. Contingent business interruption insurance, on the other hand, covers your losses when another entity’s operations are disrupted. This coverage could apply if the outage affected a supplier or cloud service provider that your organization relies on.

Cyber policies often vary in the precise risks they cover. Evaluating potential coverage requires comparing your losses to the policy’s coverage. Cyber policies also include limitations and exclusions on coverage. For example, many cyber policies contain a “waiting period” that requires affected systems to be disrupted for a certain period before the policy provides coverage. These waiting periods can be as short as one hour or as long as several days.

Other commercial insurance policies could also provide coverage depending on the loss or claim and the policy endorsements and exclusions. For example, your organization may have procured liability insurance that protects against third-party claims or litigation. This insurance could protect you from claims made by customers or other businesses related to the outage.

If your operations have been impacted by the CrowdStrike outage, there are a few steps you can take now to maximize your potential insurance recovery.

First, read your policies to determine the available coverage. As you review your policies, pay careful attention to policy limits, endorsements, and exclusions. A policy endorsement may significantly expand policy coverage, even though it is located long after the relevant policy section. Keep in mind that courts generally interpret coverage provisions in a policy generously in favor of an insured and interpret exclusions or limitations narrowly against an insurance company.

Second, track your losses. The outage likely cost your organization lost profits or extra expenses. Common business interruption losses may also include overtime expenses to remedy the outage, expenses to hire third-party consultants or technicians, and penalties arising from the outage’s disruption to your operations. Whatever the nature of your loss, tracking and documenting your loss now will help you secure a full insurance recovery later.

Third, carefully review and comply with your policy’s notice requirements. If you have experienced a loss or a claim, you should immediately notify your insurer. Even if you are only aware of a potential claim, your policy may require you to provide notice to your insurer of the events that could ultimately lead to a claim or loss. Some notice requirements in cyber policies can be quite short. After providing notice, you may receive a coverage response or “reservation of rights” from your insurer. Be cautious in taking any unfavorable response at face value. Particularly in cases of widespread loss, an insurer’s initial coverage evaluation may not accurately reflect the available coverage.

If you are unsure of your policy’s notice obligations or available coverage, or if you suspect your insurer is not affording your organization the coverage that you purchased, coverage counsel can assist your organization in securing coverage. Above all, don’t hesitate to secure the coverage to which you are entitled.


FTC/FDA Send Letters to THC Edibles Companies Warning of Risks to Children

Earlier this week, the Federal Trade Commission (FTC) and Food and Drug Administration (FDA) sent cease-and-desist letters to several companies warning them that their products, which were marketed to mimic popular children’s snacks, ran the risk of unintended consumption of Delta-8 THC by children. In addition to the FDA’s concerns regarding marketing an unsafe food additive, the agencies warned that imitating non-THC-containing food products often consumed by children through the use of advertising or labeling is misleading under Section 5 of the FTC Act. The FTC noted that “preventing practices that present unwarranted health and safety risks, particularly to children, is one of the Commission’s highest priorities.”

The FTC’s focus on these particular companies and products shouldn’t come as a surprise. One such company advertises edible products labeled “Stoney Ranchers Hard Candy,” mimicking the common Jolly Ranchers candy, and “Trips Ahoy,” closely resembling the well-known “Chips Ahoy.” Another company advertises a product closely resembling a Nerds Rope candy, with similar background coloring and copy-cats of the Nerds logo and mascot. This is not the first time the FTC has warned companies about the dangers of advertising products containing THC in a way that could mislead consumers, particularly minors. In July 2023, the FTC sent cease-and-desist letters to six organizations for the same violations alleged this week – those companies copied popular snack brands such as Doritos and Cheetos, mimicking the brands’ color, mascot, font, bag style, and more.

This batch of warning letters orders the companies to stop marketing the edibles immediately, to review their products for compliance, and to inform the FTC within 15 days of the specific actions taken to address the FTC’s concerns. The companies also are required to report to the FDA on corrective actions taken.

The Economic Benefits of AI in Civil Defense Litigation

The integration of artificial intelligence (AI) into various industries has revolutionized the way we approach complex problems, and the field of civil defense litigation is no exception. As lawyers and legal professionals navigate the complex and often cumbersome landscape of civil defense, AI can offer transformative assistance that not only enhances efficiency but also significantly reduces client costs. In this blog, we’ll explore the economic savings associated with employing AI in civil defense litigation.

Streamlining Document Review
One of the most labor-intensive and costly aspects of civil defense litigation is the review of vast amounts of discovery documents. Traditionally, lawyers and legal teams spend countless hours sifting through documents to identify and categorize relevant information, a process that is both time-consuming and costly. AI-powered tools, such as large language models (LLMs), can automate and expedite this process.

By using AI to assist in closed-system document review, law firms can drastically cut down on the number of billable hours required for this task. AI assistance can quickly and accurately identify relevant documents, flagging pertinent information and reducing the risk of material oversight. This not only speeds up the review process and allows a legal team to concentrate on analysis rather than document digest and chronology, but also significantly lowers the overall cost of litigation to the client.

By way of example, a case in which 50,000 medical treatment records and bills must be analyzed, put in chronological order, and reviewed for patient complaints, diagnoses, treatment, medical history, and prescription medicine use could take a legal team weeks to complete. With AI assistance, the preliminary groundwork, such as organizing documents, chronologizing complaints and treatments, and compiling prescription drug lists, can be completed in a matter of minutes, allowing the lawyer to spend her time on verification, analysis, and defense development and strategy rather than on information translation and time-consuming data organization.
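As an illustration only, the preliminary groundwork described above largely amounts to sorting and de-duplicating structured data once it has been extracted from the records. A minimal Python sketch, assuming the records have already been parsed into simple fields (the field names and entries here are hypothetical; in practice an LLM or other extraction tooling would produce them from the source documents):

```python
# Sketch: chronologize parsed treatment records and compile a medication list.
# All record contents below are hypothetical.
from datetime import date

records = [
    {"date": date(2023, 5, 2), "complaint": "back pain", "medication": "ibuprofen"},
    {"date": date(2023, 1, 15), "complaint": "headache", "medication": "sumatriptan"},
    {"date": date(2023, 5, 2), "complaint": "follow-up", "medication": "ibuprofen"},
]

# Chronologize: sort records by treatment date.
chronology = sorted(records, key=lambda r: r["date"])

# Compile a de-duplicated prescription list, preserving first-seen order.
medications = list(dict.fromkeys(r["medication"] for r in chronology))

print([r["date"].isoformat() for r in chronology])
print(medications)
```

The lawyer's time then goes to verifying and analyzing the organized output rather than building the chronology by hand.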

Enhanced Legal Research
Legal research is another growing area where AI can yield substantial economic benefits. Traditional legal research methods involve lawyers poring over case law, statutes, and legal precedents to find the cases that best fit the facts and legal issues at hand. This process can be incredibly time-intensive, driving up costs for clients. Closed AI-powered legal research platforms can rapidly analyze vast databases of verified legal precedent and information, providing attorneys with precise and relevant case law in a fraction of the time. Rather than conducting time-consuming, exhaustive searches for the right cases to analyze, a lawyer can now streamline the process with AI assistance by flagging on-point cases for verification, review, analysis, and argument development.

The efficiency of AI-driven legal research can translate into significant cost savings for the client. Attorneys can now spend more time on argument development and drafting, rather than being bogged down in manual research. For clients, this means lower legal fees and faster resolution of cases, both of which contribute to overall economic savings.

Predictive Analytics and Case Strategy
AI’s evolving ability to analyze historical legal data and identify patterns is particularly valuable in the realm of predictive analytics. In civil defense litigation, AI can assist in predicting the likely outcomes of cases based on jurisdiction-specific verdicts and settlements, helping attorneys formulate more effective strategies. By sharpening the focus on probable outcomes, legal teams can make informed decisions about whether to settle a case or proceed to trial. Such predictive analytics allow clients to better manage their risk, thereby reducing the financial burden on defendants.

Automating Routine Tasks
Many routine tasks in civil defense litigation, such as preparation of document and pleading chronologies, scheduling, and case management, can now be automated using AI. Such automation reduces the need for manual intervention, allowing legal professionals to focus on more complex and value-added case tasks. By automating such routine tasks, law firms can operate more efficiently, reducing overhead costs and improving their bottom line. Clients benefit from quicker turnaround times and lower legal fees, resulting in overall economic savings.

Conclusion
The economic savings for clients associated with using AI in civil defense litigation can be substantial. From streamlining document review and enhancing legal research to automating routine tasks and reducing discovery costs, AI offers a powerful tool for improving efficiency and lowering case costs. As the legal industry continues to embrace technological advancements, the adoption of AI in civil defense litigation is poised to become a standard practice, benefiting both law firms and their clients economically. The future of civil defense litigation is undoubtedly intertwined with AI, promising a more cost-effective and efficient approach to resolving legal disputes.