The Evolution of AI in Healthcare: Current Trends and Legal Considerations

Artificial intelligence (AI) is transforming the healthcare landscape, offering innovative solutions to age-old challenges. From diagnostics to enhanced patient care, AI’s influence is pervasive, and seems destined to reshape how healthcare is delivered and managed. However, the rapid integration of AI technologies brings with it a complex web of legal and regulatory considerations that physicians must navigate.

It appears inevitable that AI will ultimately render current modalities, perhaps even today’s “gold standard” clinical strategies, obsolete. Currently accepted treatment methodologies will change, hopefully for the benefit of patients. In lockstep, insurance companies and payors are poised to use AI to advance their interests. Indeed, the “cat-and-mouse” battle between physician and overseer will not only persist but will intensify as these technologies intrude further into physician-patient encounters.

  1. Current Trends in AI Applications in Healthcare

As AI continues to evolve, the healthcare sector is witnessing a surge in private equity investments and start-ups entering the AI space. These ventures are driving innovation across a wide range of applications, from tools that listen in on patient encounters to ensure optimal outcomes and suggest clinical plans, to sophisticated systems that gather and analyze massive datasets contained in electronic medical records. By identifying trends and detecting imperceptible signs of disease through the analysis of audio and visual depictions of patients, these AI-driven solutions are poised to revolutionize clinical care. The involvement of private equity and start-ups is accelerating the development and deployment of these technologies, pushing the boundaries of what AI can achieve in healthcare while also raising new questions about the integration of these powerful tools into existing medical practices.

Diagnostics and Predictive Analytics:

AI-powered diagnostic tools are becoming increasingly sophisticated, capable of analyzing medical images, genetic data, and electronic health records (EHRs) to identify patterns that may elude human practitioners. Machine learning algorithms, for instance, can detect early signs of cancer, heart disease, and neurological disorders with remarkable accuracy. Predictive analytics, another AI-driven trend, is helping clinicians forecast patient outcomes, enabling more personalized treatment plans.

 

Telemedicine and Remote Patient Monitoring:

The COVID-19 pandemic accelerated the adoption of telemedicine, and AI is playing a crucial role in enhancing these services. AI-driven chatbots and virtual assistants are set to engage with patients by answering queries and triaging symptoms. Additionally, AI is used in remote and real-time patient monitoring systems to track vital signs and alert healthcare providers to potential health issues before they escalate.

 

Drug Discovery and Development:

AI is revolutionizing drug discovery by speeding up the identification of potential drug candidates and predicting their success in clinical trials. Pharmaceutical companies are pouring billions of dollars in developing AI-driven tools to model complex biological processes and simulate the effects of drugs on these processes, significantly reducing the time and cost associated with bringing new medications to market.

Administrative Automation:

Beyond direct patient care, AI is streamlining administrative tasks in healthcare settings. From automating billing processes to managing EHRs and scheduling appointments, AI is reducing the burden on healthcare staff, allowing them to focus more on patient care. This trend also helps healthcare organizations reduce operational costs and improve efficiency.

AI in Mental Health:

AI applications in mental health are gaining traction, with tools like sentiment analysis, an application of natural language processing, being used to assess a patient’s mental state. These tools can analyze text or speech to detect signs of depression, anxiety, or other mental health conditions, facilitating earlier interventions.
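To make the idea concrete, sentiment analysis at its simplest reduces to scoring language for markers of mood. The sketch below is a deliberately naive, keyword-based toy, not a clinical tool; real systems rely on trained language models, and every identifier and word list here is invented for illustration:

```python
# Toy illustration of sentiment scoring. Real clinical NLP uses trained
# language models that handle context and negation, not keyword lists.
NEGATIVE_MARKERS = {"hopeless", "worthless", "exhausted", "anxious", "alone"}
POSITIVE_MARKERS = {"hopeful", "rested", "calm", "supported", "better"}

def sentiment_score(text: str) -> int:
    """Positive score suggests positive affect; negative suggests distress."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE_MARKERS) - len(words & NEGATIVE_MARKERS)
```

In a monitoring workflow, a strongly negative score on a patient message might flag the conversation for clinician review, which is the "earlier intervention" the text describes.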

  2. Legal and Regulatory Considerations

As AI technologies become more deeply embedded in healthcare, they intersect with legal and regulatory frameworks designed to protect patient safety, privacy, and rights.

Data Privacy and Security:

AI systems rely heavily on vast amounts of data, often sourced from patient records. The use of this data must comply with privacy regulations established by the Health Insurance Portability and Accountability Act (HIPAA), which mandates stringent safeguards to protect patient information. Physicians and AI developers must ensure that AI systems are designed with robust security measures to prevent data breaches, unauthorized access, and other cyber threats.

Liability and Accountability:

The use of AI in clinical decision-making raises questions about liability. If an AI system provides incorrect information or misdiagnoses a condition, determining who is responsible—the physician, the AI developer, or the institution—can be complex. As AI systems become more autonomous, the traditional notions of liability may need to evolve, potentially leading to new legal precedents and liability insurance models.

These notions raise the questions:

  • Will physicians trust the “judgment” of an AI platform making a diagnosis or interpreting a test result?
  • Will the utilization of AI platforms cause physicians to become too heavily reliant on these technologies, forgoing their own professional human judgment?

Surely, plaintiffs’ malpractice attorneys will find a way to fault the physician either way.

Insurance Companies and Payors:

Another emerging concern is the likelihood that insurance companies and payors, including Medicare/Medicaid, will develop and mandate the use of their proprietary AI systems to oversee patient care, ensuring it aligns with their rules on proper and efficient care. These AI systems, designed primarily to optimize cost-effectiveness from the insurer’s perspective, could potentially undermine the physician’s autonomy and the quality of patient care. By prioritizing compliance with insurer guidelines over individualized patient needs, these AI tools could lead to suboptimal outcomes for patients. Moreover, insurance companies may make the use of their AI systems a prerequisite for physicians to maintain or obtain enrollment on their provider panels, further limiting physicians’ ability to exercise independent clinical judgment and potentially restricting patient access to care that is truly personalized and appropriate.

Licensure and Misconduct Concerns in New York State:

Physicians utilizing AI in their practice must be particularly mindful of licensure and misconduct issues, especially under the jurisdiction of the Office of Professional Medical Conduct (OPMC) in New York. The OPMC is responsible for monitoring and disciplining physicians, ensuring that they adhere to medical standards. As AI becomes more integrated into clinical practice, physicians could face OPMC scrutiny if AI-related errors lead to patient harm, or if there is a perceived over-reliance on AI at the expense of sound clinical judgment. The potential for AI to contribute to diagnostic or treatment decisions underscores the need for physicians to maintain ultimate responsibility and ensure that AI is used to support, rather than replace, their professional expertise.

Conclusion

AI has the potential to revolutionize healthcare, but its integration must be approached with careful consideration of legal and ethical implications. By navigating these challenges thoughtfully, the healthcare industry can ensure that AI contributes to better patient outcomes, improved efficiency, and equitable access to care. The future of AI in healthcare looks promising, with ongoing advancements in technology and regulatory frameworks adapting to these changes. Healthcare professionals, policymakers, and AI developers must continue to engage in dialogue to shape this future responsibly.

APPARENTLY NOT AN INDEPENDENT CONTRACTOR: Summary Judgment Denied Because Third Party Vendor May Have Had Apparent Authority To Make Calls Without Consent

Hi TCPAWorld! The Baroness here and I have a good case today.

Dickson v. Direct Energy, LP, et al., No. 5:18-CV-00182-JRA, 2024 WL 4416856 (N.D. Ohio Oct. 4, 2024).

Let’s dive in.

Background

In this case, the plaintiff Dickson alleges the defendant Direct Energy sent him ringless voicemails (RVMs) in 2017 without consent.

Direct Energy filed a motion for summary judgment arguing that it cannot be held liable under the TCPA because it did not directly make the calls to Dickson (a third-party vendor did) and it cannot be held vicariously liable for the calls under agency principles.

More specifically, Direct Energy argues that Total Marketing Concepts (TMC) was an independent agent and was not acting with actual or apparent authority when it violated the TCPA and Direct Energy did not ratify the illegal acts of TMC.

Law

For those of you not familiar, a motion for summary judgment is granted when there is no genuine dispute as to any material fact and the movant is entitled to judgment as a matter of law.

Under the TCPA, a seller can be held either directly or vicariously liable for violations of the TCPA.

As noted above, Direct Energy did not directly deliver any RVMs to Dickson. So it cannot be directly liable for the calls. Dickson instead seeks to hold Direct Energy vicariously liable for the acts of TMC and TMC’s subvendors.

Let’s first look at the principal/agent relationship.

Direct Energy primarily argued that TMC was NOT its agent because of the terms of their agreement. Specifically, Direct Energy identified TMC as an “independent contractor.” Moreover, TMC was “expressly instructed to send RVMs only with TCPA-compliant opt-in consent.”

Importantly, however, whether an agency relationship exists is based on an assessment of the facts of the relationship and not on how the parties define their relationship.

Listen up folks—contractual terms disclaiming agency will not cut it!

While Direct Energy and TMC did have a provision in their contract which expressly disclaimed any agency relationship, the Court highlighted that the parties entered into an amended agreement which expressly authorized TMC to (among other things) close sales on Direct Energy’s behalf and thereby bind Direct Energy in contracts with customers. In other words, Direct Energy authorized TMC to enter into agreements on its behalf.

The Court also found Direct Energy exerted a high level of control over TMC:

  • Direct Energy had the ability to audit TMC’s records to ensure compliance with its contractual obligations
  • Direct Energy could audit TMC’s subcontractors in the same manner
  • Direct Energy had access to TMC facilities to ensure compliance
  • Direct Energy had the ability to terminate the contract with or without cause
  • Direct Energy authorized TMC to telemarket on its behalf using the Direct Energy trade name as if Direct Energy was making the telemarketing call

Therefore, the Court found Dickson produced evidence from which a reasonable jury could find that Direct Energy exerted such a level of control over TMC that there was a principal/agent relationship, despite their contract expressly providing otherwise.

ACTUAL AUTHORITY

Actual authority exists when a principal explicitly grants permission to an agent to act on their behalf, whether through express or implied means.

Express authority

Pursuant to the Teleservices Agreement, TMC was responsible for complying with the TCPA. Thus, there was no evidence that TMC had express actual authority to contact individuals who had not given consent.

Implied authority

Dickson argued that Direct Energy nonetheless led TMC to reasonably believe it should make telemarketing calls that violate the TCPA. However, the Court found that TMC’s authority was expressly limited to opt-in leads. So, Dickson failed to demonstrate a genuine issue of material fact showing that TMC acted with actual authority—either express or implied—when it contacted potential customers who had not opted in to receiving such calls.

APPARENT AUTHORITY

Apparent authority arises when a principal’s conduct leads a third party to reasonably believe that an agent has the authority to act on the principal’s behalf, even if such authority has not been explicitly granted.

Here’s where it gets interesting.

Dickson presented evidence that Direct Energy received several thousand complaints regarding the RVMs but did not stop the conduct.

That’s a lot of complaints.

Moreover, Direct Energy authorized TMC to use its trade name and approved the scripts. Thus, Dickson argued Direct Energy allowed third-party recipients of the RVMs to reasonably believe the RVMs were from Direct Energy.

And even though TMC used other third-party telephony services, this was expressly authorized by the agreement between Direct Energy and TMC.

Therefore, the Court found that Dickson demonstrated that Direct Energy authorized and instructed TMC to use its trade name in its RVMs, approved the scripts used by TMC, and knew or should have known of TMC’s improper conduct yet did not take action to prevent that conduct from continuing.

As such, the Court found a genuine issue of material fact existed as to whether TMC acted with apparent authority when it contacted potential customers who had not opted in to receiving such calls.

RATIFICATION

Ratification occurs when an agent acts for the principal’s benefit and the principal does not repudiate the agent’s actions.

A plaintiff must present some evidence that a principal benefitted from the alleged unlawful conduct of its purported agent to hold the principal liable for the acts of the agent under the theory of ratification.

Here, Dickson failed to produce evidence that Direct Energy received any benefit from TMC’s unlawful telemarketing acts. For example, Dickson produced no evidence of any contracts that Direct Energy secured as a result of TMC contacting potential consumers who had not given opt-in consent. Importantly, the Court stated “[p]ure conjecture that Direct Energy must have benefitted in some way because of the volume of calls made by TMC on its behalf is simply not enough to survive summary judgment.”

Therefore, the Court found Dickson failed to demonstrate the existence of a material fact as to whether Direct Energy ratified TMC’s violations of the TCPA.

In light of the above, the Court recommended denying Direct Energy’s motion for summary judgment. Although there was no genuine issue of material fact related to actual authority and ratification, the Court determined that a genuine issue of material fact does exist concerning whether TMC acted with apparent authority.

This case highlights the complexities of agency relationships in TCPA cases and serves as a reminder for companies: mere contractual disclaimers of agency will not suffice. Courts can still hold you vicariously liable for the actions of third parties acting on your behalf! Choose the companies you are working with wisely.

Are You Eligible for Passport Renewal Online?

In good news, the State Department has announced the roll-out of its new online passport renewal system. Eligible individuals can renew their 10-year passports online without having to mail in any documentation.

Be sure to plan ahead if you are using the online service because only routine service is available – no expedited processing.

Although applicants will not be required to turn in their “old” passport, that passport will be cancelled after the renewal application is submitted and will no longer be valid for international travel.

Eligibility requirements for online processing:

  • The old passport is a 10-year passport, and the applicant is at least 25 years of age;
  • The old passport was issued between 2009 and 2015, or more than 9 years but less than 15 years from the date the new application is submitted;
  • There is no request for change of name, gender, or place or date of birth;
  • The applicant is not travelling for at least 8 weeks from the application submission date;
  • The applicant is seeking a regular (tourist) passport, not a special issuance passport (such as diplomatic, official, or service [gray cover] passports);
  • The applicant lives in the United States, either in a state or territory (passports cannot be renewed online from a foreign country or using Army Post Office [APO] or Fleet Post Office [FPO]); and
  • The applicant is in possession of their current passport and it is not damaged or mutilated and it has not been reported as lost or stolen.
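The eligibility criteria above amount to a conjunction of yes/no checks: fail any one, and online renewal is unavailable. As an illustrative, unofficial sketch only, the rules can be modeled in Python; the field names and structure below are invented for this example and do not reflect any State Department system:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    # Fields mirror the bullet list above; this is an illustrative model,
    # not an official State Department data structure.
    age: int
    passport_validity_years: int        # 10 for a 10-year passport
    years_since_issuance: float
    name_gender_or_birth_change: bool   # any requested change of name, gender,
                                        # or place/date of birth
    weeks_until_travel: float
    passport_type: str                  # "regular" (tourist) vs. special issuance
    lives_in_us_state_or_territory: bool
    has_undamaged_passport: bool        # in possession, not damaged/lost/stolen

def eligible_for_online_renewal(a: Applicant) -> bool:
    """True only if every criterion from the published list is met."""
    return (
        a.passport_validity_years == 10 and a.age >= 25
        and 9 < a.years_since_issuance < 15
        and not a.name_gender_or_birth_change
        and a.weeks_until_travel >= 8
        and a.passport_type == "regular"
        and a.lives_in_us_state_or_territory
        and a.has_undamaged_passport
    )
```

The all-or-nothing structure is the practical takeaway: an applicant who is, say, 24 years old or traveling within 8 weeks must fall back to renewal by mail or in person.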

To renew online, the applicant must sign in or create an account at MyTravelGov (state.gov) and follow the step-by-step directions. The applicant will have to:

  • Provide information about the passport they want to renew;
  • Choose whether to apply for a passport book or passport card or both;
  • Enter proposed travel dates;
  • Upload a digital photo;
  • “Sign” the application; and
  • Make the required payment by credit or debit card.

Applicants can enroll to receive email updates regarding their applications.

Those not eligible to apply online may renew by mail if they meet the eligibility criteria. Those not eligible to renew by mail (such as children) must renew in person.

The State Department estimates that 5 million people will be eligible to use this new online service annually. Last year, a record 24 million passports were issued. The State Department hopes to continue to expand the online service to further optimize the passport renewal process.

What Digital Advertisers and Influencers Need to Know About the FTC Final Rule Banning Fake Consumer Reviews and Testimonials

As previously blogged about here, following notices of proposed rulemaking in 2022 and 2023, on August 22, 2024, the Federal Trade Commission finalized a rule that will impose monetary civil penalties for false and misleading consumer reviews and testimonials.  Those covered by the Final Rule, including, but not limited to, advertisers, marketers, manufacturers, brands, various intermediaries, and businesses that promote and assist such entities, should consult with an experienced FTC compliance lawyer and begin to prepare for its enforcement immediately.

What Does the FTC Final Rule Banning Fake Consumer Reviews and Testimonials Cover?

The FTC Final Rule Banning Fake Consumer Reviews and Testimonials formalizes the prohibition of various practices relating to the use of consumer reviews and testimonials and sets forth which practices may be considered unfair or deceptive pursuant to the FTC Act.

In short, the Final Rule is intended to foster fair competition and protect consumers’ purchasing decisions.  In general, the Final Rule covers: (i) the purchase, sale or procuring of fake reviews or testimonials (for example and without limitation, a reviewer that does not exist, a reviewer that did not actually use or possess experience with the product or service, or a review that misrepresents actual experience); (ii) providing compensation or other incentives in exchange for reviews that express a particular sentiment; (iii) facilitating “insider” consumer reviews and testimonials that do not contain a clear and conspicuous disclosure of the relationship; (iv) utilizing websites that appear to be independent review websites when, in fact, they are controlled by the business whose products or services are reviewed; (v) suppressing reviews, either by intimidation or by merely publishing certain reviews or ratings (for example and without limitation, only positive reviews or ratings); and (vi) misusing fake indicators of social media influence.

The Final Rule also includes some important definitions.  For example, the Final Rule defines “consumer reviews” as reviews published to a website or platform dedicated (in whole or in part) to receiving and displaying consumer evaluations, including, for example, via reviews or ratings.

The Final Rule defines “consumer review hosting” as “providing the technological means by which a website or platform enables consumers to see or hear the consumer reviews that consumers have submitted to the website or platform.”  In simple terms, this means that if an employee posts an unsolicited review on a corporate website concerning a product/service that they have experience using, it may not necessarily be considered deceptive as long as the material connection is disclosed.

“Clear and conspicuous” disclosures (such as, for example and without limitation, those pertaining to material relationships between a manager or officer and a brand) must be unavoidable, and easy to notice and understand for ordinary, reasonable consumers.  Note that for audiovisual content, disclosures must be presented in “at least the same means as the representations requiring the disclosure.”

The Final Rule follows the FTC’s Updated Endorsement Guidelines (2023).  The FTC Endorsement Guides address a much broader range of conduct than the Final Rule, and provide best practice recommendations regarding the use of product endorsements and reviews in advertising.

What are the Requirements of the FTC Final Rule on Reviews and Testimonials?

The Final Rule largely codifies existing FTC policy related to reviews and testimonials and sets forth limitations for a handful of categories of conduct that the FTC will consider deceptive.  In part, the Final Rule prevents covered entities and their agents from using fake reviews and deceptive testimonials, suppressing honest negative reviews and paying for positive reviews.

In pertinent part and without limitation:

  1. 16 CFR § 465.2: Fake or false consumer reviews, consumer testimonials, or celebrity testimonials

Businesses and brands are prohibited from creating, buying, selling, or disseminating fake or false reviews or testimonials, including, but not limited to, those that expressly or impliedly misrepresent that they are by someone who does not exist (for example and without limitation, AI-generated reviews) or by someone who does not have experience with the product/service, those that misrepresent actual experience with a product or service, and negative reviews intended to damage competitors.

Businesses and brands are prohibited from creating, purchasing, procuring or disseminating such reviews (and/or facilitating dissemination) when the business knew or should have known that the reviews or testimonials were not bona fide.

  2. 16 CFR § 465.4: Buying positive or negative consumer reviews

Businesses and brands are prohibited from incentivizing a consumer to write a review when the incentive is conditioned – expressly or implicitly – on the review expressing a particular sentiment (whether positive or negative) about a business or brand, or related products or services.  It is not unlawful for a company to offer incentives for consumers to write reviews; it is unlawful, however, to condition the incentive upon, for example, a 5-star review.  While the FTC Endorsement Guides separately mandate a clear and conspicuous disclosure when a review is incentivized by monetary payment or another incentive/relationship, a disclosure of the incentive is not a defense when the incentive is conditioned on the review expressing a particular sentiment.

  3. 16 CFR § 465.5: Insider consumer reviews and consumer testimonials

Section 465.5 of the Final Rule prohibits businesses and brands from creating, soliciting or posting reviews or testimonials by officers, managers, employees or agents thereof without clearly and conspicuously disclosing their relationship, or “material connection.”  There are limited exceptions.  First, the prohibition does not apply to unsolicited social media posts by employees or social media posts that result from generalized solicitations (e.g., non-employee specific).  Second, the prohibition does not apply to unsolicited employee reviews that merely appear on a business’s website because of its “consumer review hosting” function.

Additionally, reviews solicited from immediate relatives (e.g., a spouse, parent, child, or sibling), employees, or agents of officers, managers, employees, or agents of a business or brand require that the latter ensure that the immediate relative clearly, conspicuously, and transparently discloses the material connection to the business.  The foregoing also applies, for example and without limitation, to requests that employees or agents solicit reviews from relatives.  Covered “insiders” are required to instruct such reviewers to clearly and conspicuously disclose their relationships to the business or brand and, if they knew or should have known that a related review appears without a disclosure, to take remedial steps.

The Final Rule states that if the business or brand knew or should have known of a material relationship between a testimonialist and the business, it is a violation for the business or brand to disseminate or cause the dissemination of a consumer testimonial from its officer, manager, employee, or agent without a clear and conspicuous disclosure of such relationship.

  4. 16 CFR § 465.6: Company-controlled review websites or entities

Companies and brands are prohibited from creating or controlling review websites or platforms that appear independent when they are, in fact, operated by the company itself.  For example, companies may not expressly or by implication falsely represent that a website they control provides independent reviews or opinions.  Section 465.6 is intended to prevent the creation of illegitimate independent review websites, organizations, or entities to review products and services.  It does not apply to general consumer reviews on a brand’s website, for example, so long as those reviews comply with applicable legal regulations.

  5. 16 CFR § 465.7: Review suppression

Pursuant to Section 465.7 of the Final Rule, businesses and brands may not suppress, manipulate, or attempt to suppress or manipulate negative reviews (or otherwise manipulate or attempt to manipulate overall perception) by solely displaying positive feedback, with limited exceptions, such as when a review contains confidential or personal information, is false or fake, or is wholly unrelated to the products/services offered.  The criteria for doing so must be “applied equally to all reviews submitted without regard to sentiment.”

Businesses and brands are also prohibited from suppressing negative reviews or ratings, and from misrepresenting (expressly or implicitly) that the selected consumer reviews or ratings represent most or all reviews or ratings.  The Final Rule does not prohibit sorting or organizing reviews per se; however, doing so in a manner that makes it more difficult for consumers to view or learn of negative reviews may be considered an unfair or deceptive act or practice.

All reviews must be treated fairly so that consumers are provided with a true and accurate representation of consumer experiences.

Additionally, the Final Rule prohibits the use of unfounded or groundless legal threats, physical threats, intimidation, or false accusations to prevent a review from being written or created, or to cause a review to be removed.

Section 465.7, in pertinent part, is consistent with various portions of the January 2022 agency guidance entitled Featuring Online Customer Reviews: A Guide for Platforms.  The foregoing guidance recommends that businesses and brands: (i) if they operate a website or platform that features reviews, have processes in place to ensure those reviews truly reflect the feedback received from legitimate customers about their real experiences; (ii) be transparent about their review-related practices; (iii) not ask for reviews only from people they think will leave positive ones; (iv) if they offer an incentive to consumers for leaving a review, not condition it, explicitly or implicitly, on the review being positive (even without that condition, offering an incentive to write a review may introduce bias or change the weight and credibility that readers give that review); (v) not prevent or discourage people from submitting negative reviews; (vi) have reasonable processes in place to verify that reviews are genuine and not fake, deceptive, or otherwise manipulated (and be proactive in modifying and upgrading those processes); (vii) not edit reviews to alter the message (e.g., not change words to make a negative review sound more positive); (viii) treat positive and negative reviews equally (not subject negative reviews to greater scrutiny); (ix) publish all genuine reviews and not exclude negative ones; (x) not display reviews in a misleading way (e.g., it could be deceptive to feature the positive ones more prominently or to require a click-through to view negative reviews); (xi) when displaying reviews by reviewers with a material connection to the company or brand offering the product or service (e.g., when the reviewer has received compensation or a free product in exchange for the review), clearly and conspicuously disclose such relationships; (xii) clearly and conspicuously disclose how they collect, process, and display reviews, and how they determine overall ratings, to the extent necessary to avoid misleading consumers; and (xiii) have a reasonable procedure to identify fake or suspicious reviews after publication (if a consumer or business reports that a review may be fake, investigation and appropriate action are necessary, which may include taking down suspicious or phony reviews or leaving them up with appropriate labels).

  6. 16 CFR § 465.8: Misuse of fake indicators of social media influence

Section 465.8 prohibits selling, distributing, purchasing, or procuring “fake indicators of social media influence” (for example and without limitation, likes, saves, shares, subscribers, followers, or views generated by a bot or fake account) that are known or should be known to be fake, and that could potentially be used or are actually used to misrepresent or artificially inflate individual or business importance for a commercial purpose.  Thus, liability will not attach to a business or brand that engages an influencer using fake indicators of social media influence if the business or brand neither knew nor should have known thereof.

How is the FTC Final Rule Different from the Proposed Rule?

Notably, the Final Rule does not include a provision from the proposed rule that would have precluded advertisers from using consumer reviews that were created for a different product.  Known as “review hijacking,” this practice raised various concerns about the meaning of “substantially different product” that the FTC was unable to resolve.  The FTC reserved the right to revisit this issue via further rulemaking.

What are the Consequences for Violating the FTC Final Rule on Reviews and Testimonials?

The concepts, prohibitions, and obligations included in the Final Rule are not entirely new.  However, the Final Rule significantly enhances the FTC’s ability to pursue civil monetary penalties of up to $51,744 per violation, or per day for ongoing violations.  The Final Rule will also permit the FTC to seek judicial orders that require violators to compensate consumers for the consequences of their unlawful conduct.
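Because the cap applies per violation and per day for ongoing conduct, exposure compounds quickly. The back-of-the-envelope sketch below is purely illustrative (courts set actual penalties, and the statutory cap is periodically adjusted for inflation); the function name is invented for this example:

```python
PENALTY_CAP = 51_744  # per-violation (or per-day) civil penalty cap cited above

def max_exposure(violations: int, days_ongoing: int = 1) -> int:
    """Upper-bound penalty exposure if each violation persists for
    `days_ongoing` days; illustrative arithmetic only, not legal advice."""
    return violations * days_ongoing * PENALTY_CAP

# e.g., 100 fake reviews left up for 30 days:
# max_exposure(100, 30) == 155_232_000
```

Even a modest number of fake reviews left online for a month can, in theory, produce nine-figure maximum exposure, which is why immediate compliance review is the prudent course.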

Takeaway:

The Final Rule banning fake consumer reviews and testimonials generally prohibits specific practices that the FTC has determined are deceptive or misleading, including: (i) fake or false consumer reviews, consumer testimonials, or celebrity testimonials; (ii) purchasing positive or negative consumer reviews; (iii) insider consumer reviews and consumer testimonials; (iv) company-controlled review websites or entities; (v) review suppression; and (vi) misuse of fake indicators of social media influence.  The Final Rule takes effect October 21, 2024.  Violations of the Final Rule can result in significant financial and reputational consequences.  Companies that utilize consumer reviews, consumer testimonials, or celebrity endorsements should consult with an experienced eCommerce attorney to discuss proactively implementing responsible written policies and contracts that ensure compliance with the Final Rule and other applicable legal regulations (for example and without limitation, ensuring the clear and conspicuous disclosure of material connections); educating employees and agents; reviewing marketing strategies; auditing first- and third-party (for example and without limitation, lead generators) promotional materials and activities for non-compliance (for example and without limitation, ensuring that reviews provide an accurate representation of consumer experiences); and developing and implementing appropriate compliance plans and written policies that include required remedial actions.

Application of New Mental Health Parity Rules to Provider Network Composition and Reimbursement: Perspective and Analysis

On September 23, 2024, the U.S. Departments of Labor, the Treasury, and Health and Human Services (collectively, the “Departments”) released final rules (the “Final Rules”) that implement requirements under the Mental Health Parity and Addiction Equity Act (MHPAEA).

The primary focus of the Final Rules is to implement new statutory requirements under the Consolidated Appropriations Act of 2021, which amended MHPAEA to require health plans and issuers to develop comparative analyses to determine whether nonquantitative treatment limitations (NQTLs)—which are non-financial restrictions on health care benefits that can limit the length or scope of treatment—for mental health and substance use disorder (MH/SUD) benefits are comparable to and applied no more stringently than NQTLs for medical/surgical (M/S) benefits.

Last month, Epstein Becker Green published an Insight entitled “Mental Health Parity: Federal Departments of Labor, Treasury, and Health Release Landmark Regulations,” which provides an overview of the Final Rules. This Insight takes a closer look at the application of the Final Rules to NQTLs related to provider network composition and reimbursement rates.

Provider Network Composition and Reimbursement NQTL Types

A key focus of the Final Rules is to ensure that NQTLs related to provider network composition and reimbursement rates do not impose greater restrictions on access to MH/SUD benefits than they do for M/S benefits.

In the Final Rules, the Departments decline to specify which strategies and functions they expect to be analyzed as separate NQTL types, instead requiring health plans and issuers to identify, define, and analyze the NQTL types that they apply to MH/SUD benefits. However, the Final Rules set out that the general category of “provider network composition” NQTL types includes, but is not limited to, “standards for provider and facility admission to participate in a network or for continued network participation, including methods for determining reimbursement rates, credentialing standards, and procedures for ensuring the network includes an adequate number of each category of provider and facility to provide services under the plan or coverage.”[1]

For NQTLs related to out-of-network rates, the Departments note that NQTLs would include “[p]lan or issuer methods for determining out-of-network rates, such as allowed amounts; usual, customary, and reasonable charges; or application of other external benchmarks for out-of-network rates.”[2]

Requirements for Comparative Analyses and Outcomes Data Evaluation

For each NQTL type, plans must perform and document a six-step comparative analysis that must be provided to federal and state regulators, members, and authorized representatives upon request. The Final Rules divide the NQTL test into two parts: (1) the “design and application” requirement and (2) the “relevant data evaluation” requirement.

The “design and application” requirement, which builds directly on existing guidance, requires the “processes, strategies, evidentiary standards, or other factors” used in designing and applying an NQTL to MH/SUD benefits to be comparable to, and applied no more stringently than, those used for M/S benefits. Although these aspects of the comparative analysis should be generally familiar, the Final Rules and accompanying preamble provide extensive new guidance about how to interpret and implement these requirements.

The Final Rules also set out a second prong to the analysis: the requirement to collect and evaluate “relevant data” for each NQTL. If such analysis shows a “material difference” in access, then the Final Rules also require the plan to take “reasonable” action to remedy the disparity.

The Final Rules provide that relevant data measures for network composition NQTLs may include, but are not limited to:

  • in-network and out-of-network utilization rates, including data related to provider claim submissions;
  • network adequacy metrics, including time and distance data, data on providers accepting new patients, and the proportions of available MH/SUD and M/S providers that participate in the plan’s network; and
  • provider reimbursement rates for comparable services and as benchmarked to a reference standard, such as Medicare fee schedules.

Although the Final Rules do not describe relevant data for out-of-network rates, these data measures may parallel measures to evaluate in-network rates, including measures that benchmark MH/SUD and M/S rates against a common standard, such as Medicare fee schedule rates.
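As a purely illustrative sketch of the benchmarking idea described above, the following Python fragment compares hypothetical MH/SUD and M/S reimbursement rates as percentages of a Medicare reference rate and flags a material difference. All rates and the 10-point threshold are invented; the Final Rules leave the materiality threshold for the plan itself to define.

```python
# Hypothetical sketch: benchmarking plan reimbursement against Medicare fee
# schedules, one of the example "relevant data" measures in the Final Rules.
# The rates and the 10-point materiality threshold are invented for illustration.

def pct_of_benchmark(plan_rate: float, medicare_rate: float) -> float:
    """Plan reimbursement expressed as a percentage of the Medicare rate."""
    return 100.0 * plan_rate / medicare_rate

# Invented example rates for comparable office-visit services:
ms_pct = pct_of_benchmark(plan_rate=130.0, medicare_rate=100.0)    # M/S: 130%
mhsud_pct = pct_of_benchmark(plan_rate=95.0, medicare_rate=100.0)  # MH/SUD: 95%

MATERIALITY_THRESHOLD = 10.0  # percentage points; hypothetical, plan-defined
material_difference = (ms_pct - mhsud_pct) > MATERIALITY_THRESHOLD
print(material_difference)  # True -> "reasonable action" to remedy is required
```

In this invented scenario, the 35-point gap would exceed the plan’s self-defined threshold and trigger the remedial obligations discussed below.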

Under the current guidance, plans have broad flexibility to determine which measures to use, though a plan must ensure that the selected metrics reasonably measure the actual stringency of design and application of the NQTL with regard to the impact on member access to MH/SUD and M/S benefits. However, additional guidance is expected to further clarify the data evaluation requirements and may require the use of specific measures, likely in the form of additional frequently asked questions as well as updates to the Self-Compliance Tool published by the Departments to help plans and issuers assess whether their NQTLs satisfy parity requirements.

The Final Rules require plans to look at relevant data for network composition NQTLs in the aggregate—meaning that the same relevant data must be used for all NQTL types (however defined). As such, the in-operation data component of the comparative analysis for network composition NQTLs will be aggregated.

If the relevant data indicate a “material difference,” a threshold that the plan itself must reasonably establish and define, the plan must take “reasonable actions” to address the difference in access and document those actions.

Examples of a “reasonable action” that plans can take to comply with network composition requirements “include, but are not limited to:

  1. Strengthening efforts to recruit and encourage a broad range of available mental health and substance use disorder providers and facilities to join the plan’s or issuer’s network of providers, including taking actions to increase compensation or other inducements, streamline credentialing processes, or contact providers reimbursed for items and services provided on an out-of-network basis to offer participation in the network;
  2. Expanding the availability of telehealth arrangements to mitigate any overall mental health and substance use disorder provider shortages in a geographic area;
  3. Providing additional outreach and assistance to participants and beneficiaries enrolled in the plan or coverage to assist them in finding available in-network mental health and substance use disorder providers and facilities; and
  4. Ensuring that provider directories are accurate and reliable.”

These examples of potential corrective actions and related discussion in the Final Rules provide an ambitious vision for a robust suite of strategies that the Departments believe that plans should undertake to address material disparities in access as defined in the relevant data. However, the Final Rules put the onus on the plan to design the strategy that it will use to define “material differences” and remedy any identified disparity in access. Future guidance and enforcement may provide examples of how this qualitative assessment will play out in practice and establish both what the Departments will expect with regard to the definition of “material differences” and what remedial actions they consider to be sufficient. In the interim, it is highly uncertain what the practical impact of these new requirements will be.

Examples of Network Analyses Included in the Final Rules

The Final Rules include several examples to clarify how the new requirements apply to provider network composition NQTLs. Unfortunately, the value of these examples for understanding how the Final Rules will affect MH/SUD provider networks in practice may be limited. The examples gloss over the complexity of analyzing these requirements for actual provider networks and do not meaningfully discuss where the threshold for compliance lies, so it remains to be seen how regulators will interpret and enforce these requirements in practice.

  • Example 1 demonstrates that it would violate the NQTL requirements to apply a percentage discount to physician fee schedule rates for non-physician MH/SUD providers if the same reduction is not applied for non-physician M/S providers. Our takeaways from this example include the following:
    • This example is comparable to the facts that were alleged by the U.S. Department of Labor in Walsh v. United Behavioral Health, E.D.N.Y., No. 1:21-cv-04519 (8/11/21).
    • Example 1 is useful to the extent that it clarifies that a reimbursement strategy that specifically reduces MH/SUD provider rates in ways that do not apply to M/S provider rates would violate MHPAEA. However, such cut-and-dried examples may be rare in practice, and a full review of the strategies for developing provider reimbursement rates is necessary.
  • Example 4 demonstrates that plans may not simply rely on periodic historic fee schedules as the sole basis for their current fee schedules. Here are some key takeaways from this example:
    • Even though this methodology may be neutral and non-discriminatory on its face, given that the historic fee schedules are not themselves a non-biased source of evidence, to meet the new requirements for evidentiary standards and sources, the plan would have to demonstrate that these historic fee schedules were based on sources that were objective and not biased against MH/SUD providers.
    • If the plan cannot demonstrate that the evidentiary standard used to develop its fee schedule does not systematically disfavor access to MH/SUD benefits, it can still pass the NQTL test if it takes steps to cure the discriminatory factor.
    • Example 4 loosely describes a scenario where a plan supplements a historic fee schedule that is found to discriminate against MH/SUD access by accounting for the current demand for MH/SUD services and attracting “sufficient” MH/SUD providers to the network. Unfortunately, however, the facts provided do not clarify what steps were taken to achieve this enhanced access or how the plan or regulator determined that access had become “sufficient” following the implementation of the corrective actions.
  • Example 10 provides that if a plan’s data measures indicate a “material difference” in access to MH/SUD benefits relative to M/S benefits that is attributable to these NQTLs, the plan can still achieve compliance by taking corrective actions. Our takeaways from this example include the following:
    • The facts in this example stipulate that the plan evaluates all of the measure types that are identified above as examples. Example 10 also states that a “material difference” exists but does not identify the measure or measures for which a difference exists or what facts lead to the conclusion that the difference was “material.” To remedy the material difference, this example states that the plan undertakes all of the corrective actions to strengthen its MH/SUD provider network that are identified above as examples and, therefore, achieves compliance. However, this example fails to clarify how potentially inconsistent outcomes across the robust suite of identified measures were balanced to determine that the “material difference” standard was ultimately met. Example 10 also does not provide any details about what specific corrective actions the plan takes or what changes result from these actions.

Epstein Becker Green’s Perspective

The new requirements of the Final Rules will significantly increase the focus of the comparative analyses on the outcomes of the provider network NQTLs. For many years, the focus of the comparative analyses was primarily on determining whether any definable aspect of the plan’s provider contracting and reimbursement rate-setting strategies could be demonstrated to discriminate against MH/SUD providers. The Final Rules retain those requirements but now put greater emphasis on the results of network composition activities with regard to member access and require plans to pursue corrective actions to remediate any material disparities in that data. This focus on a robust “disparate impact” form of anti-discrimination analysis may lead to a meaningful increase in reimbursement for MH/SUD providers or other actions to more aggressively recruit them to participate in commercial health plan networks.

However, at present, it remains unclear which measures the Departments will ultimately require for reporting. Concurrent with the release of their Notice of Proposed Rulemaking on July 23, 2023, the Departments published Technical Release 2023-01P to solicit comments on key approaches to evaluating comparability and stringency for provider network access and reimbursement rates (including some that are referenced as examples in the Final Rules). Comments to the Technical Release highlighted significant concerns with nearly all of the proposed measures. For example, proposals to require analysis of MH/SUD and M/S provider reimbursement rates for commercial markets that are benchmarked to Medicare fee schedules in a simplistic way may fail to account for differences in population health and utilization, value-based reimbursement strategies, and a range of other factors with significant implications for financial and clinical models for both M/S and MH/SUD providers. Requirements to analyze the numbers or proportions of MH/SUD and M/S providers that are accepting new patients may be onerous for providers to report on and for plans to collect and may obscure significant nuances with regard to wait times, the urgency of the service, and the match between the provider’s training and service offerings to the patient’s need. Time and mileage standards highlighted by the Departments not only often fail to capture important access challenges experienced by patients who need MH/SUD care from sub-specialty providers or facilities but also fail to account for evolving service delivery models that may include options such as mobile units, school-based services, home visits, and telehealth. Among the measures identified in the Technical Release, minor differences in measure definitions and specifications can have significant impacts on the data outcomes, and few (if any) of the proposed measures have undergone any form of testing for reliability and validity.

Also, it is still not clear where the Departments will draw the lines for making final determinations of noncompliance with the Final Rules. For example, where a range of different data measures is evaluated, how will the Departments resolve data outcomes that are noisy, conflicting, or inconclusive? Similarly, where regulators do conclude that the data that are provided suggest a disparity in access, the Final Rules identify a highly robust set of potential corrective actions. However, it remains to be seen what scope of actions the Departments will determine to be “good enough” in practice.

Finally, we are interested in seeing what role private litigation will play in driving health plan compliance efforts and practical impacts for providers. To date, plaintiffs have found it challenging to pursue litigation on the basis of claims under MHPAEA, due in part to the highly complex arguments that must be made to evaluate MHPAEA compliance and in part to the challenge for plaintiffs to have adequate insight into plan policies, operations, and data across MH/SUD and M/S benefits to adequately assert a complaint under MHPAEA. Very few class action lawsuits or large settlements have occurred to date. These challenges for potential litigants may continue to limit the volume of litigation. However, to the extent that the additional guidance in the Final Rules does give rise to an uptick in successful litigation, it is possible that the courts may end up having a greater impact on health plan compliance strategies than regulators.


ENDNOTES

[1] 26 CFR 54.9812-1(c)(4)(ii)(D), 29 CFR 2590.712(c)(4)(ii)(D), and 45 CFR 146.136(c)(4)(ii)(D).

[2] 26 CFR 54.9812-1(c)(4)(ii)(E), 29 CFR 2590.712(c)(4)(ii)(E), and 45 CFR 146.136(c)(4)(ii)(E).

Colorado AG Proposes Draft Amendments to the Colorado Privacy Act Rules

On September 13, 2024, the Colorado Attorney General’s (AG) Office published proposed draft amendments to the Colorado Privacy Act (CPA) Rules. The proposals include new requirements related to biometric collection and use (applicable to all companies and employers that collect biometrics of Colorado residents) and children’s privacy. They also introduce methods by which businesses could seek regulatory guidance from the Colorado AG.

The draft amendments seek to align the CPA with Senate Bill 41, Privacy Protections for Children’s Online Data, and House Bill 1130, Privacy of Biometric Identifiers & Data, both of which were enacted earlier this year and will largely come into effect in 2025. Comments on the proposed regulations can be submitted beginning on September 25, 2024, in advance of a November 7, 2024, rulemaking hearing.

In Depth


PRIVACY OF BIOMETRIC IDENTIFIERS & DATA

In comparison to other state laws like the Illinois Biometric Information Privacy Act (BIPA), the CPA proposed draft amendments do not include a private right of action. That said, the proposed draft amendments include several significant revisions to the processing of biometric identifiers and data, including:

  • Create New Notice Obligations: The draft amendments require any business (including those not otherwise subject to the CPA) that collects biometrics from consumers or employees to provide a “Biometric Identifier Notice” before collecting or processing biometric information. The notice must include which biometric identifier is being collected, the reason for collecting the biometric identifier, the length of time the controller will retain the biometric identifier, and whether the biometric identifier will be disclosed, redisclosed, or otherwise disseminated to a processor alongside the purpose of such disclosure. This notice must be reasonably accessible, either in a standalone disclosure or, if embedded within the controller’s privacy notice, a clear link to the specific section within the privacy notice that contains the Biometric Identifier Notice. This requirement applies to all businesses that collect biometrics, including employers, even if a business does not otherwise trigger the applicability thresholds of the CPA.
  • Revisit When Consent Is Required: The draft amendments require controllers to obtain explicit consent from the data subject before selling, leasing, trading, disclosing, redisclosing, or otherwise disseminating biometric information. The amendments also allow employers to collect and process biometric identifiers as a condition for employment in limited circumstances (much more limited than Illinois’s BIPA, for example).

PRIVACY PROTECTIONS FOR CHILDREN’S ONLINE DATA

The draft amendments also include several updates to existing CPA requirements related to minors:

  • Delineate Between Consumers Based on Age: The draft amendments define a “child” as an individual under 13 years of age and a “minor” as an individual under 18 years of age, creating additional protections for teenagers.
  • Update Data Protection Assessment Requirements: The draft amendments expand the scope of data protection assessments to include processing activities that pose a heightened risk of harm to minors. Under the draft amendments, entities performing assessments must disclose whether personal data from minors is processed as well as identify any potential sources and types of heightened risk to minors that would be a reasonably foreseeable result of offering online services, products, or features to minors.
  • Revisit When Consent Is Required: The draft amendments require controllers to obtain explicit consent before processing the personal data of a minor and before using any system design feature to significantly increase, sustain, or extend a minor’s use of an online service, product, or feature.

OPINION LETTERS AND INTERPRETIVE GUIDANCE

In a welcome effort to create a process by which businesses and the public can understand more about the scope and applicability of the CPA, the draft amendments:

  • Create a Formal Feedback Process: The draft amendments would permit individuals or entities to request an opinion letter from the Colorado AG regarding aspects of the CPA and its application. Entities that have received and relied on applicable guidance offered via an opinion letter may use that guidance as a good faith defense against later claims of having violated the CPA.
  • Clarify the Role of Non-Binding Advice: Separate and in addition to the formal opinion letter process, the draft amendments provide a process by which any person affected directly or indirectly by the CPA may request interpretive guidance from the AG. Unlike the guidance in an opinion letter, interpretive guidance would not be binding on the Colorado AG and would not serve as a basis for a good faith defense. Nonetheless, a process for obtaining interpretive guidance is a novel, and welcome, addition to the state law fabric.

WHAT’S NEXT?

While subject to change pursuant to public consultation, assuming the proposed CPA amendments are finalized, they would become effective on July 1, 2025. Businesses interested in shaping and commenting on the draft amendments should consider promptly submitting comments to the Colorado AG.

Consumer Privacy Update: What Organizations Need to Know About Impending State Privacy Laws Going into Effect in 2024 and 2025

Over the past several years, the number of states with comprehensive consumer data privacy laws has grown rapidly from just a handful—California, Colorado, Virginia, Connecticut, and Utah—to as many as twenty by some counts.

Many of these state laws will go into effect starting in Q4 2024 and continuing through 2025. We have previously written in more detail on New Jersey’s comprehensive data privacy law, which goes into effect January 15, 2025, and Tennessee’s comprehensive data privacy law, which goes into effect July 1, 2025. Some laws have already gone into effect, like Texas’s Data Privacy and Security Act and Oregon’s Consumer Privacy Act, both of which became effective in July 2024. Now is a good time to take stock of the current landscape as the next batch of state privacy laws goes into effect.

Over the next year, the following laws will become effective:

  1. Montana Consumer Data Privacy Act (effective Oct. 1, 2024)
  2. Delaware Personal Data Privacy Act (effective Jan. 1, 2025)
  3. Iowa Consumer Data Protection Act (effective Jan. 1, 2025)
  4. Nebraska Data Privacy Act (effective Jan. 1, 2025)
  5. New Hampshire Privacy Act (effective Jan. 1, 2025)
  6. New Jersey Data Privacy Act (effective Jan. 15, 2025)
  7. Tennessee Information Protection Act (effective July 1, 2025)
  8. Minnesota Consumer Data Privacy Act (effective July 31, 2025)
  9. Maryland Online Data Privacy Act (effective Oct. 1, 2025)

These nine state privacy laws contain many similarities, broadly conforming to the Virginia Consumer Data Protection Act we discussed here.  All nine laws listed above contain the following familiar requirements:

(1) disclosing data handling practices to consumers,

(2) including certain contractual terms in data processing agreements,

(3) performing risk assessments (with the exception of Iowa), and

(4) affording resident consumers certain rights, such as the right to access or know the personal data processed by a business, the right to correct any inaccurate personal data, the right to request deletion of personal data, the right to opt out of targeted advertising or the sale of personal data, and the right to opt out of the processing of sensitive information.

The laws contain more than a few noteworthy differences, and each differs in the scope of its application. The applicability thresholds vary based on: (1) the number of state residents whose personal data the company (or “controller”) controls or processes, or (2) the proportion of revenue a controller derives from the sale of personal data. Maryland, Delaware, and New Hampshire each have a 35,000-consumer processing threshold. Nebraska, similar to the recently passed data privacy law in Texas, applies to controllers that do not qualify as small businesses and that process personal data or engage in personal data sales. It is also important to note that Iowa adopted a comparatively narrower definition of what constitutes a sale of personal data, limited to transactions involving monetary consideration. All states require that the company conduct business in the state.
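The threshold logic above can be sketched as a coarse first-pass screen. This is a hypothetical illustration, not legal advice: only the 35,000-consumer figure for Maryland, Delaware, and New Hampshire comes from the text, other states’ numbers are deliberately omitted rather than guessed, and real statutes layer on additional conditions (revenue-share tests, small-business carve-outs, exemptions).

```python
# Simplified, hypothetical applicability screen for the state privacy laws
# discussed above. Thresholds reflect only figures stated in the text;
# states without a listed figure are flagged for manual review.

CONSUMER_THRESHOLDS = {
    "Maryland": 35_000,
    "Delaware": 35_000,
    "New Hampshire": 35_000,
}

def may_apply(state: str, residents_processed: int, does_business_in_state: bool) -> bool:
    """Coarse first-pass screen; real statutes have additional tests."""
    if not does_business_in_state:  # all states require in-state business activity
        return False
    threshold = CONSUMER_THRESHOLDS.get(state)
    if threshold is None:
        return True  # unknown threshold: treat as potentially applicable
    return residents_processed >= threshold

print(may_apply("Maryland", 40_000, True))  # True
print(may_apply("Delaware", 10_000, True))  # False
```

A real analysis would also apply the revenue-proportion prong and any entity-level exemptions (HIPAA, GLBA) discussed below.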

With respect to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), Iowa’s, Montana’s, Nebraska’s, New Hampshire’s, and Tennessee’s laws exempt HIPAA-regulated entities altogether; while Delaware’s, Maryland’s, Minnesota’s, and New Jersey’s laws exempt only protected health information (“PHI”) under HIPAA. As a result, HIPAA-regulated entities will have the added burden of assessing whether data is covered by HIPAA or an applicable state privacy law.

With respect to the Gramm-Leach-Bliley Act (“GLBA”), eight of these nine comprehensive privacy laws contain an entity-level exemption for GLBA-covered financial institutions. By contrast, Minnesota’s law exempts only data regulated by the GLBA. Minnesota joins California and Oregon as the three state consumer privacy laws with information-level GLBA exemptions.

Not least of all, Maryland’s law stands apart from the other data privacy laws due to a number of unique obligations, including:

  • A prohibition on the collection, processing, and sharing of a consumer’s sensitive data except when doing so is “strictly necessary to provide or maintain a specific product or service requested by the consumer.”
  • A broad prohibition on the sale of sensitive data for monetary or other valuable consideration unless such sale is necessary to provide or maintain a specific product or service requested by a consumer.
  • Special provisions applicable to “Consumer Health Data” processed by entities not regulated by HIPAA. Note that “Consumer Health Data” laws also exist in Nevada, Washington, and Connecticut as we previously discussed here.
  • A prohibition on selling or processing minors’ data for targeted advertising if the controller knows or should have known that the consumer is under 18 years of age.

While states continue to enact comprehensive data privacy laws, the possibility remains of a federal privacy law establishing a national standard. The American Privacy Rights Act (“APRA”) went through several iterations in the House Committee on Energy and Commerce this year, and it reflects many elements of these state laws, including transparency requirements and consumer rights. A key sticking point, however, continues to be the broad private right of action included in the proposed APRA but absent from state privacy laws, with one narrow exception: California’s law, which we discussed here, includes a private right of action, although it is narrowly circumscribed to data breaches. Considering the November 2024 election cycle, federal efforts to create a comprehensive privacy law will likely stall until the election is over and the composition of the White House and Congress is known.

Legal and Privacy Considerations When Using Internet Tools for Targeted Marketing

Businesses often rely on targeted marketing methods to reach their relevant audiences. Instead of paying for, say, a television commercial to be viewed by people across all segments of society with varied purchasing interests and budgets, a business can use tools provided by social media platforms and other internet services to target those people most likely to be interested in its ads. These tools may make targeted advertising easy, but businesses must be careful when using them – along with their ease of use comes a risk of running afoul of legal rules and regulations.

Two ways that businesses target audiences are working with influencers who have large followings in relevant segments of the public (which may implicate false or misleading advertising issues) and using third-party “cookies” to track users’ browsing history (which may implicate privacy and data protection issues). Most popular social media platforms offer tools to facilitate the use of these targeting methods. These tools are likely indispensable for some businesses, and despite their risks, they can be deployed safely once the risks are understood.

Some Platform-Provided Targeted Marketing Tools May Implicate Privacy Issues
Google recently announced that it will not be deprecating third-party cookies, a reversal of its previous plan to phase them out. “Cookies” are small pieces of code that track users’ activity online. “First-party” cookies often are necessary for a website to function properly. “Third-party” cookies are shared across websites and companies, essentially tracking users’ browsing behaviors to help advertisers target their relevant audiences.
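To make the first-party/third-party distinction concrete, here is a toy Python simulation. This is not real browser code, and the site and tracker domains are invented; it only illustrates why a cookie readable by the same tracker across embedding sites enables cross-site profiling.

```python
# Toy model of a browser cookie jar, keyed by the domain that set the cookie.
# Invented domains; illustrates the cross-site tracking mechanism only.
from typing import Dict, Optional

browser_cookie_jar: Dict[str, str] = {}

def visit(site: str, embedded_tracker: Optional[str] = None) -> None:
    """Simulate a page visit; an embedded tracker sets/reads its own cookie."""
    # First-party cookie: visible only to the site the user is visiting.
    browser_cookie_jar.setdefault(site, f"session-for-{site}")
    if embedded_tracker:
        # Third-party cookie: the SAME tracker cookie is available on every
        # site embedding the tracker, linking the user's visits together.
        browser_cookie_jar.setdefault(embedded_tracker, "user-123")

visit("shoes.example", embedded_tracker="ads.example")
visit("news.example", embedded_tracker="ads.example")
# The tracker observes "user-123" on both sites -> a cross-site browsing profile.
print(browser_cookie_jar["ads.example"])  # user-123
```

The privacy concern, and the legal exposure discussed below, flows from exactly this linkage: the tracker, not the visited site, accumulates the user’s history.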

In early 2020, Google announced that it would phase out third-party cookies, which are associated with privacy concerns because they track individual web-browsing activity and then share that data with other parties. Google’s 2020 announcement was a response to these concerns.

Fast forward about four and a half years, and Google reversed course. During that time, Google had introduced alternatives to third-party cookies, and companies had developed their own, often extensive, proprietary databases of information about their customers. However, none of these methods satisfied the advertising industry. Google then made the decision to keep third-party cookies. To address privacy concerns, Google said it would “introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time.”

Many large platforms in addition to Google offer targeted advertising services via the use of third-party cookies. Can businesses use these services without any legal ramifications? Does the ability of consumers to opt out mean that a business cannot be liable for privacy violations if it relies on third-party cookies? The relevant cases have held that individual businesses must still be careful despite the opt-out and other built-in tools offered by these platforms.

Two recent cases from the Southern District of New York held that individual businesses that used “Meta Pixels” to track consumers may be liable for violations of the Video Privacy Protection Act (VPPA), 18 U.S.C. § 2710. Facebook defines a Meta Pixel as a “piece of code … that allows you to … make sure your ads are shown to the right people … drive more sales, [and] measure the results of your ads.” In other words, a Meta Pixel is essentially a cookie provided by Meta/Facebook that helps businesses target ads to relevant audiences.

As demonstrated by those two recent cases, businesses cannot rely on a platform’s built-in tools to ensure that their ad-targeting efforts do not violate the law. These violations may expose companies to enormous damages – VPPA cases often are brought as class actions, and even a single violation may carry damages in excess of $2,500.
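The class-action arithmetic behind that warning is worth spelling out. In this back-of-the-envelope Python sketch, only the $2,500-per-violation figure comes from the text; the class sizes are hypothetical.

```python
# Back-of-the-envelope VPPA class exposure. Only the $2,500 per-violation
# figure comes from the statute as described above; class sizes are invented.
VPPA_DAMAGES_PER_VIOLATION = 2_500  # dollars

def class_exposure(class_members: int, violations_each: int = 1) -> int:
    """Aggregate exposure if each class member proves the given violations."""
    return VPPA_DAMAGES_PER_VIOLATION * class_members * violations_each

# A hypothetical 100,000-member class, one violation each:
print(class_exposure(100_000))  # 250000000
```

Even a single tracked video per user across a modest subscriber base can therefore reach nine-figure theoretical exposure, which explains the volume of VPPA class filings.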

In those New York cases, the consumers had not consented to sharing information, but, even if they had, the consent may not have sufficed. Internet contracts, often included in a website’s Terms of Service, are notoriously difficult to enforce. For example, in one of those S.D.N.Y. cases, the court found that the arbitration clause to which subscribers had agreed was not effective to force arbitration in lieu of litigation for this matter. In addition, the type of consent and the information that websites must provide before sharing information can be extensive and complicated, as recently reported by my colleagues.

Another issue that companies may encounter when relying on widespread cookie offerings is whether the mode (as opposed to the content) of data transfer complies with all relevant privacy laws. For example, the Swedish Data Protection Agency recently found8 that a company had violated the European Union’s General Data Protection Regulation (GDPR) because the method of transfer of data was not compliant. In that case, some of the consumers had consented, but some were never asked for consent.

Some Platform-Provided Targeted Marketing Tools May Implicate False or Misleading Advertising Issues
Another method that businesses use to target their advertising to relevant consumers is to hire social media influencers to endorse their products. These partnerships between brands and influencers can be beneficial to both parties and to the audiences who are guided toward the products they want. These partnerships are also subject to pitfalls, including reputational pitfalls (a controversial statement by the influencer may negatively impact the reputation of the brand) and legal pitfalls.

The Federal Trade Commission (FTC) has issued “Guides Concerning the Use of Endorsements and Testimonials”9 in advertising and published a brochure for influencers, “Disclosures 101 for Social Media Influencers,”10 that tells influencers how they must apply the guides to avoid liability for false or misleading advertising when they endorse products. A key requirement is that influencers must “make it obvious” when they have a “material connection” with the brand. In other words, influencers must disclose that they are being paid (or gain other, non-monetary benefits) to make the endorsement.

Many social media platforms make it easy to disclose a material connection between a brand and an influencer – a built-in function allows influencers to simply click a check mark to disclose the existence of a material connection with respect to a particular video endorsement. The platform then displays a hashtag or other notification along with the video that says “#sponsored” or something similar. However, influencers cannot rely on these built-in notifications alone. The FTC brochure clearly states: “Don’t assume that a platform’s disclosure tool is good enough, but consider using it in addition to your own, good disclosure.”

Brands that sponsor influencer endorsements may easily find themselves on the hook if the influencer does not properly disclose the material connection between the influencer and the brand. In some cases, the contract between the brand and the influencer may pass the risk to the brand. In others, the influencer may be judgment proof, or the brand may simply be an easier target for enforcement. And, unsurprisingly, the FTC has sent warning letters11 threatening high penalties to brands for influencer violations.

The Platform-Provided Tools May Be Deployed Safely
Despite the risks involved in some platform-provided tools for targeted marketing, these tools are very useful, and businesses should continue to take advantage of them. However, businesses cannot rely solely on these widely available and easy-to-use tools; they must ensure that their own policies and compliance programs protect them from liability.

The same warning about widely available social media tools, and the lessons for a business seeking to protect itself, also apply to other activities online, such as using platforms’ built-in “reposting” functions (which may implicate intellectual property infringement issues) and using out-of-the-box website builders (which may implicate issues under the Americans with Disabilities Act). A good first step for a business to ensure legal compliance online is to understand the risks. An attorney experienced in internet law, privacy law and social media law can help.

_________________________________________________________________________________________________________________

1 https://privacysandbox.com/news/privacy-sandbox-update/

2 https://blog.chromium.org/2020/01/building-more-private-web-path-towards.html

3 Businesses should ensure that they protect these databases as trade secrets. See my recent Insights at https://www.wilsonelser.com/sarah-fink/publications/relying-on-noncompete-clauses-may-not-be-the-best-defense-of-proprietary-data-when-employees-depart and https://www.wilsonelser.com/sarah-fink/publications/a-practical-approach-to-preserving-proprietary-competitive-data-before-and-after-a-hack

4 https://privacysandbox.com/news/privacy-sandbox-update/

5 Aldana v. GameStop, Inc., 2024 U.S. Dist. LEXIS 29496 (S.D.N.Y. Feb. 21, 2024); Collins v. Pearson Educ., Inc., 2024 U.S. Dist. LEXIS 36214 (S.D.N.Y. Mar. 1, 2024)

6 https://www.facebook.com/business/help/742478679120153?id=1205376682832142

7 https://www.wilsonelser.com/jana-s-farmer/publications/new-york-state-attorney-general-issues-guidance-on-privacy-controls-and-web-tracking-technologies

8 See, e.g., https://www.dataguidance.com/news/sweden-imy-fines-avanza-bank-sek-15m-unlawful-transfer

9 https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B/part-255

10 https://www.ftc.gov/system/files/documents/plain-language/1001a-influencer-guide-508_1.pd

11 https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-american-bev.pdf
https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-canadian-sugar.pdf

FCC’s New Notice of Inquiry – Is This Big Brother’s Origin Story?

The FCC released its recent Notice of Proposed Rulemaking and Notice of Inquiry on August 8, 2024. While the proposed rule is, deservedly, getting the most press, it’s important to pay attention to the Notice of Inquiry.

The part that concerns me is the FCC’s interest in the “development and availability of technologies on either the device or network level that can: 1) detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content; 2) alert consumers to the potential that such voice calls are fraudulent and/or AI-generated; and 3) potentially block future voice calls that can be identified as similar AI-generated or otherwise fraudulent voice calls based on analytics.” (emphasis mine)

The FCC also wants to know “what steps can the Commission take to encourage the development and deployment of these technologies…”

The FCC does note there are “significant privacy risks, insofar as they appear to rely on analysis and processing of the content of calls.” The FCC also wants comments on “what protections exist for non-malicious callers who have a legitimate privacy interest in not having the contents of their calls collected and processed by unknown third parties?”

So, the Federal Communications Commission wants to monitor the CONTENT of voice calls. In real-time. On your device.

That’s not a problem for anyone else?

Sure, robocalls are bad. There are scams on robocalls.

But, are robocalls so bad that we need real-time monitoring of voice call content?

At what point did we throw the Fourth Amendment out the window, and to prevent what? Phone calls??

The basic premise of the Fourth Amendment is “to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” I’m not sure how we get more arbitrary than “this incoming call is a fraud” versus “this incoming call is not a fraud”.

So, maybe you consent to this real-time monitoring. Sure, ok. But, can you actually give informed consent to what would happen with this monitoring?

Let me give you three examples of “pre-recorded calls” that the real-time monitoring could overhear to determine if the “voice calls are fraudulent and/or AI-generated”:

  1. Your phone rings. It’s a prerecorded call from Planned Parenthood confirming your appointment for tomorrow.
  2. Your phone rings. It’s an artificial voice recording from your lawyer’s office telling you that your criminal trial is tomorrow.
  3. Your phone rings. It’s the local jewelry store saying your ring is repaired and ready to be picked up.

Those are basic examples, but for someone to “detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content,” those calls have to be monitored in real time. And stored somewhere. Maybe on your device. Maybe by a third party in its cloud.

Maybe you trust Apple with that info. But do you trust whoever comes up with the fraud-monitoring software that would harvest that data? How do you know you should trust that party?

Or you trust Google. Surely, Google wouldn’t use your personal data. Surely, they would not use your phone call history to sell ads.

And that becomes data a third party can use. For ads. For political messaging. For profiling.

Yes, this is extremely conspiratorial. But, that doesn’t mean your data is not valuable. And where there is valuable data, there are people willing to exploit it.

Robocalls are a problem. And there are some legitimate businesses doing great things with fraud detection monitoring. But, a real-time monitoring edict from the government is not the solution. As an industry, we can be smarter on how we handle this.

U.S. Sues TikTok for Children’s Online Privacy Protection Act (COPPA) Violations

On Friday, August 2, 2024, the United States sued ByteDance, TikTok and their affiliates for violating the Children’s Online Privacy Protection Act of 1998 (“COPPA”) and the Children’s Online Privacy Protection Rule (“COPPA Rule”). In its complaint, the Department of Justice alleges that TikTok collected, stored and processed vast amounts of data from millions of child users of its popular social media app.

In June, the FTC voted to refer the matter to the DOJ, stating that it had determined there was reason to believe TikTok (f.k.a. Musical.ly, Inc.) had violated a 2019 FTC consent order and that the agency had also uncovered additional potential COPPA and FTC Act violations. The lawsuit, filed in the Central District of California, alleges that TikTok is directed to children under age 13, that TikTok has permitted children to evade its age gate, that TikTok has collected data from children without first notifying their parents and obtaining verifiable parental consent, that TikTok has failed to honor parents’ requests to delete their children’s accounts and information, and that TikTok has failed to delete the accounts and information of users the company knows are children. The complaint also alleges that TikTok failed to comply with COPPA even for accounts in the platform’s “Kids Mode” and that TikTok improperly amassed profiles on Kids Mode users. The complaint seeks civil penalties of up to $51,744 per violation per day from January 10, 2024, to the present for the improper collection of children’s data, as well as permanent injunctive relief to prevent future violations of the COPPA Rule.

The lawsuit comes on the heels of the U.S. Senate passage this week of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA) by a 91-3 bipartisan vote. It is unknown whether the House will take up the bills when it returns from recess in September.