Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech) is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after they left their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with a Character.AI character – whether for human-like conversation or to have it answer questions, write a story, translate text or write code – powered by Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell would use extraordinary measures to engage with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for a failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone, it is personal and movable, it is a “good” rather than an idea, copies of Character.AI are uniform and not customized, there are an unlimited number of copies that can be obtained and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services or products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include the software’s use of highly toxic and sexual data sets to train itself, the industry’s common knowledge that tactics designed to convince users they are interacting with a human manipulate users’ emotions and exploit their vulnerability, and the heightened susceptibility of minors to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and intentionally allowing minors to use Character.AI.

The defective design claims argue the software is defectively designed based on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained based on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations allegedly constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots as therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct caused severe emotional distress to Plaintiff, i.e., the loss of her son.

  • Other Claims

The Plaintiff also asserts claims for negligence per se and unjust enrichment, along with a survivor action and a claim for loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. The Colorado AI Act (covered here) is currently poised to become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates a “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” against Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias, discrimination and civil rights violations in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” That law was vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to develop – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

CFPB Imposes $95 Million Fine on Large Credit Union for Overdraft Fee Practices

On November 7, 2024, the CFPB ordered one of the largest credit unions in the nation to pay over $95 million for its practices related to the imposition of overdraft fees. The enforcement action addresses practices from 2017 to 2022 where the credit union charged overdraft fees on transactions that appeared to have sufficient funds, affecting consumers including those in the military community, in violation of the CFPA’s prohibition on unfair, deceptive, and abusive acts or practices.

The Bureau alleges that the credit union’s practices, particularly in connection with its overdraft service, resulted in nearly $1 billion in revenue from overdraft fees over the course of five years. According to the Bureau, the credit union unfairly charged overdraft fees in two ways. First, it charged overdraft fees on transactions where the consumer had a sufficient balance at the time the credit union authorized the transaction, but the transaction later settled with an insufficient balance. The Bureau noted that these authorize-positive/settle-negative violations have been a focus of federal regulators since 2015, and were the subject of a CFPB circular in October 2022. Second, when customers received money through peer-to-peer payment networks, the credit union’s systems showed the money as immediately available to spend. However, the credit union failed to disclose that payments received after a certain time of day would not post until the next business day. Customers who tried to use this apparently available money were then charged overdraft fees.
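For illustration only, here is a minimal sketch – with hypothetical numbers, account behavior, and fee amounts, not drawn from the consent order – of how an authorize-positive/settle-negative sequence plays out: a purchase is approved while the balance covers it, another item posts first, and the purchase then settles against an insufficient balance and triggers a fee.

```typescript
// Hypothetical authorize-positive / settle-negative walkthrough (illustrative numbers only).

interface LedgerEvent {
  description: string;
  amount: number; // negative values are debits
}

function simulateAuthorizePositiveSettleNegative(): void {
  let balance = 100; // available balance when the purchase is authorized
  const purchase = -60; // debit card purchase amount

  // Authorization: the balance covers the purchase, so it is approved.
  console.log(`Authorization: balance $${balance} covers $${-purchase} -> approved`);

  // Before the purchase settles, another item posts (e.g., an earlier check clears).
  const intervening: LedgerEvent[] = [{ description: "check clears", amount: -70 }];
  for (const event of intervening) {
    balance += event.amount;
    console.log(`Posted: ${event.description}; balance now $${balance}`);
  }

  // Settlement: the previously approved purchase now posts against a reduced balance.
  balance += purchase;
  if (balance < 0) {
    const overdraftFee = 20; // hypothetical fee
    console.log(`Settlement: balance $${balance} is negative -> overdraft fee of $${overdraftFee} charged`);
    balance -= overdraftFee;
  }
  console.log(`Ending balance: $${balance}`);
}

simulateAuthorizePositiveSettleNegative();
```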

In addition to monetary fines, the CFPB’s order prohibits the credit union from imposing overdraft fees for authorize-positive/settle-negative transactions, as well as in cases where there was delayed crediting of funds from peer-to-peer payment platforms.

The monetary penalties the consent order imposes consist of $80 million in consumer refunds for wrongfully charged overdraft fees and a $15 million civil penalty to be paid to the CFPB’s victims relief fund.

Putting It Into Practice: This order aligns with federal and state regulators’ recent focus on overdraft fees in a broader initiative to eliminate allegedly illegal “junk fees” (a trend we previously discussed here, here, and here). For companies operating in the financial sector or providing peer-to-peer payment services, this enforcement action serves as a critical reminder of the need for transparency and adherence to consumer financial protection laws. Regular audits of fee practices and disclosures can help identify and rectify potential compliance issues before they escalate. Companies aiming to impose overdraft or other types of fees should review agency guidance and enforcement actions to ensure their internal policies and business practices do not land them in hot water.


New Fact Sheet Highlights ASTP’s Concerns About Certified API Practices

On October 29, 2024, the US Department of Health and Human Services (HHS) Assistant Secretary for Technology Policy (ASTP) released a fact sheet titled “Information Blocking Reminders Related to API Technology.” The fact sheet reminds developers of application programming interfaces (APIs) certified under the ASTP’s Health Information Technology (IT) Certification Program and their health care provider customers of practices that constitute information blocking under ASTP’s information blocking regulations and the information blocking condition of certification applicable to certified health IT developers.

In Depth


The fact sheet is noteworthy because it follows ASTP’s recent blog post expressing concern about reports that certified API developers are potentially violating Certification Program requirements and engaging in information blocking. ASTP also recently strengthened its feedback channels by adding a section specifically for API-linked complaints and inquiries to the Health IT Feedback and Inquiry Portal. It appears increasingly likely that initial investigations and enforcement of the information blocking prohibition by the HHS Office of Inspector General will focus on practices that may interfere with access, exchange, or use of electronic health information (EHI) through certified API technology.

The fact sheet focuses on three categories of API-related practices that could be information blocking under ASTP’s information blocking regulations and Certification Program condition of certification:

  • ASTP cautions against practices that limit or restrict the interoperability of health IT. For example, the fact sheet states that health care providers who locally manage their Fast Healthcare Interoperability Resources (FHIR) servers without certified API developer assistance may engage in information blocking when they refuse to provide to certified API developers the FHIR service base URL necessary for patients to access their EHI.
  • ASTP states that impeding innovations and advancements in access, exchange, or use of EHI or health-IT-enabled care delivery may be information blocking. For example, the fact sheet indicates that a certified API developer may engage in information blocking by refusing to register and enable an application for production use within five business days of completing its verification of an API user’s authenticity as required by ASTP’s API maintenance of certification requirements.
  • ASTP states that burdensome or discouraging terms, delays, or influence over customers and users may be information blocking. For example, ASTP states that a certified electronic health record (EHR) developer may engage in information blocking by conditioning the disclosure of interoperability elements to third-party developers on the third-party developer entering into business associate agreements with all of the EHR developer’s covered entity customers, even if the work being done is not for the benefit of the customers and HIPAA does not require the business associate agreements.

The fact sheet does not address circumstances under which any of the above practices of certified API developers may meet an information blocking exception (established for reasonable practices that interfere with access, exchange, or use of EHI). Regulated actors should consider whether exceptions apply to individual circumstances.

HIPAA Gets a Potential Counterpart in HISAA

Americans hear about cybersecurity incidents on a frequent basis. As the adage goes, it is not a matter of “if” a breach or security hack occurs; it is a matter of “when.” At no time was that more evident than earlier this year, when the healthcare industry was hit with the widespread ransomware attack on Change Healthcare, a subsidiary of UnitedHealth Group. Because of the nature of the Change Healthcare shutdown and its impact across the industry, the U.S. Department of Health & Human Services (HHS) and its HIPAA enforcement arm, the Office for Civil Rights (OCR), conducted investigations and issued FAQ responses for those impacted by the cybersecurity event.

In further response, Senators Ron Wyden (D-OR) and Mark Warner (D-VA) introduced the Health Infrastructure Security and Accountability Act (HISAA) on September 26, 2024. Like HIPAA and HITECH before it, which established minimum levels of protection for healthcare information, HISAA looks to reshape how healthcare organizations address cybersecurity by enacting mandatory minimum security standards to protect healthcare information and by providing initial financial support to facilitate compliance. A copy of the legislative text can be found here, and a one-page summary of the bill can be found here.

To date, HIPAA and HITECH require covered entities and business associates to develop, implement, and maintain reasonable and appropriate “administrative, technical, physical” safeguards to protect electronic Protected Health Information, or e-PHI. However, the safeguards do not specify minimum requirements; instead, they prescribe standards intended to be scalable, depending on the specific needs, resources, and capabilities of the respective organization. What this means is that e-PHI stored or exchanged among interconnected networks is subject to systems with often differing levels of sophistication or protection.

Given the considerable time, effort, and resources dedicated to HIPAA/HITECH compliance, many consider the current state of voluntary safeguards as inadequate. This is especially the case since regulations under the HIPAA Security Rule have not been updated since 2013. As a result, Senators Wyden and Warner introduced HISAA in an effort to bring the patchwork of healthcare data security standards under one minimum umbrella and to require healthcare organizations to remain on top of software systems and cybersecurity standards.

Key pieces of HISAA, as proposed, include:

  1. Mandatory Cybersecurity Standards—If enacted, the Secretary of HHS, together with the Director of the Cybersecurity and Infrastructure Security Agency (CISA) and the Director of National Intelligence (DNI), will oversee the development and implementation of required standards and the standards will be subject to review and update every two years to counter evolving threats.
  2. Annual Audits and Stress Tests—Like current Security Risk Assessment (SRA) requirements, HISAA will require healthcare organizations to conduct annual cybersecurity audits and document the results. Unlike current requirements, these audits will need to be conducted by independent organizations to assess compliance, evaluate restoration abilities, and conduct stress tests in real-world simulations. While smaller organizations may be eligible for waivers from certain requirements because of undue burden, all healthcare organizations will have to publicly disclose compliance status as determined by these audits.
  3. Increased Accountability and Penalties—HISAA would implement significant penalties for non-compliance and would require healthcare executives to certify compliance on an annual basis. False information in such certifications could result in criminal charges, including fines of up to $1 million and up to 10 years in prison. HISAA would also eliminate fine caps to allow HHS to impose penalties commensurate with the level needed to deter lax behaviors, especially among larger healthcare organizations.
  4. Financial Support for Enhancements—Because the costs for new standards could be substantial, especially for smaller organizations, HISAA would allocate $1.3 billion to support hospitals for infrastructure enhancements. Of this $1.3 billion, $800 million would be for rural and safety net hospitals over the first two years, and an additional $500 million would be available for all hospitals in succeeding years.
  5. Medicare Payment Adjustments—Finally, HISAA enables the Secretary of HHS to provide accelerated Medicare payments to organizations impacted by cybersecurity events. HHS offered similar accelerated payments during the Change Healthcare event, and HISAA would codify similar authority to HHS for recovery periods related to future cyberattacks.

While HISAA would establish a baseline of cybersecurity requirements, compliance with those requirements would require a significant investment of time and resources in devices and operating systems/software, training, and personnel. Even with the proposed funding, this could present substantial compliance challenges for smaller and rural facilities. Moreover, healthcare providers will need to prioritize items such as encryption, multi-factor authentication, real-time monitoring, comprehensive response and remediation plans, and robust training and exercises to support compliance efforts.

Finally, at this juncture, the more important issue is for healthcare organizations to recognize their responsibilities in maintaining effective cybersecurity practices and to stay updated on any potential changes to these requirements. Since HISAA was introduced in the latter days of a hectic (and historic) election season, we will monitor its progress as the current Congress winds down in 2024 and the new Congress readies for action with a new administration in 2025.

Let’s Circle Back (and eFile) after the Holidays

The Consumer Product Safety Commission launched its eFiling Beta Pilot a little over a year ago. Non-pilot participants were invited to participate in voluntary eFiling last summer, and the CPSC extended this stage to October 10, as it continued to work on a revised rule. The CPSC had anticipated completing a final rulemaking by the end of its fiscal year, which would have meant a full system implementation around January 1, 2025 – but regardless of when the final rule is published, the CPSC has proposed that the requirements go into effect 120 days after publication in the Federal Register.

Notably, the National Association of Manufacturers submitted comments regarding the rulemaking, highlighting issues with the proposed rules, including the scope of the filing system, technical and financial burdens for implementing the system, and the feasibility of complying with the proposed 120-day effective date window. It remains to be seen whether the CPSC will take these comments into consideration when the staff releases the updated package in the coming weeks, with a commission vote expected before the end of the year.

The eFiling program is the CPSC’s initiative to enable importers of regulated consumer products to file certain data from Certificates of Conformity (COC) electronically with Customs and Border Protection (CBP). This is not merely emailing existing COCs to CPSC or CBP, but digitizing individual data elements of the COC either directly into CBP’s Automated Commercial Environment (ACE) or through CPSC’s Product Registry.

There are many misconceptions related to the new rule and eFiling process, and the CPSC has created a broad resource library to help importers of record, the parties ultimately responsible for eFiling, comply with the new requirements. Any product that requires a COC today (whether a General Certificate of Conformity or a Children’s Product Certificate) will require eFiling under the new rule. However, the CPSC intends to honor enforcement discretion applied to certain products before the implementation of the eFiling program.

Internal business conversations among import compliance personnel, customs teams, product compliance teams, and brokers – to discuss digitizing COC data and developing methods to manage trade parties, such as implementing identification mechanisms within testing programs – should begin, if they haven’t already. The CPSC also has an eFiling newsletter that is published quarterly and is due for another installment in the next month.

Once the final rule is published, eFiling will be mandatory. So, to ensure compliance – and the seamless import of goods, fewer holds at port, fewer targeted shipments, and reduced costs – implicated parties should get familiar, and quickly, with this fast-approaching requirement.

eFiling is a CPSC initiative under which importers of regulated consumer products will electronically file (eFile) data elements from a certificate of compliance with U.S. Customs and Border Protection (CBP), via a Partner Government Agency (PGA) Message Set.

PRIVACY ON ICE: A Chilling Look at Third-Party Data Risks for Companies

An intelligent lawyer could tackle a problem and figure out a solution. But a brilliant lawyer would figure out how to prevent the problem to begin with. That’s precisely what we do here at Troutman Amin. So here is the latest scoop to keep you cool. A recent case in the United States District Court for the Northern District of California, Smith v. Yeti Coolers, L.L.C., No. 24-cv-01703-RFL, 2024 U.S. Dist. LEXIS 194481 (N.D. Cal. Oct. 21, 2024), addresses complex issues surrounding online privacy and the liability of companies that enable third parties to collect and use consumer data without proper disclosures or consent.

Here, Plaintiff alleged that Yeti Coolers (“Yeti”) used a third-party payment processor, Adyen, that collected customers’ personal and financial information during transactions on Yeti’s website. Plaintiff claimed Adyen then stored this data and used it for its own commercial purposes, like marketing fraud prevention services to merchants, without customers’ knowledge or consent. Alarm bells should be sounding off in your head—this could signal a concerning trend in data practices.

Plaintiff sued Yeti under the California Invasion of Privacy Act (“CIPA”) for violating California Penal Code Sections 631(a) (wiretapping) and 632 (recording confidential communications). Plaintiff also brought a claim under the California Constitution for invasion of privacy. The key question here was whether Yeti could be held derivatively liable for Adyen’s alleged wrongful conduct.

So, let’s break this down step by step.

As to the alleged CIPA Section 631(a) violation, the Court found that Plaintiff plausibly alleged that Adyen violated this Section by collecting customer data as a third-party eavesdropper without proper consent. In analyzing whether Yeti’s Privacy Policy and Terms of Use constituted enforceable agreements, the Court applied the legal frameworks for “clickwrap” and “browsewrap” agreements.

Luckily, my Contracts professor during law school here in Florida was remarkable: Todd J. Clark, now the Dean of Widener University Delaware Law School. For those who snoozed through Contracts class, here is a refresher:

Clickwrap agreements present the website’s terms to the user and require the user to affirmatively click an “I agree” button to proceed. Browsewrap agreements simply post the terms via a hyperlink at the bottom of the webpage. For either type of agreement to be enforceable, the Court explained that a website must provide 1) reasonably conspicuous notice of the terms and 2) require some action unambiguously manifesting assent. See Oberstein v. Live Nation Ent., Inc., 60 F.4th 505, 515 (9th Cir. 2023).
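To make the mechanics concrete, below is a minimal sketch – in TypeScript, with hypothetical element names, a hypothetical terms version, and hypothetical storage choices – of the kind of consent gate a clickwrap approach contemplates: conspicuous notice of the linked terms plus an affirmative “I agree” click before the user may proceed. It is an illustration of the concept, not a statement of what any particular website does or a guarantee of enforceability.

```typescript
// Minimal clickwrap-style consent gate (hypothetical page and element names; illustration only).
// Key contrast with browsewrap: the user must take an affirmative act ("I agree")
// after conspicuous notice of the linked terms before proceeding.

function renderConsentGate(onAccepted: () => void): void {
  const banner = document.createElement("div");
  banner.innerHTML = `
    <p>
      By clicking "I agree", you accept our
      <a href="/terms" target="_blank">Terms of Use</a> and
      <a href="/privacy" target="_blank">Privacy Policy</a>.
    </p>
    <label><input type="checkbox" id="ack-box" /> I have read the Terms of Use and Privacy Policy</label>
    <button id="agree-btn" disabled>I agree</button>
  `;
  document.body.appendChild(banner);

  const checkbox = banner.querySelector<HTMLInputElement>("#ack-box")!;
  const button = banner.querySelector<HTMLButtonElement>("#agree-btn")!;

  // Require an explicit acknowledgement before the "I agree" button becomes usable.
  checkbox.addEventListener("change", () => {
    button.disabled = !checkbox.checked;
  });

  // Record the affirmative act (what was shown, its version, and when the user agreed),
  // since later proof of assent depends on that record.
  button.addEventListener("click", () => {
    const assentRecord = {
      acceptedAt: new Date().toISOString(),
      termsVersion: "2024-10-01", // hypothetical version identifier
    };
    localStorage.setItem("termsAssent", JSON.stringify(assentRecord));
    banner.remove();
    onAccepted();
  });
}

// Usage: gate checkout (or account creation) behind the consent banner.
// renderConsentGate(() => startCheckout());
```

A browsewrap approach, by contrast, would leave the same hyperlinks on the page without conditioning any action on an affirmative click, which is why such terms more often fail the “unambiguous manifestation of assent” prong.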

The Court held that while Yeti’s pop-up banner and policy links were conspicuous, they did not create an enforceable clickwrap agreement because “Defendant’s pop-up banner does not require individuals to click an ‘I agree’ button, nor does it include any language to imply that by proceeding to use the website, users reasonably consent to Defendant’s terms and conditions of use.” See Smith, 2024 U.S. Dist. LEXIS 194481, at *8. The Court also found no enforceable browsewrap agreement was formed because although the policies were conspicuously available, “Defendant’s website does not require additional action by users to demonstrate assent and does not conspicuously notify them that continuing to use the website constitutes assent to the Privacy Policy and Terms of Use.” Id. at *9.

What is more, the Court relied on Nguyen v. Barnes & Noble Inc., 763 F.3d 1171, 1179 (9th Cir. 2014), which held that “where a website makes its terms of use available via a conspicuous hyperlink on every page of the website but otherwise provides no notice to users nor prompts them to take any affirmative action to demonstrate assent, even close proximity of the hyperlink to relevant buttons users must click on—without more—is insufficient to give rise to constructive notice.” Here, the Court found the pop-up banner and link on Yeti’s homepage presented the same situation as in Nguyen and thus did not create an enforceable browsewrap agreement.

The Court nonetheless dismissed the Section 631(a) claim against Yeti because the allegations that Yeti was aware of Adyen’s alleged violations were insufficient.

Specifically, the Court held that to establish Yeti’s derivative liability for “aiding” Adyen under Section 631(a), Plaintiff had to allege facts showing Yeti acted with both knowledge of Adyen’s unlawful conduct and the intent or purpose to assist it. It found Plaintiff’s allegations that Yeti was “aware of the purposes for which Adyen collects consumers’ sensitive information because Defendant is knowledgeable of and benefitting from Adyen’s fraud prevention services” and “assists Adyen in intercepting and indefinitely storing this sensitive information” were too conclusory. Smith, 2024 U.S. Dist. LEXIS 194481, at *13. It reasoned: “Without further information, the Court cannot plausibly infer from Defendant’s use of Adyen’s fraud prevention services alone that Defendant knew that Adyen’s services were based on its allegedly illegal interception and storing of financial information, collected during Adyen’s online processing of customers’ purchases.” Id.

Next, the Court similarly found that Plaintiff plausibly alleged Adyen recorded a confidential communication without consent in violation of CIPA Section 632. A communication is confidential under this section if a party “has an objectively reasonable expectation that the conversation is not being overheard or recorded.” Flanagan v. Flanagan, 27 Cal. 4th 766, 776-77 (2002). It explained that “[w]hether a party has a reasonable expectation of privacy is a context-specific inquiry that should not be adjudicated as a matter of law unless the undisputed material facts show no reasonable expectation of privacy.” Smith, 2024 U.S. Dist. LEXIS 194481, at *18-19. At the pleading stage, the Court found Plaintiff’s allegation that she reasonably expected her sensitive financial information would remain private was sufficient.

However, as with the Section 631(a) claim, the Court held that Plaintiff did not plead facts establishing Yeti’s derivative liability under the standard for aiding and abetting liability. Under Saunders v. Superior Court, 27 Cal. App. 4th 832, 846 (1994), the Court explained a defendant is liable if they a) know the other’s conduct is wrongful and substantially assist them or b) substantially assist the other in accomplishing a tortious result and the defendant’s own conduct separately breached a duty to the plaintiff. The Court found that the Complaint lacked sufficient non-conclusory allegations that Yeti knew or intended to assist Adyen’s alleged violation. See Smith, 2024 U.S. Dist. LEXIS 194481, at *16.

Lastly, the Court analyzed Plaintiff’s invasion of privacy claim under the California Constitution using the framework from Hill v. Nat’l Coll. Athletic Ass’n, 7 Cal. 4th 1, 35-37 (1994). For a valid invasion of privacy claim, Plaintiff had to show 1) a legally protected privacy interest, 2) a reasonable expectation of privacy under the circumstances, and 3) a serious invasion of privacy constituting “an egregious breach of the social norms.” Id.

The Court found Plaintiff had a protected informational privacy interest in her personal and financial data, as “individual[s] ha[ve] a legally protected privacy interest in ‘precluding the dissemination or misuse of sensitive and confidential information.’” Smith, 2024 U.S. Dist. LEXIS 194481, at *17. It also found Plaintiff plausibly alleged a reasonable expectation of privacy at this stage given the sensitivity of financial data, even if “voluntarily disclosed during the course of ordinary online commercial activity,” as this presents “precisely the type of fact-specific inquiry that cannot be decided on the pleadings.” Id. at *19-20.

Conversely, the Court found Plaintiff did not allege facts showing Yeti’s conduct was “an egregious breach of the social norms” rising to the level of a serious invasion of privacy, which requires more than “routine commercial behavior.” Id. at *21. The Court explained that while Yeti’s simple use of Adyen for payment processing cannot amount to a serious invasion of privacy, “if Defendant was aware of Adyen’s usage of the personal information for additional purposes, this may present a plausible allegation that Defendant’s conduct was sufficiently egregious to survive a Motion to Dismiss.” Id. However, absent such allegations about Yeti’s knowledge, this claim failed.

In the end, the Court dismissed Plaintiff’s Complaint but granted leave to amend to correct the deficiencies, so this case may not be over. The Court’s grant of “leave to amend” signals that if Plaintiff can sufficiently allege Yeti’s knowledge of or intent to facilitate Adyen’s use of customer data, these claims could proceed. As companies increasingly rely on third parties to handle customer data, we will likely see more litigation in this area, testing the boundaries of corporate liability for data privacy violations.

So, what is the takeaway? As a brilliant lawyer, your company’s goal should be to prevent privacy pitfalls before they snowball into costly litigation. Key things to keep in mind are 1) ensure your privacy policies and terms of use are properly structured as enforceable clickwrap or browsewrap agreements, with conspicuous notice and clear assent mechanisms; 2) conduct thorough due diligence on third-party service providers’ data practices and contractual protections; 3) implement transparent data collection and sharing disclosures for informed customer consent; and 4) stay abreast of evolving privacy laws.

In essence, taking these proactive steps can help mitigate the risks of derivative liability for third-party misconduct and, most importantly, foster trust with your customers.

Lawsuit Challenges CFPB’s ‘Buy Now, Pay Later’ Rule

On Oct. 18, 2024, fintech trade group Financial Technology Association (FTA) filed a lawsuit challenging the Consumer Financial Protection Bureau’s (CFPB) final interpretative rule on “Buy Now, Pay Later” (BNPL) products. Released in May 2024, the CFPB’s interpretative rule classifies BNPL products as “credit cards” and their providers as “card issuers” and “creditors” for purposes of the Truth in Lending Act (TILA) and Regulation Z.

The FTA filed its lawsuit challenging the CFPB’s interpretative rule in the U.S. District Court for the District of Columbia. The FTA alleges that the CFPB violated the Administrative Procedure Act’s (APA) notice-and-comment requirements by imposing new obligations on BNPL providers under the label of an “interpretive rule.” The FTA also alleges that the CFPB violated the APA’s requirement that agencies act within their statutory authority by ignoring TILA’s effective-date requirement for new disclosure requirements and imposing obligations beyond those permitted by TILA. The FTA also contends that the CFPB’s interpretive rule is arbitrary and capricious because it is “a poor fit for BNPL products,” grants “insufficient time for BNPL providers to come into compliance with the new obligations” imposed by the rule, and neglects “the serious reliance interests that [the CFPB’s] prior policy on BNPL products engendered.”

In a press release announcing its lawsuit, the FTA said the BNPL industry would welcome regulations that fit the unique characteristics of BNPL products, but that the CFPB’s interpretive rule is a poor fit that risks creating confusion for consumers. “Unfortunately, the CFPB’s rushed interpretive rule falls short on multiple counts, oversteps legal bounds, and risks creating confusion for consumers,” FTA President and CEO Penny Lee said. “The CFPB is seeking to fundamentally change the regulatory treatment of pay-in-four BNPL products without adhering to required rulemaking procedures, in excess of its statutory authority, and in an unreasonable manner.”

The FTA’s pending lawsuit notwithstanding, BNPL providers may wish to consult with legal counsel regarding compliance with the CFPB’s interpretive rule. Retailers marketing BNPL products should also consider working with legal counsel to implement third-party vendor oversight policies to enhance BNPL-partner compliance with the rule.

Federal Contractors Beware – More Data Disclosures Coming!

On October 29, 2024, the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) published a Freedom of Information Act (FOIA) notice, inviting federal contractors to respond to FOIA requests that the OFCCP received related to federal contractors’ 2021 Type 2 EEO-1 Consolidated Reports. These reports, required of federal contractors and subcontractors with at least 50 employees, contain data critical to the government’s diversity efforts consistent with anti-discrimination mandates under Title VII and Executive Order 11246. Contractors have previously relied on FOIA Exemption 4 to protect against disclosing sensitive commercial information that could impact competitive positioning, but in late December 2023, as previously reported here, a federal court ruling concluded that certain demographic data did not qualify as confidential under FOIA Exemption 4. That court decision may spur an increase in FOIA requests for EEO-1 reporting information.

Contractors who wish to object to the disclosure of their EEO-1 reporting information must do so via OFCCP’s online portal, email, or mail on or before December 9, 2024. Per the OFCCP’s notice, contractors can object to releasing their 2021 EEO-1 Type 2 data by providing evidence showing the data satisfies FOIA Exemption 4. To do this, contractors should:

  • Specifically identify the objectionable data;
  • Explain why the data is commercial or competitive in nature such that it should be treated as confidential;
  • Outline the processes the contractor has in place to safeguard the data;
  • Identify any prior assurances or expectations that the data would remain confidential; and
  • Detail the damage that would occur if the data were disclosed, including by conducting assessments of how disclosure would impact business operations.

In addition to raising timely objections to disclosure of data, contractors should also implement clear policies to maintain a consistent approach to data confidentiality. Specifically, contractors should be thoughtful and consistent as to how they define confidential information and the protection measures they take related to such information.

FOIA requests and court decisions in this space will likely continue to make striking a balance between government transparency and protecting contractors’ confidential business information more difficult. To navigate these changes, federal contractors should remain vigilant by staying informed, preparing objections to FOIA requests, and consulting with legal counsel to ensure compliance with this evolving area of law.

Social Media’s Legal Dilemma: Curated Harmful Content

Walking the Line Between Immunity and Liability: How Social Media Platforms May Be Liable for Harmful Content Specifically Curated for Users

As the proliferation of harmful content online has become easier and more widespread through social media, review websites and other online public forums, businesses and politicians have pushed to reform and limit the sweeping protections afforded by Section 230 of the Communications Decency Act, which is said to have created the Internet. Congress enacted Section 230 of the Communications Decency Act of 1996 “for two basic policy reasons: to promote the free exchange of information and ideas over the Internet and to encourage voluntary monitoring for offensive or obscene material.” Congress intended for the internet to flourish, and the goal of Section 230 was to promote the unhindered development of internet businesses, services, and platforms.

To that end, Section 230 immunizes online service providers and interactive computer services from liability for posting, re-publishing, or allowing public access to offensive, damaging, or defamatory information or statements created by a third party. Specifically, Section 230(c)(1) provides,

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

[47 U.S.C. § 230(c)(1)]

Section 230 has been widely interpreted to protect online platforms from being held liable for user-generated content, thereby promoting the free exchange of information and ideas over the Internet. See, e.g., Hassell v. Bird, 5 Cal. 5th 522 (2018) (Yelp not liable for defamatory reviews posted on its platform and cannot be forced to remove them); Doe II v. MySpace Inc., 175 Cal. App. 4th 561, 567–575 (2009) (§ 230 immunity applies to tort claims against a social networking website, brought by minors who claimed that they had been assaulted by adults they met on that website); Delfino v. Agilent Technologies, Inc., 145 Cal. App. 4th 790, 804–808 (2006) (§ 230 immunity applies to tort claims against an employer that operated an internal computer network used by an employee to allegedly communicate threats against the plaintiff); Gentry v. eBay, Inc., 99 Cal. App. 4th 816, 826-36 (Cal. Ct. App. 2002) (§ 230 immunity applies to tort and statutory claims against an auction website, brought by plaintiffs who allegedly purchased forgeries from third party sellers on the website).

Thus, under § 230, lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content—are barred. Under the statutory scheme, an “interactive computer service” qualifies for immunity so long as it does not also function as an “information content provider” for the portion of the statement or publication at issue. Even users or platforms that “re-post” or “publish” allegedly defamatory or damaging content created by a third party are exempted from liability. See Barrett v. Rosenthal, 40 Cal. 4th 33, 62 (2006). Additionally, merely compiling false and/or misleading content created by others or otherwise providing a structured forum for dissemination and use of that information is not enough to confer liability. See, e.g., Gentry v. eBay, Inc., 99 Cal. App. 4th 816 (the critical issue is whether eBay acted as an information content provider with respect to the information claimed to be false or misleading); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1122-1124 (9th Cir. 2003) (Matchmaker.com not liable for fake dating profile of celebrity who started receiving sexual and threatening emails and voicemails).

Recently, however, the Third Circuit found that Section 230 did not immunize and protect popular social media platform TikTok from suit arising from a ten-year-old’s death after she attempted a “Blackout Challenge” based on videos she watched on her TikTok “For You Page.” See Anderson v. TikTok, Inc., 116 F.4th 180 (3d Cir. 2024). TikTok is a social media platform where users can create, post, and view videos. Users can search for specific content or watch videos recommended by TikTok’s algorithm on their “For You Page” (FYP). This algorithm customizes video suggestions based on a range of factors, including a user’s age, demographics, interactions, and other metadata—not solely on direct user inputs. Some videos on TikTok’s FYP are “challenges” that encourage users to replicate the actions shown. One such video, the “Blackout Challenge,” urged users to choke themselves until passing out. TikTok’s algorithm recommended this video to a ten-year-old girl who attempted it and tragically died from asphyxiation.

The deciding question was whether TikTok’s algorithm, and the inclusion of the “Blackout Challenge” video on a user’s FYP, crosses the threshold between an immune publisher and a liable creator. Plaintiff argued that TikTok’s algorithm “amalgamat[es] [] third-party videos,” which results in “an expressive product” that “communicates to users . . . that the curated stream of videos will be interesting to them.” The Third Circuit agreed, finding that a platform’s algorithm reflecting “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product,” and therefore, TikTok’s algorithm, which recommended the Blackout Challenge on decedent’s FYP, was TikTok’s own “expressive activity.” As such, Section 230 did not bar claims against TikTok arising from TikTok’s recommendations via its FYP algorithm because Section 230 immunizes only information “provided by another,” and here, the claims concerned TikTok’s own expressive activity.

The Court was careful to note that its conclusion rested specifically on the fact that TikTok’s promotion of the Blackout Challenge video on decedent’s FYP was not contingent on any specific user input, i.e., decedent did not search for and view the Blackout Challenge video through TikTok’s search function. TikTok has certainly taken issue with the Court’s ruling, contending that if websites lose § 230 protection whenever they exercise “editorial judgment” over the third-party content on their services, then the exception would swallow the rule. Perhaps websites seeking to avoid liability will refuse to sort, filter, categorize, curate, or take down any content, which may result in unfiltered and randomly placed objectionable material on the Internet. On the other hand, some websites may err on the side of removing any potentially harmful third-party speech, which would chill the proliferation of free expression on the web.

The aftermath of the ruling remains to be seen but for now social media platforms and interactive websites should take note and re-evaluate the purpose, scope, and mechanics of their user-engagement algorithms.

FTC Social Media Staff Report Suggests Enforcement Direction and Expectations

The FTC’s staff report summarizes how the agency views the operations of social media and video streaming companies. Of particular interest is the insight it gives into potential enforcement focus in the coming months and into 2025. The report, issued last month, flagged the following concerns:

  1. The high volume of information collected from users, including in ways they may not expect;
  2. Companies relying on advertising revenue that was based on use of that information;
  3. Use of AI over which the FTC felt users did not have control; and
  4. A gap in protection of teens (who are not subject to COPPA).

As part of its report, the FTC recommended changes in how social media companies collect and use personal information. Those recommendations stretched over five pages of the report and fell into four categories. Namely:

  1. Minimizing what information is collected to that which is needed to provide the company’s services. This recommendation also folded in concepts of data deletion and limits on information sharing.
  2. Putting guardrails around targeted digital advertising. Especially, the FTC indicated, if the targeting is based on use of sensitive personal information.
  3. Providing users with information about how automated decisions are being made. This would include not just transparency, the FTC indicated, but also having “more stringent testing and monitoring standards.”
  4. Using COPPA not only as a baseline for interactions with children under 13, but also as a model for interacting with teens.

The FTC also signaled in the report its support of federal privacy legislation that would (a) limit “surveillance” of users and (b) give consumers the type of rights that we are seeing passed at a state level.

Putting it into Practice: While this report was directed at social media companies, the FTC recommendations can be helpful for all entities. They signal the types of safeguards and restrictions that the agency is beginning to expect when companies are using large amounts of personal data, especially that of children and/or within automated decision-making tools like AI.
