Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech), is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the software and that, as a result, a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after they left their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage a Character.AI character in human-like conversations – for example, to answer questions, write a story, translate text or write code – powered by Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell would take extraordinary measures to engage with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff asserts both strict liability and negligence claims for failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone; it is personal and movable; it is a “good” rather than an idea; copies of Character.AI are uniform and not customized; an unlimited number of copies can be obtained; and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services or products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include the software’s training on highly toxic and sexual data sets, the common industry knowledge that tactics designed to convince users a chatbot is human manipulate users’ emotions and exploit their vulnerability, and the fact that minors are most susceptible to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and by intentionally allowing minors to use Character.AI.

The defective design claims argue the software is defectively designed based on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained based on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations allegedly constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots as therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct allegedly caused the Plaintiff severe emotional distress, i.e., the loss of her son.

  • Other Claims

The Plaintiff also asserts claims for negligence per se and unjust enrichment, a survival action, and loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. As it stands, the Colorado AI Act (covered here) will become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates a “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” against Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System remains unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws focus on detecting and preventing bias, discrimination and civil rights violations in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” Governor Newsom vetoed the law as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to take shape – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

Federal District Court in Florida Holds FCA’s Qui Tam Provisions Unconstitutional

In the Supreme Court’s 2023 decision in United States ex rel. Polansky v. Executive Health Resources, Inc., three justices expressed concern that the False Claims Act’s qui tam provisions violate Article II of the Constitution and called for a case presenting that question. Justice Clarence Thomas penned a dissent explaining that private relators wield significant executive authority yet are not appointed as “Officers of the United States” under Article II. Justice Brett Kavanaugh and Justice Amy Coney Barrett, concurring in the main opinion, agreed with Justice Thomas that this constitutional issue should be considered in an appropriate case.

Earlier this year, several defendants in a non-intervened qui tam lawsuit in the Middle District of Florida took up the challenge. The qui tam, styled United States ex rel. Zafirov v. Florida Medical Associates, LLC et al., involves allegations of Medicare Advantage coding fraud. After several years of litigation, the defendants moved for judgment on the pleadings, arguing the relator’s qui tam action was unconstitutional, citing Justice Thomas’s dissent in Polansky.

The defendants’ motion prompted a statement of interest from the United States and participation as amici by the U.S. Chamber of Commerce and the Anti-Fraud Coalition. The Court also asked for supplemental briefs on Founding-era historical evidence regarding federal qui tam enforcement.

On September 30, 2024, Judge Kathryn Kimball Mizelle granted the defendants’ motion, agreeing the relator was unconstitutionally appointed and dismissing her complaint. Judge Mizelle, who clerked for Justice Thomas, held that a private FCA relator exercises significant authority that is constitutionally reserved to the executive branch, including the right to bring an enforcement action on behalf of the United States and recover money for the U.S. Treasury. In doing so, a relator chooses which claims to prosecute, which theories to raise, which defendants to sue, and which arguments to make on appeal, resulting in precedent that binds the United States. Yet a relator is not appointed by the president, a department head, or a court of law as Article II requires, making the qui tam device unconstitutional.

Judge Mizelle distinguished historical qui tam statutes, which were largely abandoned early in our nation’s history, on the ground that few gave a relator the level of authority the FCA does. And while the FCA itself dates back to the Civil War, the statute largely remained dormant (aside from a flurry of use in the 1930s and 40s) until the 1986 amendments set off a new wave of qui tam litigation.

The ruling is significant for the future of the FCA. As Judge Mizelle’s opinion explains, most FCA actions are brought by relators as opposed to the government itself. If the decision is upheld on appeal, a number of outcomes are possible. If the FCA is to continue as a significant source of revenue generation for the government, the DOJ must devote more resources to bringing FCA actions directly. Congress may also consider amending the FCA’s qui tam provisions to limit relators’ authority to conduct FCA litigation, thereby maintaining the statute as a viable avenue for whistleblowing.

One thing is almost certain, however. FCA defendants across the country will likely raise similar arguments in light of Judge Mizelle’s ruling. Whether in Zafirov or another case, it appears the Supreme Court will get to decide the constitutionality of the FCA’s qui tam provisions sooner rather than later.

US District Court Sets Aside the FTC’s Noncompete Ban on a Nationwide Basis

On August 20, the US District Court for the Northern District of Texas held that the Federal Trade Commission’s (FTC) final rule banning noncompetes is unlawful and “set aside” the rule. “The Rule shall not be enforced or otherwise take effect on its effective date of September 4, 2024, or thereafter.”

The district court’s decision has a nationwide effect. The FTC is very likely to appeal to the Fifth Circuit. Meanwhile, employers need not concern themselves for now with the rule’s notice obligations, and the FTC’s purported nationwide bar on noncompetes is ineffective. Employers do, however, need to remain mindful of the broader trend of increasing hostility to employee noncompetes.

The Court’s Decision

On April 23, the FTC voted 3-2 to publish a final rule with sweeping effects, purporting to bar prospectively and invalidate retroactively most employee noncompete agreements. The court’s decision addressed cross-motions for summary judgment on the propriety of the FTC’s rule. The court denied the FTC’s motion and granted the plaintiffs’ motion for two reasons.

First, the court held that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition under Section 6(g) of the FTC Act. In reaching its holding, the court considered the statute’s plain language, Section 6(g)’s structure and location within the FTC Act, the absence of any penalty provisions for violations of rules promulgated under Section 6(g), and the history of the FTC Act and subsequent amendments. Because the FTC lacked substantive rulemaking authority with respect to unfair methods of competition, and hence authority to issue the final noncompete rule, the court did not consider additional arguments regarding the scope of the FTC’s statutory rulemaking authority. Notably, the court did not consider whether the final rule could overcome the major questions doctrine.

Second, the court held that the FTC’s final noncompete rule was arbitrary and capricious under the Administrative Procedure Act (APA) because it was “unreasonably overbroad without a reasonable explanation” and failed to establish “‘a rational connection between the facts found and the choice made.’” The court heavily discounted studies that the FTC had relied upon that purported to measure the impact of statewide noncompete bans because no state had ever enacted a ban as broad as the FTC’s ban: “[t]he FTC’s evidence compares different states’ approaches to enforcing non-competes based on specific factual situations — completely inapposite to the Rule’s imposition of a categorical ban.” “In sum, the Rule is based on inconsistent and flawed empirical evidence, fails to consider the positive benefits of non-compete agreements, and disregards the substantial body of evidence supporting these agreements.” The court further held that the FTC failed to sufficiently address alternatives to issuing the rule.

In terms of a remedy, the court “set aside” the FTC’s final noncompete rule. The “set aside” language is drawn verbatim from the APA. The court noted that the FTC’s argument that any relief should be limited to the named plaintiffs in the case was unsupported by the APA. Instead, the court noted that its decision has a nationwide effect, is not limited to the parties in the case, and affects all persons in all judicial districts equally.

Further Litigation

In addition to a likely FTC appeal to the Fifth Circuit, two other cases are pending that likewise challenge the FTC’s final noncompete rule. First, in ATS Tree Services v. FTC, pending in the Eastern District of Pennsylvania, the district court previously denied the plaintiff’s motion for a preliminary injunction. Second, in Properties of the Villages, Inc. v. FTC, pending in the Middle District of Florida, the court enjoined the FTC from enforcing the rule against the named plaintiffs. A final judgment in one of these cases that differs from the result in the Northern District of Texas could eventually reach the courts of appeals and potentially lead to a circuit split to be resolved by the US Supreme Court.

Takeaways for Employers

For now, the FTC’s noncompete rule has been set aside on a nationwide basis, and employers need not comply with the rule’s notice obligations. Noncompetes remain enforceable to the same extent they were before the FTC promulgated its final rule. Depending on how further litigation evolves, the rule could be revived, a temporary split in authority could arise (leading to confusion where the rule is enforceable in certain jurisdictions but not in others), or the rule could remain set aside.

An important part of the court’s decision is its rejection of the FTC’s factual findings in support of the rule as poorly reasoned and poorly supported. As we discussed in our prior client alerts, we anticipate that employees may cite the FTC’s findings to support challenges to noncompete enforceability under state law. The court’s analysis may substantially undermine the persuasive authority of those findings.

Employers should anticipate that noncompete enforcement in the coming years will remain uncertain as courts, legislatures, and government agencies continue to erode the legal and policy justifications for employee noncompetes. This counsels in favor of a “belt and suspenders” approach for employers to protect their legitimate business interests rather than relying solely on noncompetes.