Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech), is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the software, and that as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after leaving their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with a Character.AI character in human-like conversations – for example, to answer questions, write a story, translate text or write code – powered by Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son quickly became addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. He primarily engaged with Character.AI characters from the HBO series Game of Thrones, and his conversations with the show’s female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts, such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell went to extraordinary lengths to continue engaging with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered on an alleged lack of safeguards in Character.AI and the exploitation of minors. The most significant claims are discussed below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone; it is personal and movable; it is a “good” rather than an idea; copies of Character.AI are uniform and not customized; an unlimited number of copies can be obtained; and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good, and courts have wrestled with whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services-or-products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants knew of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include that the software trains itself on highly toxic and sexual data sets, that tactics designed to convince users a chatbot is human are widely known in the industry to manipulate users’ emotions and exploit their vulnerability, and that minors are the most susceptible to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and by intentionally allowing minors to use Character.AI.

The defective design claims rest on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” The alleged dangers of this design include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and, in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and that numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations allegedly constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots as therapists—yet not requiring them to adhere to any professional standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim, arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by pointing to her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct allegedly caused the Plaintiff severe emotional distress through the loss of her son.

  • Other Claims

The Plaintiff also asserts claims for negligence per se and unjust enrichment, as well as a survivor action and loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. The Colorado AI Act (covered here) is set to become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates any “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” against Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias, discrimination and civil rights violations in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” That law was vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to take shape – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

Probate & Fiduciary Litigation Newsletter – November 2023

Voluntary Personal Representative Is a “Prior Appointment” For Purposes of the Limitation Period for Commencing Formal Probate

In The Matter of the Estate of Patricia Ann Slavin, 492 Mass. 551 (2023)

Does the position of voluntary personal representative under G. L. c. 190B, § 3-1201 of the Massachusetts Uniform Probate Code (MUPC) constitute a “prior appointment,” which operates to exempt an estate from the requirement contained in G. L. c. 190B, § 3-108 that probate, testacy, and appointment proceedings be filed within three years of a decedent’s death? The Massachusetts Supreme Judicial Court (SJC) answered this question in the affirmative in In the Matter of the Estate of Patricia Ann Slavin, 492 Mass. 551 (2023).

This case arose out of the murder of Patricia Slavin in May 2016 in circumstances allegedly giving rise to claims for wrongful death. A few months after her death, the decedent’s daughter (petitioner) filed a voluntary administration statement in the Probate and Family Court pursuant to § 3-1201 and thereafter became the voluntary personal representative of her mother’s estate. The petitioner’s status as voluntary personal representative allowed her to administer her mother’s small estate without initiating probate proceedings.

More than three years later, the petitioner—having realized her position as voluntary personal representative did not grant her authority to pursue a wrongful death claim—filed a petition for formal probate in the Probate and Family Court seeking court appointment as personal representative. The petitioner argued that the three-year statute of limitations governing probate proceedings was inapplicable because it excepts otherwise untimely filings for estates in which there has been a “prior appointment.” The Probate and Family Court dismissed the petition as untimely, finding that her position as voluntary personal representative did not qualify as a “prior appointment” under the statute. The judge’s decision relied on a procedural guide published by an administrative office of the Probate and Family Court which provided that the authority of a voluntary personal representative does not result in an official appointment by the court.

The SJC granted the petitioner’s application for direct appellate review and held that both the plain language of G. L. c. 190B, §§ 3-108 and 3-1201 and the purpose of the MUPC support the conclusion that the position of voluntary personal representative is indeed a “prior appointment.” The SJC reversed the judgment of dismissal and remanded for further proceedings.

First, the SJC concluded that under the plain language of § 3-1201 the position of voluntary personal representative constitutes an “appointment,” given that the register of probate may “issue a certificate of appointment to [a] voluntary personal representative”—language that the SJC refused to dismiss as mere surplusage. This language plainly contradicted the administrative guide on which the Probate and Family Court judge relied. The SJC also considered the plain language of § 3-108, which does not limit the type of “prior appointment” that qualifies for an exception from the statute of limitations.

Second, the SJC held that this conclusion was consistent with the purpose of the ultimate time limit. Section 3-108 is intended to establish a basic limitation period within which it may be determined whether a decedent left a will and to commence administration of an estate. Where a voluntary personal representative has been named, the determination of whether a will exists has been made, and administration of the estate has commenced.

Finally, the SJC held that this interpretation was consistent with the legislature’s goal of “flexible and efficient administration” of estates in that it encourages people to continue to utilize voluntary administration for smaller estates without fear of being barred from seeking greater authority more than three years after the decedent’s death.

Takeaway: Voluntary administration can be used for administration of smaller estates without risk that the three-year limitation period for commencing formal probate proceedings will bar future probate, testacy, or appointment proceedings, if necessary.

Conformed Copy of Will Not Admitted to Probate

In Matter of Estate of Slezak, 218 A.D.3d 946 (3rd Dep’t July 13, 2023)

Where a conformed copy of a will was located where the decedent said his will could be found, no potential heir contested the validity of the will and testimony established that the will was not revoked, should the conformed copy of the will be admitted to probate? In Matter of Estate of Slezak, 218 A.D.3d 946 (3rd Dep’t July 13, 2023), New York’s Appellate Division, Third Department, answered that question in the negative, illustrating how difficult it can be to probate a copy of a will rather than the original.

In Slezak, testimony established that the decedent told a witness that his will was in a lockbox under his bed, and that he had left everything to a certain beneficiary. When the lockbox was opened, there was a conformed copy of the will, with the decedent’s and the witnesses’ signatures indicated with “s/[names].” The will left everything to the beneficiary indicated by the testimony. No potential heir contested the validity of the conformed copy. Nonetheless, the Surrogate denied probate and the Appellate Division affirmed.

New York SCPA § 1407 and Third Department case law provide that “A lost or destroyed will may be admitted to probate only if [1] It is established that the will has not been revoked, and [2] Execution of the will is proved in the manner required for the probate of an existing will, and [3] All of the provisions of the will are clearly and distinctly proved by each of at least two credible witnesses or by a copy or draft of the will proved to be true and complete.” The Surrogate found that petitioner had established the first two elements but had fallen short on the third. The Appellate Division agreed that “petitioner failed to show that the conformed copy of decedent’s will was ‘true and complete,’” stating that “[a]lthough petitioner tendered a conformed copy of decedent’s will, there was no other proof from the hearing confirming that the conformed copy was identical to decedent’s original will.”

Takeaway: Slezak reinforces the importance of being sure that the original version of a will is available. While there appears to have been no contest to the validity of the conformed copy of the will, the courts followed the statute strictly and denied probate when one of the statutory elements for admitting the conformed copy was lacking.

Beneficiary Has a Right to an Accounting Despite the Trustee’s Return of Funds

Kaylie v. Kaylie, 2023 WL 6395345 (1st Dep’t October 3, 2023)

Can the beneficiary of a trust require a trustee to provide an accounting despite the trustee’s return to the trust of the funds said to have been diverted? In Kaylie v. Kaylie, 2023 WL 6395345 (1st Dep’t October 3, 2023), New York’s Appellate Division, First Department, answered that question in the affirmative, reversing the trial court’s determination that no accounting was necessary under the circumstances.

In Kaylie, a beneficiary of a family trust commenced an Article 77 proceeding in Supreme Court upon learning that trust bank accounts unexpectedly had zero balances. In response, the trustee argued, among other things, that the trust “irrefutably has been made whole by the restoration of those funds, thus obviating any purported need on the part of [the beneficiary] for an accounting of those funds.” The trustee also argued that she had been removed as trustee since the dispute arose, limiting her access to the bank records of the trust. The trial court agreed, holding that since the beneficiary had not “show[n] misappropriation of funds” and the trustee no longer held that position, “the intrusion of an [accounting] is not warranted….”

The Appellate Division disagreed and reversed, in a decision reaffirming the principle that a beneficiary “is entitled to a judicial accounting by reason of the fiduciary relationship between” the beneficiary and the trustee. The court stated: “The fact that respondent has returned the trust’s funds with interest does not affect petitioner’s right to an accounting.”

Takeaway: The Kaylie decision confirms the primacy of a beneficiary’s right to an accounting from the trustee of a trust, even where the trustee has a “no harm, no foul” argument based on the return of funds to a trust and the trustee’s departure as trustee.

© 2023 Goulston & Storrs PC.

By Charles R. Jacob III, Jennifer L. Mikels, Molly Quinn, Gary M. Ronan, and Nora A. Saunders of Goulston & Storrs.
