Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech) is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and that as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after leaving their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with a Character.AI character – whether for human-like conversation, such as answering questions, writing a story, translating text or writing code – based on Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell would use extraordinary measures to engage with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for a failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone, it is personal and movable, it is a “good” rather than an idea, copies of Character.AI are uniform and not customized, there are an unlimited number of copies that can be obtained and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services or products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include the software’s use of highly toxic and sexual data sets to train itself, common industry knowledge that tactics designed to convince users they are interacting with a human manipulate users’ emotions and exploit their vulnerability, and the heightened susceptibility of minors to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and intentionally allowing minors to use Character.AI.

The defective design claims rest on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and that numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations allegedly constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots as therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct allegedly caused the Plaintiff severe emotional distress, namely the loss of her son.

  • Other Claims

The Plaintiff also asserts claims for negligence per se and unjust enrichment, as well as a survivor action and a claim for loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. Currently, the Colorado AI Act (covered here) will become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates a “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” against Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias, discrimination and civil rights violations in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” That law was vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to take shape – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

FTC: Three Enforcement Actions and a Ruling

In today’s digital landscape, the exchange of personal information has become ubiquitous, often without consumers fully comprehending the extent of its implications.

The recent actions undertaken by the Federal Trade Commission (FTC) shine a light on the intricate web of data extraction and mishandling that pervades our online interactions. From the seemingly innocuous permission requests of game apps to the purported protection promises of security software, consumers find themselves at the mercy of data practices that blur the lines between consent and exploitation.

The FTC’s proposed settlements with companies like X-Mode Social (“X-Mode”) and InMarket, two data aggregators, and Avast, a security software company, underscore the need for businesses to appropriately secure and limit the use of consumer data, including information previously considered innocuous, such as browsing and location data. In a world where personal information serves as currency, ensuring consumer privacy compliance has never been more critical – or posed such a commercial risk for failing to get it right.

X-Mode and InMarket Settlements: The proposed settlements with X-Mode and InMarket concern numerous allegations based on the mishandling of consumers’ location data. Both companies supposedly collected precise location data through their own mobile apps and those of third parties (through software development kits). X-Mode is alleged to have sold precise location data (advertised as being 70% accurate within 20 meters or less) linked to timestamps and unique persistent identifiers (i.e., names, email addresses, etc.) of its consumers to private government contractors without obtaining proper consent. Plotting this data on a map makes it easy to reveal each person’s movements over time.

InMarket purportedly cross-referenced the location data it collected with points of interest to sort consumers into particularized audience segments for targeted advertising purposes without adequately informing consumers – examples of audience segments include parents of preschoolers, Christian church attendees, and “wealthy and not healthy,” among other groupings.

Avast Settlement: Avast, a security software company, allegedly sold granular and re-identifiable browsing information of its consumers despite assuring them it would protect their privacy. Avast allegedly collected extensive browsing data through its antivirus software and browser extensions while assuring consumers that their browsing data would be used only in aggregated and anonymous form. The data collected by Avast revealed visits to various websites that could be attributed to particular people and allowed for inferences to be drawn about such individuals – examples include academic papers on symptoms of breast cancer, education courses on tax exemptions, government jobs in Fort Meade, Maryland with a salary over $100,000, links to FAFSA applications and directions from one location to another, among others.

Sensitivity of Browsing and Location Data

It is important to note that none of the underlying datasets in question contained traditional types of personally identifiable information (“PII”), such as names, identification numbers or physical descriptions. Even so, the three proposed settlements by the FTC underscore the sensitive nature of browsing and location data due to the insights such data reveals, such as religious beliefs, health conditions, and financial status, and the ease with which those insights can be linked to specific individuals.

In the digital age, the amount of data available about individuals online and collected by various companies makes re-identification easier every day. Even when traditional PII is not included in a data set, linking enough data points can create a profile or understanding of an individual. When that profile is then tied to an identifier (such as a username, phone number, or email address provided when downloading an app or setting up an account) and cross-referenced with publicly available data, such as a name, email, phone number or content on social media sites, it can allow for deep insights into an individual. Despite the absence of traditional types of PII, such data poses significant privacy risks due to the potential for re-identification and the intimate details about individuals’ lives that it can divulge.
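To illustrate the mechanics of re-identification, the following is a minimal, purely hypothetical Python sketch: all data, column names, and identifiers are invented for illustration and are not drawn from the FTC matters discussed above. It shows how an ostensibly “anonymous” browsing log keyed only to a persistent device identifier can be joined to a separate dataset that links the same identifier to an email address, producing sensitive inferences about a named individual.

    import pandas as pd

    # Hypothetical "anonymized" browsing log: no name or email, only a persistent device ID.
    browsing = pd.DataFrame({
        "device_id": ["abc-123", "abc-123", "xyz-789"],
        "timestamp": ["2024-01-05 08:12", "2024-01-05 21:40", "2024-01-06 10:02"],
        "url": [
            "example-health.org/symptoms/breast-cancer",
            "example-jobs.gov/fort-meade-md?salary=100000",
            "example-maps.com/directions?from=home&to=clinic",
        ],
    })

    # Hypothetical second dataset (e.g., collected at app sign-up) tying the same ID to a person.
    signups = pd.DataFrame({
        "device_id": ["abc-123"],
        "email": ["jane.doe@example.com"],
    })

    # A simple join re-attaches an identity to the "anonymous" browsing history,
    # enabling inferences about a specific person's health, employment, and movements.
    reidentified = browsing.merge(signups, on="device_id", how="inner")
    print(reidentified[["email", "timestamp", "url"]])

The point is not the particular tooling: any shared persistent identifier is enough to collapse the distinction between “anonymous” behavioral data and an identified profile, which is why the FTC treats such data as sensitive.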

The FTC emphasizes the imperative for companies to recognize and treat browsing and location data as sensitive information and to implement appropriately robust safeguards to protect consumer privacy. This is especially true when the data set includes information with the precision of that cited by the FTC in its proposed settlements.

Accountability and Consent

With browsing and location data, there is also a concern that consumers may not be fully aware of how their data is used. For instance, Avast claimed to protect consumers’ browsing data and then sold that very same browsing information, often without notice to consumers. When Avast did inform customers of its practices, the FTC claims it deceptively stated that any sharing would be “anonymous and aggregated.” Similarly, X-Mode claimed it would use location data for ad personalization and location-based analytics. Consumers were unaware such location data was also sold to government contractors.

The FTC has recognized that a company may need to process an individual’s information to provide them with services or products requested by the individual. The FTC also holds that such processing does not mean the company is then free to collect, access, use, or transfer that information for other purposes (e.g., marketing, profiling, background screening, etc.). Essentially, purpose matters. As the FTC explains, a flashlight app provider cannot collect, use, store, or share a user’s precise geolocation data, and a tax preparation service cannot use a customer’s information to market other products or services.

If companies want to use consumer personal information for purposes other than providing the requested product or services, the FTC states that companies should inform consumers of such uses and obtain consent to do so.

The FTC aims to hold companies accountable for their data-handling practices and ensure that consumers are provided with meaningful consent mechanisms. Companies should handle consumer data only for the purposes for which data was collected and honor their privacy promises to consumers. The proposed settlements emphasize the importance of transparency, accountability, meaningful consent, and the prioritization of consumer privacy in companies’ data handling practices.

Implementing and Maintaining Safeguards

Data, especially data that provides insights and inferences about individuals, is extremely valuable to companies, but that same data puts those individuals’ privacy at risk. Companies that sell or share data sometimes include limitations on its use, but not all contracts contain such restrictions, or restrictions sufficient to safeguard individuals’ privacy.

For instance, the FTC alleges that some of Avast’s underlying contracts did not prohibit the re-identification of Avast’s users. Where Avast’s underlying contracts prohibited re-identification, the FTC alleges that purchasers of the data were still able to match Avast users’ browsing data with information from other sources if the information was not “personally identifiable.” Avast also failed to audit or confirm that purchasers of data complied with its prohibitions.

The proposed complaint against X-Mode alleges that, at least twice, X-Mode sold location data to purchasers who violated restrictions in X-Mode’s contracts by reselling the data they bought from X-Mode to companies further downstream. The X-Mode example shows that even when restrictions are included in contracts, they may not prevent misuse by downstream parties.

Ongoing Commitment to Privacy Protection

The FTC stresses the importance of obtaining informed consent before collecting or disclosing consumers’ sensitive data, as the disclosure of such data can violate consumer privacy and expose consumers to various harms, including stigma and discrimination. While privacy notices, consent, and contractual restrictions are important, the FTC emphasizes that they need to be backed up by action. Accordingly, the FTC’s proposed orders require companies to design, implement, maintain, and document safeguards to protect the personal information they handle, especially when it is sensitive in nature.

What Does a Company Need To Do?

Given the recent enforcement actions by the FTC, companies should:

  1. Consider the data they collect and whether such data is needed to provide the services and products requested by the consumer and/or to support a legitimate business need in providing such services and products (e.g., billing, ongoing technical support, shipping);
  2. Treat browsing and location data as sensitive personal information;
  3. Accurately inform consumers of the types of personal information collected by the company, how that information is used, and the parties to whom it is disclosed;
  4. Collect, store, use, or share consumers’ sensitive personal information (including browser and location data) only with such consumers’ informed consent;
  5. Limit the use of consumers’ personal information solely to the purposes for which it was collected and not market, sell, or monetize consumers’ personal information beyond such purpose;
  6. Design, implement, maintain, document, and adhere to safeguards that actually maintain consumers’ privacy; and
  7. Audit and inspect service providers and third-party companies downstream with whom consumers’ data is shared to confirm they are (a) adhering to and complying with contractual restrictions and (b) implementing appropriate safeguards to protect such consumer data.