Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech) is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and that as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after they left their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with the Character.AI character – whether for human-like conversations, such as to answer questions, write a story, translate or write code – based on Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell would use extraordinary measures to engage with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for a failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone, it is personal and movable, it is a “good” rather than an idea, copies of Character.AI are uniform and not customized, there are an unlimited number of copies that can be obtained and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services or products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include the software’s use of highly toxic and sexual data sets to train itself, common industry knowledge that tactics designed to convince users they are interacting with a human manipulate users’ emotions and vulnerabilities, and the fact that minors are most susceptible to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and by intentionally allowing minors to use Character.AI.

The defective design claims argue the software is defectively designed based on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained based on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations allegedly constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots to be therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct allegedly caused the Plaintiff severe emotional distress, namely the loss of her son.

  • Other Claims

The Plaintiff also asserts claims of negligence per se, unjust enrichment, a survival action and loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. Currently, the Colorado AI Act (covered here) will become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates a “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” for Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias, discrimination and civil rights violations in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a bill focused on large AI systems that would have prohibited a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” That bill was subsequently vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to develop – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

NLRB Issues Memo on Non-competes Violating NLRA

On May 30, 2023, Jennifer Abruzzo, the general counsel for the National Labor Relations Board (NLRB), issued a memorandum declaring that non-compete agreements for non-supervisory employees violate the National Labor Relations Act. The memo explains that having a non-compete chills employees’ Section 7 rights when it comes to demanding better wages. The theory goes that employees cannot threaten to resign for better conditions because they have nowhere to go. Non-compete agreements also prohibit employees from seeking better working conditions with competitors and/or soliciting coworkers to leave with them for a local competitor.

Experts have yet to weigh in, but ultimately this issue will be decided by the federal courts. As an employer, if you employ any non-supervisory employees who are subject to a non-compete agreement, an unfair labor practice charge could be filed, and it appears the NLRB would lean toward invalidating the agreement, though all evidence would have to be taken into consideration.

© 2023 Jones Walker LLP


Uber Ordered to Buckle Up for Litigation: Taxicab Plaintiffs Ride out (in part) Uber’s Motion to Dismiss False Advertising Claims

A group of California taxicab companies sued Uber in federal court in San Francisco for falsely advertising the safety of Uber rides and for disparaging the safety of taxi rides. Uber moved to dismiss plaintiffs’ Lanham Act claim, contending that the safety-related statements were non-actionable puffery and were not disseminated in a commercial context. Uber also moved to dismiss plaintiffs’ California unfair competition law (“UCL”) claim for lack of standing, and moved to strike plaintiffs’ request for restitution under the UCL and California’s false advertising law (“FAL”).

Declining to put the brakes on the lawsuit in its entirety, the court granted in part and denied in part Uber’s motion. L.A. Taxi Cooperative, Inc. v. Uber Technologies, Inc., 2015 WL 4397706 (N.D. Cal. July 17, 2015).

The court agreed that some of Uber’s statements were non-actionable puffery. For example, Uber’s claim that it was “GOING THE DISTANCE TO PUT PEOPLE FIRST” was “clearly the type of ‘exaggerated advertising’ slogans upon which consumers would not reasonably rely.” It would be impossible to measure whether or how Uber was fulfilling this promise. Likewise, Uber’s statement “BACKGROUND CHECKS YOU CAN TRUST” was puffery because it made no specific claim about Uber’s services. The court therefore dismissed plaintiffs’ claims as to these non-actionable statements.

On the other hand, the court did not agree that Uber was merely puffing when it claimed it was “setting the strictest safety standard possible,” that its safety is “already best in class,” that its “three-step screening” background check process adheres to a “comprehensive and new industry standard,” or when Uber compared its background check process to the taxi industry’s background check process. These statements were not puffery because “[a] reasonable consumer reading these statements in the context of Uber’s advertising campaign could conclude that an Uber ride is objectively and measurably safer than a ride provided by a taxi . . . .”

The court also rejected Uber’s argument that, because certain advertising claims were preceded by phrases like “Uber is committed to” or “Uber works hard to” – for example, “We are committed to improving the already best in class safety and accountability of the Uber platform . . .” – the advertising claims were merely aspirational and therefore non-actionable. The challenged statements did more than assert that Uber was committed to safety, the court found; they included statements regarding the objective safety and accountability of Uber’s service. A reasonable consumer might rely on such statements, so the court denied Uber’s motion to dismiss in this regard.

The court found that certain advertising statements Uber made to the media were non-commercial speech and therefore not actionable under the Lanham Act or California state law. These statements were made in response to journalists’ inquiries, and were “inextricably intertwined” with the journalists’ independent – and largely critical – coverage of Uber’s safety record, which was a matter of public concern. Accordingly, the court granted Uber’s motion and dismissed plaintiffs’ claims relating to these non-actionable statements.

But the court did find Uber’s statements on ride receipts to be commercial speech. Following a completed ride, Uber emails its customers a receipt that includes a $1.00 “Safe Rides Fee.” Uber explains to customers who click on a link in the receipt that the fee was intended “to ensure the safest possible platform for Uber riders,” that Uber would put the fee towards its “continued efforts to ensure the safest possible platform,” and that “you’ll see this as a separate line item on every uberX receipt.” Uber contended that such statements related to a past transaction, rather than a prospective transaction that Uber sought to induce, and therefore did not amount to commercial speech. The court disagreed, finding that “the complaint adequately allege[d] that the statements relating to the ‘Safe Rides Fee’ [were] made for the purpose of influencing consumers to use Uber’s services again.”

On the California UCL claim, the court found that the taxicab plaintiffs lacked standing because they did not allege that they relied on Uber’s allegedly false or misleading advertising. In dismissing this claim, the court explained that it was declining to join the minority of California federal courts that have permitted UCL claims to proceed where the plaintiff pled potential consumers’ reliance rather than the plaintiff’s own reliance.

Finally, the court found that plaintiffs did not have a viable claim for restitution under California’s UCL and FAL because that remedy is limited to “money or property that defendants took directly from [a] plaintiff” or “in which [a plaintiff] has a vested interest,” and the complaint failed to allege that plaintiffs had an ownership interest in Uber’s profits that they sought to disgorge.

© 2015 Proskauer Rose LLP.

Arizona Supreme Court Holds That The Uniform Trade Secrets Act Only Preempts Claims for Misappropriation of Trade Secrets, Not Other Confidential Information

In Orca Communications Unlimited, LLC v. Noder (Ariz. Nov. 19, 2014), the Arizona Supreme Court ruled that Arizona’s version of the Uniform Trade Secrets Act (the “AUTSA”) “does not displace common-law claims based on alleged misappropriation of confidential information that is not a trade secret.”  Orca, a public relations firm, filed suit against Ann Noder, its former president, for unfair competition after Noder left Orca to start a rival company.  Orca alleged that Noder had learned confidential and trade secret information about “Orca’s business model, operation procedures, techniques, and strengths and weaknesses,” and that Noder intended to “steal” and “exploit” that information and Orca’s customers for her company’s own competitive advantage.  The trial court dismissed Orca’s complaint at the pleadings stage, concluding that the AUTSA preempts Orca’s “common law tort claims arising from the alleged misuse of confidential information,” even if such information is “not asserted to rise to the level of a trade secret.”  The court of appeals reversed in part, holding that the AUTSA preemption exists only to the extent that the unfair competition claim is based on misappropriation of a trade secret.

The Arizona Supreme Court considered the text of the 1990 AUTSA’s displacement provision, concluding that nothing in the language of the statute “suggests that the Legislature intended to displace any cause of action other than one for misappropriation of a trade secret.”  “If such broad displacement was intended, the legislature was required to express that intent clearly.”  The court assumed, but did not decide, that Arizona’s common law recognizes a claim for unfair competition.  Nor did it decide what aspects, if any, of the alleged confidential information in plaintiff’s unfair competition claim might fall within the AUTSA’s broad definition of a trade secret and therefore be displaced.  “That determination will not hinge on the claim’s label, but rather will depend on discovery and further litigation that has not yet occurred.”

While the court acknowledged the split of authority among various states as to the preemptive effects of the Uniform Trade Secrets Act, it found that the “quest for uniformity is a fruitless endeavor and Arizona’s ruling one way or the other neither fosters nor hinders national uniformity.”  With its ruling, the Arizona Supreme Court joins courts in states such as Pennsylvania, Virginia and Wisconsin that have narrowed the preemptive effects of the Uniform Trade Secrets Act.  Conversely, courts in other states including California, Indiana, Hawaii, New Hampshire, and Utah have held that Uniform Trade Secrets Act statutes should be read to broadly preempt all claims related to the misappropriation of information, regardless of whether the information falls within the definition of a trade secret.
