Artificial Intelligence and the Rise of Product Liability Tort Litigation: Novel Action Alleges AI Chatbot Caused Minor’s Suicide

As we predicted a year ago, the Plaintiffs’ Bar continues to test new legal theories attacking the use of Artificial Intelligence (AI) technology in courtrooms across the country. Many of the complaints filed to date have included the proverbial kitchen sink: copyright infringement; privacy law violations; unfair competition; deceptive acts and practices; negligence; right of publicity, invasion of privacy and intrusion upon seclusion; unjust enrichment; larceny; receipt of stolen property; and failure to warn (typically, a strict liability tort).

A case recently filed in Florida federal court, Garcia v. Character Techs., Inc., No. 6:24-CV-01903 (M.D. Fla. filed Oct. 22, 2024) (Character Tech), is one to watch. Character Tech pulls from the product liability tort playbook in an effort to hold a business liable for its AI technology. While product liability is governed by statute, case law or both, the tort playbook generally involves a defective, unreasonably dangerous “product” that is sold and causes physical harm to a person or property. In Character Tech, the complaint alleges (among other claims discussed below) that the Character.AI software was designed in a way that was not reasonably safe for minors, that parents were not warned of the foreseeable harms arising from their children’s use of the Character.AI software, and that as a result a minor committed suicide. Whether and how Character Tech evolves past a motion to dismiss will offer valuable insights for developers of AI technologies.

The Complaint

On October 22, 2024, Ms. Garcia, the mother of the deceased minor (Sewell), filed a complaint in the Middle District of Florida against Google LLC, Character Technologies Inc. and the creators of Character.AI—Noam Shazeer and Daniel De Freitas Adiwarsana. Shazeer and De Freitas formed Character Technologies Inc. after they left their prior jobs at Google LLC and subsequently developed and marketed Character.AI.

Character.AI allows users to communicate with existing Character.AI characters – such as Interviewer or Trip Planner – or to create new AI characters using Character.AI’s tools. A user can then engage with the Character.AI character – whether for human-like conversations, such as to answer questions, write a story, translate or write code – based on Character Tech’s large language model chatbot. According to the Complaint, Character.AI was rated as suitable for children until July 2024.

The Complaint alleges that Character.AI’s chatbot led Sewell to commit suicide. The Plaintiff claims her son became instantly addicted to Character.AI, leading to drastic behavioral changes such as sleep deprivation, school-related issues and low self-esteem. Her son primarily engaged with Character.AI characters from the HBO series Game of Thrones. The conversations with Game of Thrones’ female characters (such as Rhaenyra Targaryen and Daenerys Targaryen) involved sexualized content. Sewell also expressed his own dark thoughts, such as “[t]he world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love.” According to the Complaint, after Ms. Garcia took her son’s phone away, Sewell went to extraordinary lengths to continue engaging with Character.AI characters.

In his last conversation with a Character.AI character, Sewell told the character that he wanted to “come home” and the character replied, “[please] come home to me as soon as possible, my love,” to which he responded, “[w]hat if I told you I could come home right now?” The character answered, “…please do, my sweet king.” Seconds later, Sewell took his own life.

The Claims

The Complaint asserts a host of claims centered around an alleged lack of safeguards for Character.AI and the exploitation of minors. The most significant claims are noted below:

  • The Product Liability Torts

The Plaintiff alleges both strict liability and negligence claims for a failure to warn and defective design. The first hurdle under these product liability claims is whether Character.AI is a product. She argues that Character.AI is a product because it has a definite appearance and location on a user’s phone, it is personal and movable, it is a “good” rather than an idea, copies of Character.AI are uniform and not customized, there are an unlimited number of copies that can be obtained and it can be accessed on the internet without an account. This first step may, however, prove difficult for the Plaintiff because Character.AI is not a traditional tangible good and courts have wrestled over whether similar technologies are services—existing outside the realm of product liability. See In re Social Media Adolescent Addiction, 702 F. Supp. 3d 809, 838 (N.D. Cal. 2023) (rejecting both parties’ simplistic approaches to the services or products inquiry because “cases exist on both sides of the questions posed by this litigation precisely because it is the functionalities of the alleged products that must be analyzed”).

The failure to warn claims allege that the Defendants had knowledge of the inherent dangers of the Character.AI chatbots, as shown by public statements of industry experts, regulatory bodies and the Defendants themselves. These alleged dangers include the software’s use of highly toxic and sexual data sets to train itself, common industry knowledge that tactics designed to convince users they are interacting with a human manipulate users’ emotions and exploit their vulnerability, and the fact that minors are most susceptible to these negative effects. The Defendants allegedly had a duty to warn users of these risks and breached that duty by failing to warn users and intentionally allowing minors to use Character.AI.

The defective design claims argue the software is defectively designed based on a “Garbage In, Garbage Out” theory. Specifically, Character.AI was allegedly trained based on poor quality data sets “widely known for toxic conversations, sexually explicit material, copyrighted data, and even possible child sexual abuse material that produced flawed outputs.” Some of these alleged dangers include the unlicensed practice of psychotherapy, sexual exploitation and solicitation of minors, chatbots tricking users into thinking they are human, and in this instance, encouraging suicide. Further, the Complaint alleges that Character.AI is unreasonably and inherently dangerous for the general public—particularly minors—and numerous safer alternative designs are available.

  • Deceptive and Unfair Trade Practices

The Plaintiff asserts a deceptive and unfair trade practices claim under Florida state law. The Complaint alleges the Defendants represented that Character.AI characters mimic human interaction, which contradicts Character Tech’s disclaimer that Character.AI characters are “not real.” These representations allegedly constitute dark patterns that manipulate consumers into using Character.AI, buying subscriptions and providing personal data.

The Plaintiff also alleges that certain characters claim to be licensed or trained mental health professionals and operate as such. The Defendants allegedly failed to conduct testing to determine the accuracy of these claims. The Plaintiff argues that by portraying certain chatbots as therapists—yet not requiring them to adhere to any standards—the Defendants engaged in deceptive trade practices. The Complaint compares this claim to the FTC’s recent action against DoNotPay, Inc. for its AI-generated legal services that allegedly claimed to operate like a human lawyer without adequate testing.

The Defendants are also alleged to employ AI voice call features intended to mislead and confuse younger users into thinking the chatbots are human. For example, a Character.AI chatbot titled “Mental Health Helper” allegedly identified itself as a “real person” and “not a bot” in communications with a user. The Plaintiff asserts that these deceptive and unfair trade practices resulted in damages, including the Character.AI subscription costs, Sewell’s therapy sessions and hospitalization allegedly caused by his use of Character.AI.

  • Wrongful Death

Ms. Garcia asserts a wrongful death claim arguing the Defendants’ wrongful acts and neglect proximately caused the death of her son. She supports this claim by showing her son’s immediate mental health decline after he began using Character.AI, his therapist’s evaluation that he was addicted to Character.AI characters and his disturbing sexualized conversations with those characters.

  • Intentional Infliction of Emotional Distress

Ms. Garcia also asserts a claim for intentional infliction of emotional distress. The Defendants allegedly engaged in intentional and reckless conduct by introducing AI technology to the public and (at least initially) targeting it to minors without appropriate safety features. Further, the conduct was allegedly outrageous because it took advantage of minor users’ vulnerabilities and collected their data to continuously train the AI technology. Lastly, the Defendants’ conduct allegedly caused the Plaintiff severe emotional distress, i.e., the loss of her son.

  • Other Claims

The Plaintiff also asserts claims of negligence per se, unjust enrichment, a survival action, and loss of consortium and society.

Lawsuits like Character Tech will surely continue to sprout up as AI technology becomes increasingly popular and intertwined with media consumption – at least until the U.S. AI legal framework catches up with the technology. Currently, the Colorado AI Act (covered here) will become the broadest AI law in the U.S. when it enters into force in 2026.

The Colorado AI Act regulates a “High-Risk Artificial Intelligence System” and is focused on preventing “algorithmic discrimination” for Colorado residents, i.e., “an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of [Colorado] or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).) Whether the Character.AI technology would constitute a High-Risk Artificial Intelligence System is still unclear but may be clarified by the anticipated regulations from the Colorado Attorney General. Other U.S. AI laws are focused on detecting and preventing bias and discrimination and protecting civil rights in hiring and employment, as well as on transparency about the sources and ownership of training data for generative AI systems. The California legislature passed a law focused on large AI systems that would prohibit a developer from making an AI system available if it presented an “unreasonable risk” of causing or materially enabling “a critical harm.” This law was subsequently vetoed by California Governor Newsom as “well-intentioned” but nonetheless flawed.

While the U.S. AI legal framework continues to develop – whether in the states or under the new administration – an organization using AI technology must consider how novel issues like the ones raised in Character Tech present new risks.

Daniel Stephen, Naija Perry, and Aden Hochrun contributed to this article.

FTC Social Media Staff Report Suggests Enforcement Direction and Expectations

The FTC’s staff report summarizes how it views the operations of social media and video streaming companies. Of particular interest is the insight it gives into potential enforcement focus in the coming months and into 2025. The report, issued last month, flagged the following concerns:

  1. The high volume of information collected from users, including in ways they may not expect;
  2. Companies relying on advertising revenue that was based on use of that information;
  3. Use of AI over which the FTC felt users did not have control; and
  4. A gap in protection of teens (who are not subject to COPPA).

As part of its report, the FTC recommended changes in how social media companies collect and use personal information. Those recommendations stretched over five pages of the report and fell into four categories. Namely:

  1. Minimizing what information is collected to that which is needed to provide the company’s services. This recommendation also folded in concepts of data deletion and limits on information sharing.
  2. Putting guardrails around targeted digital advertising. Especially, the FTC indicated, if the targeting is based on use of sensitive personal information.
  3. Providing users with information about how automated decisions are being made. This would include not just transparency, the FTC indicated, but also having “more stringent testing and monitoring standards.”
  4. Using COPPA as a baseline in interactions with not only children under 13, but also as a model for interacting with teens.

The FTC also signaled in the report its support of federal privacy legislation that would (a) limit “surveillance” of users and (b) give consumers the type of rights that we are seeing passed at a state level.

Putting it into Practice: While this report was directed at social media companies, the FTC recommendations can be helpful for all entities. They signal the types of safeguards and restrictions that the agency is beginning to expect when companies are using large amounts of personal data, especially that of children and/or within automated decision-making tools like AI.


The Evolution of AI in Healthcare: Current Trends and Legal Considerations

Artificial intelligence (AI) is transforming the healthcare landscape, offering innovative solutions to age-old challenges. From diagnostics to enhanced patient care, AI’s influence is pervasive, and seems destined to reshape how healthcare is delivered and managed. However, the rapid integration of AI technologies brings with it a complex web of legal and regulatory considerations that physicians must navigate.

It appears inevitable AI will ultimately render current modalities, perhaps even today’s “gold standard” clinical strategies, obsolete. Currently accepted treatment methodologies will change, hopefully for the benefit of patients. In lockstep, insurance companies and payors are poised to utilize AI to advance their interests. Indeed, the “cat-and-mouse” battle between physician and overseer will not only remain but will intensify as these technologies intrude further into physician-patient encounters.

  1. Current Trends in AI Applications in Healthcare

As AI continues to evolve, the healthcare sector is witnessing a surge in private equity investments and start-ups entering the AI space. These ventures are driving innovation across a wide range of applications, from tools that listen in on patient encounters to ensure optimal outcomes and suggest clinical plans, to sophisticated systems that gather and analyze massive datasets contained in electronic medical records. By identifying trends and detecting imperceptible signs of disease through the analysis of audio and visual depictions of patients, these AI-driven solutions are poised to revolutionize clinical care. The involvement of private equity and start-ups is accelerating the development and deployment of these technologies, pushing the boundaries of what AI can achieve in healthcare while also raising new questions about the integration of these powerful tools into existing medical practices.

Diagnostics and Predictive Analytics:

AI-powered diagnostic tools are becoming sophisticated, capable of analyzing medical images, genetic data, and electronic health records (EHRs) to identify patterns that may elude human practitioners. Machine learning algorithms, for instance, can detect early signs of cancer, heart disease, and neurological disorders with remarkable accuracy. Predictive analytics, another AI-driven trend, is helping clinicians forecast patient outcomes, enabling more personalized treatment plans.

 

Telemedicine and Remote Patient Monitoring:

The COVID-19 pandemic accelerated the adoption of telemedicine, and AI is playing a crucial role in enhancing these services. AI-driven chatbots and virtual assistants are set to engage with patients by answering queries and triaging symptoms. Additionally, AI is used in remote and real-time patient monitoring systems to track vital signs and alert healthcare providers to potential health issues before they escalate.

 

Drug Discovery and Development:

AI is revolutionizing drug discovery by speeding up the identification of potential drug candidates and predicting their success in clinical trials. Pharmaceutical companies are pouring billions of dollars into developing AI-driven tools to model complex biological processes and simulate the effects of drugs on these processes, significantly reducing the time and cost associated with bringing new medications to market.

Administrative Automation:

Beyond direct patient care, AI is streamlining administrative tasks in healthcare settings. From automating billing processes to managing EHRs and scheduling appointments, AI is reducing the burden on healthcare staff, allowing them to focus more on patient care. This trend also helps healthcare organizations reduce operational costs and improve efficiency.

AI in Mental Health:

AI applications in mental health are gaining traction, with tools like sentiment analysis, an application of natural language processing, being used to assess a patient’s mental state. These tools can analyze text or speech to detect signs of depression, anxiety, or other mental health conditions, facilitating earlier interventions.

  2. Legal and Regulatory Considerations

As AI technologies become more deeply embedded in healthcare, they intersect with legal and regulatory frameworks designed to protect patient safety, privacy, and rights.

Data Privacy and Security:

AI systems rely heavily on vast amounts of data, often sourced from patient records. The use of this data must comply with privacy regulations established by the Health Insurance Portability and Accountability Act (HIPAA), which mandates stringent safeguards to protect patient information. Physicians and AI developers must ensure that AI systems are designed with robust security measures to prevent data breaches, unauthorized access, and other cyber threats.

Liability and Accountability:

The use of AI in clinical decision-making raises questions about liability. If an AI system provides incorrect information or misdiagnoses a condition, determining who is responsible—the physician, the AI developer, or the institution—can be complex. As AI systems become more autonomous, the traditional notions of liability may need to evolve, potentially leading to new legal precedents and liability insurance models.

These notions raise the questions:

  • Will physicians trust the “judgment” of an AI platform making a diagnosis or interpreting a test result?
  • Will the utilization of AI platforms cause physicians to become too heavily reliant on these technologies, forgoing their own professional human judgment?

Surely, plaintiffs’ malpractice attorneys will find a way to fault the physician whichever way the physician decides.

Insurance Companies and Payors:

Another emerging concern is the likelihood that insurance companies and payors, including Medicare/Medicaid, will develop and mandate the use of their proprietary AI systems to oversee patient care, ensuring it aligns with their rules on proper and efficient care. These AI systems, designed primarily to optimize cost-effectiveness from the insurer’s perspective, could potentially undermine the physician’s autonomy and the quality of patient care. By prioritizing compliance with insurer guidelines over individualized patient needs, these AI tools could lead to suboptimal outcomes for patients. Moreover, insurance companies may make the use of their AI systems a prerequisite for physicians to maintain or obtain enrollment on their provider panels, further limiting physicians’ ability to exercise independent clinical judgment and potentially restricting patient access to care that is truly personalized and appropriate.

Licensure and Misconduct Concerns in New York State:

Physicians utilizing AI in their practice must be particularly mindful of licensure and misconduct issues, especially under the jurisdiction of the Office of Professional Medical Conduct (OPMC) in New York. The OPMC is responsible for monitoring and disciplining physicians, ensuring that they adhere to medical standards. As AI becomes more integrated into clinical practice, physicians could face OPMC scrutiny if AI-related errors lead to patient harm, or if there is a perceived over-reliance on AI at the expense of sound clinical judgment. The potential for AI to contribute to diagnostic or treatment decisions underscores the need for physicians to maintain ultimate responsibility and ensure that AI is used to support, rather than replace, their professional expertise.

Conclusion

AI has the potential to revolutionize healthcare, but its integration must be approached with careful consideration of legal and ethical implications. By navigating these challenges thoughtfully, the healthcare industry can ensure that AI contributes to better patient outcomes, improved efficiency, and equitable access to care. The future of AI in healthcare looks promising, with ongoing advancements in technology and regulatory frameworks adapting to these changes. Healthcare professionals, policymakers, and AI developers must continue to engage in dialogue to shape this future responsibly.

Rytr or Wrong: Is the FTC’s Operation AI Comply a Prudent Defense Against Deception or an Assault on Innovation and Constitutional Free Speech?

In today’s rapidly evolving digital economy, new artificial intelligence tools promise to transform every industry. Sometimes, those promises are overblown or outright deceptive. So, as the AI hype cycle continues, regulators are left with the unenviable role of determining their duties to shape the impact of these developing tools on businesses and the public. Although the EEOC, SEC, DOJ and several State Attorneys General are issuing warnings and increasingly investigating the risks of AI, this tension is on full display with the Federal Trade Commission’s recent enforcement actions announced as part of its “Operation AI Comply,” which marks the beginning of its “new law enforcement sweep” against companies that are relying on AI “as a way to supercharge deceptive or unfair conduct that harms consumers.”1

Although many of the initial targets of Operation AI Comply were accused of conduct that plausibly violated Section 5, the FTC’s charges against an AI writing assistant, Rytr, drew strong dissents from two of the FTC Commissioners who accused their fellow commissioners of effectively strangling AI innovation in the crib. There are several important takeaways from Operation AI Comply, particularly if the dissenting commissioners have correctly identified that the FTC is pushing the boundaries of its authority in pursuit of AI.

The FTC and its Role in AI Regulation.

The FTC plays a critical role in protecting consumers from unfair or deceptive practices, and it has long been warning developers about how their algorithms and AI tools might violate one of its broadest sources of statutory authority: Section 5 of the FTC Act.2

In many respects, the FTC’s September 25, 2024, announcement of its “Crackdown on Deceptive AI Claims and Schemes” should not have come as a surprise, as most of the enforcement actions related to overhyping AI.3 For example, the FTC’s Complaint and proposed settlement with DoNotPay – which made bold claims about being “the world’s first robot lawyer” and that it could “generate perfectly valid legal documents in no time,” replacing “the $200-billion-dollar legal industry with artificial intelligence”4 – turned on relatively straightforward false or unsubstantiated performance claims in violation of Section 5 of the FTC Act.5 Similarly, the FTC’s charges against Ascend Ecom,6 Ecommerce Empire Builders,7 and FBA Machine8 all relate to allegations of e-commerce business opportunity schemes that generally engaged in AI-washing – i.e., a tactic of exaggerating or falsely representing that a product uses AI in an effort to make the product or company appear more cutting edge than it actually is.9 Each of these four cases was unanimously supported by the Commission, receiving 5-0 votes, and is consistent with other actions brought by the FTC to combat unfair, deceptive, or discriminatory impacts of AI.10

However, with a 3-2 split among its commissioners, the FTC’s complaint against Rytr is a different story.11 Historically, unanimous decisions were more typical, but split decisions are becoming more common as the FTC pursues more aggressive enforcement actions, and they reflect a broader ideological conflict about the role of regulation and market intervention.

Rytr: Creative Assistant or Assistant to Fraud?

Rytr is a generative AI writing assistant that produces unlimited written content for subscribers for over 43 use cases.12 At the core of the FTC’s complaint against Rytr is the risk that one of its use cases – a “Testimonial & Review” feature – can be used to create customer reviews that may be false or misleading.13

Based on limited user input, users can generate “genuine-sounding, detailed reviews quickly and with little user effort,” which the FTC believes “would almost certainly be false for users who copy the generated content and publish it online.”14 The FTC gives one example where a user provided minimal inputs of “this product” and “dog shampoo” to generate a detailed paragraph boasting how the dog shampoo smelled great, reduced shedding, improved the shine of their dog’s coat, and recommended the product.15 Based on example inputs and outputs like this, the FTC concluded that Rytr’s service “causes or is likely to cause substantial harm to consumers” and “its likely only use is to facilitate subscribers posting fake reviews with which to deceive consumers.”16 As such, the FTC’s complaint argues that Rytr – by offering a tool that could be readily used to generate false reviews – provided the “means and instrumentalities for deception” and engaged in unfair acts or practices in violation of Section 5 of the FTC Act.17

In other words, the majority of the FTC Commissioners were concerned about an infinite potential for inaccurate or deceptive product reviews by Rytr’s subscribers and did not recognize countervailing reasons to allow this use of technology. Without admitting or denying the allegations in the Complaint, Rytr agreed to a proposed settlement with the FTC by which Rytr would stop offering the Testimonial & Review use case at issue in this case18 – a pragmatic solution to avoid litigation with the government.

Dissents from the FTC’s Direction.

Commissioners Melissa Holyoak and Andrew Ferguson submitted two dissenting statements, criticizing the complaint against Rytr as an aggressive expansion of the FTC’s authority under Section 5 and cautioning against its chilling effect on a nascent industry.19

Commissioner Ferguson framed the internal conflict well: “Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents and common sense. And it threatens to turn honest innovators into lawbreakers and risks strangling a potentially revolutionary technology in its cradle.”20 The dissenting statements identified three broad objections to the Rytr complaint.

First, as a threshold matter, the complaint failed to identify any evidence of actual harmful or deceptive acts stemming from Rytr’s product – a clear requirement under Section 5 of the FTC Act.21 Both dissents criticized the complaint for effectively treating draft outputs from Rytr as the final reviews published by users; however, “the Commission does not allege a single example of a Rytr-generated review being used to deceive consumers in violation of Section 5[.]”22 The dissents also criticized the complaint for ignoring the obvious benefits of generative AI in this context. Namely, that “much of the promise of AI stems from its remarkable ability to provide such benefits to consumers using AI tools. . . . If Rytr’s tool helped users draft reviews about their experiences that they would not have posted without the benefit of a drafting aid, consumers seeing their reviews benefitted, too.”23

Second, the dissenters rejected the complaint as “a dramatic extension of means-and-instrumentalities liability,”24 particularly in a case “where there is no allegation that Rytr itself made misrepresentations.”25 The complaint focused on the fact that Rytr “has furnished its users and subscribers with the means to generate written content for consumer reviews that is false and deceptive[,]” thus providing “the means and instrumentalities for the commissions of deceptive acts and practices.”26 However, the dissenters note that the “critical element for primary liability is the existence of a representation, either by statement or omission, made by the defendant.”27 The theory advanced against Rytr could be “true of an almost unlimited number of products and services: pencils, paper, printers, computers, smartphones, word processors, . . . etc.”28 Accordingly, both dissenting commissioners rejected this expansion of means-and-instrumentalities liability because a “knowledge requirement avoids treating innocent and productive conduct as illegal merely because of the subsequent acts of independent third parties.”29

Finally, the dissenters offered several reasons why the FTC’s complaint was not in the public’s interest. Both dissenters expressed concerns that this case was too aggressive and would undermine innovation in the AI industry.30 Commissioner Ferguson went further to note that the complaint could violate important First Amendment interests, noting that the complaint “holds a company liable under Section 5 for a product that helps people speak, quite literally.”31 He criticized the theory behind the complaint: “[y]et because the technology in question is new and unfamiliar, I fear we are giving short shrift to common sense and to fundamental constitutional values.”32

Conclusion

It bears repeating that the FTC Commissioners unanimously approved almost every case listed in Operation AI Comply; “[w]hen people use generative AI technology to lie, cheat, and steal, the law should punish them no differently than if they use quill and parchment.”33 So, the FTC’s warnings about marketing AI systems for professional services, using AI to engage in misleading marketing, or overstating a product’s AI integration should be heeded, especially with the FTC’s statements that this is only the beginning of its enforcement activity.34 In prepared remarks, Chair Lina Khan has stated that the FTC is “making clear that there is no AI exemption from the laws on the books[,]”35 so companies should take care to ensure that their AI and other automated tools are not being used for unfair or deceptive purposes and do not have biased or discriminatory impacts. Just because a technology is new does not mean that it can ignore existing laws – and we’ve seen similar sentiments and disputes in other areas of emerging technology enforcement, such as the SEC’s view that, with respect to U.S. securities laws, “[t]here’s no reason to treat the crypto market differently just because different technology is used.”36

However, the Rytr case could be an indicator that the majority intends to pursue a broader theory of liability under Section 5 of the FTC Act to include tools that merely could be misused – without proof of actual harm or intent. If that continues to be the case, developers should be vigilant in identifying how their products and platforms could be misused for fraudulent purposes, as well-intentioned developers may become the target of investigations or other inquiries by the FTC. The FTC is accepting public comments on the proposed consent agreement with Rytr through November 4, 2024,37 which could develop the FTC’s position further.


1) FTC Announces Crackdown on Deceptive AI Claims and Schemes, Press Release, Federal Trade Commission (Sept. 25, 2024), available at https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes.

2) See, e.g., Aiming for truth, fairness, and equity in your company’s use of AI, Elisa Johnson, Federal Trade Commission (April 19, 2021), available at https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

3) Operation AI Comply: Detecting AI-infused frauds and deceptions, Alvaro Puig, Federal Trade Commission (Sept. 25, 2024), available at https://consumer.ftc.gov/consumer-alerts/2024/09/operation-ai-comply-detecting-ai-infused-frauds-and-deceptions.

4) See, e.g., id.

5) In re DoNotPay, Inc., FTC Matter No. 2323042, Complaint available at https://www.ftc.gov/system/files/ftc_gov/pdf/DoNotPayInc-Complaint.pdf.

6) FTC v. Ascend Capventures, Inc., et al., C.D. Ca. Case No. 2:24-CV-07660-SPG-JPR (Filed Sept. 9, 2024).

7) FTC v. Empire Holdings Group LLC, et al., E.D. Pa. Case No. 2:24-CV-04949 (Filed Sept. 18, 2024).

8) FTC v. TheFBAMachine Inc., et al., D. N.J. Case No. 2:24-CV-06635-JXN-LDW (Filed June 3, 2024).

9) See generally, FTC Announces Crackdown on Deceptive AI Claims and Schemes, supra.

10) The FTC aggregated several summaries for its recent cases related to AI and other automated tools, which can be found here: https://www.ftc.gov/business-guidance/blog/2024/09/operation-ai-comply-continuing-crackdown-overpromises-ai-related-lies#:~:text=These%20cases%20are,CRI%20Genetics.

11) See generally Cases and Proceedings: Rytr, FTC Matter No. 2323052 (last updated Sept. 25, 2024), available at https://www.ftc.gov/legal-library/browse/cases-proceedings/rytr.

12) See, e.g., In re Rytr LLC, FTC Matter No. 2323052, Complaint ¶ 2, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2323052rytrcomplaint.pdf.

13) Id. ¶ 6.

14) Id. ¶¶ 6-8.

15)  Id. ¶ 10.

16) Id. ¶ 14.

17) Id. ¶¶ 15-18.

18)  See In re Rytr LLC, Agreement Containing Consent Order, available at https://www.ftc.gov/system/files/ftc_gov/pdf/2323052rytracco.pdf.

19) See, e.g., Dissenting Statement of Commissioner Melissa Holyoak, Joined by Commissioner Andrew N. Ferguson, In re Rytr LLC, FTC Matter No. 2323052 at p.1 (cautioning against settlements to “advance claims or obtain orders that a court is highly unlikely to credit or grant in litigation,” as it may encourage the use of “questionable or misguided theories or cases.”) [hereinafter, “Holyoak Dissent”].

20) Dissenting Statement of Commissioner Andrew N. Ferguson, Joined by Commissioner Melissa Holyoak, In re Rytr LLC, FTC Matter No. 2323052 at p.1 [hereinafter, “Ferguson Dissent”].

21) See 15 U.S.C. § 45(n) (prohibiting the FTC from declaring an act or practice unfair unless it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”).

22) Ferguson Dissent at p.6; see also Holyoak Dissent at p.2.

23) Holyoak Dissent at p.3; see also Ferguson Dissent at p.7 (noting the challenges of writing a thoughtful review and that “a tool that produces a well-written first draft of a review based on some keyword inputs can make the task more accessible.”).

24) Ferguson Dissent at p.5.

25) Holyoak Dissent at p.4 (emphasis original).

26) Complaint ¶¶ 15-16.

27) Holyoak Dissent at p.4 (emphasis original) (cleaned up with citations omitted); see also Ferguson Dissent at pp.3-5 (discussing the circumstances in which means-and-instrumentalities liability arises).

28) Ferguson Dissent at p.5.

29) Ferguson Dissent at p.7; see also Holyoak Dissent at p.5 (“Section 5 does not categorically prohibit a product or service merely because someone might use it to deceive someone else.”).

30)  Holyoak Dissent at p.5 (“Today’s misguided complaint and its erroneous application of Section 5 will likely undermine innovation in the AI space.”); Ferguson Dissent at p.10 (“But we should not bend the law to get at AI. And we certainly should not chill innovation by threatening to hold AI companies liable for whatever illegal use some clever fraudster might find for their technology.”).

31) Ferguson Dissent at p.10.

32) Id.

33) Id. at p.9 (citing Concurring and Dissenting Statement of Commissioner Andrew N. Ferguson, A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services, at pp.10-11 (Sept. 19, 2024)).

34) Operation AI Comply: Detecting AI-infused frauds and deceptions, supra.

35) A few key principles: An excerpt from Chair Khan’s Remarks at the January Tech Summit on AI, FTC (Feb. 8, 2024), available at https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/few-key-principles-excerpt-chair-khans-remarks-january-tech-summit-ai.

36) Prepared Remarks of Gary Gensler on Crypto Markets at Penn Law Capital Markets Association Annual Conference, Chair Gary Gensler, SEC (April 4, 2022), available at https://www.sec.gov/newsroom/speeches-statements/gensler-remarks-crypto-markets-040422.

37) Rytr LLC; Analysis of Proposed Consent Order To Aid Public Comment, Federal Register, available at https://www.federalregister.gov/documents/2024/10/03/2024-22767/rytr-llc-analysis-of-proposed-consent-order-to-aid-public-comment.

Artificial Intelligence and Intellectual Property Legal Frameworks in the Asia-Pacific Region

Globally, governments are grappling with the emergence of artificial intelligence (“AI”). AI technologies introduce exciting new opportunities but also bring challenges for regulators and companies across all industries. In the Asia-Pacific (“APAC”) region, there is no exception. APAC governments are adapting to AI and finding ways to encourage and regulate AI development through existing intellectual property (“IP”) regimes and new legal frameworks.

AI technologies aim to simulate human intelligence through smart machines capable of performing tasks that traditionally require human cognition. The expanding market for AI ranges from machine learning to generative AI to virtual assistants to robotics, and this list merely scratches the surface.

When it comes to IP and AI, there are several critical questions for governments to consider: Can AI models be protected by existing legal frameworks within IP? Must copyright owners be human? Does a patent inventor have to be an individual? Do AI models’ training programs infringe on others’ copyrights?

To begin to answer these questions, regulators are drawing from existing IP regimes, including patent and copyright law. Some APAC countries have taken a non-binding approach, relying on existing principles to guide AI regulation. Others are drafting more specific AI regulations. The summary chart below provides a brief overview of current patent and copyright laws within APAC focused on AI and IP. Additional commentary concerning updates to AI laws and regulations is provided below the chart.

Korea – Patent: A non-human cannot be the inventor under Korea’s Patent Act, which requires “a person.” Copyright: The Copyright Act requires a human creator. Copyright is possible if the creator is a human using generative AI models as software tools and the human input is considered more than simple prompt inputs. For example, in Korea, copyright was granted to a movie produced with generative AI as a “compilation work” on December 29, 2023.
Japan – Patent: Under Japan’s Patent Act, a natural person must be the inventor. This is the “requirement of shimei 氏名” (i.e., the name of a natural person). Copyright: Japan’s Copyright Act defines a copyright-protected work as “a creation expressing human thoughts and emotions.” However, on February 29, 2024, the Agency for Cultural Affairs committee’s document on “Approach to AI and Copyright” provided that a joint work made up of both human input and AI-generated content can be eligible for copyright protection.
Taiwan – Patent: Taiwan’s Patent Law does not explicitly preclude a non-human inventor; however, the Patent Examination Guidelines require a natural person to be an inventor. Formalities in Taiwan also require an inventor’s name and nationality. Copyright: The Copyright Act requires “human creative expression.”
China – Patent: The inventor needs to be a person under the Patent Law and the Guidelines for Examination in China. Copyright: Overall, Chinese courts have recognized that when AI-generated works involve human intellectual input, the user of the AI software is the copyright owner.
Hong Kong – Patent: The Patents Ordinance in Hong Kong requires a human inventor. Copyright: The Copyright Ordinance in Hong Kong attributes authorship to “the person by whom the arrangements necessary for the creation of the work are undertaken.”
Philippines – Patent: Patent law in the Philippines requires a natural person to be the inventor. Copyright: Generally, copyright law in the Philippines requires the author to be a natural person. The copyright in works that are partially AI-generated protects only those parts created by natural persons. The Philippines IP Office relies on the declarations of the creator claiming copyright to establish which parts of the work are AI-generated and which are not.
Vietnam – Patent: AI cannot be an IP right owner in Vietnam; the user of AI is the owner, regardless of the degree of work carried out by AI. Copyright: Likewise, AI cannot be a copyright owner, and the user of AI is the owner, regardless of the degree of work carried out by AI.
Thailand – Patent: Patent law in Thailand requires inventors to be individuals. Copyright: Copyright law in Thailand requires an author to be an individual.
Malaysia – Patent: Malaysia’s patent law requires inventors to be individuals. Copyright: Copyright law in Malaysia requires an author to be an individual.
Singapore – Patent: Patent law requires inventors to be natural persons; however, the owner can be a natural person or a legal entity. Copyright: In Singapore, it is implicit in provisions of the Copyright Act that the author must be a natural person.
Indonesia – Patent: Under Indonesia’s patent law, the inventor may be an individual or a legal entity. Copyright: Under copyright law in Indonesia, the author of a work may be an individual or a legal entity.
India – Patent: India’s patent law requires inventors to be natural persons. Copyright: The copyright law contains a requirement of “originality,” which the courts interpret as “intellectual effort by humans.”
Australia – Patent: The Full Federal Court in Australia ruled that an inventor must be a natural person. Copyright: Copyright law in Australia requires the author to be a human.
New Zealand – Patent: One court in New Zealand has ruled that AI cannot be an inventor under the Patents Act. Copyright: A court in New Zealand has ruled that AI cannot be the author under the provisions of the Copyright Act. There is updated legislation clarifying that the owner of a computer-generated work is the person who “made the arrangements necessary” for the creation of the work.

AI Regulation and Infringement

KOREA: Court decisions have ruled that web scraping or pulling information from a competitor’s website or database infringes the competitor’s database rights under the Copyright Act and the UCPA. Guidelines in Korea emphasize that parties must obtain permission to use copyrighted works for training AI. The Copyright Commission published guidelines on copyright and AI in December 2023. The guidelines noted the growing need for legislation on AI-generated works. The English version of the guidelines was released in April 2024.

JAPAN: The Copyright Act, as amended effective January 1, 2019, provides very broad rights to use copyrighted works without permission for training AI, as long as the training is for the purpose of technological development. The committee aims to introduce checks on this freedom and to provide more protection for Japan-based content creators and copyright holders. The Japan Agency for Cultural Affairs (ACA) released its draft “Approach to AI and Copyright” for public comment on January 23, 2024. Additional changes were made to the draft after considering some 25,000 comments, as of February 29, 2024. Also, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry compiled the AI Guidelines for Business Ver1.0 in Japan on April 19, 2024.

TAIWAN: Using copyrighted works to train AI models involves “reproduction,” which constitutes an infringement unless there is consent or a license to use the work. Taiwan’s IP Office released an interpretation circular in June 2023 to clarify AI issues. Following that circular, the Taiwan cabinet approved draft guidelines for the use of generative AI by the executive branch of the Taiwan government in August 2023. The executive branch of the Taiwan government also confirmed that it is in the process of formulating the government’s version of the Draft AI Law, which is expected to be published this year.

CHINA: Interim Measures for the Management of Generative Artificial Intelligence Services, promulgated in July 2023, require that generative AI services “respect intellectual property rights and commercial ethics” and that “intellectual property rights must not be infringed.” The consultation draft on Basic Security Requirements for Generative Artificial Intelligence Service, which was published in October 2023, provides detailed guidance on how to avoid IP infringement. The requirements, for example, provide specific processes concerning model training data that Chinese AI companies must adopt. Moreover, China’s draft Artificial Intelligence Law, proposed on March 16, 2024, outlines the use of copyrighted material for training purposes, and it serves as a complement to China’s current AI regulations.

HONG KONG: A review of copyright law in Hong Kong is underway. There is currently no overarching legislation regulating the use of AI, and the existing guidelines and principles mainly provide guidance on the use of personal data.

VIETNAM: AI cannot have responsibility for infringement, and there are no provisions under existing laws in Vietnam regarding the extent of responsibility of AI users for infringing acts. The Law on Protection of Consumers’ Rights will take effect on July 1, 2024. This law requires operators of large digital platforms to periodically evaluate the use of AI and fully or partially automated solutions.

THAILAND: Infringement in Thailand requires intent or implied intent, for example, from the prompts made to the AI. Thai law also provides for liability arising out of the helping or encouraging of infringement by another. Importantly, the AI user may also be exposed to liability in that way.

MALAYSIA: An informal comment from February 2024 by the Chairman of the Malaysia IP Office provides that there may be infringement through the training and/or use of AI programs.

SINGAPORE: Singapore has a hybrid regime. The regime provides a general fair use exception, which is likely guided by US jurisprudence, per the Singapore Court of Appeal. The regime also provides exceptions for specific types of permitted uses, for example, the computational data analysis exception. IPOS issued a Landscape Report on Issues at the Intersection of AI and IP on February 28, 2024, and a Model AI Governance Framework for Generative AI was published on May 30, 2024.

INDONESIA: A “circular,” a government-issued document similar to a white paper, implies that infringement is possible in Indonesia. The nonbinding Communications and Information Ministry Circular No. 9/2023 on AI was signed in December 2023.

INDIA: Under the Copyright Act of 1957, a Generative AI user has an obligation to obtain permission to use the copyright owner’s works for commercial purposes. In February 2024, the Ministry of Commerce and Industry’s Statement provided that India’s existing IPR regime is “well-equipped to protect AI-generated works” and therefore, it does not require a separate category of rights. MeitY issued a revised advisory on March 15, 2024 providing that platforms and intermediaries should ensure that the use of AI models, large language models, or generative AI software or algorithms by end users does not facilitate any unlawful content stipulated under Rule 3(1)(b) of the IT Rules, in addition to any other laws.

AUSTRALIA: Any action seeking compensation for infringement of a copyright work by an AI system would need to rely on the Copyright Act of 1968. It is an infringement of copyright to reproduce or communicate works digitally without the copyright owner’s permission. Australia does not have a general “fair use” defense to copyright infringement.

NEW ZEALAND: While infringement by AI users has not yet been considered by New Zealand courts, New Zealand has more restricted “fair dealing” exceptions. A copyright review is underway in New Zealand.

Illinois Enacts Requirements for AI Use in Employment Decisions

On Aug. 9, 2024, Illinois Gov. Pritzker signed into law HB3733, which amends the Illinois Human Rights Act (IHRA) to cover employer use of artificial intelligence (AI). Effective Jan. 1, 2026, the amendments will add to existing requirements for employers that use AI to analyze video interviews of applicants for positions in Illinois.

Illinois is the latest jurisdiction to pass legislation aimed at preventing discrimination caused by AI tools that aid in making employment decisions. The state joins jurisdictions such as Colorado and New York City in regulating the use of AI in this context.

Restrictions on the Use of AI in Employment Decisions

The amendments expressly prohibit the use of AI in a manner that results in illegal discrimination in employment decisions and employee recruitment. Specifically, covered employers are barred from using AI in a way that has the effect of subjecting employees to discrimination on the basis of any class protected by the IHRA, including if zip codes are used as a proxy for such protected classes.

These new requirements will apply to any employer with one or more employees in Illinois during 20 or more calendar weeks within the calendar year of, or preceding, the alleged violation. They also apply to any employer with one or more employees when unlawful discrimination based on physical or mental disability unrelated to ability, pregnancy, or sexual harassment is alleged.

The amendments define AI as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” AI also includes “generative artificial intelligence.”

The amendments further define generative AI as “an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to”:

  • Textual outputs, such as short answers, essays, poetry, or longer compositions or answers;
  • Image outputs, such as fine art, photographs, conceptual art, diagrams, and other images;
  • Multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and
  • Other content that would be otherwise produced by human means.

Employer Notice Requirements

The amendments require a covered employer to provide notice to employees if the organization uses AI for the following employment-related purposes:

  • Recruitment
  • Hiring
  • Promotion
  • Renewal of employment
  • Selection for training or apprenticeship
  • Discharge
  • Discipline
  • Tenure
  • The terms, privileges, or conditions of employment

While the amendments do not provide specific direction regarding the notice, such as when and how the notice should be provided, they direct the Illinois Department of Labor to adopt rules necessary to implement the notice requirement. Thus, additional guidance should be forthcoming.

Although not required, Illinois employers and AI technology developers may wish to consider conducting audits or taking other measures to help avoid biased outcomes and to further protect against liability.

Enforcement

The IHRA establishes a two-part enforcement procedure. The Illinois Department of Human Rights (IDHR) is the administrative agency that investigates charges of discrimination, while the Illinois Human Rights Commission (IHRC) is an administrative court that adjudicates complaints of unlawful discrimination. Complainants have the option to proceed before the IHRC or file a civil action directly in circuit court after exhausting their administrative remedies before the IDHR.

Practical Considerations

Before the effective date, covered employers should consider:

  • Assessing which platforms and tools in use (or under consideration) incorporate AI, including generative AI, components.
  • Drafting employee notices and developing a plan for notifying employees.
  • Training AI users and quality control reviewers/auditors on anti-discrimination/anti-bias laws and policies that will impact their interaction with the tool(s).
  • Partnering with legal counsel and experienced vendors to identify or create privileged processes to evaluate, mitigate, and monitor potential discriminatory or biased impacts of AI use.
  • Reviewing any rules published by the Illinois Department of Labor, including on the circumstances and conditions that require notice and the timeframe and means for providing notice.
  • Multi-state employers should continue to monitor for additional requirements. For instance, California’s legislature is considering a range of AI-related bills, including some aimed at workplace discrimination.

“Is SEO Dead?” Why AI Isn’t the End of Law Firm Marketing

With the emergence of Artificial Intelligence (AI) technology, many business owners have feared that marketing as we know it is coming to an end. After all, Google Gemini is routinely surfacing AI-generated responses over organic search results, AI content is abundant, and AI-driven tools are being used more than ever to automate tasks previously performed by human marketers.

But it’s not all doom and gloom – there are many ways in which digital marketing, including Search Engine Optimization (SEO), is alive and well. This is particularly true for the legal industry, where there are many limits to what AI can do in terms of content creation and client acquisition.

Here’s how the world of SEO is being impacted by AI, and what this means for your law firm marketing.

Law Firm Marketing in the Age of AI

The Economist put it best: the development of AI has resulted in a “tsunami of digital innovation.” From ChatGPT’s world-changing AI model to the invention of “smart” coffee machines, AI appears to be everywhere. And it has certainly shaken up the world of law firm marketing.

Some of these innovations include AI chatbots for client engagement, tools like Lex Machina and Premonition that use predictive analytics to generate better leads, and AI-assisted legal research. Countless more tools and formulas have emerged to help law firms streamline their operations, optimize their marketing campaigns, create content, and even reduce overhead.

So, what’s the impact? 

With AI, law firms have reduced their costs, leveraging automated tools instead of manual efforts. Legal professionals have access to more data to identify (and convert) quality leads. And it’s now easier than ever to create content at volume.

At the same time, though, many people question the quality and accuracy of AI content. Some argue that AI cannot capture the nuance of the human experience or understand complex (and often emotional) legal issues. What’s more, AI-generated images and content often lack a personalized touch.

One area of marketing that’s particularly impacted by this is SEO, as it is largely driven by real human behavior, interactions, and needs.

So, is SEO Dead?

Even though many of the tools and techniques of SEO for lawyers have changed, the impact of SEO is still alive and well. Businesses continue to benefit from SEO strategies, allowing their brands to surface in the search results and attract new customers. In fact, there may even be more opportunities to rank than ever before.

For instance, Google showcases not only organic results but paid search results, Google Map Pack, Images, News, Knowledge Panel, Shopping, and many more pieces of digital real estate. This gives businesses different content formats and keyword opportunities to choose from.

Also, evolution in the SEO landscape is nothing new. There have been countless algorithm changes over the years, often in response to user behavior and new technology. SEO may be different, but it’s not dead.

Why SEO Still Matters for Law Firms

With the SEO industry alive and well, it’s still important for law firms to have a strong organic presence. This is because Google remains the leading medium through which people search for legal services. If you aren’t ranking high in Google, it will be difficult to get found by potential clients.

Here are some of the many ways SEO still matters for law firms, even in the age of AI.

1. Prospective clients still use search engines

Despite the rise of AI-based tools, your potential clients rely heavily on search engines when searching for your services. Whether they’re looking for legal counsel or content related to specific legal issues, search engines remain a primary point of entry.

Now, AI tools can often assist in this search process, but they rarely replace it entirely. SEO ensures your firm is visible when potential clients search for these services.

2. Your competitors are ranking in Search

Conduct a quick Google search of “law firm near me,” and you’ll likely see a few of your competitors in the search results. Whether they’re implementing SEO or not, their presence is a clear indication that you’ll need some organic momentum in order to compete.

Again, potential clients are using Google to search for the types of services you offer, but if they encounter your competitors first, they’re likely to inquire with a different firm. With SEO, you help your law firm stand out in the search results and become the obvious choice for potential clients.

3. AI relies on search engine data

The reality is that AI tools actually harness search engine data to train their models. This means the success of AI largely depends on people using search engines on a regular basis. Google isn’t going anywhere, so AI isn’t likely to go anywhere, either!

Whether it’s voice search through virtual assistants or AI-driven legal content suggestions, these systems still rely on the vast resources that search engines like Google organize. Strong SEO practices are essential to ensure your law firm’s website is part of that data pool. AI can’t bypass search engines entirely, so optimizing for search ensures your firm remains discoverable.

4. AI can’t replace personalized content

As a lawyer, you have the experience and training to advise clients on complex legal issues; AI-generated content, even if used only in your marketing, will only take you so far. Potential clients want to read content that’s helpful, relatable, and applicable to their needs.

While AI can generate content and provide answers, legal services are inherently personal. Writing your own content or hiring a writer might be your best bet for creating informative, well-researched content. AI can’t replicate the nuanced understanding that comes from a real lawyer, and your firm is best equipped to address clients’ specific legal issues.

5. SEO is more than just “content”

In the field of SEO, a lot of focus is put on content creation. And while content is certainly important (in terms of providing information and targeting keywords), it’s only one piece of the pie. AI tools are far less adept at other aspects of SEO, such as technical SEO and local search strategies.

Local SEO is essential for law firms, as most law firms serve clients within specific geographical areas. Google’s algorithm uses localized signals to determine which businesses to show in search results. This requires an intentional targeting strategy, optimizing your Google Business Profile, submitting your business information to online directories, and other activities AI tools have yet to master.

AI doesn’t replace the need for local SEO—if anything, AI-enhanced local search algorithms make these optimizations even more critical!

Goodbye AI, hello SEO?

Overall, the legal industry is a trust-based business. Clients want to know they work with reputable attorneys who understand their issues. AI is often ill-equipped to provide that level of expertise and personalized service.

Further, AI tools have limitations regarding what they can optimize, create, and manage. AI has not done away with SEO but has undoubtedly changed the landscape. SEO is an essential part of any law firm’s online marketing strategy.

AI is unlikely to disappear any time soon, and neither is SEO!

A Look at the Evolving Scope of Transatlantic AI Regulations

There have been significant changes to the regulation of artificial intelligence (AI) on a global scale. New measures from governments worldwide are coming online, including the United States (U.S.) government’s executive order on AI, California’s upcoming regulations, the European Union’s AI Act, and emerging developments in the United Kingdom, all of which contribute to this evolving environment.

The European Union (EU) AI Act and the U.S. Executive Order on AI aim to develop and utilize AI safely, securely, and with respect for fundamental rights, yet their approaches are markedly different. The EU AI Act establishes a binding legal framework across EU member states, directly applies to businesses involved in the AI value chain, classifies AI systems by risk, and imposes significant fines for violations. In contrast, the U.S. Executive Order is more of a guideline as federal agencies develop AI standards and policies. It prioritizes AI safety and trustworthiness but lacks specific penalties, instead relying on voluntary compliance and agency collaboration.

The EU approach includes detailed oversight and enforcement, while the U.S. method encourages the adoption of new standards and international cooperation that aligns with global standards but is less prescriptive. Despite their shared objectives, differences in regulatory approach, scope, enforcement, and penalties could lead to contradictions in AI governance standards between the two regions.

There has also been some collaboration on an international scale. Recently, there has been an effort between antitrust officials at the U.S. Department of Justice (DOJ), the U.S. Federal Trade Commission (FTC), the European Commission, and the UK’s Competition and Markets Authority to monitor AI and its risks to competition. The agencies have issued a joint statement, with all four antitrust enforcers pledging “to remain vigilant for potential competition issues” and to use the powers of their agencies to provide safeguards against the use of AI to undermine competition or lead to unfair or deceptive practices.

The regulatory landscape for AI across the globe is evolving in real time as the technology develops at a record pace. As regulations strive to keep up with the technology, there are real challenges and risks that exist for companies involved in the development or utilization of AI. Therefore, it is critical that business leaders understand regulatory changes on an international scale, adapt, and stay compliant to avoid what could be significant penalties and reputational damage.

The U.S. Federal Executive Order on AI

In October 2023, the Biden Administration issued an executive order to foster responsible AI innovation. This order outlines several key initiatives, including promoting ethical, trustworthy, and lawful AI technologies. It also calls for collaboration between federal agencies, private companies, academia, and international partners to advance AI capabilities and realize its myriad benefits. The order emphasizes the need for robust frameworks to address potential AI risks such as bias, privacy concerns, and security vulnerabilities. In addition, the order directs a range of sweeping actions, including the establishment of new standards for AI safety and security; the passage of bipartisan data privacy legislation to protect Americans’ privacy from the risks posed by AI; the promotion of safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges; and actions to ensure responsible government deployment of AI and modernization of the federal AI infrastructure through the rapid hiring of AI professionals.

At the state level, Colorado and California are leading the way. Colorado enacted the first comprehensive state-level AI regulation with the Colorado Artificial Intelligence Act (Senate Bill (SB) 24-205), signed into law by Governor Jared Polis on May 17, 2024. As our team previously outlined, the Colorado AI Act is comprehensive, requiring developers and deployers of “high-risk artificial intelligence systems” to adhere to a host of obligations, including disclosures, risk management practices, and consumer protections. The Colorado law goes into effect on February 1, 2026, giving companies over a year to adapt.

In California, a host of proposed AI regulations focusing on transparency, accountability, and consumer protection would require the disclosure of information such as AI systems’ functions, data sources, and decision-making processes. For example, AB2013, introduced on January 31, 2024, would require developers of an AI system or service made available to Californians to post on the developer’s website documentation of the datasets used to train the AI system or service.

SB970 is another bill that was introduced in January 2024 and would require any person or entity that sells or provides access to any AI technology that is designed to create synthetic images, video, or voice to give a consumer warning that misuse of the technology may result in civil or criminal liability for the user.

Finally, on July 2, 2024, the California State Assembly Judiciary Committee passed SB-1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which regulates AI models based on their complexity.

The European Union’s AI Act

The EU is leading the way in AI regulation through its AI Act, which establishes a framework and represents Europe’s first comprehensive attempt to regulate AI. The AI Act was adopted to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection for health, safety, and fundamental rights against the harmful effects of AI systems in the EU and supporting innovation.

The AI Act sets forth harmonized rules for the release and use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the release of general-purpose AI models; rules on market monitoring, market surveillance, governance, and enforcement; and measures to support innovation, with a particular focus on SMEs, including startups.

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Applications that pose an unacceptable risk, such as government social scoring systems, are banned outright. High-risk applications, including CV-scanning tools, face stringent regulations to ensure safety and accountability. For limited-risk applications, where the use of AI may not be apparent, the AI Act imposes transparency obligations: for example, people should be informed when they are interacting with an AI system (such as a chatbot) rather than a human, so they can make an informed decision about whether to continue. The AI Act allows the free use of minimal-risk AI, including applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

The adoption of the AI Act has not come without criticism from major European companies. An open letter signed by 150 executives raised concerns over the heavy regulation of generative AI and foundation models, fearing that increased compliance costs and hindered productivity would drive companies away from the EU. Despite these concerns, the AI Act is here to stay, and it would be wise for companies to prepare for compliance by assessing their systems.

Recommendations for Global Businesses

As governments and regulatory bodies worldwide implement diverse AI regulations, companies have the power to adopt strategies that both ensure compliance and mitigate risks proactively. Global businesses should consider the following recommendations:

  1. Risk Assessments: Conduct thorough risk assessments of your AI systems to align with the EU’s classification scheme and the U.S.’s focus on safety and security, paying particular attention to systems categorized as high-risk under the EU’s AI Act. This proactive approach will not only help you meet regulatory requirements but also protect your business from potential sanctions as the legal landscape evolves.
  2. Compliance Strategy: Develop a compliance strategy that specifically addresses the most stringent aspects of the EU and U.S. regulations.
  3. Legal Monitoring: Stay on top of evolving best practices and guidelines. Monitor regulatory developments in the regions where your company operates to adapt to new requirements and avoid penalties, and engage with policymakers and industry groups to stay ahead of compliance requirements. Participation in public consultations and industry forums can provide valuable insights and influence regulatory outcomes.
  4. Transparency and Accountability: To meet ethical and regulatory expectations, transparency and accountability should be prioritized in AI development. This means ensuring AI systems are transparent, with clear documentation of data sources, decision-making processes, and system functionalities. There should also be accountability measures in place, such as regular audits and impact assessments.
  5. Data Governance: Implement robust data governance measures to meet the EU’s requirements and align with the U.S.’s emphasis on trustworthy AI. Establish governance structures that ensure compliance with federal, state, and international AI regulations, including appointing compliance officers and developing internal policies.
  6. Invest in Ethical AI Practices: Develop and deploy AI systems that adhere to ethical guidelines, focusing on fairness, privacy, and user rights. Ethical AI practices ensure compliance, build public trust, and enhance brand reputation.

AI-Generated Content and Trademarks

The rapid evolution of artificial intelligence has undeniably transformed the digital landscape, with AI-generated content becoming increasingly common. This shift has profound implications for brand owners, introducing both challenges and opportunities.

One of the most pressing concerns is trademark infringement. In a recent example, the Walt Disney Company, which is fiercely protective of its intellectual property, raised concerns about AI-generated content potentially infringing on its trademarks. Social media users were having fun using Microsoft’s Bing AI imaging tool, powered by DALL-E 3 technology, to create images of pets in a “Pixar” style. Disney’s concern was not the artwork itself, however, but the possibility of the AI inadvertently generating the iconic Disney-Pixar logo within the images, which could constitute trademark infringement. This incident highlights the potential for AI-generated content to unintentionally infringe upon established trademarks, requiring brand owners to stay vigilant in protecting their intellectual property in the digital age.

Dilution of trademarks is another critical issue. A recent lawsuit filed by Getty Images against Stability AI sheds light on this concern. Getty Images, a leading provider of stock photos, accused Stability AI of using millions of its copyrighted images to train its AI image generation software. According to Getty Images, this alleged use involved Stability AI incorporating Getty Images’ marks into low-quality, unappealing, or offensive images, diluting those marks in further violation of federal and state trademark laws. The lawsuit highlights the potential for AI, through the sheer volume of content it generates, to blur the line between inspiration and infringement, weakening the association between a trademark and its source.

In addition, the ownership of copyright in AI-generated marketing content can cause problems. While AI tools can create impressive content, questions about who owns the intellectual property rights persist. Recent disputes over AI-generated artwork and music have highlighted the challenges of determining ownership and copyright in this new digital frontier.

However, AI also presents opportunities for trademark owners. For example, AI can be employed to monitor online platforms for trademark infringements, providing an early warning system. Luxury brands have used AI to authenticate products and combat counterfeiting. For instance, Entrupy has developed a mobile device-based authentication system that uses AI and microscopy to analyze materials and detect subtle irregularities indicative of counterfeit products. Brands can integrate Entrupy’s technology into their retail stores or customer-facing apps.

Additionally, AI can be a powerful tool for brand building. By analyzing consumer data and preferences, AI can help create highly targeted marketing campaigns. For example, cosmetic brands have successfully leveraged AI to personalize product recommendations, enhancing customer engagement and loyalty.

The intersection of AI and trademarks is a dynamic and evolving landscape. As technology continues to advance, so too will the challenges and opportunities for trademark owners. Proactive measures, such as robust trademark portfolios, AI-powered monitoring tools, and clear internal guidelines, are essential for safeguarding brand integrity in this new era.

American Bar Association Issues Formal Opinion on Use of Generative AI Tools

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512 titled “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical considerations lawyers are required to consider when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, client communication, raising only meritorious claims, candor toward the tribunal, supervisory responsibilities over others, and the setting of fees.

Competence

The opinion reiterates previous ABA opinions that lawyers are required to have a reasonable understanding of the capabilities and limitations of the specific technologies they use, including remaining “vigilant” about the benefits and risks of technology such as GenAI tools. It specifically notes that attorneys must be aware of the risk of inaccurate output, or hallucinations, from GenAI tools and that independent verification is necessary when using them. According to the opinion, users must evaluate the tool being used, analyze its output, avoid relying solely on the tool’s conclusions, and not replace their own judgment with that of the tool.

Confidentiality

The opinion reminds lawyers that they are ethically required to make reasonable efforts to prevent inadvertent or unauthorized access to, or disclosure of, information relating to the representation of a client. It suggests that, before inputting data into a GenAI tool, a lawyer must evaluate not only the risk of unauthorized disclosure outside the firm but also possible internal unauthorized disclosure in violation of an ethical wall or access controls. The opinion stresses that if client information is uploaded to a GenAI tool within the firm, the client data may be disclosed to and used by other lawyers in the firm, without the client’s consent, to benefit other clients. Client data input into the GenAI tool may also be used for self-learning or to teach an algorithm that then discloses the client data without the client’s consent.

The opinion suggests that before submitting client data to a GenAI tool, lawyers must review the tool’s privacy policy, terms of use, and all contractual terms to determine how the GenAI tool will collect and use the data in the context of the ethical duty of confidentiality with clients.

Further, the opinion suggests that if lawyers intend to use GenAI tools to provide legal services to clients, they must obtain informed client consent before using the tool. The lawyer is required to inform the client of the use of the GenAI tool and the risks of its use, and then obtain the client’s informed consent prior to use. Importantly, the opinion states that “general, boiler-plate provisions [in an] engagement letter” are not sufficient to meet this requirement.

Communication

With regard to lawyers’ duty to communicate effectively and in the best interest of their client, the opinion notes that, depending on the circumstances, it may be in the best interest of the client to disclose the use of GenAI tools, particularly if the use will affect the fee charged to the client or the output of the GenAI tool will influence a significant decision in the representation. This communication can be included in the engagement letter, though it may be appropriate to discuss it with the client directly before doing so.

Meritorious Claims + Candor Toward Tribunal

Lawyers are officers of the court and have an ethical obligation to put forth meritorious claims and to be candid with the tribunal before which such claims are presented. In the context of the use of GenAI tools, as stated above, there is a risk that without appropriate evaluation and supervision (including the use of independent professional judgment), the output of a GenAI tool can sometimes be erroneous or considered a “hallucination.” Therefore, to reiterate the ethical duty of competence, lawyers are advised to independently evaluate any output provided by a GenAI tool.

In addition, some courts require that attorneys disclose whether GenAI tools have been used in court filings. It is important to research and follow local court rules and practices regarding disclosure of the use of GenAI tools before submitting filings.

Supervisory Responsibilities

Consistent with other ABA Opinions relevant to the use of technology, the opinion stresses that managerial responsibilities include providing clear policies to lawyers, non-lawyers, and staff about the use of GenAI in the practice of law. I think this is one of the most important messages of the opinion. Firms and law practices are required to develop and implement a GenAI governance program, evaluate the risk and benefit of the use of a GenAI tool, educate all individuals in the firm on the policies and guardrails put in place to use such tools, and supervise their use. This is a clear message that lawyers and law firms need to evaluate the use of GenAI tools and start working on developing and implementing their own AI governance program for all internal users.

Fees

The key takeaway of the fees section of Opinion 512 is that a lawyer can’t bill a client to learn how to use a GenAI tool. Consistent with other opinions relating to fees, only extraordinary costs associated with the use of GenAI tools are permitted to be billed to the client, with the client’s knowledge and consent. In addition, the opinion points out that any efficiencies gained by the use of GenAI tools, with the client’s consent, should benefit the client through reduced fees.

Conclusion

Although consistent with other ABA opinions related to the use of technology, an understanding of ABA Opinion 512 is important as GenAI tools become more ubiquitous. It is clear that there will be additional opinions related to the use of GenAI tools from the ABA as well as state bar associations and that it is a topic of interest in the context of adherence with ethical obligations. A clear message from Opinion 512 is that now is a good time to consider developing an AI governance program.