California Poised to Further Regulate Artificial Intelligence by Focusing on Safety

Looking to cement the state near the forefront of artificial intelligence (AI) regulation in the United States, on August 28, 2024, the California State Assembly passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), also referred to as the AI Safety Act. The measure awaits the signature of Governor Gavin Newsom. This development comes on the heels of the passage of the “first comprehensive regulation on AI by a major regulator anywhere” — the EU Artificial Intelligence Act (EU AI Act) — which reached political agreement in late 2023 and entered into force on August 1, 2024. It also follows the first comprehensive US AI law from Colorado (Colorado AI Act), enacted on May 17, 2024. And while the United States lacks a comprehensive federal AI framework, there have been developments regarding AI at the federal level, including the late 2023 Executive Order on AI from the Biden White House and other AI-related regulatory guidance.

We have seen this sequence play out before in the world of privacy. Europe has long led on privacy regulation, stemming in large part from its recognition of privacy as a fundamental right — an approach that differs from how privacy is viewed in the United States. When the European General Data Protection Regulation (GDPR) became effective in May 2018, it was not the world’s first comprehensive privacy framework (not even in Europe), but it did highlight increasing awareness and market attention around the use and protection of personal data, setting off a multitude of copycat privacy regulatory regimes globally. Not long after GDPR, California became the first US state with a comprehensive privacy regulation when then-California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) into law on June 28, 2018. While the CCPA, since amended by the California Privacy Rights Act of 2020 (CPRA), is assuredly not a GDPR clone, it nevertheless felt familiar to many organizations that had begun to develop privacy compliance programs centered on GDPR standards and definitions. The CCPA preceded the passage of comprehensive privacy regulations in many other US states that, while not necessarily based on CCPA, did not diverge dramatically from the approach taken by California. These privacy laws also generally apply to AI systems when they process personal data, with some (including CCPA/CPRA) already contemplating automated decision-making that can be, but is not necessarily, based on AI.

AI Safety Act Overview

Unlike in the privacy sphere, the AI Safety Act does not offer the same degree of familiarity when compared to the EU AI Act (or its domestic predecessor, the Colorado AI Act). Europe has taken a risk-based approach that defines different types of AI and applies differing rules based on these definitions, while Colorado primarily focuses on “algorithmic discrimination” by AI systems determined to be “high-risk.” Both Europe and Colorado distinguish between “providers” or “developers” (those that develop an AI system) and “deployers” (those that use AI systems) and include provisions that apply to both. The AI Safety Act, however, principally focuses on AI developers and seeks to address potential critical harms (largely centered on catastrophic mass casualty events) created by (i) large-scale AI systems with extensive computing power of greater than 10^26 integer or floating-point operations and with a development cost of greater than $100 million, or (ii) a model created by fine-tuning a covered AI system using computing power equal to or greater than three times 10^25 integer or floating-point operations with a cost in excess of $10 million (a simplified sketch of these coverage thresholds follows the list below). Key requirements of the AI Safety Act include:

  • “Full Shutdown” Capability. Developers would be required to implement the capability to enact a full shutdown of a covered AI system, taking into account the risk that a shutdown could disrupt critical infrastructure, and to implement a written safety and security protocol that, among other things, details the conditions under which such a shutdown would be enacted.
  • Safety Assessments. Prior to release, testing would need to be undertaken to determine whether the covered model is “reasonably capable of causing or materially enabling a critical harm,” with details around such testing procedures and the nature of implemented safeguards.
  • Third-Party Auditing. Developers would be required to retain a third-party auditor annually to perform an independent audit of a covered AI system, “consistent with best practices for auditors,” to ensure compliance with the requirements of the AI Safety Act.
  • Safety Incident Reporting. If a safety incident affecting the covered model occurs, the AI Safety Act would require developers to notify the California Attorney General (AG) within 72 hours after the developer learns of the incident or learns of facts that cause a reasonable belief that a safety incident has occurred.
  • Developer Accountability. Notably, the AI Safety Act would empower the AG to bring civil actions against developers for harms caused by covered AI systems. The AG may also seek injunctive relief to prevent potential harms.
  • Whistleblower Protections. The AI Safety Act would also provide additional whistleblower protections, including by prohibiting developers of a covered AI system from preventing employees from disclosing information regarding the AI system (including its noncompliance) or retaliating against employees for such disclosures.
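For readers who want the coverage thresholds in concrete terms, the sketch below expresses the two prongs summarized above as a simple check. It is only an illustration: the function and parameter names are hypothetical and do not come from SB 1047, and whether a particular training run actually meets the bill’s definitions is a legal question rather than an arithmetic one.

```python
# Illustrative sketch only; names and structure are hypothetical, not from SB 1047.

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Prong (i): more than 10^26 operations AND more than $100 million to develop."""
    return training_ops > 1e26 and training_cost_usd > 100_000_000

def is_covered_fine_tune(fine_tune_ops: float, fine_tune_cost_usd: float) -> bool:
    """Prong (ii): fine-tuning a covered model with at least 3 x 10^25 operations
    and a cost in excess of $10 million."""
    return fine_tune_ops >= 3e25 and fine_tune_cost_usd > 10_000_000

if __name__ == "__main__":
    # A frontier-scale training run would fall within scope under this reading.
    print(is_covered_model(2e26, 250_000_000))    # True
    # A modest fine-tune below both thresholds would not.
    print(is_covered_fine_tune(1e25, 5_000_000))  # False
```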

The Path Forward

California may not want to cede its historical position as one of the principal US states that regularly establishes precedent in emerging technology and market-driven areas of importance. This latest effort, however, may have been motivated at least in part by widely covered prognostications of doom and the potential for the destruction of civilization at AI’s collective hands. Some members of Congress have opposed the AI Safety Act, stating in part that it should “ensure restrictions are proportionate to real-world risks and harms.” To be sure, California’s approach to regulating AI under the AI Safety Act is not “wrong.” It does, however, represent a different approach than other AI regulations, which generally focus on the riskiness of use and address areas such as discrimination, transparency, and human oversight.

While the AI Safety Act focuses on sophisticated AI systems with the largest processing power and biggest development budgets, and thus presumably those with a greater potential for harm, developers of AI systems of all sizes and capabilities already largely engage in testing and assessments, even if only motivated by market considerations. What is new is that the AI Safety Act would create standards for such evaluations that, with history as the guide, would likely materially influence the standards included in other US AI regulations if the bill is signed into law by Governor Newsom (who has already signed his own executive order on generative AI, predating President Biden’s), even though the range of covered AI systems would be somewhat limited.

With AI’s potential to transform every industry, regulation in one form or another is critical to navigating the ongoing sea change. The extent and nature of that regulation in California and elsewhere is certain to be fiercely debated, whether or not the AI Safety Act is signed into law. Currently, the risks attendant to AI development and use in the United States are still largely reputational, but comprehensive regulation is approaching. It is thus critical to be thoughtful and proactive about how your organization intends to leverage AI tools and to fully understand the risks and benefits associated with any such use.

Artificial Intelligence and Intellectual Property Legal Frameworks in the Asia-Pacific Region

Globally, governments are grappling with the emergence of artificial intelligence (“AI”). AI technologies introduce exciting new opportunities but also bring challenges for regulators and companies across all industries. The Asia-Pacific (“APAC”) region is no exception. APAC governments are adapting to AI and finding ways to encourage and regulate AI development through existing intellectual property (“IP”) regimes and new legal frameworks.

AI technologies aim to simulate human intelligence by developing smart machines capable of performing tasks that would otherwise require human intelligence. The expanding market for AI ranges from machine learning to generative AI to virtual assistants to robotics, and this list merely scratches the surface.

When it comes to IP and AI, there are several critical questions for governments to consider: Can AI models be protected by existing legal frameworks within IP? Must copyright owners be human? Does a patent inventor have to be an individual? Do AI models’ training programs infringe on others’ copyrights?

To begin to answer these questions, regulators are drawing from existing IP regimes, including patent and copyright law. Some APAC countries have taken a non-binding approach, relying on existing principles to guide AI regulation. Others are drafting more specific AI regulations. The summary chart below provides a brief overview of current patent and copyright laws within APAC focused on AI and IP. Additional commentary concerning updates to AI laws and regulations is provided below the chart.

Korea
Patent: A non-human cannot be the inventor under Korea’s Patent Act, which requires “a person.”
Copyright: The Copyright Act requires a human creator. Copyright is possible if the creator is a human using generative AI models as software tools and the human input is considered more than simple prompt inputs. For example, in Korea, copyright was granted to a movie produced by generative AI as a “compilation work” on December 29, 2023.

Japan
Patent: Under Japan’s Patent Act, a natural person must be the inventor. This is the requirement of “shimei 氏名” (i.e., the name of a natural person).
Copyright: Japan’s Copyright Act defines a copyright-protected work as “a creation expressing human thoughts and emotions.” However, on February 29, 2024, the Agency for Cultural Affairs committee’s document on “Approach to AI and Copyright” provided that a joint work made up of both human input and AI-generated content can be eligible for copyright protection.

Taiwan
Patent: Taiwan’s Patent Law does not explicitly preclude a non-human inventor; however, the Patent Examination Guidelines require a natural person to be an inventor. Formalities in Taiwan also require an inventor’s name and nationality.
Copyright: The Copyright Act requires “human creative expression.”

China
Patent: The inventor must be a person under the Patent Law and the Guidelines for Examination in China.
Copyright: Overall, Chinese courts have recognized that when AI-generated works involve human intellectual input, the user of the AI software is the copyright owner.

Hong Kong
Patent: The Patents Ordinance in Hong Kong requires a human inventor.
Copyright: The Copyright Ordinance in Hong Kong attributes authorship to “the person by whom the arrangements necessary for the creation of the work are undertaken.”

Philippines
Patent: Patent law in the Philippines requires a natural person to be the inventor.
Copyright: Generally, copyright law in the Philippines requires the author to be a natural person. The copyright in works that are partially AI-generated protects only those parts created by natural persons. The Philippines IP Office relies on the declarations of the creator claiming copyright to identify which parts of the work are AI-generated and which are not.

Vietnam
Patent: AI cannot be an IP right owner in Vietnam. The user of AI is the owner, regardless of the degree of work carried out by AI.
Copyright: Likewise, AI cannot be an IP right owner; the user of AI is the owner, regardless of the degree of work carried out by AI.

Thailand
Patent: Patent law in Thailand requires inventors to be individuals.
Copyright: Copyright law in Thailand requires an author to be an individual.

Malaysia
Patent: Malaysia’s patent law requires inventors to be individuals.
Copyright: Copyright law in Malaysia requires an author to be an individual.

Singapore
Patent: Patent law requires inventors to be natural persons; however, the owner can be a natural person or a legal entity.
Copyright: In Singapore, it is implicit in provisions of the Copyright Act that the author must be a natural person.

Indonesia
Patent: Under Indonesia’s patent law, the inventor may be an individual or a legal entity.
Copyright: Under copyright law in Indonesia, the author of a work may be an individual or a legal entity.

India
Patent: India’s patent law requires inventors to be natural persons.
Copyright: The copyright law contains a requirement of “originality,” which the courts interpret as “intellectual effort by humans.”

Australia
Patent: The Full Federal Court in Australia ruled that an inventor must be a natural person.
Copyright: Copyright law in Australia requires the author to be a human.

New Zealand
Patent: One court in New Zealand has ruled that AI cannot be an inventor under the Patents Act.
Copyright: A court in New Zealand has ruled that AI cannot be the author under the provisions of the Copyright Act. Updated legislation clarifies that the owner of a computer-generated work is the person who “made the arrangements necessary” for the creation of the work.

AI Regulation and Infringement

KOREA: Court decisions have held that web scraping, or pulling information from a competitor’s website or database, infringes the competitor’s database rights under the Copyright Act and the UCPA. Guidelines in Korea emphasize that parties must obtain permission to use copyrighted works for training AI. The Copyright Commission published guidelines on copyright and AI in December 2023, noting the growing need for legislation on AI-generated works. An English version of the guidelines was released in April 2024.

JAPAN: Japan’s Copyright Act, as amended effective January 1, 2019, provides very broad rights to use copyrighted works without permission for training AI, as long as the training is for the purpose of technological development. The Japan Agency for Cultural Affairs (ACA) released its draft “Approach to AI and Copyright” for public comment on January 23, 2024; the ACA committee aims to introduce checks on this freedom and to provide more protection for Japan-based content creators and copyright holders. Additional changes were made to the draft after consideration of approximately 25,000 comments as of February 29, 2024. Also, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry compiled the AI Guidelines for Business Ver1.0 in Japan on April 19, 2024.

TAIWAN: Using copyrighted works to train AI models involves “reproduction,” which constitutes infringement unless there is consent or a license to use the work. Taiwan’s IP Office released an interpretation circular to clarify AI issues in June 2023. Following that circular, the Taiwan cabinet approved draft guidelines for the use of generative AI by the executive branch of the Taiwan government in August 2023. The executive branch also confirmed that it is in the process of formulating the government’s version of the Draft AI Law, which is expected to be published this year.

CHINA: Interim Measures for the Management of Generative Artificial Intelligence Services, promulgated in July 2023, require that generative AI services “respect intellectual property rights and commercial ethics” and that “intellectual property rights must not be infringed.” The consultation draft on Basic Security Requirements for Generative Artificial Intelligence Service, which was published in October 2023, provides detailed guidance on how to avoid IP infringement. The requirements, for example, provide specific processes concerning model training data that Chinese AI companies must adopt. Moreover, China’s draft Artificial Intelligence Law, proposed on March 16, 2024, outlines the use of copyrighted material for training purposes, and it serves as a complement to China’s current AI regulations.

HONG KONG: A review of copyright law in Hong Kong is underway. There is currently no overarching legislation regulating the use of AI, and the existing guidelines and principles mainly provide guidance on the use of personal data.

VIETNAM: AI cannot have responsibility for infringement, and there are no provisions under existing laws in Vietnam regarding the extent of responsibility of AI users for infringing acts. The Law on Protection of Consumers’ Rights will take effect on July 1, 2024. This law requires operators of large digital platforms to periodically evaluate the use of AI and fully or partially automated solutions.

THAILAND: Infringement in Thailand requires intent or implied intent, for example, from the prompts made to the AI. Thai law also provides for liability arising out of the helping or encouraging of infringement by another. Importantly, the AI user may also be exposed to liability in that way.

MALAYSIA: An informal comment from February 2024 by the Chairman of the Malaysia IP Office provides that there may be infringement through the training and/or use of AI programs.

SINGAPORE: Singapore has a hybrid regime. The regime provides a general fair use exception, which is likely guided by US jurisprudence, per the Singapore Court of Appeal, as well as exceptions for specific types of permitted uses, for example, the computational data analysis exception. IPOS issued a Landscape Report on Issues at the Intersection of AI and IP on February 28, 2024, and a Model AI Governance Framework for Generative AI was published on May 30, 2024.

INDONESIA: A “circular,” a government-issued document similar to a white paper, implies that infringement is possible in Indonesia. The nonbinding Communications and Information Ministry Circular No. 9/2023 on AI was signed in December 2023.

INDIA: Under the Copyright Act of 1957, a Generative AI user has an obligation to obtain permission to use the copyright owner’s works for commercial purposes. In February 2024, the Ministry of Commerce and Industry’s Statement provided that India’s existing IPR regime is “well-equipped to protect AI-generated works” and therefore, it does not require a separate category of rights. MeitY issued a revised advisory on March 15, 2024 providing that platforms and intermediaries should ensure that the use of AI models, large language models, or generative AI software or algorithms by end users does not facilitate any unlawful content stipulated under Rule 3(1)(b) of the IT Rules, in addition to any other laws.

AUSTRALIA: Any action seeking compensation for infringement of a copyright work by an AI system would need to rely on the Copyright Act of 1968. It is an infringement of copyright to reproduce or communicate works digitally without the copyright owner’s permission. Australia does not have a general “fair use” defense to copyright infringement.

NEW ZEALAND: While infringement by AI users has not yet been considered by New Zealand courts, New Zealand has more restricted “fair dealing” exceptions. Copyright review is underway in New Zealand.

Illinois Enacts Requirements for AI Use in Employment Decisions

On Aug. 9, 2024, Illinois Gov. Pritzker signed into law HB 3773, which amends the Illinois Human Rights Act (IHRA) to cover employer use of artificial intelligence (AI). Effective Jan. 1, 2026, the amendments will add to existing requirements for employers that use AI to analyze video interviews of applicants for positions in Illinois.

Illinois is the latest jurisdiction to pass legislation aimed at preventing discrimination caused by AI tools that aid in making employment decisions. The state joins jurisdictions such as Colorado and New York City in regulating the use of AI in this context.

Restrictions on the Use of AI in Employment Decisions

The amendments expressly prohibit the use of AI in a manner that results in illegal discrimination in employment decisions and employee recruitment. Specifically, covered employers are barred from using AI in a way that has the effect of subjecting employees to discrimination on the basis of any class protected by the IHRA, including if zip codes are used as a proxy for such protected classes.

These new requirements will apply to any employer with one or more employees in Illinois during 20 or more calendar weeks within the calendar year of, or preceding, the alleged violation. They also apply to any employer with one or more employees when unlawful discrimination based on physical or mental disability unrelated to ability, pregnancy, or sexual harassment is alleged.

The amendments define AI as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” AI also includes “generative artificial intelligence.”

The amendments further define generative AI as “an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to”:

  • Textual outputs, such as short answers, essays, poetry, or longer compositions or answers;
  • Image outputs, such as fine art, photographs, conceptual art, diagrams, and other images;
  • Multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and
  • Other content that would be otherwise produced by human means.

Employer Notice Requirements

The amendments require a covered employer to provide notice to employees if the organization uses AI for the following employment-related purposes:

  • Recruitment
  • Hiring
  • Promotion
  • Renewal of employment
  • Selection for training or apprenticeship
  • Discharge
  • Discipline
  • Tenure
  • The terms, privileges, or conditions of employment

While the amendments do not provide specific direction regarding the notice, such as when and how the notice should be provided, they direct the Illinois Department of Labor to adopt rules necessary to implement the notice requirement. Thus, additional guidance should be forthcoming.

Although not required, Illinois employers and AI technology developers may wish to consider conducting audits or taking other measures to help avoid biased outcomes and to further protect against liability.

Enforcement

The IHRA establishes a two-part enforcement procedure. The Illinois Department of Human Rights (IDHR) is the administrative agency that investigates charges of discrimination, while the Illinois Human Rights Commission (IHRC) is an administrative court that adjudicates complaints of unlawful discrimination. Complainants have the option to proceed before the IHRC or file a civil action directly in circuit court after exhausting their administrative remedies before the IDHR.

Practical Considerations

Before the effective date, covered employers should consider:

  • Assessing which platforms and tools in use (or under consideration) incorporate AI, including generative AI, components.
  • Drafting employee notices and developing a plan for notifying employees.
  • Training AI users and quality control reviewers/auditors on anti-discrimination/anti-bias laws and policies that will impact their interaction with the tool(s).
  • Partnering with legal counsel and experienced vendors to identify or create privileged processes to evaluate, mitigate, and monitor potential discriminatory or biased impacts of AI use.
  • Reviewing any rules published by the Illinois Department of Labor, including on the circumstances and conditions that require notice and the timeframe and means for providing notice.
  • Monitoring for additional requirements if operating in multiple states. For instance, California’s legislature is considering a range of AI-related bills, including some aimed at workplace discrimination.

“Is SEO Dead?” Why AI Isn’t the End of Law Firm Marketing

With the emergence of Artificial Intelligence (AI) technology, many business owners have feared that marketing as we know it is coming to an end. After all, Google Gemini is routinely surfacing AI-generated responses over organic search results, AI content is abundant, and AI-driven tools are being used more than ever to automate tasks previously performed by human marketers.

But it’s not all doom and gloom: digital marketing, including Search Engine Optimization (SEO), is alive and well. This is particularly true for the legal industry, where there are many limits to what AI can do in terms of content creation and client acquisition.

Here’s how the world of SEO is being impacted by AI, and what this means for your law firm marketing.

Law Firm Marketing in the Age of AI

The Economist put it best: the development of AI has resulted in a “tsunami of digital innovation”. From ChatGPT’s world-changing AI model to the invention of “smart” coffee machines, AI appears to be everywhere. And it has certainly shaken up the world of law firm marketing.

Some of these innovations include AI chatbots for client engagement, tools like Lex Machina and Premonition that use predictive analytics to generate better leads, and AI-assisted legal research. Countless more tools and formulas have emerged to help law firms streamline their operations, optimize their marketing campaigns, create content, and even reduce overhead.

So, what’s the impact? 

With AI, law firms have reduced their costs, leveraging automated tools instead of manual efforts. Legal professionals have access to more data to identify (and convert) quality leads. And it’s now easier than ever to create content at volume.

At the same time, though, many people question the quality and accuracy of AI content. Some argue that AI cannot capture the nuance of the human experience or understand complex (and often emotional) legal issues. Moreover, AI-generated images and content often lack a personalized touch.

One area of marketing that’s particularly impacted by this is SEO, as it is largely driven by real human behavior, interactions, and needs.

So, is SEO Dead?

Even though many of the tools and techniques of SEO for lawyers have changed, SEO itself is still alive and well. Businesses continue to benefit from SEO strategies, allowing their brands to surface in the search results and attract new customers. In fact, there may even be more opportunities to rank than ever before.

For instance, Google showcases not only organic results but also paid search results, the Google Map Pack, Images, News, Knowledge Panel, Shopping, and many more pieces of digital real estate. This gives businesses different content formats and keyword opportunities to choose from.

Also, evolution in the SEO landscape is nothing new. There have been countless algorithm changes over the years, often in response to user behavior and new technology. SEO may be different, but it’s not dead.

Why SEO Still Matters for Law Firms

With the SEO industry alive and well, it’s still important for law firms to have a strong organic presence. This is because Google remains the leading medium through which people search for legal services. If you aren’t ranking high in Google, it will be difficult to get found by potential clients.

Here are some of the many ways SEO still matters for law firms, even in the age of AI.

1. Prospective clients still use search engines

Despite the rise of AI-based tools, your potential clients rely heavily on search engines when searching for your services. Whether they’re looking for legal counsel or content related to specific legal issues, search engines remain a primary point of entry.

Now, AI tools can often assist in this search process, but they rarely replace it entirely. SEO ensures your firm is visible when potential clients search for these services.

2. Your competitors are ranking in Search

Conduct a quick Google search of “law firm near me,” and you’ll likely see a few of your competitors in the search results. Whether they’re implementing SEO or not, their presence is a clear indication that you’ll need some organic momentum in order to compete.

Again, potential clients are using Google to search for the types of services you offer, but if they encounter your competitors first, they’re likely to inquire with a different firm. With SEO, you help your law firm stand out in the search results and become the obvious choice for potential clients.

3. AI relies on search engine data

The reality is that AI tools actually harness search engine data to train their models. This means the success of AI largely depends on people using search engines on a regular basis. Google isn’t going anywhere, so AI isn’t likely to go anywhere, either!

Whether it’s voice search through virtual assistants or AI-driven legal content suggestions, these systems still rely on the vast resources that search engines like Google organize. Strong SEO practices are essential to ensure your law firm’s website is part of that data pool. AI can’t bypass search engines entirely, so optimizing for search ensures your firm remains discoverable.

4. AI can’t replace personalized content

Only you, as a lawyer, have the experience and training to advise clients on complex legal issues. AI content, even if used only in your marketing, will take you only so far. Potential clients want to read content that’s helpful, relatable, and applicable to their needs.

While AI can generate content and provide answers, legal services are inherently personal. Writing your own content or hiring a writer might be your best bet for creating informative, well-researched content. AI can’t replicate the nuanced understanding that comes from a real lawyer, as your firm is best equipped to address clients’ specific legal issues.

5. SEO is more than just “content”

In the field of SEO, a lot of focus is put on content creation. And while content is certainly important (in terms of providing information and targeting keywords), it’s only one piece of the pie. AI tools are less adept at other aspects of SEO, such as technical SEO and local search strategy.

Local SEO is essential for law firms, as most law firms serve clients within specific geographical areas. Google’s algorithm uses localized signals to determine which businesses to show in search results. This requires an intentional targeting strategy, optimizing your Google Business Profile, submitting your business information to online directories, and other activities AI tools have yet to master.

AI doesn’t replace the need for local SEO—if anything, AI-enhanced local search algorithms make these optimizations even more critical!

Goodbye AI, hello SEO?

Overall, the legal industry is a trust-based business. Clients want to know they are working with reputable attorneys who understand their issues. AI is often ill-equipped to provide that level of expertise and personalized service.

Further, AI tools have limitations regarding what they can optimize, create, and manage. AI has not done away with SEO but has undoubtedly changed the landscape. SEO is an essential part of any law firm’s online marketing strategy.

AI is unlikely to disappear any time soon, and neither is SEO!

A Look at the Evolving Scope of Transatlantic AI Regulations

There have been significant changes to the regulations surrounding artificial intelligence (AI) on a global scale. New measures from governments worldwide are coming online, including the United States (U.S.) government’s executive order on AI, California’s upcoming regulations, the European Union’s AI Act, and emerging developments in the United Kingdom that contribute to this evolving environment.

The European Union (EU) AI Act and the U.S. Executive Order on AI aim to develop and utilize AI safely, securely, and with respect for fundamental rights, yet their approaches are markedly different. The EU AI Act establishes a binding legal framework across EU member states, directly applies to businesses involved in the AI value chain, classifies AI systems by risk, and imposes significant fines for violations. In contrast, the U.S. Executive Order is more of a guideline as federal agencies develop AI standards and policies. It prioritizes AI safety and trustworthiness but lacks specific penalties, instead relying on voluntary compliance and agency collaboration.

The EU approach includes detailed oversight and enforcement, while the U.S. method encourages the adoption of new standards and international cooperation that aligns with global standards but is less prescriptive. Despite their shared objectives, differences in regulatory approach, scope, enforcement, and penalties could lead to contradictions in AI governance standards between the two regions.

There has also been some collaboration on an international scale. Recently, there has been an effort among antitrust officials at the U.S. Department of Justice (DOJ), the U.S. Federal Trade Commission (FTC), the European Commission, and the UK’s Competition and Markets Authority to monitor AI and its risks to competition. The agencies have issued a joint statement, with all four antitrust enforcers pledging “to remain vigilant for potential competition issues” and to use the powers of their agencies to provide safeguards against the utilization of AI to undermine competition or lead to unfair or deceptive practices.

The regulatory landscape for AI across the globe is evolving in real time as the technology develops at a record pace. As regulations strive to keep up with the technology, there are real challenges and risks that exist for companies involved in the development or utilization of AI. Therefore, it is critical that business leaders understand regulatory changes on an international scale, adapt, and stay compliant to avoid what could be significant penalties and reputational damage.

The U.S. Federal Executive Order on AI

In October 2023, the Biden Administration issued an executive order to foster responsible AI innovation. This order outlines several key initiatives, including promoting ethical, trustworthy, and lawful AI technologies. It also calls for collaboration between federal agencies, private companies, academia, and international partners to advance AI capabilities and realize its myriad benefits. The order emphasizes the need for robust frameworks to address potential AI risks such as bias, privacy concerns, and security vulnerabilities. In addition, the order directs various sweeping actions, including establishing new standards for AI safety and security; calling for bipartisan data privacy legislation to protect Americans’ privacy from the risks posed by AI; promoting the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges; and ensuring responsible government deployment of AI and modernization of the federal AI infrastructure through the rapid hiring of AI professionals.

At the state level, Colorado and California are leading the way. Colorado enacted the first comprehensive regulation of AI at the state level with the Colorado Artificial Intelligence Act (Senate Bill (SB) 24-205), signed into law by Governor Jared Polis on May 17, 2024. As our team previously outlined, the Colorado AI Act is comprehensive, establishing requirements for developers and deployers of “high-risk artificial intelligence systems” to adhere to a host of obligations, including disclosures, risk management practices, and consumer protections. The Colorado law goes into effect on February 1, 2026, giving companies over a year to thoroughly adapt.

In California, a host of proposed AI regulations focusing on transparency, accountability, and consumer protection would require the disclosure of information such as AI systems’ functions, data sources, and decision-making processes. For example, AB2013, introduced on January 31, 2024, would require developers of an AI system or service made available to Californians to post on the developer’s website documentation of the datasets used to train the AI system or service.

SB970 is another bill that was introduced in January 2024 and would require any person or entity that sells or provides access to any AI technology that is designed to create synthetic images, video, or voice to give a consumer warning that misuse of the technology may result in civil or criminal liability for the user.

Finally, on July 2, 2024, the California State Assembly Judiciary Committee passed SB-1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which would regulate AI models based on their complexity.

The European Union’s AI Act

The EU is leading the way in AI regulation through its AI Act, which establishes a framework and represents Europe’s first comprehensive attempt to regulate AI. The AI Act was adopted to promote the uptake of human-centric and trustworthy AI while ensuring high level protections of health, safety, and fundamental rights against the harmful effects of AI systems in the EU and supporting innovation.

The AI Act sets forth harmonized rules for the release and use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the release of general-purpose AI models; rules on market monitoring, market surveillance, governance, and enforcement; and measures to support innovation, with a particular focus on SMEs, including startups.

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Applications that pose an unacceptable risk, such as government social scoring systems, are outright banned. High-risk applications, including CV-scanning tools, face stringent regulations to ensure safety and accountability. For limited-risk applications, where the use of AI may not be apparent, the AI Act imposes transparency obligations. For example, humans should be informed when interacting with AI systems (such as chatbots) that they are dealing with a machine and not a human, so that the user can make an informed decision about whether to continue. The AI Act allows the free use of minimal-risk AI, including applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

The adoption of the AI Act has not come without criticism from major European companies. An open letter signed by 150 executives raised concerns over the heavy regulation of generative AI and foundation models, fearing that increased compliance costs and hindered productivity would drive companies away from the EU. Despite these concerns, the AI Act is here to stay, and it would be wise for companies to prepare for compliance by assessing their systems.

Recommendations for Global Businesses

As governments and regulatory bodies worldwide implement diverse AI regulations, companies have the power to adopt strategies that both ensure compliance and mitigate risks proactively. Global businesses should consider the following recommendations:

  1. Risk Assessments: Conduct thorough risk assessments of AI systems to align with the EU’s classification scheme and the U.S.’s focus on safety and security. This should include assessing the safety and security of AI systems, particularly those categorized as high-risk under the EU AI Act. This proactive approach will not only help meet regulatory requirements but also protect the business from potential sanctions as the legal landscape evolves.
  2. Compliance Strategy: Develop a compliance strategy that specifically addresses the most stringent aspects of the EU and U.S. regulations.
  3. Legal Monitoring: Stay on top of evolving best practices and guidelines. Monitor regulatory developments in the regions in which the company operates to adapt to new requirements and avoid penalties, and engage with policymakers and industry groups to stay ahead of compliance requirements. Participation in public consultations and industry forums can provide valuable insights and influence regulatory outcomes.
  4. Transparency and Accountability: To meet ethical and regulatory expectations, transparency and accountability should be prioritized in AI development. This means ensuring AI systems are transparent, with clear documentation of data sources, decision-making processes, and system functionalities. There should also be accountability measures in place, such as regular audits and impact assessments.
  5. Data Governance: Implement robust data governance measures to meet the EU’s requirements and align with the U.S.’s emphasis on trustworthy AI. Establish governance structures that ensure compliance with federal, state, and international AI regulations, including appointing compliance officers and developing internal policies.
  6. Invest in Ethical AI Practices: Develop and deploy AI systems that adhere to ethical guidelines, focusing on fairness, privacy, and user rights. Ethical AI practices ensure compliance, build public trust, and enhance brand reputation.

AI-Generated Content and Trademarks

The rapid evolution of artificial intelligence has undeniably transformed the digital landscape, with AI-generated content becoming increasingly common. This shift has profound implications for brand owners, introducing both challenges and opportunities.

One of the most pressing concerns is trademark infringement. In a recent example, the Walt Disney Company, a company fiercely protective of its intellectual property, raised concerns about AI-generated content potentially infringing on its trademarks. Social media users were having fun using Microsoft’s Bing AI imaging tool, powered by DALL-E 3 technology, to create images of pets in a “Pixar” style. However, Disney’s concern wasn’t the artwork itself, but the possibility of the AI inadvertently generating the iconic Disney-Pixar logo within the images, which could constitute trademark infringement. This incident highlights the potential for AI-generated content to unintentionally infringe upon established trademarks, requiring brand owners to stay vigilant in protecting their intellectual property in the digital age.

Dilution of trademarks is another critical issue. A recent lawsuit filed by Getty Images against Stability AI sheds light on this concern. Getty Images, a leading provider of stock photos, accused Stability AI of using millions of its copyrighted images to train its AI image generation software. This alleged use, according to Getty Images, involved Stability AI’s incorporation of Getty Images’ marks into low-quality, unappealing, or offensive images, diluting those marks in further violation of federal and state trademark laws. The lawsuit highlights the potential for AI, through the sheer volume of content it generates, to blur the lines between inspiration and infringement, weakening the association between a trademark and its source.

In addition, the ownership of copyright in AI-generated marketing content can cause problems. While AI tools can create impressive content, questions about who owns the intellectual property rights persist. Recent disputes over AI-generated artwork and music have highlighted the challenges of determining ownership and copyright in this new digital frontier.

However, AI also presents opportunities for trademark owners. For example, AI can be employed to monitor online platforms for trademark infringements, providing an early warning system. Luxury brands have used AI to authenticate products and combat counterfeiting. For instance, Entrupy has developed a mobile device-based authentication system that uses AI and microscopy to analyze materials and detect subtle irregularities indicative of counterfeit products. Brands can integrate Entrupy’s technology into their retail stores or customer-facing apps.

Additionally, AI can be a powerful tool for brand building. By analyzing consumer data and preferences, AI can help create highly targeted marketing campaigns. For example, cosmetic brands have successfully leveraged AI to personalize product recommendations, enhancing customer engagement and loyalty.

The intersection of AI and trademarks is a dynamic and evolving landscape. As technology continues to advance, so too will the challenges and opportunities for trademark owners. Proactive measures, such as robust trademark portfolios, AI-powered monitoring tools, and clear internal guidelines, are essential for safeguarding brand integrity in this new era.

American Bar Association Issues Formal Opinion on Use of Generative AI Tools

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512 titled “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical considerations lawyers are required to consider when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, client communication, raising only meritorious claims, candor toward the tribunal, supervisory responsibilities of others, and setting of fees.

Competence

The opinion reiterates previous ABA opinions that lawyers are required to have a reasonable understanding of the capabilities and limitations of specific technologies used, including remaining “vigilant” about the benefits and risks of the use of technology, including GenAI tools. It specifically mentions that attorneys must be aware of the risk of inaccurate output or hallucinations of GenAI tools and that independent verification is necessary when using GenAI tools. According to the opinion, users must evaluate the tool being used, analyze the output, not solely rely on the tool’s conclusions, and cannot replace their judgment with that of the tool.

Confidentiality

The opinion reminds lawyers that they are ethically required to make reasonable efforts to prevent inadvertent or unauthorized access to, or disclosure of, information relating to their representation of a client. It suggests that, before inputting data into a GenAI tool, a lawyer must evaluate not only the risk of unauthorized disclosure outside the firm, but also possible internal unauthorized disclosure in violation of an ethical wall or access controls. The opinion stressed that if client information is uploaded to a GenAI tool within the firm, the client data may be disclosed to and used by other lawyers in the firm, without the client’s consent, to benefit other clients. The client data input into the GenAI tool may also be used for self-learning or teaching an algorithm that then discloses the client data without the client’s consent.

The opinion suggests that before submitting client data to a GenAI tool, lawyers must review the tool’s privacy policy, terms of use, and all contractual terms to determine how the GenAI tool will collect and use the data in the context of the ethical duty of confidentiality with clients.

Further, the opinion suggests that if lawyers intend to use GenAI tools to provide legal services to clients, they are required to obtain informed client consent before using the tool. The lawyer must inform the client of the use of the GenAI tool and the risks of using it, and then obtain the client’s informed consent prior to use. Importantly, the opinion states that “general, boiler-plate provisions [in an] engagement letter” are not sufficient to meet this requirement.

Communication

With regard to lawyers’ duty to effectively communicate information that is in the best interest of their client, the opinion notes that—depending on the circumstances—it may be in the best interest of the client to disclose the use of GenAI tools, particularly if the use will affect the fee charged to the client, or the output of the GenAI tool will influence a significant decision in the representation of the client. This communication can be included in the engagement letter, though it may be appropriate to communicate directly with the client before including it in the engagement letter.

Meritorious Claims + Candor Toward Tribunal

Lawyers are officers of the court and have an ethical obligation to put forth meritorious claims and to be candid with the tribunal before which such claims are presented. In the context of the use of GenAI tools, as stated above, there is a risk that without appropriate evaluation and supervision (including the use of independent professional judgment), the output of a GenAI tool can sometimes be erroneous or considered a “hallucination.” Therefore, to reiterate the ethical duty of competence, lawyers are advised to independently evaluate any output provided by a GenAI tool.

In addition, some courts require that attorneys disclose whether GenAI tools have been used in court filings. It is important to research and follow local court rules and practices regarding disclosure of the use of GenAI tools before submitting filings.

Supervisory Responsibilities

Consistent with other ABA Opinions relevant to the use of technology, the opinion stresses that managerial responsibilities include providing clear policies to lawyers, non-lawyers, and staff about the use of GenAI in the practice of law. I think this is one of the most important messages of the opinion. Firms and law practices are required to develop and implement a GenAI governance program, evaluate the risk and benefit of the use of a GenAI tool, educate all individuals in the firm on the policies and guardrails put in place to use such tools, and supervise their use. This is a clear message that lawyers and law firms need to evaluate the use of GenAI tools and start working on developing and implementing their own AI governance program for all internal users.

Fees

The key takeaway of the fees section of Opinion 512 is that a lawyer can’t bill a client to learn how to use a GenAI tool. Consistent with other opinions relating to fees, only extraordinary costs associated with the use of GenAI tools are permitted to be billed to the client, with the client’s knowledge and consent. In addition, the opinion points out that any efficiencies gained by the use of GenAI tools, with the client’s consent, should benefit the client through reduced fees.

Conclusion

Although consistent with other ABA opinions related to the use of technology, an understanding of ABA Opinion 512 is important as GenAI tools become more ubiquitous. It is clear that there will be additional opinions related to the use of GenAI tools from the ABA as well as state bar associations and that it is a topic of interest in the context of adherence with ethical obligations. A clear message from Opinion 512 is that now is a good time to consider developing an AI governance program.

AI Regulation Continues to Grow as Illinois Amends its Human Rights Act

Following laws enacted in jurisdictions such as Colorado, New York City, and Tennessee, as well as the state’s own Artificial Intelligence Video Interview Act, on August 9, 2024, Illinois’ Governor signed House Bill (HB) 3773, also known as the “Limit Predictive Analytics Use” bill. The bill amends the Illinois Human Rights Act (Act) by adding certain uses of artificial intelligence (AI), including generative AI, to the long list of actions by covered employers that could constitute civil rights violations.

The amendments made by HB3773 take effect January 1, 2026, and add two new definitions to the law.

“Artificial intelligence” – which according to the amendments means:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition of AI includes “generative AI,” which has its own definition:

an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following: (1) textual outputs, such as short answers, essays, poetry, or longer compositions or answers; (2) image outputs, such as fine art, photographs, conceptual art, diagrams, and other images; (3) multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and (4) other content that would be otherwise produced by human means.

The plethora of AI tools available for use in the workplace continues to grow unabated as HR professionals and managers vie to adopt effective and efficient solutions for finding the best candidates, assessing their performance, and otherwise improving decision making concerning human capital. In addition to understanding whether an organization is covered by a regulation of AI, such as HB3773, it also is important to determine whether the technology being deployed falls within the law’s scope. Assuming the tool or application is not being developed in-house, this analysis will require, among other things, working closely with the third-party vendor providing the tool or application to understand its capabilities and risks.

According to the amendments, covered employers can violate the Act in two ways. First, an employer’s use of AI with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment that has the effect of subjecting employees to discrimination on the basis of protected classes under the Act may constitute a violation. The same may be true for employers that use zip codes as a proxy for protected classes under the Act.

Second, a covered employer that fails to provide notice to an employee that the employer is using AI for the purposes described above may be found to have violated the Act.

Unlike the Colorado or New York City laws, the amendments to the Act do not require an impact assessment or bias audit. They also do not provide any specifics concerning the notice requirement. However, the amendments require the Illinois Department of Human Rights (IDHR) to adopt regulations necessary for implementation and enforcement. These regulations will include rules concerning the notice, such as the time period and means for providing it.

We are sure to see more regulation in this space. While it is expected that some common threads will exist among the various rules and regulations concerning AI and generative AI, organizations leveraging these technologies will need to be aware of the differences and assess what additional compliance steps may be needed.

Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, the statement’s release suggests that the top competition enforcers in all three jurisdictions are focusing on AI’s effects on competition in general and competition within the AI ecosystem—and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors as well as competition within the AI ecosystem. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated April 2024 report on AI. In June 2024, French competition authorities released a report on Generative AI, which focused on competition issues related to AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers described the recent evolution of AI technologies, including foundation models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and benefit the lives of citizens around the world.

But as with any technological inflection point that may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.” This concentration may stifle competition, disrupt innovation, or be exploited by certain firms.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or coopt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement acknowledges that other competition and consumer risks are also associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and fixing. AI may also harm consumers. As the CMA, the DOJ, and the FTC have consumer protection authority, these agencies will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) for content creators, “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address the specific enforcement tools and actions the enforcers may take, its release suggests that the enforcers may all be gearing up to take action related to AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.

EU Publishes Groundbreaking AI Act, Initial Obligations Set to Take Effect on February 2, 2025

On July 12, 2024, the European Union published the language of its much-anticipated Artificial Intelligence Act (AI Act), which is the world’s first comprehensive legislation regulating the growing use of artificial intelligence (AI), including by employers.

Quick Hits

  • The EU published the final AI Act, which entered into force on August 1, 2024.
  • The legislation treats employers’ use of AI in the workplace as potentially high-risk, imposing obligations on such use and potential penalties for violations.
  • The legislation will be incrementally implemented over the next three years.

The AI Act will “enter into force” on August 1, 2024 (or twenty days from the July 12, 2024, publication date). The legislation’s publication follows its adoption by the EU Parliament in March 2024 and approval by the EU Council in May 2024.

The groundbreaking AI legislation takes a risk-based approach that will subject AI applications to four different levels of increasing regulation: (1) “unacceptable risk,” which are banned; (2) “high risk”; (3) “limited risk”; and (4) “minimal risk.”

While it does not exclusively apply to employers, the law treats employers’ use of AI technologies in the workplace as potentially “high risk.” Violations of the law could result in hefty penalties.

Key Dates

The publication commences the implementation timeline over the next three years and outlines when we should expect to see more guidance on how the law will be applied. The most critical dates for employers, each measured from the date of entry into force, are listed below (a short sketch of the underlying date arithmetic follows the list):

  • August 1, 2024 – The AI Act will enter into force.
  • February 2, 2025 – (Six months from the date of entry into force) – Provisions on banned AI systems will take effect, meaning use of such systems must be discontinued by that time.
  • May 2, 2025 – (Nine months from the date of entry into force) – “Codes of practice” should be ready, giving providers of general purpose AI systems further clarity on obligations under the AI Act, which could possibly offer some insight to employers.
  • August 2, 2025 – (Twelve months from the date of entry into force) – Provisions on notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2026 – (Eighteen months from the date of entry into force) – Guidelines should be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus not high-risk systems.
  • August 2, 2026 – (Twenty-four months from the date of entry into force) – The remainder of the legislation will take effect, except for a minor provision regarding specific types of high-risk AI systems that will go into effect a year later, on August 2, 2027.
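Each milestone above is a fixed offset from the August 1, 2024 entry-into-force date, with application beginning on the day after the corresponding monthly anniversary (hence the recurring “2” in the dates). The following is a minimal sketch of that arithmetic in Python; the add_months helper and the milestone labels are illustrative assumptions for readers tracking these deadlines, not anything prescribed by the Act.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` months after `d`."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, d.day)

# Entry into force of the EU AI Act.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Illustrative labels; offsets are the months stated in the list above.
MILESTONES = {
    "Banned (unacceptable-risk) AI provisions apply": 6,
    "Codes of practice expected": 9,
    "GPAI, governance, and most penalty provisions apply": 12,
    "High-risk compliance guidelines expected": 18,
    "Remainder of the Act applies": 24,
    "Final high-risk provisions apply": 36,
}

for label, months in MILESTONES.items():
    # Application dates fall on the day after the monthly anniversary.
    milestone = add_months(ENTRY_INTO_FORCE, months).replace(day=2)
    print(f"{milestone.isoformat()}  {label}")
```

Running the sketch reproduces the dates above, including the February 2, 2026 milestone for the high-risk guidance.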

Next Steps

Adoption of the EU AI Act will set consistent standards across EU member states. Further, the legislation is significant in that it is likely to serve as a framework for AI laws or regulations in other jurisdictions, similar to how the EU’s General Data Protection Regulation (GDPR) has served as a model in the area of data privacy.

In the United States, regulation of AI and automated decision-making systems has been a priority, particularly when the tools are used to make employment decisions. In October 2023, the Biden administration issued an executive order requiring federal agencies to balance the benefits of AI with legal risks. Several federal agencies have since updated guidance concerning the use of AI and several states and cities have been considering legislation or regulations.