California Poised to Further Regulate Artificial Intelligence by Focusing on Safety

Looking to cement the state near the forefront of artificial intelligence (AI) regulation in the United States, on August 28, 2024, the California State Assembly passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), also referred to as the AI Safety Act. The measure awaits the signature of Governor Gavin Newsom. This development comes effectively on the heels of the passage of the “first comprehensive regulation on AI by a major regulator anywhere” — the EU Artificial Intelligence Act (EU AI Act) — which concluded with political agreement in late 2023 and entered into force on August 1, 2024. It also follows the first comprehensive US AI law from Colorado (Colorado AI Act), enacted on May 17, 2024. And while the United States lacks a comprehensive federal AI framework, there have been developments regarding AI at the federal level, including the late 2023 Executive Order on AI from the Biden White House and other AI-related regulatory guidance.

We have seen this sequence play out before in the world of privacy. Europe has long led on privacy regulation, stemming in large part from its recognition of privacy as a fundamental right — an approach that differs from how privacy is viewed in the United States. When the EU General Data Protection Regulation (GDPR) became effective in May 2018, it was not the world’s first comprehensive privacy framework (not even in Europe), but it did highlight increasing awareness and market attention around the use and protection of personal data, setting off a multitude of copycat privacy regulatory regimes globally. Not long after GDPR, California became the first US state with a comprehensive privacy regulation when then-California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) into law on June 28, 2018. While the CCPA, since amended by the California Privacy Rights Act of 2020 (CPRA), is assuredly not a GDPR clone, it nevertheless felt familiar to many organizations that had begun to develop privacy compliance programs centered on GDPR standards and definitions. The CCPA preceded the passage of comprehensive privacy regulations in many other US states that, while not necessarily based on CCPA, did not diverge dramatically from the approach taken by California. These privacy laws also generally apply to AI systems when they process personal data, with some (including CCPA/CPRA) already contemplating automated decision-making that can be, but is not necessarily, based on AI.

AI Safety Act Overview

Distinct from the privacy sphere, the AI Safety Act lacks the same degree of familiarity when compared to the EU AI Act (and to its domestic predecessor, the Colorado AI Act). Europe has taken a risk-based approach that defines different types of AI and applies differing rules based on these definitions, while Colorado primarily focuses on “algorithmic discrimination” by AI systems determined to be “high-risk.” Both Europe and Colorado distinguish between “providers” or “developers” (those that develop an AI system) and “deployers” (those that use AI systems) and include provisions that apply to both. The AI Safety Act, however, principally focuses on AI developers and attempts to address potential critical harms (largely centered on catastrophic mass casualty events) created by (i) large-scale AI systems with extensive computing power of greater than 10^26 integer or floating-point operations and with a development cost of greater than $100 million, or (ii) a model created by fine-tuning a covered AI system using computing power equal to or greater than three times 10^25 integer or floating-point operations with a cost in excess of $10 million. (An illustrative sketch of these coverage thresholds appears after the list of key requirements below.) Key requirements of the AI Safety Act include:

  • “Full Shutdown” Capability. Developers would be required to implement the capability to enact a full shutdown of a covered AI system, taking into account the risk that a shutdown could disrupt critical infrastructure, and to implement a written safety and security protocol that, among other things, details the conditions under which such a shutdown would be enacted.
  • Safety Assessments. Prior to release, testing would need to be undertaken to determine whether the covered model is “reasonably capable of causing or materially enabling a critical harm,” with details around such testing procedures and the nature of implemented safeguards.
  • Third-Party Auditing. Developers would be required to annually retain a third-party auditor, “consistent with best practices for auditors,” to perform an independent audit of a covered AI system to ensure compliance with the requirements of the AI Safety Act.
  • Safety Incident Reporting. If a safety incident affecting the covered model occurs, the AI Safety Act would require developers to notify the California Attorney General (AG) within 72 hours after the developer learns of the incident or learns of facts that cause a reasonable belief that a safety incident has occurred.
  • Developer Accountability. Notably, the AI Safety Act would empower the AG to bring civil actions against developers for harms caused by covered AI systems. The AG may also seek injunctive relief to prevent potential harms.
  • Whistleblower Protections. The AI Safety Act would also provide additional whistleblower protections, including by prohibiting developers of a covered AI system from preventing employees from disclosing information, or retaliating against employees for disclosing information, regarding the AI system, including noncompliance by any such AI system.
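For illustration only, the coverage thresholds summarized above can be expressed as a simple check. The short Python sketch below uses hypothetical function and variable names and reflects only the two prongs as described in this alert; the bill’s actual definitions contain qualifications that this simplification omits.

    # Illustrative sketch only: a rough check of the "covered model" thresholds
    # as summarized above. Names are hypothetical; the statutory text controls.

    def is_covered_model(training_flops: float, training_cost_usd: float,
                         fine_tune_flops: float = 0.0,
                         fine_tune_cost_usd: float = 0.0) -> bool:
        """Return True if a model appears to meet either covered-model prong."""
        # Prong (i): trained using more than 10^26 operations at a cost above $100M.
        trained_covered = training_flops > 1e26 and training_cost_usd > 100_000_000

        # Prong (ii): fine-tuned from a covered model using at least 3 x 10^25
        # operations at a cost above $10M.
        fine_tuned_covered = (fine_tune_flops >= 3e25
                              and fine_tune_cost_usd > 10_000_000)

        return trained_covered or fine_tuned_covered

    # Example: a frontier-scale training run crossing both thresholds of prong (i).
    print(is_covered_model(training_flops=2e26, training_cost_usd=250_000_000))  # True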

The Path Forward

California may not want to cede its historical position as one of the principal US states that regularly establishes precedent in emerging technology and market-driven areas of importance. This latest effort, however, may have been motivated at least in part by widely covered prognostications of doom and the potential for the destruction of civilization at AI’s collective hands. Some members of Congress have opposed the AI Safety Act, stating in part that it should “ensure restrictions are proportionate to real-world risks and harms.” To be sure, California’s approach to regulating AI under the AI Safety Act is not “wrong.” It does, however, represent a different approach from other AI regulations, which generally focus on the riskiness of use and address areas such as discrimination, transparency, and human oversight.

While the AI Safety Act focuses on sophisticated AI systems with the largest processing power and biggest development budgets, and thus presumably those with the greatest potential for harm, developers of AI systems of all sizes and capabilities already largely engage in testing and assessments, even if only motivated by market considerations. What is new is that the AI Safety Act creates standards for such evaluations. If signed into law by Governor Newsom (who has already signed a generative AI executive order of his own that predated President Biden’s), those standards, with history as the guide, would likely materially influence standards included in other US AI regulations, even though the range of covered AI systems would be somewhat limited.

With AI poised to transform every industry, regulation in one form or another is critical to navigating the ongoing sea change. The extent and nature of that regulation in California and elsewhere is certain to be fiercely debated, whether or not the AI Safety Act is signed into law. Currently, the risks attendant to AI development and use in the United States are still largely reputational, but comprehensive regulation is approaching. It is thus critical to be thoughtful and proactive about how your organization intends to leverage AI tools and to fully understand the risks and benefits associated with any such use.

Illinois Enacts Requirements for AI Use in Employment Decisions

On Aug. 9, 2024, Illinois Gov. Pritzker signed into law HB 3773, which amends the Illinois Human Rights Act (IHRA) to cover employer use of artificial intelligence (AI). Effective Jan. 1, 2026, the amendments will add to existing requirements for employers that use AI to analyze video interviews of applicants for positions in Illinois.

Illinois is the latest jurisdiction to pass legislation aimed at preventing discrimination caused by AI tools that aid in making employment decisions. The state joins jurisdictions such as Colorado and New York City in regulating the use of AI in this context.

Restrictions on the Use of AI in Employment Decisions

The amendments expressly prohibit the use of AI in a manner that results in illegal discrimination in employment decisions and employee recruitment. Specifically, covered employers are barred from using AI in a way that has the effect of subjecting employees to discrimination on the basis of any class protected by the IHRA, including if zip codes are used as a proxy for such protected classes.

These new requirements will apply to any employer with one or more employees in Illinois during 20 or more calendar weeks within the calendar year of, or preceding, the alleged violation. They also apply to any employer with one or more employees when unlawful discrimination based on physical or mental disability unrelated to ability, pregnancy, or sexual harassment is alleged.

The amendments define AI as a “machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” AI also includes “generative artificial intelligence.”

The amendments further define generative AI as “an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to”:

  • Textual outputs, such as short answers, essays, poetry, or longer compositions or answers;
  • Image outputs, such as fine art, photographs, conceptual art, diagrams, and other images;
  • Multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and
  • Other content that would be otherwise produced by human means.

Employer Notice Requirements

The amendments require a covered employer to provide notice to employees if the organization uses AI for the following employment-related purposes:

  • Recruitment
  • Hiring
  • Promotion
  • Renewal of employment
  • Selection for training or apprenticeship
  • Discharge
  • Discipline
  • Tenure
  • The terms, privileges, or conditions of employment

While the amendments do not provide specific direction regarding the notice, such as when and how the notice should be provided, they direct the Illinois Department of Labor to adopt rules necessary to implement the notice requirement. Thus, additional guidance should be forthcoming.

Although not required, Illinois employers and AI technology developers may wish to consider conducting audits or taking other measures to help avoid biased outcomes and to further protect against liability.

Enforcement

The IHRA establishes a two-part enforcement procedure. The Illinois Department of Human Rights (IDHR) is the administrative agency that investigates charges of discrimination, while the Illinois Human Rights Commission (IHRC) is an administrative court that adjudicates complaints of unlawful discrimination. Complainants have the option to proceed before the IHRC or file a civil action directly in circuit court after exhausting their administrative remedies before the IDHR.

Practical Considerations

Before the effective date, covered employers should consider:

  • Assessing which platforms and tools in use (or under consideration) incorporate AI, including generative AI, components.
  • Drafting employee notices and developing a plan for notifying employees.
  • Training AI users and quality control reviewers/auditors on anti-discrimination/anti-bias laws and policies that will impact their interaction with the tool(s).
  • Partnering with legal counsel and experienced vendors to identify or create privileged processes to evaluate, mitigate, and monitor potential discriminatory or biased impacts of AI use.
  • Reviewing any rules published by the Illinois Department of Labor, including on the circumstances and conditions that require notice and the timeframe and means for providing notice.
  • Monitoring for additional requirements if operating in multiple states. For instance, California’s legislature is considering a range of AI-related bills, including some aimed at workplace discrimination.

“Is SEO Dead?” Why AI Isn’t the End of Law Firm Marketing

With the emergence of Artificial Intelligence (AI) technology, many business owners have feared that marketing as we know it is coming to an end. After all, Google Gemini is routinely surfacing AI-generated responses over organic search results, AI content is abundant, and AI-driven tools are being used more than ever to automate tasks previously performed by human marketers.

But it’s not all doom and gloom: digital marketing, including Search Engine Optimization (SEO), is alive and well in many ways. This is particularly true for the legal industry, where there are many limits to what AI can do in terms of content creation and client acquisition.

Here’s how the world of SEO is being impacted by AI, and what this means for your law firm marketing.

Law Firm Marketing in the Age of AI

The Economist put it best: the development of AI has resulted in a “tsunami of digital innovation.” From ChatGPT’s world-changing AI model to the invention of “smart” coffee machines, AI appears to be everywhere. And it has certainly shaken up the world of law firm marketing.

Some of these innovations include AI chatbots for client engagement, tools like Lex Machina and Premonition that use predictive analytics to generate better leads, and AI-assisted legal research. Countless more tools and formulas have emerged to help law firms streamline their operations, optimize their marketing campaigns, create content, and even reduce overhead.

So, what’s the impact? 

With AI, law firms have reduced their costs, leveraging automated tools instead of manual efforts. Legal professionals have access to more data to identify (and convert) quality leads. And it’s now easier than ever to create content at volume.

At the same time, though, many people question the quality and accuracy of AI content. Some argue that AI cannot capture the nuance of the human experience or understand complex (and often emotional) legal issues. What’s more, AI-generated images and content often lack a personalized touch.

One area of marketing that’s particularly impacted by this is SEO, as it is largely driven by real human behavior, interactions, and needs.

So, is SEO Dead?

Even though many of the tools and techniques of SEO for lawyers have changed, the impact of SEO is still alive and well. Businesses continue to benefit from SEO strategies, allowing their brands to surface in the search results and attract new customers. In fact, there may even be more opportunities to rank than ever before.

For instance, Google showcases not only organic results but paid search results, Google Map Pack, Images, News, Knowledge Panel, Shopping, and many more pieces of digital real estate. This gives businesses different content formats and keyword opportunities to choose from.

Also, evolution in the SEO landscape is nothing new. There have been countless algorithm changes over the years, often in response to user behavior and new technology. SEO may be different, but it’s not dead.

Why SEO Still Matters for Law Firms

With the SEO industry alive and well, it’s still important for law firms to have a strong organic presence. This is because Google remains the leading medium through which people search for legal services. If you aren’t ranking high in Google, it will be difficult to get found by potential clients.

Here are some of the many ways SEO still matters for law firms, even in the age of AI.

1. Prospective clients still use search engines

Despite the rise of AI-based tools, your potential clients rely heavily on search engines when searching for your services. Whether they’re looking for legal counsel or content related to specific legal issues, search engines remain a primary point of entry.

Now, AI tools can often assist in this search process, but they rarely replace it entirely. SEO ensures your firm is visible when potential clients search for these services.

2. Your competitors are ranking in Search

Conduct a quick Google search of “law firm near me,” and you’ll likely see a few of your competitors in the search results. Whether they’re implementing SEO or not, their presence is a clear indication that you’ll need some organic momentum in order to compete.

Again, potential clients are using Google to search for the types of services you offer, but if they encounter your competitors first, they’re likely to inquire with a different firm. With SEO, you help your law firm stand out in the search results and become the obvious choice for potential clients.

3. AI relies on search engine data

The reality is that AI tools actually harness search engine data to train their models. This means the success of AI largely depends on people using search engines on a regular basis. Google isn’t going anywhere, so AI isn’t likely to go anywhere, either!

Whether it’s voice search through virtual assistants or AI-driven legal content suggestions, these systems still rely on the vast resources that search engines like Google organize. Strong SEO practices are essential to ensure your law firm’s website is part of that data pool. AI can’t bypass search engines entirely, so optimizing for search ensures your firm remains discoverable.

4. AI can’t replace personalized content

As a lawyer, you have the experience and training to advise clients on complex legal issues, and AI content, even if used only in your marketing, will only take you so far. Potential clients want to read content that’s helpful, relatable, and applicable to their needs.

While AI can generate content and provide answers, legal services are inherently personal. Writing your own content or hiring a writer might be your best bet for creating informative, well-researched content. AI can’t replicate the nuanced understanding that comes from a real lawyer, as your firm is best equipped to address clients’ specific legal issues.

5. SEO is more than just “content”

In the field of SEO, a lot of focus is put on content creation. And while content is certainly important (in terms of providing information and targeting keywords), it’s only one piece of the pie. AI tools are not as skilled at the various aspects of SEO, such as technical SEO and local search strategies.

Local SEO is essential for law firms, as most law firms serve clients within specific geographical areas. Google’s algorithm uses localized signals to determine which businesses to show in search results. This requires an intentional targeting strategy, optimizing your Google Business Profile, submitting your business information to online directories, and other activities AI tools have yet to master.

AI doesn’t replace the need for local SEO—if anything, AI-enhanced local search algorithms make these optimizations even more critical!

Goodbye AI, hello SEO?

Overall, the legal industry is a trust-based business. Clients want to know they work with reputable attorneys who understand their issues. AI is often ill-equipped to provide that level of expertise and personalized service.

Further, AI tools have limitations regarding what they can optimize, create, and manage. AI has not done away with SEO but has undoubtedly changed the landscape. SEO is an essential part of any law firm’s online marketing strategy.

AI is unlikely to disappear any time soon, and neither is SEO!

A Look at the Evolving Scope of Transatlantic AI Regulations

There have been significant changes to the regulations surrounding artificial intelligence (AI) on a global scale. New measures from governments worldwide are coming online, including the United States (U.S.) government’s executive order on AI, California’s upcoming regulations, the European Union’s AI Act, and emerging developments in the United Kingdom, all of which contribute to this evolving environment.

The European Union (EU) AI Act and the U.S. Executive Order on AI aim to develop and utilize AI safely, securely, and with respect for fundamental rights, yet their approaches are markedly different. The EU AI Act establishes a binding legal framework across EU member states, directly applies to businesses involved in the AI value chain, classifies AI systems by risk, and imposes significant fines for violations. In contrast, the U.S. Executive Order is more of a guideline as federal agencies develop AI standards and policies. It prioritizes AI safety and trustworthiness but lacks specific penalties, instead relying on voluntary compliance and agency collaboration.

The EU approach includes detailed oversight and enforcement, while the U.S. method encourages the adoption of new standards and international cooperation that aligns with global standards but is less prescriptive. Despite their shared objectives, differences in regulatory approach, scope, enforcement, and penalties could lead to contradictions in AI governance standards between the two regions.

There has also been some collaboration on an international scale. Recently, there has been an effort among antitrust officials at the U.S. Department of Justice (DOJ), the U.S. Federal Trade Commission (FTC), the European Commission, and the UK’s Competition and Markets Authority to monitor AI and its risks to competition. The agencies have issued a joint statement, with all four antitrust enforcers pledging “to remain vigilant for potential competition issues” and to use the powers of their agencies to provide safeguards against the utilization of AI to undermine competition or lead to unfair or deceptive practices.

The regulatory landscape for AI across the globe is evolving in real time as the technology develops at a record pace. As regulations strive to keep up with the technology, there are real challenges and risks that exist for companies involved in the development or utilization of AI. Therefore, it is critical that business leaders understand regulatory changes on an international scale, adapt, and stay compliant to avoid what could be significant penalties and reputational damage.

The U.S. Federal Executive Order on AI

In October 2023, the Biden Administration issued an executive order to foster responsible AI innovation. This order outlines several key initiatives, including promoting ethical, trustworthy, and lawful AI technologies. It also calls for collaboration between federal agencies, private companies, academia, and international partners to advance AI capabilities and realize its myriad benefits. The order emphasizes the need for robust frameworks to address potential AI risks such as bias, privacy concerns, and security vulnerabilities. In addition, the order directs that various sweeping actions be taken, including the establishment of new standards for AI safety and security, the passing of bipartisan data privacy legislation to protect Americans’ privacy from the risks posed by AI, the promotion of the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, and the implementation of actions to ensure responsible government deployment of AI and modernization of the federal AI infrastructure through the rapid hiring of AI professionals.

At the state level, Colorado and California are leading the way. Colorado enacted the first comprehensive regulation of AI at the state level with the Colorado Artificial Intelligence Act (Senate Bill (SB) 24-205), signed into law by Governor Jared Polis on May 17, 2024. As our team previously outlined, the Colorado AI Act is comprehensive, requiring developers and deployers of “high-risk artificial intelligence systems” to adhere to a host of obligations, including disclosures, risk management practices, and consumer protections. The Colorado law goes into effect on February 1, 2026, giving companies over a year to thoroughly adapt.

In California, a host of proposed AI regulations focusing on transparency, accountability, and consumer protection would require the disclosure of information such as AI systems’ functions, data sources, and decision-making processes. For example, AB 2013, introduced on January 31, 2024, would require developers of an AI system or service made available to Californians to post on the developer’s website documentation of the datasets used to train the AI system or service.

SB 970, another bill introduced in January 2024, would require any person or entity that sells or provides access to AI technology designed to create synthetic images, video, or voice to warn consumers that misuse of the technology may result in civil or criminal liability for the user.

Finally, on July 2, 2024, the California State Assembly Judiciary Committee passed SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which regulates AI models based on complexity.

The European Union’s AI Act

The EU is leading the way in AI regulation through its AI Act, which establishes a framework and represents Europe’s first comprehensive attempt to regulate AI. The AI Act was adopted to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection for health, safety, and fundamental rights against the harmful effects of AI systems in the EU and supporting innovation.

The AI Act sets forth harmonized rules for the release and use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the release of general-purpose AI models; rules on market monitoring, market surveillance, governance, and enforcement; and measures to support innovation, with a particular focus on SMEs, including startups.

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Applications that pose an unacceptable risk, such as government social scoring systems, are outright banned. High-risk applications, including CV-scanning tools, face stringent regulations to ensure safety and accountability. Limited-risk applications are those that lack full transparency as to AI usage, and the AI Act imposes transparency obligations on them. For example, humans should be informed when they are interacting with AI systems (such as chatbots) that they are dealing with a machine and not a human, enabling the user to make an informed decision whether or not to continue. The AI Act allows the free use of minimal-risk AI, including applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

The adoption of the AI Act has not come without criticism from major European companies. In an open letter signed by 150 executives, they raised concerns over the heavy regulation of generative AI and foundation models. The fear is that the increased compliance costs and hindered productivity would drive companies away from the EU. Despite these concerns, the AI Act is here to stay, and it would be wise for companies to prepare for compliance by assessing their systems.

Recommendations for Global Businesses

As governments and regulatory bodies worldwide implement diverse AI regulations, companies have the power to adopt strategies that both ensure compliance and mitigate risks proactively. Global businesses should consider the following recommendations:

  1. Risk Assessments: Conducting thorough risk assessments of AI systems is important for companies to align with the EU’s classification scheme and the U.S.’s focus on safety and security. There must also be an assessment of the safety and security of your AI systems, particularly those categorized as high-risk under the EU’s AI Act. This proactive approach will not only help you meet regulatory requirements but also protect your business from potential sanctions as the legal landscape evolves.
  2. Compliance Strategy: Develop a compliance strategy that specifically addresses the most stringent aspects of the EU and U.S. regulations.
  3. Legal Monitoring: Stay on top of evolving best practices and guidelines. Monitor regulatory developments in the regions in which your company operates to adapt to new requirements and avoid penalties, and engage with policymakers and industry groups to stay ahead of compliance requirements. Participation in public consultations and industry forums can provide valuable insights and influence regulatory outcomes.
  4. Transparency and Accountability: To meet ethical and regulatory expectations, transparency and accountability should be prioritized in AI development. This means ensuring AI systems are transparent, with clear documentation of data sources, decision-making processes, and system functionalities. There should also be accountability measures in place, such as regular audits and impact assessments.
  5. Data Governance: Implement robust data governance measures to meet the EU’s requirements and align with the U.S.’s emphasis on trustworthy AI. Establish governance structures that ensure compliance with federal, state, and international AI regulations, including appointing compliance officers and developing internal policies.
  6. Invest in Ethical AI Practices: Develop and deploy AI systems that adhere to ethical guidelines, focusing on fairness, privacy, and user rights. Ethical AI practices ensure compliance, build public trust, and enhance brand reputation.

AI-Generated Content and Trademarks

The rapid evolution of artificial intelligence has undeniably transformed the digital landscape, with AI-generated content becoming increasingly common. This shift has profound implications for brand owners, introducing both challenges and opportunities.

One of the most pressing concerns is trademark infringement. In a recent example, the Walt Disney Company, a company fiercely protective of its intellectual property, raised concerns about AI-generated content potentially infringing on its trademarks. Social media users were having fun using Microsoft’s Bing AI imaging tool, powered by DALL-E 3 technology, to create images of pets in a “Pixar” style. However, Disney’s concern wasn’t the artwork itself, but the possibility of the AI inadvertently generating the iconic Disney-Pixar logo within the images, which would constitute trademark infringement. This incident highlights the potential for AI-generated content to unintentionally infringe upon established trademarks, requiring brand owners to stay vigilant in protecting their intellectual property in the digital age.

Dilution of trademarks is another critical issue. A recent lawsuit filed by Getty Images against Stability AI sheds light on this concern. Getty Images, a leading provider of stock photos, accused Stability AI of using millions of its copyrighted images to train its AI image generation software. This alleged use, according to Getty Images, involved Stability AI’s incorporation of Getty Images’ marks into low-quality, unappealing, or offensive images, which dilutes those marks in further violation of federal and state trademark laws. The lawsuit highlights the potential for AI, through the sheer volume of content it generates, to blur the lines between inspiration and infringement, weakening the association between a trademark and its source.

In addition, the ownership of copyrights in AI-generated marketing can cause problems. While AI tools can create impressive content, questions about who owns the intellectual property rights persist.  Recent disputes over AI-generated artwork and music have highlighted the challenges of determining ownership and copyright in this new digital frontier.

However, AI also presents opportunities for trademark owners. For example, AI can be employed to monitor online platforms for trademark infringements, providing an early warning system. Luxury brands have used AI to authenticate products and combat counterfeiting. For instance, Entrupy has developed a mobile device-based authentication system that uses AI and microscopy to analyze materials and detect subtle irregularities indicative of counterfeit products. Brands can integrate Entrupy’s technology into their retail stores or customer-facing apps.

Additionally, AI can be a powerful tool for brand building. By analyzing consumer data and preferences, AI can help create highly targeted marketing campaigns. For example, cosmetic brands have successfully leveraged AI to personalize product recommendations, enhancing customer engagement and loyalty.

The intersection of AI and trademarks is a dynamic and evolving landscape. As technology continues to advance, so too will the challenges and opportunities for trademark owners. Proactive measures, such as robust trademark portfolios, AI-powered monitoring tools, and clear internal guidelines, are essential for safeguarding brand integrity in this new era.

EU Publishes Groundbreaking AI Act, Initial Obligations Set to Take Effect on February 2, 2025

On July 12, 2024, the European Union published the language of its much-anticipated Artificial Intelligence Act (AI Act), which is the world’s first comprehensive legislation regulating the growing use of artificial intelligence (AI), including by employers.

Quick Hits

  • The EU published the final AI Act, which entered into force on August 1, 2024.
  • The legislation treats employers’ use of AI in the workplace as potentially high-risk and imposes obligations for their use and potential penalties for violations.
  • The legislation will be incrementally implemented over the next three years.

The AI Act will “enter into force” on August 1, 2024 (or twenty days from the July 12, 2024, publication date). The legislation’s publication follows its adoption by the EU Parliament in March 2024 and approval by the EU Council in May 2024.

The groundbreaking AI legislation takes a risk-based approach that will subject AI applications to four different levels of increasing regulation: (1) “unacceptable risk,” which are banned; (2) “high risk”; (3) “limited risk”; and (4) “minimal risk.”

While it does not exclusively apply to employers, the law treats employers’ use of AI technologies in the workplace as potentially “high risk.” Violations of the law could result in hefty penalties.

Key Dates

The publication commences the implementation timeline over the next three years and outlines when we should expect to see more guidance on how the law will be applied. The most critical dates for employers are listed below (a short date calculation following the list shows how each milestone derives from the entry-into-force date):

  • August 1, 2024 – The AI Act will enter into force.
  • February 2, 2025 – (Six months from the date of entry into force) – Provisions on banned AI systems will take effect, meaning use of such systems must be discontinued by that time.
  • May 2, 2025 – (Nine months from the date of entry into force) – “Codes of practice” should be ready, giving providers of general purpose AI systems further clarity on obligations under the AI Act, which could possibly offer some insight to employers.
  • August 2, 2025 – (Twelve months from the date of entry into force) – Provisions on notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2026 – (Eighteen months from the date of entry into force) – Guidelines should be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus not high-risk systems.
  • August 2, 2026 – (Twenty-four months from the date of entry into force) – The remainder of the legislation will take effect, except for a minor provision regarding specific types of high-risk AI systems that will go into effect on August 2, 2027, a year later.
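Because each milestone is defined as an offset from the August 1, 2024 entry-into-force date, the dates above can be sanity-checked with simple date arithmetic. The minimal sketch below assumes the widely used python-dateutil package and illustrative labels of our own; note that the eighteen-month milestone falls in February 2026.

    # A quick sanity check of the milestone dates above, assuming the
    # August 1, 2024 entry-into-force date. Requires python-dateutil.
    from datetime import date
    from dateutil.relativedelta import relativedelta

    entry_into_force = date(2024, 8, 1)

    milestones_in_months = {
        "Banned-AI provisions": 6,
        "Codes of practice": 9,
        "GPAI, governance, penalties": 12,
        "High-risk guidelines": 18,
        "Bulk of remaining provisions": 24,
    }

    for label, months in milestones_in_months.items():
        # The Act's stated application dates fall on the 2nd of the month,
        # one day after the simple month offset computed here.
        print(f"{label}: {entry_into_force + relativedelta(months=months)}")

    # "High-risk guidelines" prints 2026-02-01, i.e., the eighteen-month
    # milestone falls in February 2026, not 2025.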

Next Steps

Adoption of the EU AI Act will set consistent standards across EU member states. Further, the legislation is significant in that it is likely to serve as a framework for AI laws or regulations in other jurisdictions, similar to how the EU’s General Data Protection Regulation (GDPR) has served as a model in the area of data privacy.

In the United States, regulation of AI and automated decision-making systems has been a priority, particularly when the tools are used to make employment decisions. In October 2023, the Biden administration issued an executive order requiring federal agencies to balance the benefits of AI with legal risks. Several federal agencies have since updated guidance concerning the use of AI and several states and cities have been considering legislation or regulations.

A Paradigm Shift in Legal Practice: Enhancing Civil Litigation with Artificial Intelligence

A paradigm shift in legal practice is occurring now. The integration of artificial intelligence (AI) has emerged as a transformative force, particularly in civil litigation. No longer is AI the stuff of science fiction; it is a real, tangible force that is reshaping the way the world functions and, along with it, the way lawyers practice. From complex document review processes to predicting case outcomes, AI technologies are revolutionizing the way legal professionals approach and navigate litigation and redefining traditional legal practice.

Streamlining Document Discovery and Review

One of the most time-consuming tasks in civil litigation is discovery document analysis and review. Traditionally, legal teams spend countless hours sifting through documents to identify relevant evidence, often reviewing the same material multiple times, depending on the task at hand. However, AI-powered document review platforms can now significantly expedite this process. By leveraging natural language processing (NLP) and machine learning algorithms, these platforms can quickly analyze and categorize documents based on relevance, reducing the time and resources required for document review while ensuring thoroughness and accuracy (a simplified illustration of this kind of relevance scoring appears after the list below). AI in the civil discovery process offers a multitude of benefits for the practitioner and cost-saving advantages for the client, such as:

• Efficiency: AI-powered document review significantly reduces the time required for discovery review, allowing legal teams to focus their efforts on higher-value tasks and strategic analysis;

• Accuracy: By automating the initial document review process, AI helps minimize potential human error and ensures greater consistency and accuracy in identifying relevant documents and evidence;

• Cost-effectiveness: AI-driven platforms offer a cost-effective alternative to traditional manual review methods, helping to lower overall litigation costs for clients;

• Scalability: AI technology can easily scale to handle large volumes of data, making it ideal for complex litigation cases with extensive document discovery requirements;

• Insight Generation: AI algorithms can uncover hidden patterns, trends, and relationships within closed databases that might not be apparent through manual review, providing valuable input for strategy and decision-making.
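As a purely illustrative example of the relevance categorization described above, the short Python sketch below trains a simple TF-IDF model with scikit-learn on a handful of hypothetical, reviewer-coded documents and then scores new documents; commercial review platforms rely on far more sophisticated models and workflows.

    # Minimal sketch of NLP-based relevance scoring for document review.
    # The documents, labels, and model choice are purely illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical documents already coded by reviewers (1 = relevant, 0 = not).
    train_docs = [
        "Email discussing the disputed contract amendment and payment terms",
        "Quarterly cafeteria menu distributed to all staff",
        "Memo analyzing breach of the supply agreement and damages exposure",
        "Invitation to the annual company picnic",
    ]
    train_labels = [1, 0, 1, 0]

    # TF-IDF turns each document into a weighted word-frequency vector;
    # logistic regression then learns which terms signal relevance.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_docs, train_labels)

    # Score new, unreviewed documents so reviewers can prioritize likely-relevant ones.
    new_docs = ["Draft settlement analysis of the contract dispute",
                "Parking garage closure notice"]
    for doc, prob in zip(new_docs, model.predict_proba(new_docs)[:, 1]):
        print(f"{prob:.2f}  {doc}")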

Predictive Analytics for Case Strategy

Predicting case outcomes is inherently challenging, often relying on legal expertise, the lawyer’s jurisdictional experience, and analysis of the claimed damages. However, AI-driven predictive analytics tools are changing the game by providing data-driven insights into case strategy. By analyzing past case law, court rulings, and other relevant data points, these tools can model the likely outcome of a given case, allowing legal teams and clients to make more informed decisions regarding jurisdiction-specific settlement negotiations, trial strategy, and resource allocation.

Enhanced Legal Research and Due Diligence

AI-powered legal research tools have become indispensable for legal professionals involved in civil litigation. These tools utilize advanced algorithms to sift through vast, closed-system repositories of case law, statutes, regulations, and legal precedent, delivering relevant information in a fraction of the time required by manual research methods. Additionally, AI can assist in due diligence processes by automatically flagging potential legal risks and identifying critical issues within contracts and other legal documents.

Improving Case Management and Workflow Efficiency

Managing multiple cases simultaneously can be daunting for legal practitioners and can lead to inefficiencies and oversights. AI-driven case management systems offer a solution by centralizing case-related information, deadlines, and communications. These systems can automate routine tasks, such as scheduling, document filing, and client communications, freeing up valuable time for attorneys to focus on substantive legal work and proactive case movement.

Ethical Considerations and Challenges

While the benefits of AI in civil litigation are undeniable, its use also raises important ethical considerations and challenges. Issues such as data privacy, algorithmic bias, and the ethical use of AI in decision-making processes must be carefully addressed to ensure fairness and transparency in the legal system. Additionally, there is a growing need for ongoing education and training to equip legal professionals with the necessary skills to effectively leverage AI tools while maintaining ethical standards and preserving the integrity of the legal profession.

Takeaway

The integration of AI technologies in civil litigation represents a paradigm shift in legal practice, offering unprecedented opportunities to streamline processes, enhance decision-making and improve client satisfaction. By harnessing the power of AI-driven solutions, legal professionals can navigate complex civil disputes more efficiently and effectively, ultimately delivering better outcomes for clients and advancing the pursuit of just outcomes in our rapidly evolving legal landscape.

For All Patent/Trademark Practitioners: USPTO Provides Guidance for Use of AI in Preparing USPTO Submissions

The USPTO conveys a clear message to patent and trademark attorneys, patent agents, and inventors: the use of artificial intelligence (AI), including generative AI, in patent and trademark activities and filings before the USPTO entails risks that must be mitigated, and the use of AI in the creation of an invention, or in practice before the USPTO, must be disclosed if it is material to patentability.

The USPTO’s new guidance issued on April 11, 2024 is a counterpart to its guidance issued on February 13, 2024, which addresses the AI-assisted invention creation process. In the new guidance issued on April 11, 2024, USPTO officials communicate the risks of using AI in preparing USPTO submissions, including patent applications, affidavits, petitions, office action responses, information disclosure statements, Patent Trial and Appeal Board (PTAB) submissions, and trademark / Trademark Trial and Appeal Board (TTAB) submissions. The common theme between the February 13 and April 11 guidance is the duty to disclose to the USPTO all information known to be material to patentability.

Building on the USPTO’s existing rules and policies, the USPTO’s April 11 guidance discusses the following:

(A) The duty of candor and good faith – each individual associated with a proceeding at the USPTO owes a duty to disclose to the USPTO all information known to be material to patentability, including information on the use of AI by inventors, parties, and practitioners.

(B) Signature requirement and corresponding certifications – using AI to draft documents without verifying information risks “critical misstatements and omissions”. Any submission to the USPTO that AI helped prepare must be carefully reviewed by practitioners, who are ultimately responsible for ensuring that it is true and submitted for a proper purpose.

(C) Confidentiality of information – sensitive and confidential client information risks being compromised if shared with third-party AI systems, some of which may be located outside of the United States.

(D) Foreign filing licenses and export regulations – a foreign filing license from the USPTO does not authorize the exporting of subject matter abroad for the preparation of patent applications to be filed in the United States. Practitioners must ensure data is not improperly exported when using AI.

(E) USPTO electronic systems’ policies – Practitioners using AI must be mindful of the terms and conditions of the USPTO’s electronic systems, which prohibit unauthorized access, actions, use, modification, or disclosure of the data contained in, or in transit to or from, those systems.

(F) The USPTO Rules of Professional Conduct – when using AI tools, practitioners must ensure that they are not violating the duties owed to clients. For example, practitioners must have the requisite legal, scientific, and technical knowledge to reasonably represent the client, without inappropriate reliance on AI. Practitioners also have a duty to reasonably consult with the client, including about the use of AI in accomplishing the client’s objectives.

The USPTO’s April 11 guidance overall shares principles with the ethics guidelines that multiple state bars have issued related to generative AI use in the practice of law, and addresses them in the patent- and trademark-specific context. Importantly, in addition to ethics considerations, the USPTO guidance reminds us that knowingly or willfully withholding information about AI use under (A), overlooking AI’s misstatements and thereby making false certifications under (B), or AI-mediated improper or unauthorized exporting of data or unauthorized access to data under (D) and (E) may lead to criminal or civil liability under federal law or to penalties or sanctions by the USPTO.

On the positive side, the USPTO guidance describes the possible favorable aspects of AI “to expand access to our innovation ecosystem and lower costs for parties and practitioners…. The USPTO continues to be actively involved in the development of domestic and international measures to address AI considerations at the intersection of innovation, creativity, and intellectual property.” We expect more USPTO AI guidance to be forthcoming, so please do watch for continued updates in this area.