California Poised to Further Regulate Artificial Intelligence by Focusing on Safety

Looking to cement the state near the forefront of artificial intelligence (AI) regulation in the United States, on August 28, 2024, the California State Assembly passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), also referred to as the AI Safety Act. The measure awaits the signature of Governor Gavin Newsom. This development comes on the heels of the passage of the “first comprehensive regulation on AI by a major regulator anywhere” — the EU Artificial Intelligence Act (EU AI Act) — which concluded with political agreement in late 2023 and entered into force on August 1, 2024. It also follows the first comprehensive US AI law from Colorado (Colorado AI Act), enacted on May 17, 2024. And while the United States lacks a comprehensive federal AI framework, there have been developments regarding AI at the federal level, including the late 2023 Executive Order on AI from the Biden White House and other AI-related regulatory guidance.

We have seen this sequence play out before in the world of privacy. Europe has long led on privacy regulation, stemming in large part from its recognition of privacy as a fundamental right — an approach that differs from how privacy is viewed in the United States. When the EU General Data Protection Regulation (GDPR) became effective in May 2018, it was not the world’s first comprehensive privacy framework (not even in Europe), but it did highlight increasing awareness and market attention around the use and protection of personal data, setting off a multitude of copycat privacy regulatory regimes globally. Not long after GDPR, California became the first US state with a comprehensive privacy regulation when then-California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) into law on June 28, 2018. While the CCPA, since amended by the California Privacy Rights Act of 2020 (CPRA), is assuredly not a GDPR clone, it nevertheless felt familiar to many organizations that had begun to develop privacy compliance programs centered on GDPR standards and definitions. The CCPA preceded the passage of comprehensive privacy regulations in many other US states that, while not necessarily based on CCPA, did not diverge dramatically from the approach taken by California. These privacy laws also generally apply to AI systems when they process personal data, with some (including CCPA/CPRA) already contemplating automated decision-making that can be, but is not necessarily, based on AI.

AI Safety Act Overview

Distinct from the privacy sphere, the AI Safety Act lacks the same degree of familiarity when compared to the EU AI Act (and to its domestic predecessor, the Colorado AI Act). Europe has taken a risk-based approach that defines different types of AI and applies differing rules based on these definitions, while Colorado primarily focuses on “algorithmic discrimination” by AI systems determined to be “high-risk.” Both Europe and Colorado distinguish between “providers” or “developers” (those that develop an AI system) and “deployers” (those that use AI systems) and include provisions that apply to both. The AI Safety Act, however, principally focuses on AI developers and attempts to address potential critical harms (largely centered on catastrophic mass casualty events) created by (i) large-scale AI models trained using computing power greater than 10^26 integer or floating-point operations at a development cost exceeding $100 million, or (ii) models created by fine-tuning a covered model using computing power equal to or greater than 3 × 10^25 integer or floating-point operations at a cost exceeding $10 million. Key requirements of the AI Safety Act include:

  • “Full Shutdown” Capability. Developers would be required to implement capabilities to enact a full shutdown of a covered AI system, considering the risk that a shutdown could cause disruption to critical infrastructure and implementing a written safety and security protocol that, among other things, details the conditions under which such a shutdown would be enacted.
  • Safety Assessments. Prior to release, testing would need to be undertaken to determine whether the covered model is “reasonably capable of causing or materially enabling a critical harm,” with details around such testing procedures and the nature of implemented safeguards.
  • Third-Party Auditing. Developers would be required to retain a third-party auditor annually to perform an independent audit of a covered AI system, “consistent with best practices for auditors,” to ensure compliance with the requirements of the AI Safety Act.
  • Safety Incident Reporting. If a safety incident affecting the covered model occurs, the AI Safety Act would require developers to notify the California Attorney General (AG) within 72 hours after the developer learns of the incident or learns of facts that cause a reasonable belief that a safety incident has occurred.
  • Developer Accountability. Notably, the AI Safety Act would empower the AG to bring civil actions against developers for harms caused by covered AI systems. The AG may also seek injunctive relief to prevent potential harms.
  • Whistleblower Protections. The AI Safety Act would also provide additional whistleblower protections, prohibiting developers of a covered AI system from preventing employees from disclosing information regarding the AI system (including any noncompliance of such system) and from retaliating against employees who make such disclosures.
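For readers who prefer to see the coverage thresholds described above as a concrete decision rule, the sketch below expresses them in Python. This is purely illustrative (and not legal advice): the function names and inputs are hypothetical, and the actual statutory definitions contain additional conditions and nuances not captured here.

```python
# Illustrative sketch of the SB 1047 coverage thresholds described above.
# Function names and inputs are hypothetical; the statute's full definitions
# contain additional conditions not modeled here.

TRAINING_FLOP_THRESHOLD = 1e26          # greater than 10^26 operations
TRAINING_COST_THRESHOLD = 100_000_000   # greater than $100 million
FINETUNE_FLOP_THRESHOLD = 3e25          # equal to or greater than 3 x 10^25 operations
FINETUNE_COST_THRESHOLD = 10_000_000    # in excess of $10 million

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Prong (i): a model trained with > 10^26 operations at a cost > $100M."""
    return (training_flops > TRAINING_FLOP_THRESHOLD
            and training_cost_usd > TRAINING_COST_THRESHOLD)

def is_covered_finetune(finetune_flops: float, finetune_cost_usd: float) -> bool:
    """Prong (ii): a fine-tune of a covered model using >= 3 x 10^25
    operations at a cost > $10M."""
    return (finetune_flops >= FINETUNE_FLOP_THRESHOLD
            and finetune_cost_usd > FINETUNE_COST_THRESHOLD)

print(is_covered_model(2e26, 150_000_000))   # True: both thresholds exceeded
print(is_covered_model(5e25, 150_000_000))   # False: compute below threshold
```

Note that both prongs are conjunctive: a model must exceed both the compute threshold and the cost threshold to be covered.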

The Path Forward

California may not want to cede its historical position as one of the principal US states that regularly establishes precedent in emerging technology and market-driven areas of importance. This latest effort, however, may have been motivated at least in part by widely covered prognostications of doom and the potential for the destruction of civilization at AI’s collective hands. Some members of Congress have opposed the AI Safety Act, stating in part that it should “ensure restrictions are proportionate to real-world risks and harms.” To be sure, California’s approach to regulating AI under the AI Safety Act is not “wrong.” It does, however, represent a different approach than other AI regulations, which generally focus on the riskiness of use and address areas such as discrimination, transparency, and human oversight.

While the AI Safety Act focuses on sophisticated AI systems with the largest processing power and biggest development budgets (and thus, presumably, those with the greatest potential for harm), developers of AI systems of all sizes and capabilities already largely engage in testing and assessments, even if only motivated by market considerations. What is new is that the AI Safety Act would create standards for such evaluations. If signed into law by Governor Newsom (who signed his own executive order on generative AI before President Biden’s), those standards, with history as the guide, would likely materially influence standards included in other US AI regulations, even though the range of covered AI systems would be somewhat limited.

With AI poised to transform every industry, regulation in one form or another is critical to navigating the ongoing sea change. The extent and nature of that regulation in California and elsewhere is certain to be fiercely debated, whether or not the AI Safety Act is signed into law. Currently, the risks attendant to AI development and use in the United States are still largely reputational, but comprehensive regulation is approaching. It is thus critical to be thoughtful and proactive about how your organization intends to leverage AI tools and to fully understand the risks and benefits associated with any such use.

EU Publishes Groundbreaking AI Act, Initial Obligations Set to Take Effect on February 2, 2025

On July 12, 2024, the European Union published the language of its much-anticipated Artificial Intelligence Act (AI Act), which is the world’s first comprehensive legislation regulating the growing use of artificial intelligence (AI), including by employers.

Quick Hits

  • The EU published the final AI Act, which entered into force on August 1, 2024.
  • The legislation treats employers’ use of AI in the workplace as potentially high-risk, imposing obligations on such use and potential penalties for violations.
  • The legislation will be incrementally implemented over the next three years.

The AI Act will “enter into force” on August 1, 2024 (twenty days after the July 12, 2024, publication date). The legislation’s publication follows its adoption by the EU Parliament in March 2024 and approval by the EU Council in May 2024.

The groundbreaking AI legislation takes a risk-based approach that will subject AI applications to four different levels of increasing regulation: (1) “unacceptable risk,” which are banned; (2) “high risk”; (3) “limited risk”; and (4) “minimal risk.”

While it does not exclusively apply to employers, the law treats employers’ use of AI technologies in the workplace as potentially “high risk.” Violations of the law could result in hefty penalties.

Key Dates

The publication commences the implementation timeline over the next three years and outlines when we should expect to see more guidance on how the law will be applied. The most critical dates for employers are:

  • August 1, 2024 – The AI Act will enter into force.
  • February 2, 2025 – (Six months from the date of entry into force) – Provisions on banned AI systems will take effect, meaning use of such systems must be discontinued by that time.
  • May 2, 2025 – (Nine months from the date of entry into force) – “Codes of practice” should be ready, giving providers of general purpose AI systems further clarity on obligations under the AI Act, which could possibly offer some insight to employers.
  • August 2, 2025 – (Twelve months from the date of entry into force) – Provisions on notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2026 – (Eighteen months from the date of entry into force) – Guidelines should be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus not high-risk systems.
  • August 2, 2026 – (Twenty-four months from the date of entry into force) – The remainder of the legislation will take effect, except for a minor provision regarding specific types of high-risk AI systems that will go into effect a year later, on August 2, 2027.

Next Steps

Adopting the EU AI Act will set consistent standards across EU member states. Further, the legislation is significant in that it is likely to serve as a framework for AI laws or regulations in other jurisdictions, similar to how the EU’s General Data Protection Regulation (GDPR) has served as a model in the area of data privacy.

In the United States, regulation of AI and automated decision-making systems has been a priority, particularly when the tools are used to make employment decisions. In October 2023, the Biden administration issued an executive order requiring federal agencies to balance the benefits of AI with legal risks. Several federal agencies have since updated guidance concerning the use of AI and several states and cities have been considering legislation or regulations.