California Poised to Further Regulate Artificial Intelligence by Focusing on Safety

Looking to cement the state’s position at the forefront of artificial intelligence (AI) regulation in the United States, on August 28, 2024, the California State Assembly passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), also referred to as the AI Safety Act. The measure awaits the signature of Governor Gavin Newsom. This development comes on the heels of the passage of the “first comprehensive regulation on AI by a major regulator anywhere,” the EU Artificial Intelligence Act (EU AI Act), which concluded with political agreement in late 2023 and entered into force on August 1, 2024. It also follows the first comprehensive US AI law, Colorado’s AI Act (Colorado AI Act), enacted on May 17, 2024. And while the United States lacks a comprehensive federal AI framework, there have been developments regarding AI at the federal level, including the late 2023 Executive Order on AI from the Biden White House and other AI-related regulatory guidance.

We have seen this sequence play out before in the world of privacy. Europe has long led on privacy regulation, stemming in large part from its recognition of privacy as a fundamental right, an approach that differs from how privacy is viewed in the United States. When the EU General Data Protection Regulation (GDPR) became effective in May 2018, it was not the world’s first comprehensive privacy framework (not even in Europe), but it did highlight increasing awareness and market attention around the use and protection of personal data, setting off a multitude of copycat privacy regulatory regimes globally. Not long after GDPR, California became the first US state with a comprehensive privacy regulation when then-California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) into law on June 28, 2018. While the CCPA, since amended by the California Privacy Rights Act of 2020 (CPRA), is assuredly not a GDPR clone, it nevertheless felt familiar to many organizations that had begun to develop privacy compliance programs centered on GDPR standards and definitions. The CCPA preceded the passage of comprehensive privacy regulations in many other US states that, while not necessarily based on CCPA, did not diverge dramatically from the approach taken by California. These privacy laws also generally apply to AI systems when they process personal data, with some (including CCPA/CPRA) already contemplating automated decision-making that can be, but is not necessarily, based on AI.

AI Safety Act Overview

Distinct from the privacy sphere, the AI Safety Act lacks the same degree of familiarity when compared to the EU AI Act (and to its domestic predecessor, the Colorado AI Act). Europe has taken a risk-based approach that defines different types of AI and applies differing rules based on those definitions, while Colorado primarily focuses on “algorithmic discrimination” by AI systems determined to be “high-risk.” Both Europe and Colorado distinguish between “providers” or “developers” (those that develop an AI system) and “deployers” (those that use AI systems) and include provisions that apply to both. The AI Safety Act, however, principally focuses on AI developers and seeks to address potential critical harms (largely centered on catastrophic mass casualty events) created by (i) large-scale AI systems trained using computing power greater than 10^26 integer or floating-point operations at a development cost of more than $100 million, or (ii) models created by fine-tuning a covered AI system using computing power equal to or greater than 3 x 10^25 integer or floating-point operations at a cost in excess of $10 million (a rough sketch of these thresholds follows the list below). Key requirements of the AI Safety Act include:

  • “Full Shutdown” Capability. Developers would be required to implement the capability to enact a full shutdown of a covered AI system (taking into account the risk that a shutdown could disrupt critical infrastructure) and to implement a written safety and security protocol that, among other things, details the conditions under which such a shutdown would be enacted.
  • Safety Assessments. Prior to release, testing would need to be undertaken to determine whether the covered model is “reasonably capable of causing or materially enabling a critical harm,” with details around such testing procedures and the nature of implemented safeguards.
  • Third-Party Auditing. Developers would be required to annually retain a third-party auditor, “consistent with best practices for auditors,” to perform an independent audit of a covered AI system to ensure compliance with the requirements of the AI Safety Act.
  • Safety Incident Reporting. If a safety incident affecting the covered model occurs, the AI Safety Act would require developers to notify the California Attorney General (AG) within 72 hours after the developer learns of the incident or learns of facts that cause a reasonable belief that a safety incident has occurred.
  • Developer Accountability. Notably, the AI Safety Act would empower the AG to bring civil actions against developers for harms caused by covered AI systems. The AG may also seek injunctive relief to prevent potential harms.
  • Whistleblower Protections. The AI Safety Act would also provide additional whistleblower protections, including by prohibiting developers of a covered AI system from preventing employees from disclosing information regarding the AI system (including its noncompliance with the Act) or retaliating against employees for making such disclosures.
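
To make the coverage thresholds and the 72-hour reporting window described above concrete, below is a minimal, hypothetical sketch in Python. The constant and function names are illustrative assumptions for readability only; they are not statutory terms, and whether a given system is actually “covered” would turn on the statute’s definitions and any implementing guidance rather than on a simple arithmetic check like this.

```python
from datetime import datetime, timedelta

# Illustrative constants reflecting the thresholds described above.
# These names are assumptions for readability, not statutory terms.
COVERED_MODEL_FLOPS = 1e26              # > 10^26 integer or floating-point operations
COVERED_MODEL_COST_USD = 100_000_000    # > $100 million development cost
FINE_TUNE_FLOPS = 3e25                  # >= 3 x 10^25 operations used in fine-tuning
FINE_TUNE_COST_USD = 10_000_000         # > $10 million fine-tuning cost
AG_NOTICE_WINDOW = timedelta(hours=72)  # notify the California AG within 72 hours


def is_covered_model(training_flops: float, dev_cost_usd: float) -> bool:
    """Prong (i): a large-scale model exceeding both compute and cost thresholds."""
    return training_flops > COVERED_MODEL_FLOPS and dev_cost_usd > COVERED_MODEL_COST_USD


def is_covered_fine_tune(fine_tune_flops: float, fine_tune_cost_usd: float) -> bool:
    """Prong (ii): a model created by fine-tuning a covered model above both thresholds."""
    return fine_tune_flops >= FINE_TUNE_FLOPS and fine_tune_cost_usd > FINE_TUNE_COST_USD


def ag_notice_deadline(learned_at: datetime) -> datetime:
    """Latest time to notify the AG after the developer learns of a safety incident
    (or of facts supporting a reasonable belief that one has occurred)."""
    return learned_at + AG_NOTICE_WINDOW


if __name__ == "__main__":
    print(is_covered_model(2e26, 150_000_000))    # True: both thresholds exceeded
    print(is_covered_fine_tune(3e25, 5_000_000))  # False: cost below $10 million
    print(ag_notice_deadline(datetime(2024, 9, 1, 9, 0)))  # 2024-09-04 09:00:00
```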

The Path Forward

California may not want to cede its historical position as one of the principal US states that regularly establishes precedent in emerging technology and market-driven areas of importance. This latest effort, however, may have been motivated at least in part by widely covered prognostications of doom and the potential for the destruction of civilization at AI’s collective hands. Some members of Congress have opposed the AI Safety Act, stating in part that it should “ensure restrictions are proportionate to real-world risks and harms.” To be sure, California’s approach to regulating AI under the AI Safety Act is not “wrong.” It does, however, represent a different approach than other AI regulations, which generally focus on the riskiness of use and address areas such as discrimination, transparency, and human oversight.

While the AI Safety Act focuses on sophisticated AI systems with the largest processing power and the biggest development budgets (and thus, presumably, those with the greatest potential for harm), developers of AI systems of all sizes and capabilities already largely engage in testing and assessments, even if only motivated by market considerations. What is new is that the AI Safety Act would create standards for such evaluations. With history as a guide, those standards would likely materially influence standards included in other US AI regulations if the bill is signed into law by Governor Newsom (who had already signed his own executive order on generative AI, predating President Biden’s), even though the range of covered AI systems would be somewhat limited.

Given AI’s potential to transform every industry, regulation in one form or another is critical to navigating the ongoing sea change. The extent and nature of that regulation in California and elsewhere is certain to be fiercely debated, whether or not the AI Safety Act is signed into law. Currently, the risks attendant to AI development and use in the United States are still largely reputational, but comprehensive regulation is approaching. It is thus critical to be thoughtful and proactive about how your organization intends to leverage AI tools and to fully understand the risks and benefits associated with any such use.
