A Look at the Evolving Scope of Transatlantic AI Regulations

The regulation of artificial intelligence (AI) is changing significantly on a global scale. New measures from governments worldwide are coming online, including the United States (U.S.) government’s executive order on AI, California’s proposed regulations, the European Union’s AI Act, and emerging developments in the United Kingdom, all of which contribute to this evolving environment.

The European Union (EU) AI Act and the U.S. Executive Order on AI both seek to ensure that AI is developed and used safely, securely, and with respect for fundamental rights, yet their approaches are markedly different. The EU AI Act establishes a binding legal framework across EU member states, applies directly to businesses involved in the AI value chain, classifies AI systems by risk, and imposes significant fines for violations. In contrast, the U.S. Executive Order functions more as a set of guidelines, directing federal agencies to develop AI standards and policies. It prioritizes AI safety and trustworthiness but lacks specific penalties, relying instead on voluntary compliance and agency collaboration.

The EU approach includes detailed oversight and enforcement, while the U.S. approach encourages the adoption of new standards and international cooperation aligned with global norms but is less prescriptive. Despite their shared objectives, differences in regulatory approach, scope, enforcement, and penalties could lead to diverging AI governance standards between the two regions.

There has also been some collaboration on an international scale. Recently, antitrust officials at the U.S. Department of Justice (DOJ), the U.S. Federal Trade Commission (FTC), the European Commission, and the UK’s Competition and Markets Authority have joined forces to monitor AI and its risks to competition. The agencies issued a joint statement in which all four antitrust enforcers pledged “to remain vigilant for potential competition issues” and to use their agencies’ powers to guard against the use of AI to undermine competition or facilitate unfair or deceptive practices.

The regulatory landscape for AI is evolving in real time as the technology develops at a record pace. As regulations strive to keep up, real challenges and risks exist for companies that develop or use AI. It is therefore critical that business leaders understand regulatory changes on an international scale, adapt, and stay compliant to avoid potentially significant penalties and reputational damage.

The U.S. Federal Executive Order on AI

In October 2023, the Biden Administration issued an executive order to foster responsible AI innovation. The order outlines several key initiatives, including promoting ethical, trustworthy, and lawful AI technologies, and calls for collaboration among federal agencies, private companies, academia, and international partners to advance AI capabilities and realize its myriad benefits. It emphasizes the need for robust frameworks to address potential AI risks such as bias, privacy concerns, and security vulnerabilities. The order also directs a range of sweeping actions: establishing new standards for AI safety and security, calling on Congress to pass bipartisan data privacy legislation to protect Americans’ privacy from the risks posed by AI, promoting the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, and ensuring responsible government deployment of AI while modernizing federal AI infrastructure through the rapid hiring of AI professionals.

At the state level, Colorado and California are leading the way. Colorado enacted the first comprehensive state-level AI regulation with the Colorado Artificial Intelligence Act (Senate Bill (SB) 24-205), signed into law by Governor Jared Polis on May 17, 2024. As our team previously outlined, the Colorado AI Act is comprehensive, requiring developers and deployers of “high-risk artificial intelligence systems” to adhere to a host of obligations, including disclosures, risk management practices, and consumer protections. The law takes effect on February 1, 2026, giving companies well over a year to adapt.

In California, a host of proposed AI regulations focusing on transparency, accountability, and consumer protection would require the disclosure of information such as AI systems’ functions, data sources, and decision-making processes. For example, AB 2013, introduced on January 31, 2024, would require developers of an AI system or service made available to Californians to post on the developer’s website documentation of the datasets used to train that system or service.

SB 970, also introduced in January 2024, would require any person or entity that sells or provides access to AI technology designed to create synthetic images, video, or voice to warn consumers that misuse of the technology may result in civil or criminal liability for the user.

Finally, on July 2, 2024, the California State Assembly Judiciary Committee passed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which would regulate AI models based on their complexity.

The European Union’s AI Act

The EU is leading the way in AI regulation through its AI Act, which establishes a binding legal framework and represents Europe’s first comprehensive attempt to regulate AI. The AI Act was adopted to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection for health, safety, and fundamental rights against the harmful effects of AI systems in the EU, while also supporting innovation.

The AI Act sets forth harmonized rules for the release and use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for operators of such systems; harmonized transparency rules for certain AI systems; harmonized rules for the release of general-purpose AI models; rules on market monitoring, market surveillance, governance, and enforcement; and measures to support innovation, with a particular focus on small and medium-sized enterprises (SMEs), including startups.

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Applications that pose an unacceptable risk, such as government social scoring systems, are banned outright. High-risk applications, including CV-scanning tools, face stringent requirements to ensure safety and accountability. Limited-risk applications are those whose principal risk stems from a lack of transparency about AI usage, and the AI Act imposes transparency obligations on them; for example, people using AI systems such as chatbots should be informed that they are interacting with a machine rather than a human, so they can make an informed decision about whether to continue. The AI Act allows the free use of minimal-risk AI, such as AI-enabled video games or spam filters, a category that covers the vast majority of AI systems currently used in the EU.
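Purely by way of illustration, the sketch below shows how a compliance team might encode this four-tier classification in an internal tool. The tier names mirror the AI Act’s categories, but the example systems, obligations, and function names are simplified assumptions for illustration, not text drawn from the regulation or from any official guidance.

```python
from enum import Enum

# Illustrative only: tier names follow the EU AI Act's four-level classification,
# but the example systems and obligations below are simplified assumptions.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g., government social scoring (prohibited)
    HIGH = "high"                  # e.g., CV-scanning tools (stringent requirements)
    LIMITED = "limited"            # e.g., chatbots (transparency obligations)
    MINIMAL = "minimal"            # e.g., spam filters, AI-enabled games (free use)

# Simplified, non-exhaustive examples of obligations per tier.
EXAMPLE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "Risk management system",
        "Technical documentation and logging",
        "Human oversight and conformity assessment",
    ],
    RiskTier.LIMITED: ["Inform users that they are interacting with an AI system"],
    RiskTier.MINIMAL: ["No AI Act-specific obligations (voluntary codes of conduct)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations associated with a risk tier."""
    return EXAMPLE_OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in obligations_for(RiskTier.LIMITED):
        print(item)
```

A mapping like this is only a starting point; the actual obligations attached to each tier should be taken from the text of the AI Act and counsel’s analysis, not from a hard-coded list.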

The adoption of the AI Act has not come without criticism from major European companies. In an open letter, some 150 executives raised concerns over the heavy regulation of generative AI and foundation models, fearing that increased compliance costs and hindered productivity would drive companies away from the EU. Despite these concerns, the AI Act is here to stay, and companies would be wise to prepare for compliance by assessing their systems.

Recommendations for Global Businesses

As governments and regulatory bodies worldwide implement diverse AI regulations, companies can adopt strategies that both ensure compliance and proactively mitigate risk. Global businesses should consider the following recommendations:

  1. Risk Assessments: Conduct thorough risk assessments of AI systems to align with the EU’s classification scheme and the U.S. focus on safety and security, paying particular attention to the safety and security of systems categorized as high-risk under the EU AI Act. This proactive approach will not only help companies meet regulatory requirements but also protect them from potential sanctions as the legal landscape evolves.
  2. Compliance Strategy: Develop a compliance strategy that specifically addresses the most stringent aspects of the EU and U.S. regulations.
  3. Legal Monitoring: Stay on top of evolving best practices and guidelines. Monitor regulatory developments in the regions in which your company operates to adapt to new requirements and avoid penalties, and engage with policymakers and industry groups to stay ahead of compliance requirements. Participation in public consultations and industry forums can provide valuable insights and influence regulatory outcomes.
  4. Transparency and Accountability: Prioritize transparency and accountability in AI development to meet ethical and regulatory expectations. Ensure AI systems are transparent, with clear documentation of data sources, decision-making processes, and system functionality, and put accountability measures in place, such as regular audits and impact assessments (a minimal illustrative sketch of such internal documentation follows this list).
  5. Data Governance: Implement robust data governance measures to meet the EU’s requirements and align with the U.S.’s emphasis on trustworthy AI. Establish governance structures that ensure compliance with federal, state, and international AI regulations, including appointing compliance officers and developing internal policies.
  6. Invest in Ethical AI Practices: Develop and deploy AI systems that adhere to ethical guidelines, focusing on fairness, privacy, and user rights. Ethical AI practices support compliance, build public trust, and enhance brand reputation.
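As a purely illustrative sketch of recommendations 1 and 4 above, the Python fragment below shows one way a compliance team might maintain an internal inventory of AI systems, recording each system’s risk tier, data sources, and audit history. The record structure, field names (e.g., data_sources, last_audit), and the one-year audit window are assumptions chosen for illustration; they are not a format or cadence required by the EU AI Act or any U.S. regulator.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical internal record for an AI-system inventory; field names and the
# audit window are illustrative assumptions, not regulator-mandated fields.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str                                   # e.g., "high", "limited", "minimal"
    data_sources: list[str] = field(default_factory=list)
    decision_logic_documented: bool = False
    last_audit: date | None = None

    def audit_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag systems never audited or audited more than max_age_days ago."""
        return self.last_audit is None or (today - self.last_audit).days > max_age_days

# Example usage: flag systems needing review ahead of a compliance deadline.
inventory = [
    AISystemRecord(
        name="resume-screening-tool",
        purpose="Rank job applicants",
        risk_tier="high",
        data_sources=["historical hiring data", "applicant-provided resumes"],
        decision_logic_documented=True,
        last_audit=date(2024, 1, 15),
    ),
]
for record in inventory:
    if record.audit_overdue(today=date(2025, 6, 1)):
        print(f"Audit overdue: {record.name} ({record.risk_tier} risk)")
```

Keeping an inventory of this kind, in whatever form a company chooses, makes it easier to answer regulator and customer questions about which systems are deployed, what data they rely on, and when they were last reviewed.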

5 Trends to Watch: 2024 Artificial Intelligence

  1. Banner Year for Artificial Intelligence (AI) in Health – With AI-designed drugs entering clinical trials, growing adoption of generative AI tools in medical practices, increasing FDA approvals for AI-enabled devices, and new FDA guidance on AI usage, 2023 was a banner year for advancements in AI for medtech, healthtech, and techbio, even with the industry-wide layoffs that also hit digital and AI teams. The coming year should see continued innovation and investment in AI, from drug design and new devices to clinical decision support, documentation and revenue cycle management (RCM), surgical augmented reality (AR), and more, together with the arrival of new U.S. government guidance and best practices for the use of this fast-evolving technology.
  2. Congress and AI Regulation – Congress continues to grapple with the proper regulatory structure for AI. At a minimum, expect Congress in 2024 to continue funding AI research and the development of standards required under the Biden Administration’s October 2023 Executive Order. Congress will also debate legislation relating to the use of AI in elections, intelligence operations, military weapons systems, surveillance and reconnaissance, logistics, cybersecurity, health care, and education.
  3. New State and City Laws Governing AI’s Use in HR Decisions – Look for additional state and city laws to be enacted governing an employer’s use of AI in hiring and performance software, similar to New York City’s Local Law 144, known as the Automated Employment Decision Tools law. More than 200 AI-related laws have been introduced in state legislatures across the country as states move forward with their own regulation while debate over federal law continues. GT expects 2024 to bring continued guidance from the EEOC and other federal agencies, mandating notice to employees regarding the use of AI in HR-function software and restricting its use absent human oversight.
  4. Data Privacy Rules Collide with Use of AI – Application of existing laws to AI, both within the United States and internationally, will be a key issue as companies apply transparency, consent, automated decision making, and risk assessment requirements in existing privacy laws to AI personal information processing. U.S. states will continue to propose new privacy legislation in 2024, with new implementing regulations for previously passed laws also expected. Additionally, there’s a growing trend towards the adoption of “privacy by design” principles in AI development, ensuring privacy considerations are integrated into algorithms and platforms from the ground up. These evolving legal landscapes are not only shaping AI development but also compelling organizations to reevaluate their data strategies, balancing innovation with the imperative to protect individual privacy rights, all while trying to “future proof” AI personal information processing from privacy regulatory changes.
  5. Continued Rise in AI-Related Copyright & Patent Filings, Litigation – Expect the U.S. Patent and Trademark Office and the U.S. Copyright Office to develop and publish guidance on issues at the intersection of AI and IP, including patent eligibility and inventorship for AI-related innovations, the scope of protection for works produced using AI, and the treatment of copyrighted works in AI training, as mandated in the Biden Administration Executive Order. IP holders are likely to become more sophisticated in how they integrate AI into their innovation and authorship workflows. And expect a surge in litigation around AI-generated IP, particularly given the ongoing denial of IP protection for AI-generated content and the general lack of precedent in this space.