Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI Products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, its release suggests that the top competition enforcers in all three jurisdictions are focusing on AI’s effects on competition in general and on competition within the AI ecosystem – and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors and competition within the AI ecosystem itself. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated April 2024 report on AI. In June 2024, the French competition authority released a report on Generative AI, which focused on competition issues related to AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers described the recent evolution of AI technologies, including foundation models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and benefit the lives of citizens around the world.

But as with any technological inflection point, which may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.” This concentration could stifle competition, hinder innovation, or be exploited by certain firms.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or coopt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement acknowledges that other competition and consumer risks are associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and fixing. Consumers may also be harmed by AI. As the CMA, DOJ, and FTC have consumer protection authority, these agencies will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) mechanisms that deprive content creators of “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address specific enforcement tools and actions the enforcers may take, the statement’s release suggests that the enforcers may all be gearing up to take action related to AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.

EU Publishes Groundbreaking AI Act, Initial Obligations Set to Take Effect on February 2, 2025

On July 12, 2024, the European Union published the language of its much-anticipated Artificial Intelligence Act (AI Act), which is the world’s first comprehensive legislation regulating the growing use of artificial intelligence (AI), including by employers.

Quick Hits

  • The EU published the final AI Act, which enters into force on August 1, 2024.
  • The legislation treats employers’ use of AI in the workplace as potentially high-risk, imposing obligations on such use and potential penalties for violations.
  • The legislation will be incrementally implemented over the next three years.

The AI Act will “enter into force” on August 1, 2024 (or twenty days from the July 12, 2024, publication date). The legislation’s publication follows its adoption by the EU Parliament in March 2024 and approval by the EU Council in May 2024.

The groundbreaking AI legislation takes a risk-based approach that subjects AI applications to four levels of regulation, in descending order of scrutiny: (1) “unacceptable risk,” which is banned outright; (2) “high risk”; (3) “limited risk”; and (4) “minimal risk.”

While it does not exclusively apply to employers, the law treats employers’ use of AI technologies in the workplace as potentially “high risk.” Violations of the law could result in hefty penalties.

Key Dates

The publication commences the implementation timeline over the next three years and outlines when we should expect to see more guidance on how the law will be applied. The most critical dates for employers are listed below (and encoded in the short sketch that follows the list):

  • August 1, 2024 – The AI Act will enter into force.
  • February 2, 2025 – (Six months from the date of entry into force) – Provisions on banned AI systems will take effect, meaning use of such systems must be discontinued by that time.
  • May 2, 2025 – (Nine months from the date of entry into force) – “Codes of practice” should be ready, giving providers of general purpose AI systems further clarity on obligations under the AI Act, which could possibly offer some insight to employers.
  • August 2, 2025 – (Twelve months from the date of entry into force) – Provisions on notifying authorities, general-purpose AI models, governance, confidentiality, and most penalties will take effect.
  • February 2, 2026 – (Eighteen months from the date of entry into force) – Guidelines should be available specifying how to comply with the provisions on high-risk AI systems, including practical examples of high-risk versus not high-risk systems.
  • August 2, 2026 – (Twenty-four months from the date of entry into force) – The remainder of the legislation will take effect, except for a minor provision regarding specific types of high-risk AI systems that will go into effect on August 2, 2027, a year later.
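
For planning purposes, these dates lend themselves to simple programmatic tracking. The sketch below merely encodes the milestones listed above in a small lookup helper; the structure and function names are illustrative, not part of any official compliance tooling.

```python
from datetime import date

# Milestone dates as listed in this article (EU AI Act implementation).
AI_ACT_MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on banned AI systems take effect"),
    (date(2025, 5, 2), "Codes of practice for general-purpose AI expected"),
    (date(2025, 8, 2), "Rules on notifying authorities, general-purpose AI models, "
                       "governance, and most penalties take effect"),
    (date(2026, 2, 2), "Guidelines on high-risk AI compliance expected"),
    (date(2026, 8, 2), "Remainder of the legislation takes effect"),
    (date(2027, 8, 2), "Final provisions on certain high-risk AI systems take effect"),
]

def obligations_in_effect(as_of: date) -> list[str]:
    """Return every milestone that has already passed as of the given date."""
    return [label for when, label in AI_ACT_MILESTONES if when <= as_of]

# Example: what is already in effect on March 1, 2025?
for label in obligations_in_effect(date(2025, 3, 1)):
    print(label)
```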

Next Steps

Adopting the EU AI Act will set consistent standards across the EU nations. Further, the legislation is significant in that it is likely to serve as a framework for AI laws or regulations in other jurisdictions, similar to how the EU’s General Data Protection Regulation (GDPR) has served as a model in the area of data privacy.

In the United States, regulation of AI and automated decision-making systems has been a priority, particularly when the tools are used to make employment decisions. In October 2023, the Biden administration issued an executive order requiring federal agencies to balance the benefits of AI with legal risks. Several federal agencies have since updated guidance concerning the use of AI and several states and cities have been considering legislation or regulations.

European Union | Latest Immigration Updates

The adopted revision to the 2011 single-permit directive has been published in the Official Journal of the European Union, and the EU Council has temporarily suspended certain elements of EU law that regulate visa issuance to Ethiopian nationals.

Key Points:

  • The single-permit directive enters into force on May 21, 2024, and EU member states have until May 21, 2026, to implement the terms of the directive domestically.
    •  Member states will maintain the ability to decide which and how many third-country workers to admit to their labor market.
  • For Ethiopian nationals, the standard visa-processing period has been changed to 45 calendar days instead of 15. In addition, EU member states will no longer be able to waive certain requirements when issuing visas to Ethiopian nationals, including evidence that must be submitted to issue multiple-entry visas and visa fees for holders of diplomatic and service passports.

Background: As BAL previously reported, the directive currently in place was designed to attract additional skills and talent to the EU to address shortcomings within the legal migration system, provide an application process for EU countries to issue a single permit and establish common rights for workers from third countries. The revised law shortens the application procedure for a permit to reside for the purpose of work in a member state’s territory and aims to strengthen the rights of third-country workers by allowing a change of employer and a limited period of unemployment. The new agreement is part of the “skills and talent” package, which addresses shortcomings in legal migration policy and aims to attract greater foreign skilled talent.

The decision to tighten visa guidelines for Ethiopia is in response to an assessment by the EU Commission, which found that Ethiopian authorities have not fully cooperated with officials regarding readmission requests and difficulties persist in issuing emergency travel documents. The commission cited the organization of both voluntary and non-voluntary return operations as a determining factor in altering Ethiopia’s visa privileges within the European Union.

BAL Analysis: The single-permit directive is directed at non-EU nationals working in the EU and aims to create an environment where these individuals are treated equally with regard to working conditions, social security and tax benefits, and have their unique qualifications recognized.

Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet. Here’s a simplified breakdown (a brief illustrative sketch follows the list):

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.
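
To make the four tiers concrete, here is a minimal sketch of how a company might inventory its systems against them. The tier names track the Act’s categories, but the obligation summaries and example systems below are paraphrases for illustration, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    # Obligation summaries paraphrase the breakdown above; they are not legal text.
    UNACCEPTABLE = "banned outright in the EU"
    HIGH = "strict compliance measures (e.g., conformity assessment and monitoring)"
    LIMITED = "transparency duties (users must be told they interact with AI)"
    MINIMAL = "no specific obligations"

# Illustrative inventory drawing on the article's own examples.
example_systems = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in example_systems.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```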

II. Key Compliance Timelines (as of March 2024):

The anticipated milestones for each time frame are as follows:
6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.
36 months after Entry into Force
  • Compliance obligations will take effect for the remaining high-risk AI systems – those regulated as products, or safety components of products, under existing EU product-safety harmonisation legislation.
By the end of 2030
  • Obligations will take effect for certain AI systems that are components of large-scale IT systems established by EU law, such as those used in the area of freedom, security and justice.

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.

Can Artificial Intelligence Assist with Cybersecurity Management?

AI has great capability both to harm and to protect in a cybersecurity context. As with the development of any new technology, the benefits provided through correct and successful use of AI are inevitably coupled with the need to safeguard information and to prevent misuse.

Using AI for good – key themes from the European Union Agency for Cybersecurity (ENISA) guidance

ENISA published a set of reports last year focused on AI and the mitigation of cybersecurity risks. Here we consider the main themes raised and provide our thoughts on how AI can be used advantageously*.

Using AI to bolster cybersecurity

In Womble Bond Dickinson’s 2023 global data privacy law survey, half of respondents told us they were already using AI for everyday business activities, ranging from data analytics to customer service assistance and product recommendations. However, alongside day-to-day tasks, AI’s ‘ability to detect and respond to cyber threats and the need to secure AI-based application’ makes it a powerful tool to defend against cyber-attacks when utilized correctly. In one report, ENISA recommended a multi-layered framework which guides readers on the operational processes to be followed, coupling existing knowledge with best practices to identify missing elements. The step-by-step approach for good practice looks to ensure the trustworthiness of cybersecurity systems.

Utilizing machine-learning algorithms, AI is able to detect both known and unknown threats in real time, continuously learning and scanning for potential threats. Cybersecurity software that does not utilize AI can only detect known malicious code, making it insufficient against more sophisticated threats. By analyzing the behavior of malware, AI can pinpoint specific anomalies that standard cybersecurity programs may overlook. The deep-learning-based program NeuFuzz is considered a highly favorable platform for vulnerability searches in comparison to standard machine-learning AI, demonstrating the rapidly evolving nature of AI itself and the products offered.
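
As a rough illustration of the anomaly-detection idea, the sketch below trains scikit-learn’s IsolationForest on synthetic ‘normal’ behavior and flags outliers. The feature names and numbers are invented for the example; real deployments rely on far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic baseline behavior: [files touched per minute, outbound connections]
normal = rng.normal(loc=[20.0, 5.0], scale=[5.0, 2.0], size=(500, 2))

# Two suspicious samples: ransomware-like mass file access, and heavy
# outbound beaconing (values invented for illustration).
suspicious = np.array([[400.0, 3.0], [25.0, 90.0]])

# Fit on normal traffic only, so the model learns typical behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print(model.predict(suspicious))  # -1 flags an anomaly, 1 looks normal
```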

A key recommendation is that AI systems should be used as an additional element to existing ICT, security systems and practices. Businesses must be aware of the continuous responsibility to have effective risk management in place with AI assisting alongside for further mitigation. The reports do not set new standards or legislative perimeters but instead emphasize the need for targeted guidelines, best practices and foundations which help cybersecurity and in turn, the trustworthiness of AI as a tool.

Amongst other factors, cybersecurity management should consider accountability, accuracy, privacy, resiliency, safety and transparency. It is not enough to rely on traditional cybersecurity software, especially where AI can be readily implemented for the prevention, detection and mitigation of threats such as spam, intrusion and malware. Traditional models do exist, but as ENISA highlights, they are usually designed to target or ‘address specific types of attack’ which ‘makes it increasingly difficult for users to determine which are most appropriate for them to adopt/implement.’ The report highlights that businesses need to have a pre-existing foundation of cybersecurity processes which AI can work alongside to reveal additional vulnerabilities. A collaborative network of traditional methods and new AI-based recommendations allows businesses to be best prepared against the ever-developing nature of malware and technology-based threats.

In the US in October 2023, the Biden administration issued an executive order with significant data security implications. Amongst other things, the executive order requires that developers of the most powerful AI systems share safety test results with the US government, that the government will prepare guidance for content authentication and watermarking to clearly label AI-generated content and that the administration will establish an advanced cybersecurity program to develop AI tools and fix vulnerabilities in critical AI models. This order is the latest in a series of AI regulations designed to make models developed in the US more trustworthy and secure.

Implementing security by design

A security by design approach centers efforts around security protocols from the basic building blocks of IT infrastructure. Privacy-enhancing technologies, including AI, assist security by design structures and effectively allow businesses to integrate necessary safeguards for the protection of data and processing activity, but should not be considered as a ‘silver bullet’ to meet all requirements under data protection compliance.

This will be most effective for start-ups and businesses in the initial stages of developing or implementing their cybersecurity procedures, as conceiving a project built around security by design will take less effort than adding security to an existing one. However, we are seeing rapid growth in the number of businesses using AI. More than one in five of our survey respondents (22%), for instance, started to use AI in the past year alone.

Existing structures should not be overlooked, however, and the addition of AI into current cybersecurity systems should improve functionality, processing and performance. This is evidenced by AI’s capability to analyze huge amounts of data at speed to provide a clear, granular assessment of key performance metrics. This high-level, high-speed analysis allows businesses to offer tailored products and improved accessibility, resulting in a smoother retail experience for consumers.

Risks

Despite the benefits, AI is by no-means a perfect solution. Machine-learning AI will act on what it has been told under its programming, leaving the potential for its results to reflect an unconscious bias in its interpretation of data. It is also important that businesses comply with regulations (where applicable) such as the EU GDPR, Data Protection Act 2018, the anticipated Artificial Intelligence Act and general consumer duty principles.

Cost benefits

Alongside reducing the cost of reputational damage from cybersecurity incidents, it is estimated that UK businesses that use some form of AI in their cybersecurity management reduced costs related to data breaches by £1.6m on average. Using AI or automated responses within cybersecurity systems was also found to have shortened the average ‘breach lifecycle’ by 108 days, saving time, cost and significant business resource. Further development of penetration testing tools which specifically focus on AI is required to explore vulnerabilities and assess behaviors, which is particularly important where personal data is involved, as a company’s integrity and confidentiality are at risk.

Moving forward

AI can be used to our advantage, but it should not be seen as entirely replacing existing or traditional models for managing cybersecurity. While AI is an excellent long-term assistant to save users time and money, it cannot be relied upon alone to make decisions directly. In this transitional period from more traditional systems, it is important to have a secure IT foundation. As WBD suggests in our 2023 report, having established governance frameworks and controls for the use of AI tools is critical for data protection compliance and an effective cybersecurity framework.

Despite suggestions that AI’s reputation is degrading, it is a powerful and evolving tool which could not only improve your business’ approach to cybersecurity and privacy but with an analysis of data, could help to consider behaviors and predict trends. The use of AI should be exercised with caution, but if done correctly could have immeasurable benefits.

___

* While a portion of ENISA’s commentary is focused around the medical and energy sectors, the principles are relevant to all sectors.

European Commission Action on Climate Taxonomy and ESG Rating Provider Regulation

On June 13, 2023, the European Commission published “a new package of measures to build on and strengthen the foundations of the EU sustainable finance framework.” The aim is to ensure that the EU sustainable finance framework continues to support companies and the financial sector in connection with climate transition, including making the framework “easier to use” and providing guidance on climate-related disclosure, while encouraging the private funding of transition projects and technologies. These measures are summarized in a publication, “A sustainable finance framework that works on the ground.” Overall, according to the Commission, the package “is another step towards a globally leading legal framework facilitating the financing of the transition.”

The sustainable finance package includes the following measures:

  • EU Taxonomy Climate Delegated Act: amendments include (i) new criteria for economic activities that make a substantial contribution to one or more non-climate environmental objectives, namely, sustainable use and protection of water and marine resources, transition to a circular economy, pollution prevention and control, and protection and restoration of biodiversity and ecosystems; and (ii) changes expanding on economic activities that contribute to climate change mitigation and adaptation “not included so far – in particular in the manufacturing and transport sectors.” The EU Taxonomy Climate Delegated Act has been operative since January 2022 and includes 107 economic activities that are responsible for 64% of greenhouse gas emissions in the EU. In addition, “new economic sectors and activities will be added, and existing ones refined and updated, where needed in line with regulatory and technological developments.” “For large non-financial undertakings, disclosure of the degree of taxonomy alignment regarding climate objectives began in 2023. Disclosures will be phased-in over the coming years for other actors and environmental objectives.”
  • Proposed Regulation of ESG Rating Providers: the Commission adopted a proposed regulation, based on 2021 recommendations from the International Organization of Securities Commissions, aimed at promoting operational integrity and increased transparency in the ESG ratings market through organizational principles and clear rules addressing conflicts of interest. Ratings providers would be authorized and supervised by the European Securities and Markets Authority (ESMA). The regulation “provides requirements on disclosures around” ratings methodologies and objectives, and “introduces principle-based organizational requirements on” ratings providers’ activities. The Commission is also seeking advice from ESMA on the presentation of credit ratings, with the aim being to address shortcomings related to “how ESG factors are incorporated into methodologies and disclosures of how ESG factors impact credit ratings.”
  • Enhancing Usability: the Commission set out an overview of the measures and tools aimed at enhancing the usability of relevant rules and providing implementation guidance to stakeholders. The Commission Staff Working Document “Enhancing the usability of the EU Taxonomy and the overall EU sustainable finance framework” summarizes the Commission’s most recent initiatives and measures. The Commission also published a new FAQ document that provides guidance on the interpretation and implementation of certain legal provisions of the EU Taxonomy Regulation and on the interactions between the concepts of “taxonomy-aligned investment” and “sustainable investment” under the SFDR.

Taking the Temperature: As previously discussed, the Commission is increasingly taking steps to achieve the goal of reducing net greenhouse gas emissions by at least 55% by 2030, known as Fit for 55. Recent initiatives include the adoption of a carbon sinks goal, the launch of the greenwashing-focused Green Claims Directive, and now, the sustainable finance package.

Another objective of these regulatory initiatives is to provide increased transparency for investors as they assess sustainability and transition-related claims made by issuers. In this regard, the legislative proposal relating to the regulation of ESG rating agencies is significant. As noted in our longer survey, there is little consistency among ESG ratings providers and few established industry norms relating to disclosure, measurement methodologies, transparency and quality of underlying data. That has led to a number of jurisdictions proposing regulation, including (in addition to the EU) the UK, as well as to government inquiries to ratings providers in the U.S.

© Copyright 2023 Cadwalader, Wickersham & Taft LLP


Under the GDPR, Are Companies that Utilize Personal Information to Train Artificial Intelligence (AI) Controllers or Processors?

The EU’s General Data Protection Regulation (GDPR) applies to two types of entities – “controllers” and “processors.”

A “controller” refers to an entity that “determines the purposes and means” of how personal information will be processed.[1] Determining the “means” of processing refers to deciding “how” information will be processed.[2] That does not necessitate, however, that a controller makes every decision with respect to information processing. The European Data Protection Board (EDPB) distinguishes between “essential means” and “non-essential means.”[3] “Essential means” refers to those processing decisions that are closely linked to the purpose and the scope of processing and, therefore, are considered “traditionally and inherently reserved to the controller.”[4] “Non-essential means” refers to more practical aspects of implementing a processing activity that may be left to third parties – such as processors.[5]

A “processor” refers to a company (or a person such as an independent contractor) that “processes personal data on behalf of [a] controller.”[6]

Modern artificial intelligence models typically need data to train and fine-tune. They use that data – including personal information – to recognize patterns and predict results.

Whether an organization that utilizes personal information to train an artificial intelligence engine is a controller or a processor depends on the degree to which the organization determines the purpose for which the data will be used and the essential means of processing. The following chart discusses these variables in the context of training AI:


Purpose of processing (why the AI is being trained)

  • Indicative of a controller: If an organization makes its own decision to utilize personal information to train an AI, then the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is using personal information provided by a third party to train an AI, and is doing so at the direction of the third party, then the organization may be considered a processor.

Essential means – data types used in training

  • Indicative of a controller: If an organization selects which data fields will be used to train an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to utilize particular data types to train an AI, the organization may be a processor.

Essential means – duration personal information is held within the training engine

  • Indicative of a controller: If an organization determines how long the AI can retain training data, it will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, and does not control how long the AI may access the training data, the organization may be a processor.

Essential means – recipients of the personal information

  • Indicative of a controller: If an organization determines which third parties may access the training data that is provided to the AI, that organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is instructed by a third party to use data to train an AI, but does not control who will be able to access the AI (and the training data to which the AI has access), the organization may be a processor.

Essential means – individuals whose information is included

  • Indicative of a controller: If an organization is selecting whose personal information will be used as part of training an AI, the organization will likely be considered a “controller.”
  • Indicative of a processor: If an organization is being instructed by a third party to utilize particular individuals’ data to train an AI, the organization may be a processor.
 

[1] GDPR, Article 4(7).


[2] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 33.

[3] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[4] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[5] EDPB, Guidelines 07/2020 on the concepts of controller and processor in the GDPR, Version 1, adopted 2 Sept. 2020, at ¶ 38.

[6] GDPR, Article 4(8).

©2023 Greenberg Traurig, LLP. All rights reserved.


The EU’s New Green Claims Directive – It’s Not Easy Being Green

Highlights

  • On March 22, 2023, the European Commission proposed the Green Claims Directive, which is intended to make green claims reliable, comparable and verifiable across the EU and protect consumers from greenwashing
  • Adding to the momentum generated by other EU green initiatives, this directive could be the catalyst that also spurs the U.S. to approve stronger regulatory enforcement mechanisms to crack down on greenwashing
  • This proposed directive overlaps the FTC’s request for comments on its Green Guides, including whether the agency should initiate a rulemaking to establish enforceable requirements related to unfair and deceptive environmental claims. The deadline for comments has been extended to April 24, 2023

The European Commission (EC) proposed the Green Claims Directive (GCD) on March 22, 2023, to crack down on greenwashing and prevent businesses from misleading customers about the environmental characteristics of their products and services. This action was in response, at least in part, to a 2020 commission study that found more than 50 percent of green labels made environmental claims that were “vague, misleading or unfounded,” and 40 percent of these claims were “unsubstantiated.”


This definitive action by the European Union (EU) comes at a time when the U.S. is also considering options to curb greenwashing and could inspire the U.S. to implement stronger regulatory enforcement mechanisms, including promulgation of new enforceable rules by the Federal Trade Commission (FTC) defining and prohibiting unfair and deceptive environmental claims.

According to the EC, under this proposal, consumers “will have more clarity, stronger reassurance that when something is sold as green, it actually is green, and better quality information to choose environment-friendly products and services.”

Scope of the Green Claims Directive

The EC’s objectives in the proposed GCD are to:

  • Make green claims reliable, comparable and verifiable across the EU
  • Protect consumers from greenwashing
  • Contribute to creating a circular and green EU economy by enabling consumers to make informed purchasing decisions
  • Help establish a level playing field when it comes to environmental performance of products

The related proposal for a directive on empowering consumers for the green transition and annex, referenced in the proposed GCD, defines the green claims to be regulated as follows:

“any message or representation, which is not mandatory under Union law or national law, including text, pictorial, graphic or symbolic representation, in any form, including labels, brand names, company names or product names, in the context of a commercial communication, which states or implies that a product or trader has a positive or no impact on the environment or is less damaging to the environment than other products or traders, respectively, or has improved their impact over time.”

The GCD provides minimum requirements for valid, comparable and verifiable information about the environmental impacts of products that make green claims. The proposal sets clear criteria for companies to prove their environmental claims: “As part of the scientific analysis, companies will identify the environmental impacts that are actually relevant to their product, as well as identifying any possible trade-offs to give a full and accurate picture.” Businesses will be required to provide consumers information on the green claim, either with the product or online. The new rule will require verification by independent auditors before claims can be made and put on the market.

The GCD will also regulate environmental labels. The GCD is proposing to establish standard criteria for the more than 230 voluntary sustainability labels used across the EU, which are currently “subject to different levels of robustness, supervision and transparency.” The GCD will require environmental labels to be reliable, transparent, independently verified and regularly reviewed. Under the new proposal, adding an environmental label on products is still voluntary. The EU’s official EU Ecolabel is exempt from the new rules since it already adheres to a third-party verification standard.

Companies based outside the EU that make green claims or utilize environmental labels that target the consumers of the 27 member states also would be required to comply with the GCD. It will be up to member states to set up the substantiation process for products and labels’ green claims using independent and accredited auditors. The GCD has established the following process criteria:

  • Claims must be substantiated with scientific evidence that is widely recognised, identifying the relevant environmental impacts and any trade-offs between them
  • If products or organisations are compared with other products and organisations, these comparisons must be fair and based on equivalent information and data
  • Claims or labels that use aggregate scoring of the product’s overall environmental impact on, for example, biodiversity, climate, water consumption, soil, etc., shall not be permitted, unless set in EU rules
  • Environmental labelling schemes should be solid and reliable, and their proliferation must be controlled. EU level schemes should be encouraged, new public schemes, unless developed at EU level, will not be allowed, and new private schemes are only allowed if they can show higher environmental ambition than existing ones and get a pre-approval
  • Environmental labels must be transparent, verified by a third party, and regularly reviewed

Enforcement of the GCD will take place at the member state level, subject to the proviso in the GCD that “penalties must be ‘effective, proportionate and dissuasive.’” Penalties for violation range from fines to confiscation of revenues and temporary exclusion from public procurement processes and public funding. The directive requires that consumers should be able to bring an action as well.

The EC’s intent is for the GCD to work with the Directive on Empowering the Consumers for the Green Transition, which encourages sustainable consumption by providing understandable information about the environmental impact of products, and identifying the types of claims that are deemed unfair commercial practices. Together these new rules are intended to provide a clear regime for environmental claims and labels. According to the EC, the adoption of this proposed legislation will not only protect consumers and the environment but also give a competitive edge to companies committed to increasing their environmental sustainability.

Initial Public Reaction to the GCD and Next Steps

While some organizations, such as the International Chamber of Commerce, offered support, several interest groups quickly issued public critiques of the proposed GCD. The Sustainable Apparel Coalition asserted that: “The Directive does not mandate a standardized and clearly defined framework based on scientific foundations and fails to provide the legal certainty for companies and clarity to consumers.”

ECOS lamented that “After months of intense lobbying, what could have been legislation contributing to providing reliable environmental information to consumers was substantially watered down,” and added that “In order for claims to be robust and comparable, harmonised methodologies at the EU level will be crucial.” Carbon Market Watch was disappointed that “The draft directive fails to outlaw vague and disingenuous terms like ‘carbon neutrality’, which are a favoured marketing strategy for companies seeking to give their image a green makeover while continuing to pollute with impunity.”

The EC’s proposal will now go to the European Parliament and Council for consideration. This process usually takes about 18 months, during which there will be a public consultation process that will solicit comments, and amendments may be introduced. If the GCD is approved, each of the 27 member states will have 18 months after entry of the GCD to adopt national laws, and those laws will become effective six months after that. As a result, there is a reasonably good prospect that there will be variants in the final laws enacted.

Will the GCD Influence the U.S.’s Approach to Regulation of Greenwashing?

The timing and scope of the GCD is of no small interest in the U.S., where regulation of greenwashing has been ramping up as well. In May 2022, the Securities and Exchange Commission (SEC) issued the proposed Names Rule and ESG Disclosure Rule targeting greenwashing in the naming and purpose of claimed ESG funds. The SEC is expected to take final action on the Names Rule in April 2023.

Additionally, as part of a review process that occurs every 10 years, the FTC is receiving comments on its Green Guides for the Use of Environmental Claims, which also target greenwashing. However, the Green Guides are just that – guides; they do not currently have the force of law and are used only to help interpret what is “unfair and deceptive.”

It is particularly noteworthy that the FTC has asked the public to comment, for the first time, on whether the agency should initiate a rulemaking under the FTC Act to establish independently enforceable requirements related to unfair and deceptive environmental claims. If the FTC promulgates such a rule, it will have new enforcement authority to impose substantial penalties.

The deadline for comments on the Green Guides was recently extended to April 24, 2023. It is anticipated that there will be a substantial number of comments and it will take some time for the FTC to digest them. It will be interesting to watch the process unfold as the GCD moves toward finalization and the FTC decides whether to commence rulemaking in connection with its Green Guide updates. Once again there is a reasonable prospect that the European initiatives and momentum on green matters, including the GCD, could be a catalyst for the US to step up as well – in this case to implement stronger regulatory enforcement mechanisms to crack down on greenwashing.

© 2023 BARNES & THORNBURG LLP

G7 Sanctions Enforcement Coordination Mechanism and Centralized EU Sanctions Watchdog Proposed

On Feb. 20, 2023, Dutch Minister of Foreign Affairs Wopke Hoekstra gave a speech titled “Building a secure European future” at the College of Europe in Bruges, Belgium where he made a plea to “(…) sail to the next horizon where sanctions are concerned.” The Dutch Foreign Minister said European Union (EU) “(…) sanctions are hurting the Russians like hell (…)” but at the same time the measures “(…) are being evaded on a massive scale.” Hoekstra believes this is in part because the EU has too little capacity to analyze, coordinate, and promote the sanctions. However, arguably, there is also a lack of capacity at the EU Member-State level to enforce sanctions.

Against this background the Dutch Foreign Minister proposed to set up a sanctions headquarters in Brussels, Belgium, i.e., a novel watchdog or body to tackle the circumvention of EU sanctions. Such a body might represent the nearest EU equivalent to the U.S. Office of Foreign Assets Control (OFAC). OFAC both implements and enforces U.S. economic sanctions (issuing regulations, licenses, and directives, as well as enforcing through issuing administrative subpoenas, civil and administrative monetary penalties, and making criminal referrals to the U.S. Department of Justice). In Hoekstra’s words:

“A place where [EU] Member States can pool information and resources on effectiveness and evasion. Where we do much more to fight circumvention by third countries. This new HQ would establish a watch list of sectors and trade flows with a high circumvention risk. Companies will be obliged to include end-use clauses in their contracts, so that their products don’t end up in the Russian war machine. And the EU should bring down the full force of its collective economic strength and criminal justice systems on those who assist in sanctions evasion. By naming, shaming, sanctioning, and prosecuting them.”

The Dutch Foreign Minister’s proposal – which is also set out in a separate non-paper – apparently is backed and supported by some 10 or so EU Member States, including Germany, France, Italy, and Spain.

Additionally, on Feb. 23, 2023, the press reported the international Group of Seven (G7) is set to create a new tool to coordinate their enforcement of existing sanctions against the Russian Federation (Russia). The aim of the tool, tentatively called the Enforcement Coordination Mechanism, would be to bolster information-sharing and other enforcement actions.

Background

Like other Members of the G7, the EU has adopted throughout 2022 many economic and other sanctions to target Russia’s economy and thwart its ability to continue with its aggression against Ukraine. Nevertheless, currently EU Member States have different definitions of what constitutes a breach of EU sanctions, and what penalties must be applied in case of a breach. This could lead to different degrees of enforcement and risk circumvention of EU sanctions.

As we have reported previously, on Nov. 28, 2022, the Council of the EU adopted a decision to add the violation of restrictive measures to the list of so-called “EU crimes” set out in the Treaty on the Functioning of the EU, which would uniformly criminalize sanctions violations across EU Member States. This proposal still needs the backing of EU Member States, which have traditionally been cautious about reforms that require amendments to their national criminal laws.

Next steps

The decision on when and how to enforce EU sanctions currently lies with individual EU Member States, who also decide on the introduction of the EU’s restrictive measures by unanimity. As such, the Dutch Foreign Minister’s proposal requires the backing and support of more EU Member States. If adopted, the new proposed body could send cases directly to the European Public Prosecutor’s Office (EPPO), assuming the separate “EU crimes” legislative piece was also adopted.

Notably, the Dutch Foreign Minister’s proposal appears to suggest a stronger targeting of third countries, which are not aligned with the EU’s sanctions against Russia or help in their circumvention (e.g., Turkey, China, etc.).

Whether or not an EU sanctions oversight body is established, the Dutch proposal signals the current appetite for enhanced multilateral coordination on economic sanctions implementation and tougher, more consistent enforcement of economic sanctions violations. The G7’s proposed Enforcement Coordination Mechanism points in the same direction.

©2023 Greenberg Traurig, LLP. All rights reserved.

EU PFAS Ban Should Raise U.S. Corporate Concerns

On February 7, 2023, the European Chemicals Agency (ECHA) unveiled a 200-page proposal that would ban the use of any PFAS in the EU. While the proposal was anticipated by many, the scope of the ban nonetheless drew reactions from a myriad of sectors – from environmentalists to scientists to corporations. U.S.-based companies that have any industrial or business interests in the EU must pay close attention to the EU PFAS ban and consider the impact on their business interests.

EU PFAS Ban Proposal

The EU PFAS ban currently proposed would take effect 18 months from the date of enactment; however, the ECHA is contemplating phased-in restrictions of up to 12 years for uses that the group considers challenging to replace in certain applications. The proposal is only the inception of the ECHA regulatory process, which next turns to a public comment period that opens on March 22, 2023 and will run for at least six months. ECHA’s scientific committees will also review the proposal and provide feedback. Given the magnitude of comments expected and the likely hurdles that the ECHA will face in finalizing the proposal, it is not expected that the proposal will be finalized before 2025.

The EU PFAS ban seeks to prohibit the use of over 10,000 PFAS types, excluding only a sub-class of PFAS that have been deemed “fully degradable.” The proposal indicates: “…the restriction proposal is tailored to address the manufacture, placing on the market, as well as the use of PFASs as such and as constituents in other substances, in mixtures and in articles above a certain concentration. All uses of PFASs are covered by this restriction proposal, regardless of whether they have been specifically assessed by the Dossier Submitters and/or are mentioned in this report or not, unless a specific derogation has been formulated.” (emphasis added) Several specific types of uses and consumer product applications would be included in the first phase of the proposed ban, including cosmetics, food packaging, clothing and cookware. This first phase of the ban implementation would include uses where alternatives are known, but not yet widely available, which is the reason why the first phase would take effect within 5 years. The second phase of the ban anticipates a 12-year period for ban implementation and encompasses uses where alternatives to PFAS are not currently known. Significantly for U.S. business, the proposed ban includes imported goods.

Impact On U.S. Companies

In 2022, U.S. companies exported just shy of $350 billion in goods to the EU. In many instances, companies do not deliberately, intentionally, or knowingly add or utilize PFAS in finished products that are sent to the EU. However, PFAS may be used in manufacturing processes that inadvertently contaminate goods with PFAS. In addition, many U.S. companies rely on overseas companies for supply chain sourcing. Quite commonly, supply chain sources outside of the U.S. do not voluntarily provide chemical composition information for components or goods that they supply. Inquiring of those companies for such information, or for certifications that the goods contain no PFAS, can be extremely difficult. Getting overseas companies to provide such information often proves impossible, and even when certifications are made, the devil may be in the details in terms of what is actually being certified. For example, certifying that goods contain “no hazardous substances” or “no hazardous PFAS” sounds reassuring, but by what measure of “hazardous” is the statement being made? Under what country’s regulations? Using which scientific definition? The result of all of these complexities may be that many U.S.-based companies need to test their products themselves, which not only increases time-to-market issues and financial costs associated with production, but also creates the risk that companies doing business in the U.S. may open themselves up to environmental pollution or personal injury lawsuits by conducting such testing. In addition, alternatives may not be as cost effective as PFAS, which impacts businesses and has the potential trickle-down impact of passing some of the costs on to consumers.

While debate continues in the U.S. as to the scientific validity of the “whole class” approach to regulating PFAS (of which there are over 12,000 types according to the EPA), the EU PFAS ban leapfrogs the U.S. debate stage and goes directly to proposing a regulation that would embrace such a “whole class” regulatory scheme. Without a doubt, chemical manufacturers, industrial and manufacturing companies, and some in the science community are expected to strenuously oppose such an approach to regulations for PFAS. The underlying arguments will follow ones advanced and debated already in the U.S. – i.e., not all chemicals act identically, nor have the vast majority of PFAS been shown to date to present health concerns. Proper scientific method does not permit sweeping attributions of testing on legacy PFAS like PFOA and PFOS to be extrapolated and applied to all PFAS. The EU’s response to this via their proposal is that the costs of remediating PFAS from the environment are significant enough that it warrants regulating PFAS as a class to avoid costly, decades-long, and potentially repetitive remediation work in the EU.

Conclusions

It is of the utmost importance for businesses to evaluate their PFAS risk. Public health and environmental groups urge legislators to regulate these compounds in the U.S. and abroad. One major point of contention among members of various industries is whether to regulate PFAS as a class or as individual compounds.  While each PFAS compound has a unique chemical makeup and impacts the environment and the human body in different ways, some groups argue PFAS should be regulated together as a class because they interact with each other in the body, thereby resulting in a collective impact. Other groups argue that the individual compounds are too diverse and that regulating them as a class would be over restrictive for some chemicals and not restrictive enough for others.

Companies should remain informed so they do not get caught off guard. States are increasingly passing PFAS product bills that differ in scope. For any manufacturers, especially those who sell goods overseas, it is important to understand how the various standards among countries will impact them, whether PFAS is regulated as individual compounds or as a class. Conducting regular self-audits for possible exposure to PFAS risk and potential regulatory violations can result in long term savings for companies and should be commonplace in their own risk assessment.

©2023 CMBG3 Law, LLC. All rights reserved.