Top Competition Enforcers in the US, EU, and UK Release Joint Statement on AI Competition – AI: The Washington Report


On July 23, the top competition enforcers at the US Federal Trade Commission (FTC) and Department of Justice (DOJ), the UK Competition and Markets Authority (CMA), and the European Commission (EC) released a Joint Statement on Competition in Generative AI Foundation Models and AI products. The statement outlines risks in the AI ecosystem and shared principles for protecting and fostering competition.

While the statement does not lay out specific enforcement actions, its release suggests that the top competition enforcers in all three jurisdictions are focusing on AI’s effects on competition generally, and on competition within the AI ecosystem, and are likely to take concrete action in the near future.

A Shared Focus on AI

The competition enforcers did not just discover AI. In recent years, the top competition enforcers in the US, UK, and EU have all been examining both the effects AI may have on competition in various sectors and competition within the AI ecosystem. In September 2023, the CMA released a report on AI Foundation Models, which described the “significant impact” that AI technologies may have on competition and consumers, followed by an updated report on AI in April 2024. In June 2024, the French competition authority released a report on competition issues in Generative AI. At its January 2024 Tech Summit, the FTC examined the “real-world impacts of AI on consumers and competition.”

AI as a Technological Inflection Point

In the new joint statement, the top enforcers described the recent evolution of AI technologies, including foundation models and generative AI, as “a technological inflection point.” As “one of the most significant technological developments of the past couple decades,” AI has the potential to increase innovation and economic growth and benefit the lives of citizens around the world.

But as with any technological inflection point, one that may create “new means of competing” and catalyze innovation and growth, the enforcers must act “to ensure the public reaps the full benefits” of the AI evolution. The enforcers are concerned that several risks, described below, could undermine competition in the AI ecosystem. According to the enforcers, they are “committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Risks to Competition in the AI Ecosystem

The top enforcers highlight three main risks to competition in the AI ecosystem.

  1. Concentrated control of key inputs – Because AI technologies rely on a few specific “critical ingredients,” including specialized chips and technical expertise, a number of firms may be “in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.” This concentration may stifle competition, disrupt innovation, or be exploited by certain firms.
  2. Entrenching or extending market power in AI-related markets – The recent advancements in AI technologies come “at a time when large incumbent digital firms already enjoy strong accumulated advantages.” The regulators are concerned that these firms, due to their power, may have “the ability to protect against AI-driven disruption, or harness it to their particular advantage,” potentially to extend or strengthen their positions.
  3. Arrangements involving key players could amplify risks – While arrangements between firms, including investments and partnerships, related to the development of AI may not necessarily harm competition, major firms may use these partnerships and investments to “undermine or coopt competitive threats and steer market outcomes” to their advantage.

Beyond these three main risks, the statement acknowledges that other competition and consumer risks are associated with AI. Algorithms may “allow competitors to share competitively sensitive information” and engage in price discrimination and fixing. Consumers may also be harmed by AI. Because the CMA, DOJ, and FTC have consumer protection authority, these authorities will “also be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Sovereign Jurisdictions but Shared Concerns

While the enforcers share areas of concern, the joint statement recognizes that the EU, UK, and US’s “legal powers and jurisdictional contexts differ, and ultimately, our decisions will always remain sovereign and independent.” Nonetheless, the competition enforcers assert that “if the risks described [in the statement] materialize, they will likely do so in a way that does not respect international boundaries,” making it necessary for the different jurisdictions to “share an understanding of the issues” and be “committed to using our respective powers where appropriate.”

Three Unifying Principles

With the goal of acting together, the enforcers outline three shared principles that will “serve to enable competition and foster innovation.”

  1. Fair Dealing – Firms that engage in fair dealing will make the AI ecosystem as a whole better off. Exclusionary tactics often “discourage investments and innovation” and undermine competition.
  2. Interoperability – Interoperability, the ability of different systems to communicate and work together seamlessly, will increase competition and innovation around AI. The enforcers note that “any claims that interoperability requires sacrifice to privacy and security will be closely scrutinized.”
  3. Choice – Everyone in the AI ecosystem, from businesses to consumers, will benefit from having “choices among the diverse products and business models resulting from a competitive process.” Regulators may scrutinize three activities in particular: (1) company lock-in mechanisms that could limit choices for companies and individuals, (2) partnerships between incumbents and newcomers that could “sidestep merger enforcement” or provide “incumbents undue influence or control in ways that undermine competition,” and (3) for content creators, “choice among buyers,” which could be used to limit the “free flow of information in the marketplace of ideas.”

Conclusion: Potential Future Activity

While the statement does not address the specific enforcement tools and actions the enforcers may take, its release suggests that the enforcers may all be gearing up to act on AI competition in the near future. Interested stakeholders, especially international ones, should closely track potential activity from these enforcers. We will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.

Buy American and Buy European

The Buy American Act was originally passed by Congress in 1933 and has undergone numerous changes across several presidential administrations. While the core of the Act has essentially remained the same, requiring the U.S. government to purchase goods produced in the U.S. in certain circumstances, the domestic preference requirements have changed over the years. While the Buy American Act applies to direct government purchases, the separate (but similarly named) Buy America Act passed in 1982 imposes similar U.S. content requirements for certain federally funded infrastructure projects. Generally, the Buy American Act’s “produced in the U.S.” requirement ensures that federal government purchases of goods valued at more than $10,000 are 100% manufactured in the U.S. with a set percentage of the cost of components coming from the U.S. As of 2024, that set percentage has been increased to 65%. Therefore, the cost of domestic components must be at least 65% of the total cost of components to comply with the rule. Under the existing rules, the threshold will increase to 75% in 2029. These planned changes are consistent with the trend of increasing preferences for domestic goods over time (a trend that has continued across administrations from both sides of the political spectrum).
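The domestic-content test described above is a simple ratio check. The following sketch, which is illustrative only and not legal advice, applies the 65% threshold (rising to 75% in 2029) to hypothetical component costs; the function name and inputs are our own invention:

```python
# Illustrative sketch of the Buy American Act's domestic component cost test
# under the thresholds described above (65% as of 2024, 75% from 2029).
# Not legal advice; the actual rules contain exceptions and waivers.

def meets_domestic_content_threshold(domestic_cost: float,
                                     total_cost: float,
                                     year: int) -> bool:
    """Return True if the cost of domestic components meets the
    applicable share of total component cost for the given year."""
    threshold = 0.75 if year >= 2029 else 0.65
    return domestic_cost / total_cost >= threshold

# Example: $70 of $100 in component costs is domestic.
print(meets_domestic_content_threshold(70, 100, 2024))  # True (70% >= 65%)
print(meets_domestic_content_threshold(70, 100, 2029))  # False (70% < 75%)
```

The same structure makes the planned 2029 increase easy to see: a product that passes today at 70% domestic content would fail once the 75% threshold takes effect.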

Unsurprisingly, protectionist policies favoring American production can produce similar protectionist measures enacted by foreign countries. The European Union’s (EU) European Green Deal Industrial Plan (sometimes referred to as the Buy European Act) includes the Critical Raw Materials Act (CRMA) and the Net-Zero Industry Act (NZIA), both of which were formally adopted within the last few months. The NZIA, which was agreed upon in February, is aimed at the manufacture of clean technologies in Europe and sets two benchmarks for such manufacturing in the EU: (1) that 40% of the production needed to cover the EU will be domestic by 2030; and (2) that the EU’s production will account for at least 15% of the world’s production by 2040. The NZIA contains a list of net-zero technologies, including wind and heat pumps, battery and energy storage, hydropower, and solar technologies. The CRMA, adopted on March 18, sets forth objectives for the EU’s consumption of raw materials by 2030: that 10% come from local extraction; 40% be processed in the EU; and 25% come from recycled materials. The CRMA also provides that “not more than 65% of the Union’s annual consumption of each strategic raw material at any relevant stage of processing from a single third country.”[1] While Europe’s new acts are perhaps more geared towards raw materials and clean technology, the U.S. and Europe’s concerted efforts to focus on domestic production will be something to watch for years to come. In particular, it is worth watching whether the recent EU measures generate a response from U.S. lawmakers. If so, it could accelerate the already increasing stringency of Buy American and Buy America requirements.


[1] https://www.consilium.europa.eu/en/policies/eu-industrial-policy/

by: Kevin P. Daly, Jeffrey J. White, and Sabrina M. Galli of Robinson & Cole LLP


European Union | Latest Immigration Updates

The adopted revision to the 2011 single-permit directive has been published in the Official Journal of the European Union, and the EU Council has temporarily suspended certain elements of EU law that regulate visa issuance to Ethiopian nationals.

Key Points:

  • The single-permit directive enters into force on May 21, 2024, and EU member states have until May 21, 2026, to implement the terms of the directive domestically.
    • Member states will maintain the ability to decide which and how many third-country workers to admit to their labor market.
  • For Ethiopian nationals, the standard visa-processing period has been changed to 45 calendar days instead of 15. In addition, EU member states will no longer be able to waive certain requirements when issuing visas to Ethiopian nationals, including evidence that must be submitted to issue multiple-entry visas and visa fees for holders of diplomatic and service passports.

Background: As BAL previously reported, the directive currently in place was designed to attract additional skills and talent to the EU to address shortcomings within the legal migration system, provide an application process for EU countries to issue a single permit and establish common rights for workers from third countries. The revised law shortens the application procedure for a permit to reside for the purpose of work in a member state’s territory and aims to strengthen the rights of third-country workers by allowing a change of employer and a limited period of unemployment. The new agreement is part of the “skills and talent” package, which addresses shortcomings in legal migration policy and aims to attract greater foreign skilled talent.

The decision to tighten visa guidelines for Ethiopia is in response to an assessment by the EU Commission, which found that Ethiopian authorities have not fully cooperated with officials regarding readmission requests and difficulties persist in issuing emergency travel documents. The commission cited the organization of both voluntary and non-voluntary return operations as a determining factor in altering Ethiopia’s visa privileges within the European Union.

BAL Analysis: The single-permit directive is directed at non-EU nationals working in the EU and aims to create an environment where these individuals are treated equally with regard to their working conditions, social security and tax benefits, and the recognition of their qualifications.

Regulation Round Up March 2024

Welcome to the UK Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.

Key developments in March 2024:

28 March

FCA Regulation Round-up: The FCA published its regulation round-up for March 2024.

26 March

AIFMD II: Directive (EU) 2024/927 amending the Alternative Investment Fund Managers Directive (2011/61/EU) (“AIFMD”) and the UCITS Directive (2009/65/EC) (“UCITS Directive”) relating to delegation arrangements, liquidity risk management, supervisory reporting, provision of depositary and custody services, and loan origination by alternative investment funds has been published in the Official Journal of the European Union. Please refer to our dedicated article on this topic here.

ELTIFs: The European Commission published a Communication to the Commission explaining that it intends to adopt, with amendments, ESMA’s proposed regulatory technical standards (“RTS”) under Regulations 9(3), 18(6), 19(5), 21(3) and 25(3) of the Regulation on European Long-Term Investment Funds ((EU) 2015/760) as amended by Regulation (EU) 2023/606.

Financial Promotions: The FCA published finalised guidance (FG24/1) on financial promotions on social media.

Cryptoassets: The Investment Association (“IA”) published its second report on UK fund tokenisation written by the technology working group to HM Treasury’s asset management taskforce.

25 March

Cryptoassets: ESMA published a final report (ESMA75-453128700-949) on draft technical standards specifying requirements for co-operation, exchange of information and notification between competent authorities, European Supervisory Authorities and third countries under the Regulation on markets in cryptoassets ((EU) 2023/1114) (“MiCA”).

PRIIPs Regulation: The European Parliament’s Economic and Monetary Affairs Committee (“ECON”) published the report (PE753.665v02-00) it has adopted on the European Commission’s legislative proposal for a Regulation making amendments to the Regulation on key information documents (“KIDs”) for packaged retail and insurance-based investment products (1286/2014) (“PRIIPs Regulation”) (2023/0166(COD)).

Alternative Investment Funds: The FCA published the findings from a review it carried out in 2023 of alternative investment fund managers that use the host model to manage alternative investment funds.

AIFMD: Four Delegated and Implementing Regulations concerning cross-border marketing and management notifications relating to the UCITS Directive and the AIFMD have been published in the Official Journal of the European Union (here, here, here, and here).

22 March

Smarter Regulatory Framework: HM Treasury published a document on the next phase of the Smarter Regulatory Framework, its project to replace assimilated law relating to financial services.

21 March

Market Transparency: ESMA published a communication on the transition to the new rules under the Markets in Financial Instruments Regulation (600/2014) (“MiFIR”) to improve market access and transparency.

Retail Investment Package: ECON published a press release announcing it had adopted its draft report on the proposed Directive on retail investment protection (2023/0167(COD)). The proposed Directive will amend the MiFID II Directive (2014/65/EU) (“MiFID II”), the Insurance Distribution Directive ((EU) 2016/97), the Solvency II Directive (2009/138/EC), the UCITS Directive and the AIFMD.

19 March

ESG: The Council of the EU proposed a new compromise text for the Corporate Sustainability Due Diligence Directive, on which political agreement had previously been reached in December 2023.

FCA Business Plan: The FCA published its 2024/25 Business Plan, which sets out its business priorities for the year ahead.

15 March

Customer Duty: The FCA announced that it is to conduct a review into firms’ treatment of customers in vulnerable circumstances.

PRIIPs Regulation: The Joint Committee of the European Supervisory Authorities published an updated version of its Q&As (JC 2023 22) on the key information document requirements for packaged retail and insurance-based investment products (“PRIIPs”), as laid down in Commission Delegated Regulation (EU) 2017/653.

14 March

FCA Regulatory Approach: The FCA published a speech given by Nikhil Rathi, FCA Chief Executive, on its regulatory approach to deliver for consumers, markets and competitiveness and its shift to outcomes-focused regulation.

11 March

AML: HM Treasury launched a consultation on improving the effectiveness of the Money Laundering, Terrorist Financing and Transfer of Funds (Information on the Payer) Regulations 2017 (SI 2017/692). The consultation runs until 9 June 2024 and covers four distinct areas.

08 March

ESG: The IA published a report on insights and suggested actions for asset managers following the commencement of reporting obligations of climate-related disclosures under the ESG sourcebook.

ESG: The House of Commons Treasury Committee published a report on the findings from its “Sexism in the City” inquiry.

Cryptoassets: The EBA published a consultation paper (EBA/CP/2024/09) on draft guidelines on redemption plans under Articles 47 and 55 of the MiCA.

05 March

Financial Sanctions: The Foreign, Commonwealth and Development Office published Post-Legislative Scrutiny Memorandum: Sanctions and Anti-Money Laundering Act 2018.

AML: The FCA published a Dear CEO letter sent to Annex I financial institutions concerning common control failings identified in anti-money laundering (AML) frameworks.

ESG: The European Commission adopted a delegated regulation supplementing the Securitisation Regulation ((EU) 2017/2402) with regard to regulatory technical standards specifying, for simple, transparent and standardised non-ABCP traditional securitisation, and for simple, transparent and standardised on-balance-sheet securitisation, the content, methodologies and presentation of information related to the principal adverse impacts of the assets financed by the underlying exposures on sustainability factors.

CRD IV: The European Commission adopted a Commission Implementing Regulation that amends Commission Implementing Regulation (EU) 650/2014 containing ITS on supervisory disclosure under the CRD IV Directive (2013/36/EU) (“CRD IV”).

01 March

Alternative Investment Funds: The FCA published a portfolio letter providing an interim update on its supervisory strategy for the asset management and alternatives portfolios.

Corporate Transparency: The Economic Crime and Corporate Transparency Act 2023 (Commencement No. 2 and Transitional Provision) Regulations 2024 (SI 2024/269) have been made and published.

Financial Sanctions: The Treasury Committee launched an inquiry into the effectiveness of financial sanctions on Russia.

EMIR: The FCA published a consultation paper in which it, together with the Bank of England, seeks feedback on draft guidance in the form of Q&As on the revised reporting requirements under Article 9 of UK EMIR (648/2012).

FCA Handbook: The FCA published Handbook Notice 116 (dated February 2024), which sets out changes to the FCA Handbook made by the FCA board on 29 February 2024.

FCA Handbook: The FCA published its 43rd quarterly consultation paper (CP24/3), inviting comments on proposed changes to a number of FCA Handbook provisions.

Amar Unadkat, Sulaiman Malik & Michael Singh also contributed to this article.

Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet.  Here’s a simplified breakdown:

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.
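The tiered structure above maps each risk category to a broad class of obligations. The following sketch is a deliberate simplification for illustration; the tier names follow the summary above, but classifying any real system requires legal analysis of the Act itself, and the obligation descriptions here are shorthand, not the regulatory text:

```python
# Simplified illustration of the AI Act's four risk tiers and the broad
# obligation each carries, per the summary above. Assigning a real AI
# system to a tier requires legal analysis; this mapping is a sketch only.

RISK_TIERS = {
    "unacceptable": "banned entirely within the EU",
    "high": "strict compliance measures required",
    "limited": "transparency obligations (e.g., disclose that users are interacting with a machine)",
    "minimal": "no specific obligations",
}

def obligation_for(tier: str) -> str:
    """Look up the broad obligation class for a risk tier.

    Raises KeyError for an unknown tier rather than guessing a default,
    since misclassification is the costliest error under the Act.
    """
    return RISK_TIERS[tier]

print(obligation_for("limited"))
```

A lookup like this can anchor an internal inventory exercise: tag each AI system with a tier, then derive its compliance workstream from the tag.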

II. Key Compliance Timelines (as of March 2024):

Time Frame  Anticipated Milestones
6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.
36 months after Entry into Force
  • Compliance obligations take effect for high-risk AI systems covered by Annex I of the AI Act (AI systems that are products, or safety components of products, subject to existing EU harmonisation legislation).
By the end of 2030
  • AI systems that are components of the EU’s large-scale IT systems in the area of freedom, security, and justice, and that were placed on the market before the 36-month mark, must be brought into compliance with the AI Act.

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.
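Because every milestone above is keyed to the Entry into Force date, compliance calendars can be derived mechanically once that date is known. The sketch below uses a placeholder date (the actual date depends on publication in the EU’s Official Journal, which had not occurred as of this writing); the helper function and milestone labels are our own shorthand:

```python
# Sketch: deriving the AI Act's staged milestone dates from the Entry into
# Force date. The date below is a PLACEHOLDER assumption, not the actual
# Entry into Force date.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (day-of-month is preserved;
    using day 1 avoids end-of-month edge cases)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

entry_into_force = date(2024, 7, 1)  # placeholder assumption

milestones = {
    "prohibitions on unacceptable-risk systems": add_months(entry_into_force, 6),
    "general-purpose AI model obligations":      add_months(entry_into_force, 12),
    "post-market monitoring implementing act":   add_months(entry_into_force, 18),
    "Annex III high-risk obligations":           add_months(entry_into_force, 24),
}

for label, deadline in milestones.items():
    print(f"{deadline.isoformat()}  {label}")
```

Re-running the calculation against the real Entry into Force date, once published, yields the binding deadlines.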

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.

Recent Healthcare-Related Artificial Intelligence Developments

Artificial intelligence (“AI”) is here to stay. The development and use of AI is rapidly growing in the healthcare landscape with no signs of slowing down.

From a governmental perspective, many federal agencies are embracing the possibilities of AI. The Centers for Disease Control and Prevention is exploring the ability of AI to estimate sentinel events and combat disease outbreaks and the National Institutes of Health is using AI for priority research areas. The Centers for Medicare and Medicaid Services is also assessing whether algorithms used by plans and providers to identify high risk patients and manage costs can introduce bias and restrictions. Additionally, as of December 2023, the U.S. Food & Drug Administration cleared more than 690 AI-enabled devices for market use.

From a clinical perspective, payers and providers are integrating AI into daily operations and patient care. Hospitals and payers are using AI tools to assist in billing. Physicians are using AI to take notes and a wide range of providers are grappling with which AI tools to use and how to deploy AI in the clinical setting. With the application of AI in clinical settings, the standard of patient care is evolving and no entity wants to be left behind.

From an industry perspective, the legal and business spheres are transforming as a result of new national and international regulations focused on establishing the safe and effective use of AI, as well as commercial responses to those regulations. Three such regulations are top of mind, including (i) President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; (ii) the U.S. Department of Health and Human Services’ (“HHS”) Final Rule on Health Data, Technology, and Interoperability; and (iii) the World Health Organization’s (“WHO”) Guidance for Large Multi-Modal Models of Generative AI. In response to the introduction of regulations and the general advancement of AI, interested healthcare stakeholders, including many leading healthcare companies, have voluntarily committed to a shared goal of responsible AI use.

U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“Executive Order”). Though long-awaited, the Executive Order was a major development and is one of the most ambitious attempts to regulate this burgeoning technology. The Executive Order has eight guiding principles and priorities, which include (i) Safety and Security; (ii) Innovation and Competition; (iii) Commitment to U.S. Workforce; (iv) Equity and Civil Rights; (v) Consumer Protection; (vi) Privacy; (vii) Government Use of AI; and (viii) Global Leadership.

Notably for healthcare stakeholders, the Executive Order directs the National Institute of Standards and Technology to establish guidelines and best practices for the development and use of AI and directs HHS to develop an AI Task force that will engineer policies and frameworks for the responsible deployment of AI and AI-enabled tech in healthcare. In addition to those directives, the Executive Order highlights the duality of AI with the “promise” that it brings and the “peril” that it has the potential to cause. This duality is reflected in HHS directives to establish an AI safety program to prioritize the award of grants in support of AI development while ensuring standards of nondiscrimination are upheld.

U.S. Department of Health and Human Services Health Data, Technology, and Interoperability Rule

In the wake of the Executive Order, the HHS Office of the National Coordinator finalized its rule to increase algorithm transparency, widely known as HT-1, on December 13, 2023. With respect to AI, the rule promotes transparency by establishing transparency requirements for AI and other predictive algorithms that are part of certified health information technology. The rule also:

  • implements requirements to improve equity, innovation, and interoperability;
  • supports the access, exchange, and use of electronic health information;
  • addresses concerns around bias, data collection, and safety;
  • modifies the existing clinical decision support certification criteria and narrows the scope of impacted predictive decision support intervention; and
  • adopts requirements for certification of health IT through new Conditions and Maintenance of Certification requirements for developers.

Voluntary Commitments from Leading Healthcare Companies for Responsible AI Use

Immediately on the heels of the release of HT-1 came voluntary commitments from leading healthcare companies on responsible AI development and deployment. On December 14, 2023, the Biden Administration announced that 28 healthcare provider and payer organizations signed up to move toward the safe, secure, and trustworthy purchasing and use of AI technology. Specifically, the provider and payer organizations agreed to:

  • develop AI solutions to optimize healthcare delivery and payment;
  • work to ensure that the solutions are fair, appropriate, valid, effective, and safe (“F.A.V.E.S.”);
  • deploy trust mechanisms to inform users if content is largely AI-generated and not reviewed or edited by a human;
  • adhere to a risk management framework when utilizing AI; and
  • research, investigate, and develop AI swiftly but responsibly.

WHO Guidance for Large Multi-Modal Models of Generative AI

On January 18, 2024, the WHO released guidance for large multi-modal models (“LMM”) of generative AI, which can simultaneously process and understand multiple types of data modalities such as text, images, audio, and video. The WHO guidance spans 98 pages and contains over 40 recommendations for tech developers, providers, and governments on LMMs, and names five potential applications of LMMs: (i) diagnosis and clinical care; (ii) patient-guided use; (iii) administrative tasks; (iv) medical education; and (v) scientific research. It also addresses the liability issues that may arise out of the use of LMMs.

Closely related to the WHO guidance, the European Council’s agreement to move forward with a European Union AI Act (“Act”) was a significant milestone in AI regulation in the European Union. As previewed in December 2023, the Act will inform how AI is regulated across the European Union, and other nations will likely take note and follow suit.

Conclusion

There is no question that AI is here to stay. But how the healthcare industry will look when AI is more fully integrated still remains to be seen. The framework for regulating AI will continue to evolve as AI and the use of AI in healthcare settings changes. In the meantime, healthcare stakeholders considering or adopting AI solutions should stay abreast of developments in AI to ensure compliance with applicable laws and regulations.

2023 Foreign Direct Investment Year End Update: Continued Expansion of FDI Regulations

As previously reported, regulations and restrictions on Foreign Direct Investment (“FDI”) have expanded quickly in the United States and in many of its trading partner countries around the world. FDI has been further complicated in the U.S. by the passage of individual State laws – often focused on the acquisition of “agricultural land” – and in Europe by the passage of screening regimes by the individual Member States of the European Union (E.U.).

In 2023, fifteen U.S. States enacted some form of FDI restrictions on real estate. Some States elected to incorporate U.S. Federal regulations regarding who is prohibited from acquiring certain real estate, while other States focused on broadly protecting agricultural lands. State laws also vary from those that prevent foreign ownership to those that only require reporting of foreign ownership.

Thus far Alabama, Arkansas, Florida, Idaho, Illinois, Iowa, Kansas, Kentucky, Louisiana, Maine, Minnesota, Mississippi, Missouri, Montana, Nebraska, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, and Wisconsin have passed laws related to FDI in real estate.

Alabama, Arkansas, Florida, Illinois, Iowa, Kansas, Maine, Missouri, Ohio, and Texas all currently require foreign investors to disclose acquisitions of certain real estate, much like the U.S. Federal Agricultural Foreign Investment Disclosure Act of 1978 (AFIDA). Arkansas, Illinois, Maine, and Wisconsin allow acquirers to fulfill their reporting requirements by simply submitting a copy of applicable federal AFIDA reports. Texas currently limits FDI only in certain “critical infrastructure.”

Ohio, Pennsylvania, South Carolina, South Dakota, and Wisconsin limit foreign investment in real estate based on the number of acres, while Iowa, Minnesota, Missouri, Nebraska, North Dakota, and Oklahoma ban foreign ownership of certain land completely.

The Alabama Property Protection Act (“APPA”), which went into effect in 2023, is one of the most expansive of the U.S. State laws and also incorporates U.S. Federal law. The APPA restricts FDI by a “foreign principal” in real estate related to agriculture or critical infrastructure, or proximate to military installations.

The APPA broadly covers acquiring “title” or a “controlling interest.” The APPA also broadly defines “foreign principal” as a political party and its members, a government, and any government official of China, Iran, North Korea, and Russia, as well as countries or governments that are subject to any sanction list of the U.S. Office of Foreign Assets Control (“OFAC”). The APPA defines “agricultural and forest property” as “real property used for raising, harvesting, and selling crops or for the feeding, breeding, management, raising, sale of, or the production of livestock, or for the growing and sale of timber and forest products”; and it defines covered “critical infrastructure” as a chemical manufacturing facility, refinery, electric production facility, water treatment facility, LNG terminal, telecommunications switching facility, gas processing plant, seaport, airport, aerospace and spaceport infrastructure. The APPA also covers land that is located within 10 miles of a “military installation” (of at least 10 contiguous acres) or “critical infrastructure.”

Notably, APPA does not specifically address whether leases are considered a “controlling interest,” nor does it specify enforcement procedures.

U.S. Federal Real Estate FDI

Businesses involved in the U.S. defense industrial base have been historically protected from FDI by the Committee on Foreign Investment in the United States (“CFIUS”). The Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA) expanded those historic protections to include certain Critical Technologies, Critical Infrastructure, and Sensitive Data – collectively referred to as covered “TID.”

FIRRMA specifically expanded CFIUS to address national security concerns arising from FDI impacting critical infrastructure and sensitive government installations. Part 802 of the CFIUS regulations implementing FIRRMA established CFIUS jurisdiction and review for certain covered real estate, including real estate in proximity to specified airports, maritime ports, military installations, and other critical infrastructure. In 2022, Executive Order 14083 further expanded CFIUS coverage for certain agriculture-related real estate.

Covered installations are listed by name and location in appendices to the CFIUS regulations. Early this year, CFIUS added eight additional government installations to the 100-mile “extended range” proximity coverage of Part 802, substantially expanding the universe of covered real estate. Unlike covered Section 1758 technologies, which can trigger a mandatory CFIUS filing, covered real estate currently remains subject only to voluntary filing. Regardless, early diligence remains critical to any transaction in the United States that may result in foreign ownership or control of real estate.

U.S. Trading Partners FDI Regimes

The U.S. is not alone in regulating FDI or the acquisition of real estate by foreign investors. Canada, the United Kingdom, and the European Union have legislative frameworks governing foreign investment in business sectors, technology, and real estate. Almost all European Union Member States have some similar form of FDI screening.

Key U.S. trading partners that have adopted FDI regimes include Australia, Austria, Belgium, China, France, Germany, Hungary, Ireland, Italy, Japan, Luxembourg, the Netherlands, Poland, Singapore, Spain, and Sweden. Which foreign parties, economic sectors, or technologies are covered varies from country to country, as do the notification and approval requirements.

The UK National Security and Investment Act (NSI Act) came into effect on 4 January 2022, giving the UK government powers to intervene in transactions where assets or entities are acquired in a manner that may give rise to a national security risk. There were over 800 notifications under the NSI Act during the previous 12-month reporting period. In November 2023, Deputy Prime Minister Oliver Dowden published a call for evidence on the legislation, which aims to narrow and refine the scope of the powers to make them more ‘business friendly’, given that nearly all notified transactions have been cleared within 30 working days. We will revisit developments on this in 2024.

The UK has continued to implement other reforms to improve transparency of foreign ownership of UK property. Part of the UK Economic Crime (Transparency and Enforcement) Act 2022 established a register of overseas entities. The register is maintained by Companies House and requires overseas entities that own land in the UK to disclose details of their beneficial owners. Failure to comply with the new legislation will impact any registration of ownership details at the UK Land Registry (and thus the relevant legal and equitable ownership rights in any relevant property), and officers of any entity in breach will also be liable to criminal proceedings.

Recommendations

Whether acting as buyer or seller, parties to any transaction involving FDI should analyze the citizenship of the interested parties; the nature of the business, land, and products; and the applicability of laws and regulations that can impact the parties, timing, or transaction.

The EU’s New Green Claims Directive – It’s Not Easy Being Green

Highlights

  • On March 22, 2023, the European Commission proposed the Green Claims Directive, which is intended to make green claims reliable, comparable and verifiable across the EU and protect consumers from greenwashing
  • Adding to the momentum generated by other EU green initiatives, this directive could be the catalyst that also spurs the U.S. to approve stronger regulatory enforcement mechanisms to crack down on greenwashing
  • This proposed directive overlaps with the FTC’s request for comments on its Green Guides, including whether the agency should initiate a rulemaking to establish enforceable requirements related to unfair and deceptive environmental claims. The deadline for comments has been extended to April 24, 2023

The European Commission (EC) proposed the Green Claims Directive (GCD) on March 22, 2023, to crack down on greenwashing and prevent businesses from misleading customers about the environmental characteristics of their products and services. This action was in response, at least in part, to a 2020 commission study that found more than 50 percent of green labels made environmental claims that were “vague, misleading or unfounded,” and 40 percent of these claims were “unsubstantiated.”


This definitive action by the European Union (EU) comes at a time when the U.S. is also considering options to curb greenwashing and could inspire the U.S. to implement stronger regulatory enforcement mechanisms, including promulgation of new enforceable rules by the Federal Trade Commission (FTC) defining and prohibiting unfair and deceptive environmental claims.

According to the EC, under this proposal, consumers “will have more clarity, stronger reassurance that when something is sold as green, it actually is green, and better quality information to choose environment-friendly products and services.”

Scope of the Green Claims Directive

The EC’s objectives in the proposed GCD are to:

  • Make green claims reliable, comparable and verifiable across the EU
  • Protect consumers from greenwashing
  • Contribute to creating a circular and green EU economy by enabling consumers to make informed purchasing decisions
  • Help establish a level playing field when it comes to environmental performance of products

The related proposal for a directive on empowering consumers for the green transition and its annex, referenced in the proposed GCD, define the green claims to be regulated as follows:

“any message or representation, which is not mandatory under Union law or national law, including text, pictorial, graphic or symbolic representation, in any form, including labels, brand names, company names or product names, in the context of a commercial communication, which states or implies that a product or trader has a positive or no impact on the environment or is less damaging to the environment than other products or traders, respectively, or has improved their impact over time.”

The GCD provides minimum requirements for valid, comparable and verifiable information about the environmental impacts of products that make green claims. The proposal sets clear criteria for companies to prove their environmental claims: “As part of the scientific analysis, companies will identify the environmental impacts that are actually relevant to their product, as well as identifying any possible trade-offs to give a full and accurate picture.” Businesses will be required to provide consumers information on the green claim, either with the product or online. The new rule will require verification by independent auditors before claims can be made and put on the market.

The GCD will also regulate environmental labels. It proposes to establish standard criteria for the more than 230 voluntary sustainability labels used across the EU, which are currently “subject to different levels of robustness, supervision and transparency.” The GCD will require environmental labels to be reliable, transparent, independently verified and regularly reviewed. Under the new proposal, adding an environmental label on products is still voluntary. The EU’s official Ecolabel is exempt from the new rules since it already adheres to a third-party verification standard.

Companies based outside the EU that make green claims or utilize environmental labels that target the consumers of the 27 member states also would be required to comply with the GCD. It will be up to member states to set up the substantiation process for products and labels’ green claims using independent and accredited auditors. The GCD has established the following process criteria:

  • Claims must be substantiated with scientific evidence that is widely recognised, identifying the relevant environmental impacts and any trade-offs between them
  • If products or organisations are compared with other products and organisations, these comparisons must be fair and based on equivalent information and data
  • Claims or labels that use aggregate scoring of the product’s overall environmental impact on, for example, biodiversity, climate, water consumption, soil, etc., shall not be permitted, unless set in EU rules
  • Environmental labelling schemes should be solid and reliable, and their proliferation must be controlled. EU level schemes should be encouraged, new public schemes, unless developed at EU level, will not be allowed, and new private schemes are only allowed if they can show higher environmental ambition than existing ones and get a pre-approval
  • Environmental labels must be transparent, verified by a third party, and regularly reviewed

Enforcement of the GCD will take place at the member state level, subject to the proviso in the GCD that “penalties must be ‘effective, proportionate and dissuasive.’” Penalties for violations range from fines to confiscation of revenues and temporary exclusion from public procurement processes and public funding. The directive also requires that consumers be able to bring actions.

The EC’s intent is for the GCD to work with the Directive on Empowering the Consumers for the Green Transition, which encourages sustainable consumption by providing understandable information about the environmental impact of products, and identifying the types of claims that are deemed unfair commercial practices. Together these new rules are intended to provide a clear regime for environmental claims and labels. According to the EC, the adoption of this proposed legislation will not only protect consumers and the environment but also give a competitive edge to companies committed to increasing their environmental sustainability.

Initial Public Reaction to the GCD and Next Steps

While some organizations, such as the International Chamber of Commerce, offered support, several interest groups quickly issued public critiques of the proposed GCD. The Sustainable Apparel Coalition asserted that: “The Directive does not mandate a standardized and clearly defined framework based on scientific foundations and fails to provide the legal certainty for companies and clarity to consumers.”

ECOS lamented that “After months of intense lobbying, what could have been legislation contributing to providing reliable environmental information to consumers was substantially watered down,” and added that “In order for claims to be robust and comparable, harmonised methodologies at the EU level will be crucial.” Carbon Market Watch was disappointed that “The draft directive fails to outlaw vague and disingenuous terms like ‘carbon neutrality’, which are a favoured marketing strategy for companies seeking to give their image a green makeover while continuing to pollute with impunity.”

The EC’s proposal will now go to the European Parliament and Council for consideration. This process usually takes about 18 months, during which there will be a public consultation process to solicit comments, and amendments may be introduced. If the GCD is approved, each of the 27 member states will have 18 months after the GCD’s entry into force to adopt national laws, and those laws will become effective six months after that. As a result, there is a reasonably good prospect that there will be variants in the final laws enacted.

Will the GCD Influence the U.S.’s Approach to Regulation of Greenwashing?

The timing and scope of the GCD is of no small interest in the U.S., where regulation of greenwashing has been ramping up as well. In May 2022, the Securities and Exchange Commission (SEC) issued the proposed Names Rule and ESG Disclosure Rule targeting greenwashing in the naming and purpose of claimed ESG funds. The SEC is expected to take final action on the Names Rule in April 2023.

Additionally, as part of a review process that occurs every 10 years, the FTC is receiving comments on its Green Guides for the Use of Environmental Claims, which also target greenwashing. However, the Green Guides are just that – guides; they do not currently have the force of law and are used to help interpret what is “unfair and deceptive.”

It is particularly noteworthy that the FTC has asked the public to comment, for the first time, on whether the agency should initiate a rulemaking under the FTC Act to establish independently enforceable requirements related to unfair and deceptive environmental claims. If the FTC promulgates such a rule, it will have new enforcement authority to impose substantial penalties.

The deadline for comments on the Green Guides was recently extended to April 24, 2023. It is anticipated that there will be a substantial number of comments and it will take some time for the FTC to digest them. It will be interesting to watch the process unfold as the GCD moves toward finalization and the FTC decides whether to commence rulemaking in connection with its Green Guide updates. Once again there is a reasonable prospect that the European initiatives and momentum on green matters, including the GCD, could be a catalyst for the U.S. to step up as well – in this case, to implement stronger regulatory enforcement mechanisms to crack down on greenwashing.

© 2023 BARNES & THORNBURG LLP

G7 Sanctions Enforcement Coordination Mechanism and Centralized EU Sanctions Watchdog Proposed

On Feb. 20, 2023, Dutch Minister of Foreign Affairs Wopke Hoekstra gave a speech titled “Building a secure European future” at the College of Europe in Bruges, Belgium, where he made a plea to “(…) sail to the next horizon where sanctions are concerned.” The Dutch Foreign Minister said European Union (EU) “(…) sanctions are hurting the Russians like hell (…)” but at the same time the measures “(…) are being evaded on a massive scale.” Hoekstra believes this is in part because the EU has too little capacity to analyze, coordinate, and promote the sanctions. However, arguably, there is also a lack of capacity at the EU Member-State level to enforce sanctions.

Against this background the Dutch Foreign Minister proposed to set up a sanctions headquarters in Brussels, Belgium, i.e., a novel watchdog or body to tackle the circumvention of EU sanctions. Such a body might represent the nearest EU equivalent to the U.S. Office of Foreign Assets Control (OFAC). OFAC both implements and enforces U.S. economic sanctions (issuing regulations, licenses, and directives, as well as enforcing through issuing administrative subpoenas, civil and administrative monetary penalties, and making criminal referrals to the U.S. Department of Justice). In Hoekstra’s words:

“A place where [EU] Member States can pool information and resources on effectiveness and evasion. Where we do much more to fight circumvention by third countries. This new HQ would establish a watch list of sectors and trade flows with a high circumvention risk. Companies will be obliged to include end-use clauses in their contracts, so that their products don’t end up in the Russian war machine. And the EU should bring down the full force of its collective economic strength and criminal justice systems on those who assist in sanctions evasion. By naming, shaming, sanctioning, and prosecuting them.”

The Dutch Foreign Minister’s proposal – which is also set out in a separate non-paper – is reportedly backed by some 10 EU Member States, including Germany, France, Italy, and Spain.

Additionally, on Feb. 23, 2023, the press reported the international Group of Seven (G7) is set to create a new tool to coordinate their enforcement of existing sanctions against the Russian Federation (Russia). The aim of the tool, tentatively called the Enforcement Coordination Mechanism, would be to bolster information-sharing and other enforcement actions.

Background

Like other Members of the G7, the EU adopted many economic and other sanctions throughout 2022 to target Russia’s economy and thwart its ability to continue its aggression against Ukraine. Nevertheless, EU Member States currently have different definitions of what constitutes a breach of EU sanctions and what penalties must be applied in case of a breach. This could lead to different degrees of enforcement and risk circumvention of EU sanctions.

As we have reported previously, on Nov. 28, 2022, the Council of the EU adopted a decision to add the violation of restrictive measures to the list of so-called “EU crimes” set out in the Treaty on the Functioning of the EU, which would uniformly criminalize sanctions violations across EU Member States. This proposal still needs the backing of EU Member States, which have traditionally been cautious about reforms that require amendments to their national criminal laws.

Next steps

The decision on when and how to enforce EU sanctions currently lies with individual EU Member States, who also decide on the introduction of the EU’s restrictive measures by unanimity. As such, the Dutch Foreign Minister’s proposal requires the backing and support of more EU Member States. If adopted, the new proposed body could send cases directly to the European Public Prosecutor’s Office (EPPO), assuming the separate “EU crimes” legislative piece was also adopted.

Notably, the Dutch Foreign Minister’s proposal appears to suggest a stronger targeting of third countries, which are not aligned with the EU’s sanctions against Russia or help in their circumvention (e.g., Turkey, China, etc.).

Whether or not an EU sanctions oversight body is established, the Dutch proposal signals the current appetite for enhanced multilateral coordination on economic sanctions implementation and tougher, more consistent enforcement of economic sanctions violations. The G7’s proposed Enforcement Coordination Mechanism points in the same direction.

©2023 Greenberg Traurig, LLP. All rights reserved.

EDPB on Dark Patterns: Lessons for Marketing Teams

“Dark patterns” are becoming the target of EU data protection authorities, and the new guidelines of the European Data Protection Board (EDPB) on “dark patterns in social media platform interfaces” confirm their focus on such practices. While they are built around examples from social media platforms (real or fictitious), these guidelines contain lessons for all websites and applications. The bad news for marketers: the EDPB doesn’t like it when dry legal texts and interfaces are made catchier or more enticing.

To illustrate, in a section of the guidelines regarding the selection of an account profile photo, the EDPB considers the example of a “help/information” prompt saying “No need to go to the hairdresser’s first. Just pick a photo that says ‘this is me.’” According to the EDPB, such a practice “can impact the final decision made by users who initially decided not to share a picture for their account” and thus makes consent invalid under the General Data Protection Regulation (GDPR). Similarly, the EDPB criticises an extreme example of a cookie banner with a humorous link to a bakery cookies recipe that incidentally says, “we also use cookies”, stating that “users might think they just dismiss a funny message about cookies as a baked snack and not consider the technical meaning of the term ‘cookies.’” The EDPB even suggests that the data minimisation principle, and not security concerns, should ultimately guide an organisation’s choice of which two-factor authentication method to use.

Do these new guidelines reflect privacy paranoia or common sense? The answer should lie somewhere in between, but the whole document (64 pages long) in our view suggests an overly strict approach, one that we hope will move closer to common sense as a result of a newly started public consultation process.

Let us take a closer look at what useful lessons – or warnings – can be drawn from these new guidelines.

What are “dark patterns” and when are they unlawful?

According to the EDPB, dark patterns are “interfaces and user experiences […] that lead users into making unintended, unwilling and potentially harmful decisions regarding the processing of their personal data” (p. 2). They “aim to influence users’ behaviour and can hinder their ability to effectively protect their personal data and make conscious choices.” The risk associated with dark patterns is higher for websites or applications meant for children, as “dark patterns raise additional concerns regarding potential impact on children” (p. 8).

While the EDPB takes a strongly negative view of dark patterns in general, it recognises that dark patterns do not automatically lead to an infringement of the GDPR. The EDPB acknowledges that “[d]ata protection authorities are responsible for sanctioning the use of dark patterns if these breach GDPR requirements” (emphasis ours; p. 2). Nevertheless, the EDPB guidance strongly links the concept of dark patterns with the data protection by design and by default principles of Art. 25 GDPR, suggesting that disregard for those principles could lead to a presumption that the language or a practice in fact creates a “dark pattern” (p. 11).

The EDPB refers here to its Guidelines 4/2019 on Article 25 Data Protection by Design and by Default and in particular to the following key principles:

  • “Autonomy – Data subjects should be granted the highest degree of autonomy possible to determine the use made of their personal data, as well as autonomy over the scope and conditions of that use or processing.
  • Interaction – Data subjects must be able to communicate and exercise their rights in respect of the personal data processed by the controller.
  • Expectation – Processing should correspond with data subjects’ reasonable expectations.
  • Consumer choice – The controllers should not “lock in” their users in an unfair manner. Whenever a service processing personal data is proprietary, it may create a lock-in to the service, which may not be fair, if it impairs the data subjects’ possibility to exercise their right of data portability in accordance with Article 20 GDPR.
  • Power balance – Power balance should be a key objective of the controller-data subject relationship. Power imbalances should be avoided. When this is not possible, they should be recognised and accounted for with suitable countermeasures.
  • No deception – Data processing information and options should be provided in an objective and neutral way, avoiding any deceptive or manipulative language or design.
  • Truthful – the controllers must make available information about how they process personal data, should act as they declare they will and not mislead data subjects.”

Is data minimisation compatible with the use of SMS two-factor authentication?

One of the EDPB’s positions, while grounded in the principle of data minimisation, undercuts a security practice that has grown significantly over the past few years. In effect, the EDPB seems to question the validity under the GDPR of requests for phone numbers for two-factor authentication where e-mail tokens would theoretically be possible:

“30. To observe the principle of data minimisation, [organisations] are required not to ask for additional data such as the phone number, when the data users already provided during the sign-up process are sufficient. For example, to ensure account security, enhanced authentication is possible without the phone number by simply sending a code to users’ email accounts or by several other means.
31. Social network providers should therefore rely on means for security that are easier for users to re-initiate. For example, the [organisation] can send users an authentication number via an additional communication channel, such as a security app, which users previously installed on their mobile phone, but without requiring the users’ mobile phone number. User authentication via email addresses is also less intrusive than via phone number because users could simply create a new email address specifically for the sign-up process and utilise that email address mainly in connection with the Social Network. A phone number, however, is not that easily interchangeable, given that it is highly unlikely that users would buy a new SIM card or conclude a new phone contract only for the reason of authentication.”
(emphasis ours; p. 15)

The EDPB also appears to be highly critical of phone-based verification in the context of registration “because the email address constitutes the regular contact point with users during the registration process” (p. 15).

This position is unfortunate, as it suggests that data minimisation may preclude controllers from even assessing which method of two-factor authentication – in this case, e-mail versus SMS one-time passwords – better suits its requirements, taking into consideration the different security benefits and drawbacks of the two methods. The EDPB’s reasoning could even be used to exclude any form of stronger two-factor authentication, as additional forms inevitably require separate processing (e.g., phone number or third-party account linking for some app-based authentication methods).

For these reasons, organisations should view this aspect of the new EDPB guidelines with a healthy dose of skepticism. It likewise will be important for interested stakeholders to participate in the consultation to explain the security benefits of using phone numbers to keep the “two” in two-factor authentication.
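For readers weighing the EDPB’s suggested alternative, the e-mail one-time-code flow it describes can be sketched in a few lines. This is a minimal illustration only, not a recommended production design; the function names and the five-minute expiry are our own assumptions, and a real deployment would also need rate limiting and secure delivery.

```python
# Illustrative sketch of the e-mail one-time-code flow the EDPB describes as a
# less intrusive alternative to SMS two-factor authentication. Names and the
# expiry window are hypothetical, not drawn from any standard or the guidelines.
import hashlib
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed five-minute validity window


def issue_code() -> tuple[str, str, float]:
    """Generate a 6-digit code, a server-side hash of it, and an expiry time.

    Only the plaintext code would be e-mailed to the address already collected
    at sign-up, so no phone number is processed.
    """
    code = f"{secrets.randbelow(1_000_000):06d}"
    digest = hashlib.sha256(code.encode()).hexdigest()
    return code, digest, time.time() + CODE_TTL_SECONDS


def verify_code(submitted: str, stored_digest: str, expires_at: float) -> bool:
    """Check a submitted code against the stored hash, rejecting expired codes."""
    if time.time() > expires_at:
        return False
    submitted_digest = hashlib.sha256(submitted.encode()).hexdigest()
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(submitted_digest, stored_digest)
```

The design point at issue in the guidelines is visible here: the flow reuses data the user already provided (the e-mail address), which is what the EDPB frames as data minimisation, at the cost of the stronger possession factor a phone-based channel can provide.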

Consent withdrawal: same number of clicks?

Recent decisions by EU regulators (notably two decisions by the French authority, the CNIL) have led to speculation about whether EU rules effectively require website operators to make it possible for data subjects to withdraw consent to all cookies with one single click, just as most websites make it possible to give consent through a single click. The authorities themselves have not stated that this is unequivocally required, although privacy activists notably filed complaints against hundreds of websites, many of them for not including a “reject all” button on their cookie banner.

The EDPB now appears to side with the privacy activists in this respect, stating that “consent cannot be considered valid under the GDPR when consent is obtained through only one mouse-click, swipe or keystroke, but the withdrawal takes more steps, is more difficult to achieve or takes more time” (p. 14).

Operationally, however, it seems impossible to comply with a “one-click withdrawal” standard in absolute terms. Just pulling up settings after registration or after the first visit to a website will always require an extra click, purely to open those settings. We expect this issue to be examined by the courts eventually.
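To illustrate the symmetry the EDPB appears to demand, the following sketch models consent as a store in which granting and withdrawing are equally short, one-step operations. All names are hypothetical; this is an illustration of the principle, not an API from the guidelines.

```javascript
// Minimal sketch of a consent store in which giving and withdrawing
// consent are symmetric one-step operations. All names are hypothetical.
class ConsentStore {
  constructor(purposes) {
    // Data protection by default: every purpose starts as "not consented".
    this.state = Object.fromEntries(purposes.map((p) => [p, false]));
  }
  // One action: consent to everything (the "accept all" button).
  acceptAll() {
    for (const p of Object.keys(this.state)) this.state[p] = true;
  }
  // One action, same cost: withdraw everything (the "reject all" button).
  rejectAll() {
    for (const p of Object.keys(this.state)) this.state[p] = false;
  }
}

const store = new ConsentStore(["analytics", "advertising"]);
store.acceptAll(); // one click in the UI
store.rejectAll(); // withdrawal must not take more steps than that
console.log(store.state); // { analytics: false, advertising: false }
```

The point of the sketch is that `rejectAll` is exactly as cheap as `acceptAll`; whether an extra click to open a settings page already breaks the symmetry is precisely the open question discussed above.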

Is creative wording indicative of a “dark pattern”?

The EDPB’s guidelines contain several examples of wording that is intended to convince the user to take a specific action.

The photo example mentioned in the introduction above is an illustration, but other (likely fictitious) examples include the following:

  • For sharing geolocation data: “Hey, a lone wolf, are you? But sharing and connecting with others help make the world a better place! Share your geolocation! Let the places and people around you inspire you!” (p.17)
  • To prompt a user to provide a self-description: “Tell us about your amazing self! We can’t wait, so come on right now and let us know!” (p. 17)

The EDPB criticises the language used, stating that it is “emotional steering”:

“[S]uch techniques do not cultivate users’ free will to provide their data, since the prescriptive language used can make users feel obliged to provide a self-description because they have already put time into the registration and wish to complete it. When users are in the process of registering to an account, they are less likely to take time to consider the description they give or even if they would like to give one at all. This is particularly the case when the language used delivers a sense of urgency or sounds like an imperative. If users feel this obligation, even when in reality providing the data is not mandatory, this can have an impact on their “free will”” (pp. 17-18).

Similarly, in a section about account deletion and deactivation, the EDPB criticises interfaces that highlight “only the negative, discouraging consequences of deleting their accounts,” e.g., “you’ll lose everything forever,” or “you won’t be able to reactivate your account” (p. 55). The EDPB even criticises interfaces that preselect deactivation or pause options over delete options, considering that “[t]he default selection of the pause option is likely to nudge users to select it instead of deleting their account as initially intended. Therefore, the practice described in this example can be considered as a breach of Article 12 (2) GDPR since it does not, in this case, facilitate the exercise of the right to erasure, and even tries to nudge users away from exercising it” (p. 56). This, combined with the EDPB’s aversion to confirmation requests (see section 5 below), suggests that the EDPB is ignoring the risk that a data subject might opt for deletion without fully recognising the consequences, i.e., loss of access to the deleted data.

The EDPB’s approach suggests that any effort to woo users into giving more data or leaving data with the organisation will be viewed as harmful by data protection authorities. Yet data protection rules are there to prevent abuse and protect data subjects, not to render all marketing techniques illegal.

In this context, the guidelines should in our opinion be viewed as an invitation to re-examine marketing techniques to ensure that they are not too pushy – in the sense that users would in effect truly be pushed into a decision regarding personal data that they would not otherwise have made. Marketing techniques are not per se unlawful under the GDPR but may run afoul of GDPR requirements in situations where data subjects are misled or robbed of their choice.

Other key lessons for marketers and user interface designers

  • Avoid continuous prompting: One of the issues regularly highlighted by the EDPB is “continuous prompting”, i.e., prompts that appear again and again during a user’s experience on a platform. The EDPB suggests that this creates fatigue, leading the user to “give in,” i.e., by “accepting to provide more data or to consent to another processing, as they are wearied from having to express a choice each time they use the platform” (p. 14). Examples given by the EDPB include the SMS two-factor authentication popup mentioned above, as well as “import your contacts” functionality. Outside of social media platforms, the main example for most organisations is their cookie policy (so this position by the EDPB reinforces the need to manage cookie banners properly). In addition, newsletter popups and popups about “how to get our new report for free by filling out this form” are frequent on many digital properties. While popups can be effective ways to get more subscribers or more data, the EDPB guidance suggests that regulators will consider such practices questionable from a data protection perspective.
  • Ensure consistency or a justification for confirmation steps: The EDPB highlights the “longer than necessary” dark pattern in several places in its guidelines (in particular pp. 18, 52, & 57), with illustrations of confirmation pop-ups that appear before a user is allowed to select a more privacy-friendly option (while no such confirmation is requested for more privacy-intrusive options). Such practices are unlawful according to the EDPB. This does not mean that confirmation pop-ups are always unlawful – just that you need a good justification for using them where you do.
  • Have a good reason for preselecting less privacy-friendly options: Because the GDPR requires not only data protection by design but also data protection by default, make sure that you are able to justify an interface in which a more privacy-intrusive option is selected by default – or better yet, don’t make any preselection. The EDPB calls preselection of privacy-intrusive options “deceptive snugness” (“Because of the default effect which nudges individuals to keep a pre-selected option, users are unlikely to change these even if given the possibility” p. 19).
  • Make all privacy settings available in all platforms: If a user is asked to make a choice during registration or upon his/her first visit (e.g., for cookies, newsletters, sharing preferences, etc.), ensure that those settings can all be found easily later on, from a central privacy settings page if possible, and alongside all data protection tools (such as tools for exercising a data subject’s right to access his/her data, to modify data, to delete an account, etc.). Also make sure that all such functionality is available not only on a desktop interface but also for mobile devices and across all applications. The EDPB illustrates this point by criticising the case where an organisation has a messaging app that does not include the same privacy statement and data subject request tools as the main app (p. 27).
  • Avoid vague language such as “Your data might be used to improve our services”: It is common in most privacy statements to state that personal data (e.g., customer feedback) “can” or “may be used” to improve an organisation’s products and services. According to the EDPB, the word “services” is likely to be “too general” to be viewed as “clear,” and it is “unclear how data will be processed for the improvement of services.” The use of the conditional tense in the example (“might”) also “leaves users unsure whether their data will be used for the processing or not” (p. 25). The EDPB’s stance in this respect confirms a position taken by EU regulators in previous guidance on transparency and serves as a reminder to tell data subjects how their data will be used.
  • Ensure linguistic consistency: If your website or app is available in more than one language, ensure that all data protection notices and tools are available in those languages as well and that the language choice made on the main interface is automatically taken into account on the data-related pages (pp. 25-26).
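Several of the lessons above – data protection by default and the EDPB’s “deceptive snugness” criticism in particular – lend themselves to simple automated checks. The sketch below shows a hypothetical lint-style helper that flags optional, privacy-intrusive settings that an interface preselects; the setting names and flags are illustrative assumptions, not terms from the guidelines.

```javascript
// Hypothetical lint-style check for "deceptive snugness": flag any
// optional, privacy-intrusive setting that an interface preselects.
function findPreselectedIntrusiveDefaults(settings) {
  return settings
    .filter((s) => !s.required && s.intrusive && s.defaultOn)
    .map((s) => s.id);
}

// Illustrative settings definition (names are assumptions).
const settings = [
  { id: "essential-cookies", required: true, intrusive: false, defaultOn: true },
  { id: "ad-personalisation", required: false, intrusive: true, defaultOn: true },
  { id: "newsletter", required: false, intrusive: true, defaultOn: false },
];

console.log(findPreselectedIntrusiveDefaults(settings)); // [ 'ad-personalisation' ]
```

A check like this could run in a design review or CI pipeline, so that a preselected intrusive option is at least a conscious, documented choice rather than an accident.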

Best practices according to the EDPB

Finally, the EDPB highlights some other “best practices” throughout its guidelines. We have combined them below for easier review:

  • Structure and ease of access:
    • Shortcuts: Links to information, actions, or settings that can be of practical help to users in managing their data and data protection settings should be provided wherever they are relevant to the information or experience at hand (e.g., links redirecting to the relevant parts of the privacy policy; in the case of a data breach communication to users, a link to reset their password).
    • Data protection directory: For easy navigation through the different sections of the menu, provide users with an easily accessible page from which all data protection-related actions and information are accessible. This page could be reachable from the organisation’s main navigation menu, the user account, the privacy policy, etc.
    • Privacy Policy Overview: At the start/top of the privacy policy, include a collapsible table of contents with headings and sub-headings that shows the different passages the privacy notice contains. Clearly identified sections allow users to quickly identify and jump to the section they are looking for.
    • Sticky navigation: While consulting a page related to data protection, the table of contents could be constantly displayed on the screen allowing users to quickly navigate to relevant content thanks to anchor links.
  • Transparency:
    • Organisation contact information: The organisation’s contact address for addressing data protection requests should be clearly stated in the privacy policy. It should be present in a section where users can expect to find it, such as a section on the identity of the data controller, a rights related section, or a contact section.
    • Reaching the supervisory authority: Stating the specific identity of the EU supervisory authority and including a link to its website or the specific website page for lodging a complaint is another EDPB recommendation. This information should be present in a section where users can expect to find it, such as a rights-related section.
    • Change spotting and comparison: When changes are made to the privacy notice, make previous versions accessible with the date of release and highlight any changes.
  • Terminology & explanations:
    • Coherent wording: Across the website, the same wording and definition is used for the same data protection concepts. The wording used in the privacy policy should match that used on the rest of the platform.
    • Providing definitions: When using unfamiliar or technical words or jargon, providing a definition in plain language will help users understand the information provided to them. The definition can be given directly in the text when users hover over the word and/or be made available in a glossary.
    • Explaining consequences: When users want to activate or deactivate a data protection control, or give or withdraw their consent, inform them in a neutral way of the consequences of such action.
    • Use of examples: In addition to providing mandatory information that clearly and precisely states the purpose of processing, offering specific data processing examples can make the processing more tangible for users.
  • Contrasting Data Protection Elements: Making data protection-related elements or actions visually striking in an interface that is not directly dedicated to the matter helps readability. For example, when posting a public message on the platform, controls for geolocation should be directly available and clearly visible.
  • Data Protection Onboarding: Just after the creation of an account, include data protection points within the onboarding experience for users to discover and set their preferences seamlessly. This can be done by, for example, inviting them to set their data protection preferences after adding their first friend or sharing their first post.
  • Notifications (including data breach notifications): Notifications can be used to raise users’ awareness of aspects, changes, or risks related to personal data processing (e.g., when a data breach occurs). These notifications can be implemented in several ways, such as through inbox messages, pop-in windows, fixed banners at the top of the webpage, etc.
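The “shortcuts” and “data protection directory” best practices above can be approximated by maintaining a single structure from which every data protection notice, action, and contact point is reachable, and checking that no entry is missing. The sketch below is a hypothetical example; all paths and addresses are placeholders, not recommendations from the guidelines.

```javascript
// Hypothetical central "data protection directory": one structure from
// which every data-related notice, action, and contact is reachable.
const privacyDirectory = {
  notices: {
    privacyPolicy: "/privacy",
    previousVersions: "/privacy/archive", // change spotting and comparison
  },
  actions: {
    accessMyData: "/privacy/access",
    deleteAccount: "/privacy/delete",
    withdrawConsent: "/privacy/consent",
  },
  contacts: {
    controller: "privacy@example.com", // placeholder address
    supervisoryAuthority: "https://example-authority.eu/complaints", // placeholder URL
  },
};

// Sanity check: every entry resolves to a non-empty link or address.
const allEntries = Object.values(privacyDirectory).flatMap((g) => Object.values(g));
console.log(allEntries.every((v) => typeof v === "string" && v.length > 0)); // true
```

Keeping the directory in one place also makes it easier to satisfy the consistency points above (same links on desktop and mobile, same wording across interfaces).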

Next steps and international perspectives

These guidelines (available online) are subject to public consultation until 2 May 2022, so it is possible they will be modified as a result of the consultation and, we hope, improved to reflect a more pragmatic view of data protection that balances data subjects’ rights, security, and operational business needs. If you wish to contribute to the public consultation, note that the EDPB publishes feedback it receives (as a result, we have occasionally submitted feedback on behalf of clients wishing to remain anonymous).

Irrespective of the outcome of the public consultation, the guidelines are guaranteed to have an influence on the approach of EU data protection authorities in their investigations. From this perspective, it is better to be forewarned – and to have legal arguments at your disposal if you wish to adopt an approach that deviates from the EDPB’s position.

Moreover, these guidelines come at a time when the United States Federal Trade Commission (FTC) is also concerned with dark patterns, having published an enforcement policy statement on the matter in October 2021. Dark patterns are also being discussed at the Organisation for Economic Co-operation and Development (OECD). International dialogue can be helpful if conversations about desired policy also consider practical solutions that can be implemented by businesses and reflect a desirable user experience for data subjects.

Organisations should consider evaluating their own techniques to encourage users to go one way or another and document the justification for their approach.

© 2022 Keller and Heckman LLP