White House Publishes Steps to Protect Workers from the Risks of AI

Last year the White House weighed in on the use of artificial intelligence (AI) in businesses.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

Now the White House has published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input into the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way that protects workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about the AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist workers and improve job quality.
  • Supportive during transition – Employers should support workers during job transitions related to AI.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”). The Profile identifies twelve risks posed by Generative AI (“GAI”), including several that are novel or expected to be exacerbated by GAI. Some of the risks are new and exotic, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) risks that lower the barriers to offensive cyber capabilities, and (2) risks that expand the overall attack surface through novel attacks that exploit vulnerabilities.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt injections that steal data and remotely execute code on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume, then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As noted in Microsoft’s recent report on the GDPR and GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become for cybersecurity, the Profile should be considered as a tool to mitigate GAI risks. Across pages 11-52, the Profile catalogs roughly four hundred recommended actions for managing GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen,” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AI RMF and now the Profile provide useful early guidance on how to manage GAI risks. The Profile is open for public comment until June 2, 2024.

Continuing Forward: Senate Leaders Release an AI Policy Roadmap

The US Senate’s Bipartisan AI Policy Roadmap is a highly anticipated document expected to shape the future of artificial intelligence (AI) in the United States over the next decade. This comprehensive guide, which complements the AI research, investigations, and hearings conducted by Senate committees during the 118th Congress, identifies areas of consensus that could help policymakers establish the ground rules for AI use and development across various sectors.

From intellectual property reforms and substantial funding for AI research to sector-specific rules and transparent model testing, the roadmap addresses a wide range of AI-related issues. Despite the long-awaited arrival of the AI roadmap, Sen. Chuck Schumer (D-NY), the highest-ranking Democrat in the Senate and key architect of the high-level document, is expected to strongly defer to Senate committees to continue drafting individual bills impacting the future of AI policy in the United States.

The Senate’s bipartisan roadmap is the culmination of a series of nine forums held last year by the same group, during which they gathered diverse perspectives and information on AI technology. Topics of the forums included:

  1. Inaugural Forum
  2. Supporting US Innovation in AI
  3. AI and the Workforce
  4. High Impact Uses of AI
  5. Elections and Democracy
  6. Privacy and Liability
  7. Transparency, Explainability, Intellectual Property, and Copyright
  8. Safeguarding
  9. National Security

The wide range of views and concerns expressed by over 150 experts – including developers, startups, hardware and software companies, civil rights groups, and academia – during these forums helped policymakers develop a thorough and inclusive document that reveals the areas of consensus and disagreement. As the 118th Congress continues, Sen. Schumer is expected to reach out to his counterparts in the US House of Representatives to determine common areas of interest. Those bipartisan and bicameral conversations will ultimately help Congress establish the foundational rules for AI use and development, potentially shaping not only the future of AI in the United States but also influencing global AI policy.

The final text of this guiding document focuses on several high-level categories. Below, we highlight a handful of notable provisions:

Publicity Rights (Name, Image, and Likeness)

The roadmap encourages senators to consider whether there is a need for legislation that would protect against the unauthorized use of one’s name, image, likeness, and voice, as it relates to AI. While state laws have traditionally recognized the right of individuals to control the commercial use of their so-called “publicity rights,” federal recognition of those rights would mark a major shift in intellectual property law and make it easier for musicians, celebrities, politicians, and other prominent public figures to prevent or discourage the unauthorized use of their publicity rights in the context of AI.

Disclosure and Transparency Requirements

Noting that the “black box” nature of some AI systems can make it difficult to assess compliance with existing consumer protection and civil rights laws, the roadmap encourages lawmakers to ensure that regulators are able to access information directly relevant to enforcing those laws and, if necessary, place appropriate transparency and “explainability” requirements on “high risk” uses of AI. The working group does not offer a definition of “high risk” use cases, but suggests that systems implicating constitutional rights, public safety, or anti-discrimination laws could be forced to disclose information about their training data and factors that influence automated or algorithmic decision making. The roadmap also encourages the development of best practices for when AI users should disclose that their products utilize AI, and whether developers should be required to disclose information to the public about the data sets used to train their AI models.

The document also pushes senators to develop sector-specific rules for AI use in areas such as housing, health care, education, financial services, news and journalism, and content creation.

Increased Funding for AI Innovation

On the heels of the findings included in the National Security Commission on Artificial Intelligence’s (NSCAI) final report, the roadmap encourages Senate appropriators to provide at least $32 billion for AI research funding at federal agencies, including the US Department of Energy, the National Science Foundation, and the National Institute of Standards and Technology. This request for a substantial investment underscores the government’s commitment to advancing AI technology and seeks to position federal agencies as “AI ready.” The roadmap’s innovation agenda includes funding the CHIPS and Science Act, support for semiconductor research and development to create high-end microchips, modernizing the federal government’s information technology infrastructure, and developing in-house supercomputing and AI capacity in the US Department of Defense.

Investments in National Defense

Many members of Congress believe that creating a national framework for AI will also help the United States compete on the global stage with China. Senators who see this as the 21st century space race believe investments in the defense and intelligence community’s AI capabilities are necessary to push back against China’s head start in AI development and deployment. The working group’s national security priorities include leveraging AI’s potential to build a digital armed services workforce, enhancing and accelerating the security clearance application process, blocking large language models from leaking intelligence or reconstructing classified information, and pushing back on perceived “censorship, repression, and surveillance” by Russia and China.

Addressing AI in Political Ads

Looking ahead to the 2024 election cycle, the roadmap’s authors are already paying attention to the threats posed by AI-generated election ads. The working group encourages digital content providers to watermark any political ads made with AI and include disclaimers in any AI-generated election content. These guardrails also align with the provisions of several bipartisan election-related AI bills that passed out of the Senate Rules Committee the same day as the roadmap’s release.

Privacy and Legal Liability for AI Usage

The AI Working Group recommends the passage of a federal data privacy law to protect personal information. The group notes that the legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and the role of data brokers. Support for these principles is reflected in numerous state privacy laws enacted since 2018 and in bipartisan, bicameral draft legislation (the American Privacy Rights Act) supported by Rep. Cathy McMorris Rodgers (R-WA) and Sen. Maria Cantwell (D-WA).

As we await additional legislative activity later this year, it is clear that these guidelines will have far-reaching implications for the AI industry and society at large.

CFTC Releases Artificial Intelligence Report

On 2 May 2024, the Commodity Futures Trading Commission’s (CFTC) Technology Advisory Committee (Committee) released a report entitled Responsible AI in Financial Markets: Opportunities, Risks & Recommendations. The report discusses the impact and future implications of artificial intelligence (AI) on financial markets and further illustrates the CFTC’s desire to oversee the AI space.

In the accompanying press release, Commissioner Goldsmith Romero highlighted the significance of the Committee’s recommendations, acknowledging decades of AI use in financial markets and proposing that new challenges will arise with the development of generative AI. Importantly, the report proposes that the CFTC develop a sector-specific AI Risk Management Framework addressing AI-associated risks.

The Committee opined that, without proper industry engagement and regulatory guardrails, the use of AI could “erode public trust in financial markets.” The report outlines potential risks associated with AI in financial markets such as the lack of transparency in AI decision processes, data handling errors, and the potential reinforcement of existing biases.

The report recommends that the CFTC host public roundtable discussions to foster a deeper understanding of AI’s role in financial markets and develop an AI Risk Management Framework for CFTC-registered entities aligned with the National Institute of Standards and Technology’s AI Risk Management Framework. This approach aims to enhance the transparency and reliability of AI systems in financial settings.

The report also calls for continued collaboration across federal agencies and stresses the importance of developing internal AI expertise within the CFTC. It advocates for responsible and transparent AI usage that adheres to ethical standards to ensure the stability and integrity of financial markets.

A Paradigm Shift in Legal Practice: Enhancing Civil Litigation with Artificial Intelligence

A paradigm shift in legal practice is occurring now. The integration of artificial intelligence (AI) has emerged as a transformative force, particularly in civil litigation. No longer is AI the stuff of science fiction – it is a real, tangible force reshaping the way the world functions and, with it, the way lawyers practice. From complex document review processes to predicting case outcomes, AI technologies are revolutionizing the way legal professionals approach and navigate litigation, redefining traditional legal practice.

Streamlining Document Discovery and Review

One of the most time-consuming tasks in civil litigation is discovery document analysis and review. Traditionally, legal teams spend countless hours sifting through documents to identify relevant evidence, often reviewing the same material multiple times, depending on the task at hand. However, AI-powered document review platforms can now significantly expedite this process. By leveraging natural language processing (NLP) and machine learning algorithms, these platforms can quickly analyze and categorize documents based on relevance, reducing the time and resources required for document review while ensuring thoroughness and accuracy. AI in the civil discovery process offers a multitude of benefits for the practitioner and cost-saving advantages for the client, such as:

• Efficiency: AI-powered document review significantly reduces the time required for discovery review, allowing legal teams to focus their efforts on higher-value tasks and strategic analysis;

• Accuracy: By automating the initial document review process, AI helps minimize potential human error and ensures greater consistency and accuracy in identifying relevant documents and evidence;

• Cost-effectiveness: AI-driven platforms offer a cost-effective alternative to traditional manual review methods, helping to lower overall litigation costs for clients;

• Scalability: AI technology can easily scale to handle large volumes of data, making it ideal for complex litigation cases with extensive document discovery requirements;

• Insight Generation: AI algorithms can uncover hidden patterns, trends, and relationships within closed databases that might not be apparent through manual review, providing valuable insights for strategy and decision-making.
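To make the relevance-categorization idea above concrete, here is a minimal, purely illustrative sketch – with invented example documents and seed terms, not any review platform’s actual method. Commercial e-discovery tools use far richer NLP pipelines, but the core mechanic of scoring documents against a description of what “relevant” looks like can be sketched as bag-of-words cosine similarity:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letters; real platforms use much richer NLP.
    return ''.join(c if c.isalpha() else ' ' for c in text.lower()).split()

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_by_relevance(documents, seed_relevant_text):
    # Score each document against a seed description of relevance,
    # then sort highest-first so reviewers see likely-relevant material sooner.
    seed = Counter(tokenize(seed_relevant_text))
    scored = [(cosine(Counter(tokenize(d)), seed), d) for d in documents]
    return sorted(scored, key=lambda p: p[0], reverse=True)

docs = [
    "Invoice for office supplies, March 2021.",
    "Email discussing the breach of the supply contract and late delivery penalties.",
    "Holiday party planning thread.",
]
ranked = rank_by_relevance(docs, "contract breach delivery dispute damages")
for score, doc in ranked:
    print(f"{score:.2f}  {doc}")
```

Ranking highest-score-first is what lets a review team prioritize likely-relevant material instead of reading in arbitrary order.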

Predictive Analytics for Case Strategy

Predicting case outcomes is inherently challenging, often relying on legal expertise, the lawyer’s jurisdictional experience, and analysis of the claimed damages. However, AI-driven predictive analytics tools are changing the game by providing data-driven insights into case strategy. By analyzing past case law, court rulings, and other relevant data points, these tools can model the likely outcome of a given case, allowing legal teams and clients to make more informed decisions regarding jurisdiction-specific settlement negotiations, trial strategy, and resource allocation.
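As a hedged illustration of the kind of modeling involved – with entirely invented features and toy data, not any real predictive-analytics product – fitting a simple logistic regression over past-case features might be sketched as:

```python
import math

# Toy features per past case: [claimed damages (normalized), prior win rate,
# strong documentary evidence]. Label 1 = favorable outcome. All invented.
past_cases = [
    ([0.9, 0.8, 1.0], 1), ([0.2, 0.3, 0.0], 0),
    ([0.7, 0.9, 1.0], 1), ([0.4, 0.2, 0.0], 0),
    ([0.8, 0.7, 1.0], 1), ([0.3, 0.4, 0.0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    # Plain gradient-descent logistic regression; weights start at zero.
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(past_cases)
new_case = [0.85, 0.75, 1.0]
p_win = sigmoid(sum(wi * xi for wi, xi in zip(w, new_case)) + b)
print(f"Estimated probability of a favorable outcome: {p_win:.2f}")
```

Real tools train on thousands of docketed outcomes and far richer features; the point here is only that “forecasting” reduces to scoring a new matter against patterns learned from past ones.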

Enhanced Legal Research and Due Diligence

AI-powered legal research tools have become indispensable for legal professionals involved in civil litigation. These tools utilize advanced algorithms to sift through vast closed-system repositories of case law, statutes, regulations, and legal precedent, delivering relevant information in a fraction of the time manual research methods would take. Additionally, AI can assist in due diligence processes by automatically flagging potential legal risks and identifying critical issues within contracts and other legal documents.

Improving Case Management and Workflow Efficiency

Managing multiple cases simultaneously can be daunting for legal practitioners and can lead to inefficiencies and oversights. AI-driven case management systems offer a solution by providing centralized access to case-related information, deadlines, and communications. These systems can automate routine tasks, such as document filing and client communication scheduling, freeing up valuable time for attorneys to focus on substantive legal tasks and proactive case movement.

Ethical Considerations and Challenges

While the benefits of AI in civil litigation are undeniable, they also raise important ethical considerations and challenges. Issues such as data privacy, algorithmic bias, and the ethical use of AI in decision-making processes must be carefully addressed to ensure fairness and transparency in the legal system. Additionally, there is a growing need for ongoing education and training to equip legal professionals with the necessary skills to effectively leverage AI tools while maintaining ethical standards and preserving the integrity of the legal profession.

Takeaway

The integration of AI technologies in civil litigation represents a paradigm shift in legal practice, offering unprecedented opportunities to streamline processes, enhance decision-making and improve client satisfaction. By harnessing the power of AI-driven solutions, legal professionals can navigate complex civil disputes more efficiently and effectively, ultimately delivering better outcomes for clients and advancing the pursuit of just outcomes in our rapidly evolving legal landscape.

For All Patent/Trademark Practitioners: USPTO Provides Guidance for Use of AI in Preparing USPTO Submissions

The USPTO conveys a clear message for patent and trademark attorneys, patent agents, and inventors: the use of artificial intelligence (AI), including generative AI, in patent and trademark activities and filings before the USPTO entails risks to be mitigated, and you must disclose the use of AI in the creation of an invention or in practice before the USPTO if that use is material to patentability.

The USPTO’s new guidance, issued on April 11, 2024, is a counterpart to its guidance issued on February 13, 2024, which addresses the AI-assisted invention creation process. In the April 11 guidance, USPTO officials communicate the risks of using AI in preparing USPTO submissions, including patent applications, affidavits, petitions, office action responses, information disclosure statements, Patent Trial and Appeal Board (PTAB) submissions, and trademark / Trademark Trial and Appeal Board (TTAB) submissions. The common theme between the February 13 and April 11 guidance is the duty to disclose to the USPTO all information known to be material to patentability.

Building on the USPTO’s existing rules and policies, the USPTO’s April 11 guidance discusses the following:

(A) The duty of candor and good faith – each individual associated with a proceeding at the USPTO owes a duty to disclose to the USPTO all information known to be material to patentability, including information on the use of AI by inventors, parties, and practitioners.

(B) Signature requirement and corresponding certifications – using AI to draft documents without verifying the information risks “critical misstatements and omissions.” Any submission to the USPTO that AI helped prepare must be carefully reviewed by practitioners, who are ultimately responsible for ensuring that it is true and submitted for a proper purpose.

(C) Confidentiality of information – sensitive and confidential client information risks being compromised if shared with third-party AI systems, some of which may be located outside of the United States.

(D) Foreign filing licenses and export regulations – a foreign filing license from the USPTO does not authorize the exporting of subject matter abroad for the preparation of patent applications to be filed in the United States. Practitioners must ensure data is not improperly exported when using AI.

(E) USPTO electronic systems’ policies – practitioners using AI must be mindful of the terms and conditions for the USPTO’s electronic systems, which prohibit unauthorized access, actions, use, modification, or disclosure of data contained in, or in transit to and from, those systems.

(F) The USPTO Rules of Professional Conduct – when using AI tools, practitioners must ensure that they are not violating the duties owed to clients. For example, practitioners must have the requisite legal, scientific, and technical knowledge to reasonably represent the client, without inappropriate reliance on AI. Practitioners also have a duty to reasonably consult with the client, including about the use of AI in accomplishing the client’s objectives.

The USPTO’s April 11 guidance overall shares principles with the ethics guidelines that multiple state bars have issued related to generative AI use in practice of law, and addresses them in the patent- and trademark-specific context. Importantly, in addition to ethics considerations, the USPTO guidance reminds us that knowing or willful withholding of information about AI use under (A), overlooking AI’s misstatements leading to false certification under (B), or AI-mediated improper or unauthorized exporting of data or unauthorized access to data under (D) and (E) may lead to criminal or civil liability under federal law or penalties or sanctions by the USPTO.

On the positive side, the USPTO guidance describes the possible favorable aspects of AI “to expand access to our innovation ecosystem and lower costs for parties and practitioners…. The USPTO continues to be actively involved in the development of domestic and international measures to address AI considerations at the intersection of innovation, creativity, and intellectual property.” We expect more USPTO AI guidance to be forthcoming, so please do watch for continued updates in this area.

Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower compared to some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youths with MDE climbs nationally each year, including last year when it rose by almost 2% to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health services and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to place a behavioral health expert in every school building. Additionally, the DC government’s website lists the mental health services programs available to residents.

In conjunction with the mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are employing artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced to use AI models to enhance mental health surveillance in school settings by deploying chatbots that interact with students. The chatbot conversation logs serve as the source of raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to protect youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
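As a purely illustrative sketch of that supervised-learning workflow – with invented, non-clinical toy messages standing in for expert-validated survey data – a simple naive Bayes text classifier trained on labeled examples and validated on a held-out set might look like:

```python
import math
from collections import Counter

def tokens(text):
    return text.lower().split()

def train_nb(data):
    # Per-class token counts, class frequencies, and the shared vocabulary.
    counts, priors = {}, Counter(lbl for _, lbl in data)
    for text, lbl in data:
        counts.setdefault(lbl, Counter()).update(tokens(text))
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def predict(model, text):
    # Pick the class with the highest log-probability, using add-one smoothing.
    counts, priors, vocab = model
    best, best_lp = None, -math.inf
    for lbl, prior in priors.items():
        lp = math.log(prior)
        total = sum(counts[lbl].values())
        for w in tokens(text):
            lp += math.log((counts[lbl][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

# Invented toy messages; a real system would train on expert-validated data.
train_set = [
    ("i feel hopeless and alone", "flag"),
    ("everything feels pointless lately", "flag"),
    ("practice went great today", "ok"),
    ("excited about the science fair", "ok"),
]
held_out = [("i feel so alone lately", "flag"), ("great practice this weekend", "ok")]

model = train_nb(train_set)
accuracy = sum(predict(model, t) == y for t, y in held_out) / len(held_out)
```

The held-out evaluation mirrors the paper’s idea of validating the model against a test dataset built from expert-reviewed survey results before any classification is acted upon.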

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions designed by monitoring user interaction.

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all stakeholders before deploying such AI models. The use of AI models is always a topic of concern for policymakers because of the privacy concerns that come with it. To safely deploy AI models, there needs to be privacy protection policies in place to safeguard sensitive information from being improperly used. There is no comprehensive legislation that addresses those concerns either nationally or locally.
Second, developers also need to consider and factor in any bias ingrained in their algorithms through data testing and regular monitoring of data output before it reaches the user. AI has the ability to detect early signs of mental health challenges. However, without proper safeguards in place, we risk failing to protect students from being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Effective policy considerations should encourage the use of AI models that will provide interpretable results, and policymakers need to understand how these decisions are made. Policies should outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether it be through policy or the companies’ internal safeguards. This standard of care should outline guidelines that address situations in which AI data output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP


Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet.  Here’s a simplified breakdown:

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.
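For teams inventorying their AI systems, the four tiers above can be modeled as a simple lookup. This is a minimal sketch for illustration only: the `RiskTier` names, obligation summaries, and `triage` helper are hypothetical and are not language drawn from the AI Act itself.

```python
from enum import Enum

# Hypothetical risk tiers mirroring the AI Act's four categories.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright in the EU
    HIGH = "high"                  # strict compliance measures required
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative one-line summaries of the obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited -- do not deploy in the EU.",
    RiskTier.HIGH: "Conformity assessment, documentation, monitoring.",
    RiskTier.LIMITED: "Disclose to users that they are interacting with AI.",
    RiskTier.MINIMAL: "No AI Act-specific obligations.",
}

def triage(system_description: str, tier: RiskTier) -> str:
    """Return a one-line compliance summary for an inventoried AI system."""
    return f"{system_description}: {tier.value} risk -- {OBLIGATIONS[tier]}"

print(triage("customer-support chatbot", RiskTier.LIMITED))
```

A real inventory would, of course, record the legal analysis behind each classification rather than a single label.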

II. Key Compliance Timelines (as of March 2024):

6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high-risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.
36 months after Entry into Force
  • Compliance obligations will take effect for high-risk AI systems covered by Annex I of the AI Act – those embedded in products already subject to EU product-safety legislation, such as medical devices and machinery.
By the end of 2030
  • AI systems that are components of the large-scale EU IT systems listed in Annex X and that were placed on the market before the Act’s high-risk deadlines must be brought into compliance.
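Because every deadline above is expressed relative to Entry into Force, the milestone dates can be computed mechanically once that date is known. The sketch below is illustrative: the entry-into-force date and the milestone labels are assumptions, and `add_months` is a hypothetical helper, not part of any official tooling.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date, clamping the day to the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical Entry into Force date, for illustration only.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on unacceptable-risk systems": add_months(entry_into_force, 6),
    "General-purpose AI model obligations": add_months(entry_into_force, 12),
    "Post-market monitoring implementing act": add_months(entry_into_force, 18),
    "Annex III high-risk obligations": add_months(entry_into_force, 24),
}

for milestone, deadline in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{deadline.isoformat()}  {milestone}")
```

Mapping each internal AI system to the earliest milestone that applies to it is a simple way to turn the Act's staggered schedule into a concrete compliance calendar.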

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information consistent with societal and governmental values and norms, however, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise-level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

  • How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?
  • Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the existing policies you have in place addressing data privacy and cyber risk issues to address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to what risks will be borne by you, and what risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, N.Y. Times (Dec. 12, 2023); Sara Merken, New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief, Reuters (June 26, 2023).

To ensure litigators strike the appropriate balance between using technological assistance to produce legal work product and adhering to the ethical duties and professional responsibility mandated by the legal profession, below are some immediate considerations any complex commercial litigator should abide by when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform could potentially be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system to ensure the safety of their client’s privileged, proprietary and confidential information. A failure to do so may not only result in your client’s sensitive information being exposed to an insecure, and potentially harmful, online network, but it can also result in a violation of the duty to make reasonable efforts to prevent the disclosure of or unauthorized access to your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to ensure that he or she adheres to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen based on statements that were included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is incumbent not only on those preparing the briefs to ensure that any information included in a submission created with the assistance of an AI platform is accurate, but also on the lawyers responsible for oversight of a litigation team to be diligent in understanding when and to what extent AI is being used to aid the work of their subordinates. Similar to confidentiality considerations, many courts’ rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To abide by those rules, litigation team leaders should make it a point to discuss the appropriate use of AI with their teams at the outset of any matter, and to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing one should understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. As a result of the rise in the use of AI in litigation, many Courts across the country have responded by developing standing orders, local rules, or related guidelines concerning the appropriate use of AI. See, e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (E.D. Pa. June 6, 2023); Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (N.J. Jan. 25, 2024). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treat the issue of using AI to assist litigation strategy and the development of work product.