NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”). The Profile identifies twelve risks posed by Generative AI (“GAI”), including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) the potential for GAI to discover vulnerabilities or lower the barriers to entry for offensive capabilities, and (2) the potential for GAI to expand the overall attack surface, including through novel attacks that exploit vulnerabilities.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume, then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As noted in Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, organizations should consider the Profile to mitigate GAI risks. Across pages 11-52, the Profile sets out roughly four hundred suggested actions for managing GAI risks. Grouped together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen,” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AI RMF and now the Profile provide useful and early guidance on how to manage GAI risks. The Profile is open for public comment until June 2, 2024.

Continuing Forward: Senate Leaders Release an AI Policy Roadmap

The US Senate’s Bipartisan AI Policy Roadmap is a highly anticipated document expected to shape the future of artificial intelligence (AI) in the United States over the next decade. This comprehensive guide, which complements the AI research, investigations, and hearings conducted by Senate committees during the 118th Congress, identifies areas of consensus that could help policymakers establish the ground rules for AI use and development across various sectors.

From intellectual property reforms and substantial funding for AI research to sector-specific rules and transparent model testing, the roadmap addresses a wide range of AI-related issues. Despite the long-awaited arrival of the AI roadmap, Sen. Chuck Schumer (D-NY), the highest-ranking Democrat in the Senate and key architect of the high-level document, is expected to strongly defer to Senate committees to continue drafting individual bills impacting the future of AI policy in the United States.

The Senate’s bipartisan roadmap is the culmination of a series of nine forums held last year by the Senate’s bipartisan AI working group, during which senators gathered diverse perspectives and information on AI technology. Topics of the forums included:

  1. Inaugural Forum
  2. Supporting US Innovation in AI
  3. AI and the Workforce
  4. High Impact Uses of AI
  5. Elections and Democracy
  6. Privacy and Liability
  7. Transparency, Explainability, Intellectual Property, and Copyright
  8. Safeguarding
  9. National Security

The wide range of views and concerns expressed by over 150 experts including developers, startups, hardware and software companies, civil rights groups, and academia during these forums helped policymakers develop a thorough and inclusive document that reveals the areas of consensus and disagreement. As the 118th Congress continues, it’s expected that Sen. Schumer will reach out to his counterparts in the US House of Representatives to determine the common areas of interest. Those bipartisan and bicameral conversations will ultimately help Congress establish the foundational rules for AI use and development, potentially shaping not only the future of AI in the United States but also influencing global AI policy.

The final text of this guiding document focuses on several high-level categories. Below, we highlight a handful of notable provisions:

Publicity Rights (Name, Image, and Likeness)

The roadmap encourages senators to consider whether there is a need for legislation that would protect against the unauthorized use of one’s name, image, likeness, and voice, as it relates to AI. While state laws have traditionally recognized the right of individuals to control the commercial use of their so-called “publicity rights,” federal recognition of those rights would mark a major shift in intellectual property law and make it easier for musicians, celebrities, politicians, and other prominent public figures to prevent or discourage the unauthorized use of their publicity rights in the context of AI.

Disclosure and Transparency Requirements

Noting that the “black box” nature of some AI systems can make it difficult to assess compliance with existing consumer protection and civil rights laws, the roadmap encourages lawmakers to ensure that regulators are able to access information directly relevant to enforcing those laws and, if necessary, place appropriate transparency and “explainability” requirements on “high risk” uses of AI. The working group does not offer a definition of “high risk” use cases, but suggests that systems implicating constitutional rights, public safety, or anti-discrimination laws could be forced to disclose information about their training data and factors that influence automated or algorithmic decision making. The roadmap also encourages the development of best practices for when AI users should disclose that their products utilize AI, and whether developers should be required to disclose information to the public about the data sets used to train their AI models.

The document also pushes senators to develop sector-specific rules for AI use in areas such as housing, health care, education, financial services, news and journalism, and content creation.

Increased Funding for AI Innovation

On the heels of the findings included in the National Security Commission on Artificial Intelligence’s (NSCAI) final report, the roadmap encourages Senate appropriators to provide at least $32 billion for AI research funding at federal agencies, including the US Department of Energy, the National Science Foundation, and the National Institute of Standards and Technology. This request for a substantial investment underscores the government’s commitment to advancing AI technology and seeks to position federal agencies as “AI ready.” The roadmap’s innovation agenda includes funding the CHIPS and Science Act, support for semiconductor research and development to create high-end microchips, modernizing the federal government’s information technology infrastructure, and developing in-house supercomputing and AI capacity in the US Department of Defense.

Investments in National Defense

Many members of Congress believe that creating a national framework for AI will also help the United States compete on the global stage with China. Senators who see this as the 21st century space race believe investments in the defense and intelligence community’s AI capabilities are necessary to push back against China’s head start in AI development and deployment. The working group’s national security priorities include leveraging AI’s potential to build a digital armed services workforce, enhancing and accelerating the security clearance application process, blocking large language models from leaking intelligence or reconstructing classified information, and pushing back on perceived “censorship, repression, and surveillance” by Russia and China.

Addressing AI in Political Ads

Looking ahead to the 2024 election cycle, the roadmap’s authors are already paying attention to the threats posed by AI-generated election ads. The working group encourages digital content providers to watermark any political ads made with AI and include disclaimers in any AI-generated election content. These guardrails also align with the provisions of several bipartisan election-related AI bills that passed out of the Senate Rules Committee the same day of the roadmap’s release.

Privacy and Legal Liability for AI Usage

The AI Working Group recommends the passage of a federal data privacy law to protect personal information. The AI Working Group notes that the legislation should address issues related to data minimization, data security, consumer data rights, consent and disclosure, and the role of data brokers. Support for these principles is reflected in numerous state privacy laws enacted since 2018, and in bipartisan, bicameral draft legislation (the American Privacy Rights Act) supported by Rep. Cathy McMorris Rodgers (R-WA) and Sen. Maria Cantwell (D-WA).

As we await additional legislative activity later this year, it is clear that these guidelines will have far-reaching implications for the AI industry and society at large.

CFTC Releases Artificial Intelligence Report

On 2 May 2024, the Commodity Futures Trading Commission’s (CFTC) Technology Advisory Committee (Committee) released a report entitled Responsible AI in Financial Markets: Opportunities, Risks & Recommendations. The report discusses the impact and future implications of artificial intelligence (AI) on financial markets and further illustrates the CFTC’s desire to oversee the AI space.

In the accompanying press release, Commissioner Goldsmith Romero highlighted the significance of the Committee’s recommendations, acknowledging decades of AI use in financial markets and proposing that new challenges will arise with the development of generative AI. Importantly, the report proposes that the CFTC develop a sector-specific AI Risk Management Framework addressing AI-associated risks.

The Committee opined that, without proper industry engagement and regulatory guardrails, the use of AI could “erode public trust in financial markets.” The report outlines potential risks associated with AI in financial markets such as the lack of transparency in AI decision processes, data handling errors, and the potential reinforcement of existing biases.

The report recommends that the CFTC host public roundtable discussions to foster a deeper understanding of AI’s role in financial markets and develop an AI Risk Management Framework for CFTC-registered entities aligned with the National Institute of Standards and Technology’s AI Risk Management Framework. This approach aims to enhance the transparency and reliability of AI systems in financial settings.

The report also calls for continued collaboration across federal agencies and stresses the importance of developing internal AI expertise within the CFTC. It advocates for responsible and transparent AI usage that adheres to ethical standards to ensure the stability and integrity of financial markets.

A Paradigm Shift in Legal Practice: Enhancing Civil Litigation with Artificial Intelligence

A paradigm shift in legal practice is occurring now. The integration of artificial intelligence (AI) has emerged as a transformative force, particularly in civil litigation. No longer is AI the stuff of science fiction – it’s a real, tangible force that is reshaping the manner in which the world functions and, along with it, the manner in which lawyers practice. From complex document review processes to predicting case outcomes, AI technologies are revolutionizing the way legal professionals approach and navigate litigation and redefining traditional legal practice.

Streamlining Document Discovery and Review

One of the most time-consuming tasks in civil litigation is discovery document analysis and review. Traditionally, legal teams spend countless hours sifting through documents to identify relevant evidence, often reviewing the same material multiple times, depending on the task at hand. However, AI-powered document review platforms can now significantly expedite this process. By leveraging natural language processing (NLP) and machine learning algorithms, these platforms can quickly analyze and categorize documents based on relevance, reducing the time and resources required for document review while ensuring thoroughness and accuracy (a simplified sketch of this kind of relevance scoring appears after the list below). AI in the civil discovery process offers a multitude of benefits for the practitioner and cost-saving advantages for the client, such as:

• Efficiency: AI-powered document review significantly reduces the time required for discovery review, allowing legal teams to focus their efforts on higher-value tasks and strategic analysis;

• Accuracy: By automating the initial document review process, AI helps minimize potential human error and ensures greater consistency and accuracy in identifying relevant documents and evidence;

• Cost-effectiveness: AI-driven platforms offer a cost-effective alternative to traditional manual review methods, helping to lower overall litigation costs for clients;

• Scalability: AI technology can easily scale to handle large volumes of data, making it ideal for complex litigation cases with extensive document discovery requirements;

• Insight Generation: AI algorithms can uncover hidden patterns, trends, and relationships within closed databases that might not be apparent through manual review, providing valuable insight for strategy and decision-making.
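
To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of relevance scoring described above, using generic open-source tooling (scikit-learn) with hypothetical documents and labels; commercial review platforms rely on far richer models, larger seed sets, and formal validation protocols.

```python
# Illustrative sketch only: a tiny relevance classifier of the kind used in
# technology-assisted review. Documents, labels, and scores are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical seed set: documents already coded as relevant (1) or not (0).
seed_docs = [
    "Email discussing the disputed supply agreement and delivery delays",
    "Quarterly newsletter about the company picnic",
    "Memo analyzing breach-of-contract exposure under the supply agreement",
    "Automated out-of-office reply",
]
seed_labels = [1, 0, 1, 0]

# TF-IDF features feeding a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the unreviewed collection and surface likely-relevant documents first.
new_docs = [
    "Draft amendment to the supply agreement pricing schedule",
    "Cafeteria menu for the week of June 3",
]
relevance_scores = model.predict_proba(new_docs)[:, 1]
for doc, score in sorted(zip(new_docs, relevance_scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {doc}")
```

In practice, reviewers iteratively code the highest-scoring documents and retrain the model, the loop at the heart of technology-assisted review workflows.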

Predictive Analytics for Case Strategy

Predicting case outcomes is inherently challenging, often relying on legal expertise, the lawyer’s jurisdictional experience, and analysis of the claimed damages. However, AI-driven predictive analytics tools are changing the game by providing data-driven insights into case strategies. By analyzing past case law, court rulings, and other relevant data points, these tools can model the likely outcome of a given case, allowing legal teams and clients to make more informed decisions regarding jurisdiction-specific settlement negotiations, trial strategy, and resource allocation.
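
As a simplified illustration of the underlying approach, the sketch below computes historical grant rates from a hypothetical dataset of past motions; actual predictive analytics products layer case-specific features and trained models on top of this kind of baseline, and the column names and data here are assumptions rather than any vendor’s method.

```python
# Illustrative sketch only: a crude baseline for "how often does this kind of
# motion succeed in this court?" built from hypothetical historical data.
import pandas as pd

# Hypothetical outcomes of past motions (1 = granted, 0 = denied).
history = pd.DataFrame({
    "jurisdiction": ["SDNY", "SDNY", "EDPA", "SDNY", "EDPA", "EDPA"],
    "motion_type":  ["dismiss", "dismiss", "dismiss",
                     "summary_judgment", "dismiss", "summary_judgment"],
    "granted":      [1, 0, 1, 0, 1, 1],
})

# Grant rates by jurisdiction and motion type serve as a rough prior.
grant_rates = (
    history.groupby(["jurisdiction", "motion_type"])["granted"]
    .agg(["mean", "count"])
    .rename(columns={"mean": "grant_rate", "count": "n_cases"})
)
print(grant_rates)

# A real tool would refine this baseline with case-specific features
# (claims, judge, damages) using a trained model rather than raw averages.
```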

Enhanced Legal Research and Due Diligence

AI-powered legal research platforms have become powerful tools for legal professionals involved in civil litigation. These tools utilize advanced algorithms to sift through vast, closed-system repositories of case law, statutes, regulations, and legal precedent, delivering relevant information in a fraction of the time it would take through manual research methods. Additionally, AI can assist in due diligence processes by automatically flagging potential legal risks and identifying critical issues within contracts and other legal documents.

Improving Case Management and Workflow Efficiency

Managing multiple cases simultaneously can be daunting for legal practitioners and can lead to inefficiencies and oversights. AI-driven case management systems offer a solution by providing centralized access to case-related information, deadlines, and communications. These systems can automate routine tasks, such as document filing and client communication scheduling, freeing up valuable time for attorneys to focus on substantive legal tasks and proactive case movement.

Ethical Considerations and Challenges

While the benefits of AI in civil litigation are undeniable, they also raise important ethical considerations and challenges. Issues such as data privacy, algorithmic bias, and the ethical use of AI in decision-making processes must be carefully addressed to ensure fairness and transparency in the legal system. Additionally, there is a growing need for ongoing education and training to equip legal professionals with the necessary skills to effectively leverage AI tools while maintaining ethical standards and preserving the integrity of the legal profession.

Takeaway

The integration of AI technologies in civil litigation represents a paradigm shift in legal practice, offering unprecedented opportunities to streamline processes, enhance decision-making and improve client satisfaction. By harnessing the power of AI-driven solutions, legal professionals can navigate complex civil disputes more efficiently and effectively, ultimately delivering better outcomes for clients and advancing the pursuit of just outcomes in our rapidly evolving legal landscape.

Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower compared to some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youths with MDE climbs nationally each year, including last year when it rose by almost 2% to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health services and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to provide a behavioral health expert in every school building. Additionally, the DC government’s website maintains a list of available mental health services programs.

In conjunction with the mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are using artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced for using AI models to enhance mental health surveillance in school settings by implementing chat boxes that interact with students. The chat box conversation logs serve as the source of raw data for the machine learning models. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to protect youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
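
For readers curious about the mechanics, the following is a minimal, hypothetical sketch of the validation step described above: a supervised text classifier is trained on labeled conversation excerpts and then evaluated against an expert-labeled test set. All data, labels, and model choices are illustrative assumptions, not a description of any vendor’s system.

```python
# Illustrative sketch only: validating a supervised classifier against an
# expert-labeled test set. All text, labels, and metrics are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.pipeline import make_pipeline

# Hypothetical chat-log excerpts labeled by clinicians (1 = signs of distress).
train_texts = ["i can't sleep and nothing feels worth it",
               "excited about the science fair next week",
               "i don't want to be here anymore",
               "can you help me with my math homework"]
train_labels = [1, 0, 1, 0]

# Hypothetical held-out test set built from expert-evaluated survey responses.
test_texts = ["everything is fine, just busy with soccer",
              "i feel hopeless most days"]
test_labels = [0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)
preds = model.predict(test_texts)

# For risk detection, recall (missed at-risk students) matters at least as
# much as precision (false alarms).
print("precision:", precision_score(test_labels, preds, zero_division=0))
print("recall:", recall_score(test_labels, preds, zero_division=0))
```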

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications like Breathhh are just one example of personalized interventions designed by monitoring user interactions.

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all stakeholders before deploying such AI models. The use of AI models is always a topic of concern for policymakers because of the privacy issues that come with it. To safely deploy AI models, there need to be privacy protection policies in place to safeguard sensitive information from being improperly used. There is no comprehensive legislation that addresses those concerns either nationally or locally.
Second, developers also need to consider and account for any bias ingrained in their algorithms through data testing and regular monitoring of data output before it reaches the user. AI has the ability to detect early signs of mental health challenges. However, without proper safeguards in place, students risk being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Effective policy considerations should encourage the use of AI models that will provide interpretable results, and policymakers need to understand how these decisions are made. Policies should outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether it be through policy or the companies’ internal safeguards. This standard of care should outline guidelines that address situations in which AI data output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.


Navigating the EU AI Act from a US Perspective: A Timeline for Compliance

After extensive negotiations, the European Parliament, Commission, and Council came to a consensus on the EU Artificial Intelligence Act (the “AI Act”) on Dec. 8, 2023. This marks a significant milestone, as the AI Act is expected to be the most far-reaching regulation on AI globally. The AI Act is poised to significantly impact how companies develop, deploy, and manage AI systems. In this post, NM’s AI Task Force breaks down the key compliance timelines to offer a roadmap for U.S. companies navigating the AI Act.

The AI Act will have a staged implementation process. While it will officially enter into force 20 days after publication in the EU’s Official Journal (“Entry into Force”), most provisions won’t be directly applicable for an additional 24 months. This provides a grace period for businesses to adapt their AI systems and practices to comply with the AI Act. To bridge this gap, the European Commission plans to launch an AI Pact. This voluntary initiative allows AI developers to commit to implementing key obligations outlined in the AI Act even before they become legally enforceable.

With the impending enforcement of the AI Act comes the crucial question for U.S. companies that operate in the EU or whose AI systems interact with EU citizens: How can they ensure compliance with the new regulations? To start, U.S. companies should understand the key risk categories established by the AI Act and their associated compliance timelines.

I. Understanding the Risk Categories
The AI Act categorizes AI systems based on their potential risk. The risk level determines the compliance obligations a company must meet.  Here’s a simplified breakdown:

  • Unacceptable Risk: These systems are banned entirely within the EU. This includes applications that threaten people’s safety, livelihood, and fundamental rights. Examples may include social credit scoring, emotion recognition systems at work and in education, and untargeted scraping of facial images for facial recognition.
  • High Risk: These systems pose a significant risk and require strict compliance measures. Examples may include AI used in critical infrastructure (e.g., transport, water, electricity), essential services (e.g., insurance, banking), and areas with high potential for bias (e.g., education, medical devices, vehicles, recruitment).
  • Limited Risk: These systems require some level of transparency to ensure user awareness. Examples include chatbots and AI-powered marketing tools where users should be informed that they’re interacting with a machine.
  • Minimal Risk: These systems pose minimal or no identified risk and face no specific regulations.

II. Key Compliance Timelines (as of March 2024):

Time Frame  Anticipated Milestones
6 months after Entry into Force
  • Prohibitions on Unacceptable Risk Systems will come into effect.
12 months after Entry into Force
  • This marks the start of obligations for companies that provide general-purpose AI models (those designed for widespread use across various applications). These companies will need to comply with specific requirements outlined in the AI Act.
  • Member states will appoint competent authorities responsible for overseeing the implementation of the AI Act within their respective countries.
  • The European Commission will conduct annual reviews of the list of AI systems categorized as “unacceptable risk” and banned under the AI Act.
  • The European Commission will issue guidance on high-risk AI incident reporting.
18 months after Entry into Force
  • The European Commission will issue an implementing act outlining specific requirements for post-market monitoring of high-risk AI systems, including a list of practical examples of high-risk and non-high risk use cases.
24 months after Entry into Force
  • This is a critical milestone for companies developing or using high-risk AI systems listed in Annex III of the AI Act, as compliance obligations will be effective. These systems, which encompass areas like biometrics, law enforcement, and education, will need to comply with the full range of regulations outlined in the AI Act.
  • EU member states will have implemented their own rules on penalties, including administrative fines, for non-compliance with the AI Act.
36 months after Entry into Force
  • Compliance obligations take effect for high-risk AI systems that are products, or safety components of products, covered by the EU harmonization legislation listed in Annex I of the AI Act.
By the end of 2030
  • AI systems that are components of the large-scale EU information technology systems established by EU law and that were placed on the market before the Act’s high-risk obligations took effect must be brought into compliance with the AI Act.

In addition to the above, we can expect further rulemaking and guidance from the European Commission to come forth regarding aspects of the AI Act such as use cases, requirements, delegated powers, assessments, thresholds, and technical documentation.

Even before the AI Act’s Entry into Force, there are crucial steps U.S. companies operating in the EU can take to ensure a smooth transition. The priority is familiarization. Once the final version of the Act is published, carefully review it to understand the regulations and how they might apply to your AI systems. Next, classify your AI systems according to their risk level (unacceptable, high, limited, or minimal). This will help you determine the specific compliance obligations you’ll need to meet. Finally, conduct a thorough gap analysis. Identify any areas where your current practices for developing, deploying, or managing AI systems might not comply with the Act. By taking these proactive steps before the official enactment, you’ll gain valuable time to address potential issues and ensure your AI systems remain compliant in the EU market.
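
As a purely illustrative aid for the classification and gap-analysis steps described above, the sketch below shows one way an internal AI inventory might record each system’s assumed risk tier and flag where further review is needed; the system names, tiers, and flags are hypothetical examples and not legal advice.

```python
# Illustrative sketch only: a simple internal inventory recording each AI
# system's assumed AI Act risk tier and whether a gap analysis is outstanding.
# Entries are hypothetical examples.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

ai_inventory = [
    {"system": "resume-screening model", "tier": "high",
     "gap_analysis_done": False},
    {"system": "customer-support chatbot", "tier": "limited",
     "gap_analysis_done": True},
    {"system": "spam filter", "tier": "minimal",
     "gap_analysis_done": True},
]

for entry in ai_inventory:
    assert entry["tier"] in RISK_TIERS, f"unknown tier: {entry['tier']}"
    if entry["tier"] == "unacceptable":
        print(f"{entry['system']}: prohibited in the EU; plan for withdrawal.")
    elif entry["tier"] == "high" and not entry["gap_analysis_done"]:
        print(f"{entry['system']}: gap analysis needed against high-risk obligations.")
```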

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.
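
One practical way to operationalize that commitment is an internal AI-use register. The sketch below shows a single, hypothetical register entry capturing the disclosure elements mentioned above; the field names and values are illustrative assumptions, not a prescribed standard.

```python
# Illustrative sketch only: one hypothetical entry in an internal AI-use
# register covering what AI is used, with what data, and how outputs are made.
ai_use_register_entry = {
    "ai_technology": "third-party large language model (vendor-hosted)",
    "purpose": "drafting first-pass responses to customer support tickets",
    "data_used": ["ticket text", "product documentation"],
    "personal_data_involved": True,
    "output_generation": "model drafts a reply; a human agent reviews and edits before sending",
    "human_oversight_owner": "Customer Support Operations Lead",
    "last_reviewed": "2024-05-01",
}

# A register like this gives customers, boards, and regulators a concrete
# answer to "what AI are you using, with what data, and how are outputs produced?"
for field, value in ai_use_register_entry.items():
    print(f"{field}: {value}")
```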

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one and done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the existing policies you have in place addressing data privacy and cyber risk issues to address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you, and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

AI Got It Wrong, Doesn’t Mean We Are Right: Practical Considerations for the Use of Generative AI for Commercial Litigators

Picture this: You’ve just been retained by a new client who has been named as a defendant in a complex commercial litigation. While the client has solid grounds to be dismissed from the case at an early stage via a dispositive motion, the client is also facing cost constraints. This forces you to get creative when crafting a budget for your client’s defense. You remember the shiny new toy that is generative Artificial Intelligence (“AI”). You plan to use AI to help save costs on the initial research, and even potentially assist with brief writing. It seems you’ve found a practical solution to resolve all your client’s problems. Not so fast.

Seemingly overnight, the use of AI platforms has become the hottest thing going, including (potentially) for commercial litigators. However, like most rapidly rising technological trends, the associated pitfalls don’t fully bubble to the surface until after the public has an opportunity (or several) to put the technology to the test. Indeed, the use of AI platforms to streamline legal research and writing has already begun to show its warts. Of course, just last year, prime examples of the danger of relying too heavily on AI were exposed in highly publicized cases venued in the Southern District of New York. See, e.g., Benjamin Weiser, Michael D. Cohen’s Lawyer Cited Cases That May Not Exist, Judge Says, NY Times (December 12, 2023); Sara Merken, New York Lawyers Sanctioned For Using Fake ChatGPT Cases In Legal Brief, Reuters (June 26, 2023).

In order to ensure litigators strike the appropriate balance between using technological assistance in producing legal work product and continuing to adhere to the ethical duties and professional responsibility mandated by the legal profession, below are some immediate considerations any complex commercial litigator should abide by when venturing into the world of AI.

Confidentiality

As any experienced litigator will know, involving a third party in the process of crafting a client’s strategy and case theory—whether it be an expert, accountant, or investigator—inevitably raises the issue of protecting the client’s privileged, proprietary, and confidential information. The same principle applies to the use of an AI platform. Indeed, when stripped of its bells and whistles, an AI platform could potentially be viewed as another consultant employed to provide work product that will assist in the overall representation of your client. Given this reality, it is imperative that any litigator who plans to use AI also have a complete grasp of the security of that AI system to ensure the safety of their client’s privileged, proprietary, and confidential information. A failure to do so may not only result in your client’s sensitive information being exposed to an unsecure, and potentially harmful, online network, but it can also result in a violation of the duty to make reasonable efforts to prevent the disclosure of or unauthorized access to your client’s sensitive information. Such a duty is routinely set forth in the applicable rules of professional conduct across the country.

Oversight

It goes without saying that a lawyer has a responsibility to ensure that he or she adheres to the duty of candor when making representations to the Court. As mentioned, violations of that duty have arisen based on statements that were included in legal briefs produced using AI platforms. While many lawyers would immediately rebuff the notion that they would fail to double-check the accuracy of a brief’s contents—even if generated using AI—before submitting it to the Court, this concept gets trickier when working on larger litigation teams. As a result, it is not only incumbent on those preparing the briefs to ensure that any information included in a submission that was created with the assistance of an AI platform is accurate, but also that the lawyers responsible for oversight of a litigation team are diligent in understanding when and to what extent AI is being used to aid the work of that lawyer’s subordinates. Similar to confidentiality considerations, many courts’ rules of professional conduct include rules related to senior lawyer responsibilities and oversight of subordinate lawyers. To appropriately abide by those rules, litigation team leaders should make it a point to discuss with their teams the appropriate use of AI at the outset of any matter, as well as to put in place any law firm, court, or client-specific safeguards or guidelines to avoid potential missteps.

Judicial Preferences

Finally, as the old saying goes: a good lawyer knows the law; a great lawyer knows the judge. Any savvy litigator knows that the first thing one should understand prior to litigating a case is whether the Court and the presiding Judge have put in place any standing orders or judicial preferences that may impact litigation strategy. As a result of the rise of use of AI in litigation, many Courts across the country have responded in turn by developing either standing orders, local rules, or related guidelines concerning the appropriate use of AI. See e.g., Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson (June 6, 2023 E.D.P.A.), Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers (January 25, 2024, N.J. Supreme Court). Litigators should follow suit and ensure they understand the full scope of how their Court, and more importantly, their assigned Judge, treat the issue of using AI to assist litigation strategy and development of work product.

Recent Healthcare-Related Artificial Intelligence Developments

AI is here to stay. The development and use of artificial intelligence (“AI”) is rapidly growing in the healthcare landscape with no signs of slowing down.

From a governmental perspective, many federal agencies are embracing the possibilities of AI. The Centers for Disease Control and Prevention is exploring the ability of AI to estimate sentinel events and combat disease outbreaks and the National Institutes of Health is using AI for priority research areas. The Centers for Medicare and Medicaid Services is also assessing whether algorithms used by plans and providers to identify high risk patients and manage costs can introduce bias and restrictions. Additionally, as of December 2023, the U.S. Food & Drug Administration cleared more than 690 AI-enabled devices for market use.

From a clinical perspective, payers and providers are integrating AI into daily operations and patient care. Hospitals and payers are using AI tools to assist in billing. Physicians are using AI to take notes and a wide range of providers are grappling with which AI tools to use and how to deploy AI in the clinical setting. With the application of AI in clinical settings, the standard of patient care is evolving and no entity wants to be left behind.

From an industry perspective, the legal and business spheres are transforming as a result of new national and international regulations focused on establishing the safe and effective use of AI, as well as commercial responses to those regulations. Three such regulations are top of mind, including (i) President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI; (ii) the U.S. Department of Health and Human Services’ (“HHS”) Final Rule on Health Data, Technology, and Interoperability; and (iii) the World Health Organization’s (“WHO”) Guidance for Large Multi-Modal Models of Generative AI. In response to the introduction of regulations and the general advancement of AI, interested healthcare stakeholders, including many leading healthcare companies, have voluntarily committed to a shared goal of responsible AI use.

U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On October 30, 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (“Executive Order”). Though long-awaited, the Executive Order was a major development and is one of the most ambitious attempts to regulate this burgeoning technology. The Executive Order has eight guiding principles and priorities, which include (i) Safety and Security; (ii) Innovation and Competition; (iii) Commitment to U.S. Workforce; (iv) Equity and Civil Rights; (v) Consumer Protection; (vi) Privacy; (vii) Government Use of AI; and (viii) Global Leadership.

Notably for healthcare stakeholders, the Executive Order directs the National Institute of Standards and Technology to establish guidelines and best practices for the development and use of AI and directs HHS to develop an AI Task force that will engineer policies and frameworks for the responsible deployment of AI and AI-enabled tech in healthcare. In addition to those directives, the Executive Order highlights the duality of AI with the “promise” that it brings and the “peril” that it has the potential to cause. This duality is reflected in HHS directives to establish an AI safety program to prioritize the award of grants in support of AI development while ensuring standards of nondiscrimination are upheld.

U.S. Department of Health and Human Services Health Data, Technology, and Interoperability Rule

In the wake of the Executive Order, the HHS Office of the National Coordinator finalized its rule to increase algorithm transparency, widely known as HT-1, on December 13, 2023. With respect to AI, the rule promotes transparency by establishing transparency requirements for AI and other predictive algorithms that are part of certified health information technology. The rule also:

  • implements requirements to improve equity, innovation, and interoperability;
  • supports the access, exchange, and use of electronic health information;
  • addresses concerns around bias, data collection, and safety;
  • modifies the existing clinical decision support certification criteria and narrows the scope of impacted predictive decision support intervention; and
  • adopts requirements for certification of health IT through new Conditions and Maintenance of Certification requirements for developers.

Voluntary Commitments from Leading Healthcare Companies for Responsible AI Use

Immediately on the heels of the release of HT-1 came voluntary commitments from leading healthcare companies on responsible AI development and deployment. On December 14, 2023, the Biden Administration announced that 28 healthcare provider and payer organizations signed up to move toward the safe, secure, and trustworthy purchasing and use of AI technology. Specifically, the provider and payer organizations agreed to:

  • develop AI solutions to optimize healthcare delivery and payment;
  • work to ensure that the solutions are fair, appropriate, valid, effective, and safe (“F.A.V.E.S.”);
  • deploy trust mechanisms to inform users if content is largely AI-generated and not reviewed or edited by a human;
  • adhere to a risk management framework when utilizing AI; and
  • research, investigate, and develop AI swiftly but responsibly.

WHO Guidance for Large Multi-Modal Models of Generative AI

On January 18, 2024, the WHO released guidance for large multi-modal models (“LMM”) of generative AI, which can simultaneously process and understand multiple types of data modalities such as text, images, audio, and video. The WHO guidance spans 98 pages and contains over 40 recommendations for tech developers, providers, and governments on LMMs, and names five potential applications of LMMs: (i) diagnosis and clinical care; (ii) patient-guided use; (iii) administrative tasks; (iv) medical education; and (v) scientific research. It also addresses the liability issues that may arise out of the use of LMMs.

Closely related to the WHO guidance, the European Council’s agreement to move forward with a European Union AI Act (“Act”), was a significant milestone in AI regulation in the European Union. As previewed in December 2023, the Act will inform how AI is regulated across the European Union, and other nations will likely take note of and follow suit.

Conclusion

There is no question that AI is here to stay. But how the healthcare industry will look when AI is more fully integrated still remains to be seen. The framework for regulating AI will continue to evolve as AI and the use of AI in healthcare settings changes. In the meantime, healthcare stakeholders considering or adopting AI solutions should stay abreast of developments in AI to ensure compliance with applicable laws and regulations.

Commerce Department Launches Cross-Sector Consortium on AI Safety — AI: The Washington Report

  1. The Department of Commerce has launched the US AI Safety Institute Consortium (AISIC), a multistakeholder body tasked with developing AI safety standards and practices.
  2. The AISIC is currently composed of over 200 members representing industry, academia, labor, and civil society.
  3. The consortium may play an important role in implementing key provisions of President Joe Biden’s executive order on AI, including the development of guidelines on red-team testing[1] for AI and the creation of a companion resource to the AI Risk Management Framework.

Introduction: “First-Ever Consortium Dedicated to AI Safety” Launches

On February 8, 2024, the Department of Commerce announced the creation of the US AI Safety Institute Consortium (AISIC), a multistakeholder body housed within the National Institute of Standards and Technology (NIST). The purpose of the AISIC is to facilitate the development and adoption of AI safety standards and practices.

The AISIC has brought together over 200 organizations from industry, labor, academia, and civil society, with more members likely to join in the coming months.

Biden AI Executive Order Tasks Commerce Department with AI Safety Efforts

On October 30, 2023, President Joe Biden signed a wide-ranging executive order on AI (“AI EO”). This executive order has mobilized agencies across the federal bureaucracy to implement policies, convene consortiums, and issue reports on AI. Among other provisions, the AI EO directs the Department of Commerce (DOC) to establish “guidelines and best practices, with the aim of promoting consensus…[and] for developing and deploying safe, secure, and trustworthy AI systems.”

Responding to this mandate, the DOC established the US Artificial Intelligence Safety Institute (AISI) in November 2023. The role of the AISI is to “lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating the most advanced AI models.” Concretely, the AISI is tasked with developing AI safety guidelines and standards and liaising with the AI safety bodies of partner nations.

The AISI is also responsible for convening multistakeholder fora on AI safety. It is in pursuance of this responsibility that the DOC has convened the AISIC.

The Responsibilities of the AISIC

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said DOC Secretary Gina Raimondo in a statement announcing the launch of the AISIC. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”

To achieve the objectives set out by the AI EO, the AISIC has convened leading AI developers, research institutions, and civil society groups. At launch, the AISIC has over 200 members, and that number will likely grow in the coming months.

According to NIST, members of the AISIC will engage in the following objectives:

  1. Guide the evolution of industry standards on the development and deployment of safe, secure, and trustworthy AI.
  2. Develop methods for evaluating AI capabilities, especially those that are potentially harmful.
  3. Encourage secure development practices for generative AI.
  4. Ensure the availability of testing environments for AI tools.
  5. Develop guidance and practices for red-team testing and privacy-preserving machine learning.
  6. Create guidance and tools for digital content authentication.
  7. Encourage the development of AI-related workforce skills.
  8. Conduct research on human-AI system interactions and other social implications of AI.
  9. Facilitate understanding among actors operating across the AI ecosystem.

To join the AISIC, organizations were instructed to submit a letter of intent via an online webform. If selected for participation, applicants were asked to sign a Cooperative Research and Development Agreement (CRADA)[2] with NIST. Entities that could not participate in a CRADA were, in some cases, given the option to “participate in the Consortium pursuant to separate non-CRADA agreement.”

While the initial deadline to submit a letter of intent has passed, NIST has provided that there “may be continuing opportunity to participate even after initial activity commences for participants who were not selected initially or have submitted the letter of interest after the selection process.” Inquiries regarding AISIC membership may be directed to NIST by email.

Conclusion: The AISIC as a Key Implementer of the AI EO?

While at the time of writing NIST has not announced concrete initiatives that the AISIC will undertake, it is likely that the body will come to play an important role in implementing key provisions of Biden’s AI EO. As discussed earlier, NIST created the AISI and the AISIC in response to the AI EO’s requirement that DOC establish “guidelines and best practices…for developing and deploying safe, secure, and trustworthy AI systems.” Under this general heading, the AI EO lists specific resources and frameworks that the DOC must establish, including guidelines on red-team testing for AI and a companion resource to the AI Risk Management Framework.

It is premature to assert that either the AISI or the AISIC will exclusively carry out these goals, as other bodies within the DOC (such as the National AI Research Resource) may also contribute to the satisfaction of these requirements. That being said, given the correspondence between these mandates and the goals of the AISIC, along with the multistakeholder and multisectoral structure of the consortium, it is likely that the AISIC will play a significant role in carrying out these tasks.

We will continue to provide updates on the AISIC and related DOC AI initiatives. Please feel free to contact us if you have questions as to current practices or how to proceed.

Endnotes

[1] As explained in our July 2023 newsletter on Biden’s voluntary framework on AI, “red-teaming” is “a strategy whereby an entity designates a team to emulate the behavior of an adversary attempting to break or exploit the entity’s technological systems. As the red team discovers vulnerabilities, the entity patches them, making their technological systems resilient to actual adversaries.”

[2] See “CRADAs – Cooperative Research & Development Agreements” for an explanation of CRADAs. https://www.doi.gov/techtransfer/crada.

Raj Gambhir contributed to this article.