On October 30, 2023, the Biden Administration signed and released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) that articulates White House priorities and policies related to the use and development of artificial intelligence (AI) across different sectors, including health care.
The Biden Administration acknowledged the various competing interests related to AI, including weighing significant technological innovation against unintended societal consequences. Our Mintz and ML Strategies colleagues broadly covered the Executive Order in this week’s issue of AI: The Washington Report. Some sections of the Executive Order are sector-agnostic but will be especially relevant in health care, such as the requirement that agencies use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the improper collection and use of individuals’ data.
The Biden Administration only recently announced the Executive Order, but the discussion of regulating AI in health care is certainly not novel. For example, the U.S. Food and Drug Administration (FDA) has already incorporated artificial intelligence and machine learning-based medical device software into its medical device and software regulatory regime. The Office of the National Coordinator for Health Information Technology (ONC) also included AI and machine learning proposals under the HTI-1 Proposed Rule, including proposals to increase algorithmic transparency and allow users of clinical decision support (CDS) to determine if predictive Decision Support Interventions (DSIs) are fair, appropriate, valid, effective, and safe.
We will focus this post on the Executive Order's health care-specific directives for the U.S. Department of Health and Human Services (HHS) and other relevant agencies.
HHS AI Task Force and Quality Assurance
To address how AI should be used safely and effectively in health care, the Executive Order requires HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to establish an “HHS AI Task Force” by January 28, 2024. Once created, the HHS AI Task Force has 365 days to develop a strategic plan, which may include regulatory action, for the responsible deployment and use of predictive and generative AI-enabled technologies in health care. The plan must address:
- use of AI in health care delivery and financing, including the need for human oversight where necessary and appropriate;
- long-term safety and real-world performance monitoring of AI-enabled technologies;
- integration of equity principles in AI-enabled technologies, including monitoring for model discrimination and bias;
- assurance that safety, privacy, and security standards are baked into the software development lifecycle;
- prioritization of transparency and making model documentation available to users to ensure AI is used safely;
- collaboration with state, local, Tribal, and territorial health and human services agencies to communicate successful AI use cases and best practices; and
- use of AI to make workplaces more efficient and reduce administrative burdens where possible.
HHS also has 180 days from the date of the Executive Order (until April 27, 2024) to take the following steps:
- consult with other relevant agencies to determine whether AI-enabled technologies in health care “maintain appropriate levels of quality”;
- develop (along with other agencies) AI assurance policies to evaluate the performance of AI-enabled health care tools and assess AI-enabled health care-technology algorithmic system performance against real-world data; and
- consult with other relevant agencies to promote compliance with federal non-discrimination and privacy laws in the use of AI in health care, including by providing technical assistance to health care providers and payers and communicating the potential consequences of noncompliance.
AI Safety Program and Drug Development
The Executive Order also directs HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, to organize and implement an AI Safety Program within 365 days of the Executive Order (by October 29, 2024). In partnership with federally listed Patient Safety Organizations, the AI Safety Program will be tasked with creating a common framework that organizations can use to monitor and track clinical errors resulting from AI used in health care settings. The program will also create a central repository to track complaints from patients and caregivers who report discrimination and bias related to the use of AI.
Additionally, within the same 365-day period, HHS must develop a strategy to regulate the use of AI or AI-enabled tools in the various phases of the drug development process, including determining opportunities for future regulation, rulemaking, guidance, and use of additional statutory authority.
HHS Grant and Award Programs and AI Tech Sprint
The Executive Order also directs HHS to use existing grant and award programs to support ethical AI development among health care technology developers by:
- leveraging existing HHS programs to work with private sector actors to develop AI-enabled tools that can create personalized patient immune-response profiles safely and securely;
- allocating 2024 Leading Edge Acceleration Projects (LEAP) in Health Information Technology funding for the development of AI tools for clinical care, real-world-evidence programs, population health, public health, and related research; and
- accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program and demonstrating successful AIM-AHEAD activities in underserved communities.
The Secretary of Veterans Affairs must also host two 3-month nationwide AI Tech Sprint competitions within 365 days of the Executive Order (by October 29, 2024), with the goal of further developing AI systems to improve the quality of health care for veterans.
Key Takeaways
The Executive Order will spark the cross-agency development of a variety of AI-focused working groups, programs, and policies, including possible rulemaking, across the health care sector in the coming months. While the law has not yet caught up with the technology, the Executive Order provides helpful insight into the areas likely to be the subject of new legislation and regulation, such as drug development, as well as likely enforcement priorities under existing law, such as non-discrimination and data privacy and security. Health care technology developers and users will want to review their current policies and practices against the Biden Administration’s priorities to identify near-term areas of improvement in how they develop, implement, and use AI.
Additionally, in January 2023 the National Institute of Standards and Technology (NIST) released the voluntary AI Risk Management Framework, which organizations can use to, among other things, analyze and manage the risks, impacts, and harms associated with responsibly designing, developing, deploying, and using AI systems over time. The Executive Order calls for NIST to develop a companion resource to the AI Risk Management Framework for generative AI. In preparation for the new AI programs and possible associated rulemaking from HHS, health care organizations will want to familiarize themselves with the NIST AI Risk Management Framework and its forthcoming generative AI companion, as well as the Blueprint for an AI Bill of Rights published by the Biden Administration in October 2022, to better understand what the federal government views as the characteristics of trustworthy AI systems.
Madison Castle contributed to this article.