WHO Publishes Guidance on Ethics and Governance of AI for the Healthcare Sector

The World Health Organization (WHO) recently published “Ethics and Governance of Artificial Intelligence for Health: Guidance on large multi-modal models” (LMMs), which is designed to provide “guidance to assist Member States in mapping the benefits and challenges associated with the use of LMMs for health and in developing policies and practices for appropriate development, provision and use. The guidance includes recommendations for governance within companies, by governments, and through international collaboration, aligned with the guiding principles. The principles and recommendations, which account for the unique ways in which humans can use generative AI for health, are the basis of this guidance.”

The guidance focuses on one type of generative AI, large multi-modal models (LMMs), “which can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.” According to the report, LMMs have “been adopted faster than any consumer application in history.” The report outlines the benefits and risks of LMMs, particularly the risks of using LMMs in the healthcare sector.

The report proposes solutions to address the risks of using LMMs in health care across their development, provision, and deployment, and addresses the ethics and governance of LMMs: “what can be done, and by who.”

In the ever-changing world of AI, this report is timely, offering concrete steps and solutions for tackling the risks of using LMMs.

5 Trends to Watch: 2024 Artificial Intelligence

  1. Banner Year for Artificial Intelligence (AI) in Health – With AI-designed drugs entering clinical trials, growing adoption of generative AI tools in medical practices, increasing FDA approvals for AI-enabled devices, and new FDA guidance on AI usage, 2023 was a banner year for advancements in AI for medtech, healthtech, and techbio—even with the industry-wide layoffs that also hit digital and AI teams. The coming year should see continued innovation and investment in AI in areas from drug design to new devices to clinical decision support to documentation and revenue cycle management (RCM) to surgical augmented reality (AR) and more, together with the arrival of more new U.S. government guidance on and best practices for use of this fast-evolving technology.
  2. Congress and AI Regulation – Congress continues to grapple with the proper regulatory structure for AI. At a minimum, expect Congress in 2024 to continue funding AI research and the development of standards required under the Biden Administration’s October 2023 Executive Order. Congress will also debate legislation relating to the use of AI in elections, intelligence operations, military weapons systems, surveillance and reconnaissance, logistics, cybersecurity, health care, and education.
  3. New State and City Laws Governing AI’s Use in HR Decisions – Look for additional state and city laws to be enacted governing an employer’s use of AI in hiring and performance software, similar to New York City’s Local Law 144, known as the Automated Employment Decisions Tools law. More than 200 AI-related laws have been introduced in state legislatures across the country, as states move forward with their own regulation while debate over federal law continues. GT expects 2024 to bring continued guidance from the EEOC and other federal agencies, mandating notice to employees regarding the use of AI in HR-function software as well as restricting its use absent human oversight.
  4. Data Privacy Rules Collide with Use of AI – Application of existing laws to AI, both within the United States and internationally, will be a key issue as companies apply transparency, consent, automated decision making, and risk assessment requirements in existing privacy laws to AI personal information processing. U.S. states will continue to propose new privacy legislation in 2024, with new implementing regulations for previously passed laws also expected. Additionally, there’s a growing trend towards the adoption of “privacy by design” principles in AI development, ensuring privacy considerations are integrated into algorithms and platforms from the ground up. These evolving legal landscapes are not only shaping AI development but also compelling organizations to reevaluate their data strategies, balancing innovation with the imperative to protect individual privacy rights, all while trying to “future proof” AI personal information processing from privacy regulatory changes.
  5. Continued Rise in AI-Related Copyright & Patent Filings, Litigation – Expect the Patent and Copyright Offices to develop and publish guidance on issues at the intersection of AI and IP, including patent eligibility and inventorship for AI-related innovations, the scope of protection for works produced using AI, and the treatment of copyrighted works in AI training, as mandated in the Biden Administration Executive Order. IP holders are likely to become more sophisticated in how they integrate AI into their innovation and authorship workflows. And expect to see a surge in litigation around AI-generated IP, particularly given the ongoing denial of IP protection for AI-generated content and the lack of precedent in this space in general.

Software as a Medical Device: Challenges Facing the Industry

SaMD Blog Series: Introduction

Editor’s Note: We are excited to announce that this article is the first of a series addressing Software as a Medical Device and the issues that plague digital health companies, investors, clinicians and other organizations that utilize software and medical devices. We will be addressing various considerations including technology, data, intellectual property, licensing, and contracting.

The intersection of software, technology, and health care is now commonplace, and the proliferation of software as a medical device in the health care arena has spurred significant innovation. The term Software as a Medical Device (SaMD) is defined by the International Medical Device Regulators Forum as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.” In other words, SaMD need not be part of a physical device to achieve its intended purpose. For instance, SaMD could be an application on a mobile phone, unconnected to any physical medical device.

With the proliferation of SaMD also comes the need for those building and using it to firmly grasp the legal and regulatory considerations that govern its successful use and commercialization. Over the next several weeks, we will be addressing some of the more common issues faced by digital health companies, investors, innovators, and clinicians when developing, utilizing, or commercializing SaMD. The Food and Drug Administration (FDA) has already cleared a significant number of SaMD products, including more than 500 algorithms employing artificial intelligence (AI). Notable examples of FDA-cleared SaMD include wearable technology for remote patient monitoring; doctor-prescribed video-game treatment for children with ADHD; fully immersive virtual reality tools for both physical therapy and mental wellness; and end-to-end software that generates 3D-printed models to better plan surgery and reduce operation time. With this rapid innovation comes a host of legal and regulatory considerations, which will be discussed over the course of this SaMD Blog Series.

General Intellectual Property (IP) Considerations for SaMD

This edition will discuss the sophisticated IP strategies that can be used to protect innovations across the three categories of software for biomedical applications (SaMD, software in a medical device, and software used in the manufacture or maintenance of a medical device), including clinical trial collaboration and sponsored research agreements, patent application filings, and other forms of protection such as trade secrets.

Licensing and Contracting with Third Parties for SaMD

This edition will offer a practical and comprehensive look at engaging with third parties, whether in the context of (i) developing new SaMD or (ii) refining or testing existing SaMD. Data and IP can be effectively owned or licensed, provided any license protects the future interests of the licensee. Such ownership and licensing questions are particularly important in the AI and machine learning space, a key area of focus for this edition.

FDA Considerations for SaMD

This edition will explore how FDA is regulating SaMD, including a discussion of what constitutes a regulated device, legislative actions to spur innovation, and how FDA is approaching regulation of specific categories of SaMD such as clinical decision support software, general wellness applications, and other mobile medical devices. It will also examine the different regulatory pathways for SaMD and FDA’s current focus on cybersecurity issues for software.

Health Care Regulatory and Reimbursement Considerations for SaMD

This edition will discuss the intersection of remote monitoring services and SaMD, how prescription digital therapeutics intersect with SaMD, licensure and distributor considerations associated with commercializing SaMD, and the growing trend of seeking device-specific codes for SaMD.

Our hope is that this series will be a starting point for digital health companies, investors, innovators, and clinicians as each approaches development and use of SaMD as part of their business and clinical offerings.

© 2023 Foley & Lardner LLP


Bias in Healthcare Algorithms

The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. While AI may promote better treatment decisions and streamline onerous coding and claims submission, there are risks associated with unintended bias lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when the model is applied to new patients. This can result in errors in utilization management, coding, billing, and health care delivery.
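To make the mechanism concrete, consider a minimal sketch (hypothetical cohort names and figures, not drawn from any real system): a tool that simply learns historical approval rates will faithfully reproduce whatever disparity those records contain.

```python
# Minimal sketch of bias propagation (hypothetical data): a model that
# learns historical approval rates per cohort reproduces any disparity
# encoded in those records -- now automated at scale.
from collections import defaultdict

# Hypothetical historical prior-authorization records: (cohort, approved)
history = [
    ("cohort_a", True), ("cohort_a", True), ("cohort_a", True), ("cohort_a", False),
    ("cohort_b", True), ("cohort_b", False), ("cohort_b", False), ("cohort_b", False),
]

# "Training": estimate each cohort's historical approval rate.
counts = defaultdict(lambda: [0, 0])  # cohort -> [approvals, total]
for cohort, approved in history:
    counts[cohort][0] += int(approved)
    counts[cohort][1] += 1

def predicted_approval_rate(cohort: str) -> float:
    approved, total = counts[cohort]
    return approved / total

# The "trained" tool recommends approval 75% of the time for cohort_a but
# only 25% of the time for cohort_b -- the historical disparity, unexamined.
for cohort in ("cohort_a", "cohort_b"):
    print(cohort, predicted_approval_rate(cohort))
```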

The following hypothetical illustrates the problem.

A physician practice management services organization (MSO) adopts a third-party software tool to assist its personnel in making treatment decisions for both its fee-for-service population and a Medicare Advantage population for which the MSO is at financial risk. The tool is used for both pre-authorizations and ICD diagnostic coding for Medicare Advantage patients, without the need for human coders.

The MSO’s compliance officer observes two issues:

  1. It appears Native American patients seeking substance abuse treatment are being approved by the MSO’s team far more frequently than patients in other cohorts seeking the same care, and
  2. Since the deployment of the software, the MSO is realizing increased risk adjustment revenue attributable to a significant increase in rheumatic condition codes being identified by the AI tool.

Though the compliance officer does not have any independent studies to support her view, she is comfortable that the program is making appropriate substance abuse treatment and utilization management recommendations because she believes there may be a genetic reason why Native Americans are at greater risk than others. With regard to the diagnostic coding, she:

  1. is also comfortable with the vendor’s assurances that their software is more accurate than eyes-on coding;
  2. understands that prevalence data suggests that the elderly population in the United States likely has undiagnosed rheumatic conditions; and,
  3. finds, through her own investigation, that the software, while perhaps over-inclusive, anecdotally appears to be catching some diagnoses that the clinician alone might have missed.

Is the compliance officer’s comfort warranted?

The short answer is, of course, no.

There are two fundamental issues that the compliance officer needs to identify and investigate, both related to possible bias. First, is the tool authorizing unnecessary substance use disorder treatments for Native American patients (overutilization) while at the same time failing to approve medically necessary treatments for other ethnicities (underutilization)? Overutilization drives health spend and can result in payment errors, while underutilization can result in improper denials, patient harm, and legal exposure. The second issue relates to the AI tool potentially “finding” diagnostic codes that, while statistically supportable based on the population data the vendor used in its training set, might not be supported in the MSO’s population. This error can result in the submission of unsupported codes that drive risk adjustment payments, which can carry significant legal and financial exposure.
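Both issues are testable. As a minimal sketch of what a first-pass audit might look like (hypothetical counts, rates, and thresholds throughout; a real audit would use the MSO’s actual claims data and appropriate statistical guidance):

```python
# Minimal compliance-audit sketch (hypothetical figures): flag cohorts whose
# tool-driven approval rates diverge, using a two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(approved_a, total_a, approved_b, total_b):
    """Return (z, two-sided p-value) for the difference in approval rates."""
    p_a, p_b = approved_a / total_a, approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: substance use disorder treatment approvals by cohort.
z, p = two_proportion_z(approved_a=180, total_a=200, approved_b=95, total_b=200)
if p < 0.01:
    print(f"Approval disparity flagged for review: z={z:.2f}, p={p:.4f}")

# Second check (hypothetical benchmark): compare the rate of rheumatic
# condition codes assigned by the tool against documented prevalence in the
# MSO's own patient panel, not the vendor's training population.
coded_rate, documented_rate = 0.18, 0.09
if coded_rate > 1.5 * documented_rate:
    print("Coding rate exceeds documented prevalence; audit a chart sample.")
```

A statistically significant disparity or a coding rate well above documented prevalence does not by itself prove bias, but either finding should trigger chart review and closer scrutiny of the vendor’s training data rather than the comfort described above.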

©2020 Epstein Becker & Green, P.C. All rights reserved.

