The Evolution of AI in Healthcare: Current Trends and Legal Considerations

Artificial intelligence (AI) is transforming the healthcare landscape, offering innovative solutions to age-old challenges. From diagnostics to patient care, AI’s influence is pervasive and seems destined to reshape how healthcare is delivered and managed. However, the rapid integration of AI technologies brings with it a complex web of legal and regulatory considerations that physicians must navigate.

It appears inevitable that AI will ultimately render current modalities, perhaps even today’s “gold standard” clinical strategies, obsolete. Currently accepted treatment methodologies will change, hopefully for the benefit of patients. In lockstep, insurance companies and payors are poised to utilize AI to advance their interests. Indeed, the “cat-and-mouse” battle between physician and overseer will not only remain but will intensify as these technologies intrude further into physician-patient encounters.

  1. Current Trends in AI Applications in Healthcare

As AI continues to evolve, the healthcare sector is witnessing a surge in private equity investments and start-ups entering the AI space. These ventures are driving innovation across a wide range of applications, from ambient tools that listen to patient encounters and suggest clinical plans, to sophisticated systems that gather and analyze the massive datasets contained in electronic medical records. By identifying trends and detecting otherwise imperceptible signs of disease through the analysis of audio and visual depictions of patients, these AI-driven solutions are poised to revolutionize clinical care. The involvement of private equity and start-ups is accelerating the development and deployment of these technologies, pushing the boundaries of what AI can achieve in healthcare while also raising new questions about the integration of these powerful tools into existing medical practices.

Diagnostics and Predictive Analytics:

AI-powered diagnostic tools are becoming increasingly sophisticated, capable of analyzing medical images, genetic data, and electronic health records (EHRs) to identify patterns that may elude human practitioners. Machine learning algorithms, for instance, can detect early signs of cancer, heart disease, and neurological disorders with remarkable accuracy. Predictive analytics, another AI-driven trend, is helping clinicians forecast patient outcomes, enabling more personalized treatment plans.
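As a concrete, much-simplified illustration of the predictive analytics described above, the sketch below scores a hypothetical readmission risk with a logistic model. The feature names, weights, and bias are invented for illustration only; real clinical models are trained and validated on large datasets, not hand-tuned.

```python
import math

# Toy risk model with hand-picked weights -- an assumption for
# illustration, not a clinically validated tool.
WEIGHTS = {"age": 0.04, "systolic_bp": 0.02, "smoker": 0.8}
BIAS = -6.0

def readmission_risk(patient: dict) -> float:
    """Return a probability-like score via a logistic (sigmoid) model."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Two hypothetical patients: the model ranks the older smoker with
# higher blood pressure as higher risk.
low = readmission_risk({"age": 40, "systolic_bp": 115, "smoker": 0})
high = readmission_risk({"age": 78, "systolic_bp": 160, "smoker": 1})
```

The point of the sketch is only the shape of the computation: features in, a calibrated score out, which a clinician can then weigh alongside their own judgment.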

 

Telemedicine and Remote Patient Monitoring:

The COVID-19 pandemic accelerated the adoption of telemedicine, and AI is playing a crucial role in enhancing these services. AI-driven chatbots and virtual assistants are set to engage with patients by answering queries and triaging symptoms. Additionally, AI is used in remote and real-time patient monitoring systems to track vital signs and alert healthcare providers to potential health issues before they escalate.
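A minimal sketch of the rule-based core such a remote-monitoring system might start from — fixed thresholds rather than learned models. The vital-sign ranges below are invented assumptions, not clinical guidance.

```python
# Acceptable (low, high) ranges per vital sign -- illustrative values only.
THRESHOLDS = {"heart_rate": (40, 130), "spo2": (92, 100)}

def check_vitals(sample: dict) -> list:
    """Return (vital, value) pairs that fall outside their normal range."""
    alerts = []
    for vital, (low, high) in THRESHOLDS.items():
        value = sample[vital]
        if not (low <= value <= high):
            alerts.append((vital, value))
    return alerts

# A tachycardic reading triggers an alert; normal oxygen saturation does not.
alerts = check_vitals({"heart_rate": 145, "spo2": 96})
```

Production systems replace the static thresholds with patient-specific baselines and learned trend detection, but the alerting contract — readings in, escalations out — is the same.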

 

Drug Discovery and Development:

AI is revolutionizing drug discovery by speeding up the identification of potential drug candidates and predicting their success in clinical trials. Pharmaceutical companies are pouring billions of dollars into developing AI-driven tools to model complex biological processes and simulate the effects of drugs on these processes, significantly reducing the time and cost associated with bringing new medications to market.

Administrative Automation:

Beyond direct patient care, AI is streamlining administrative tasks in healthcare settings. From automating billing processes to managing EHRs and scheduling appointments, AI is reducing the burden on healthcare staff, allowing them to focus more on patient care. This trend also helps healthcare organizations reduce operational costs and improve efficiency.

AI in Mental Health:

AI applications in mental health are gaining traction, with tools like sentiment analysis, an application of natural language processing, being used to assess a patient’s mental state. These tools can analyze text or speech to detect signs of depression, anxiety, or other mental health conditions, facilitating earlier interventions.
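To make the sentiment-analysis idea concrete, here is a deliberately tiny lexicon-based scorer. The word lists are invented assumptions; real screening tools use trained language models rather than hand-written lexicons, and no tool of this kind substitutes for clinical assessment.

```python
# Hand-written word lists -- assumptions for illustration only.
NEGATIVE = {"hopeless", "tired", "worthless", "anxious", "alone"}
POSITIVE = {"hopeful", "rested", "calm", "happy", "supported"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in the text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

score = sentiment_score("I feel hopeless and tired and alone")
```

A strongly negative score on text like the sample above is the kind of signal that could prompt earlier human follow-up.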

  2. Legal and Regulatory Considerations

As AI technologies become more deeply embedded in healthcare, they intersect with legal and regulatory frameworks designed to protect patient safety, privacy, and rights.

Data Privacy and Security:

AI systems rely heavily on vast amounts of data, often sourced from patient records. The use of this data must comply with privacy regulations established by the Health Insurance Portability and Accountability Act (HIPAA), which mandates stringent safeguards to protect patient information. Physicians and AI developers must ensure that AI systems are designed with robust security measures to prevent data breaches, unauthorized access, and other cyber threats.
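One small, concrete step in a HIPAA-minded data pipeline is stripping direct identifiers before records reach an AI system. The sketch below is illustrative only: the field names are assumptions, and real safeguards (access controls, encryption, audit logging, full de-identification under the Privacy Rule) go far beyond dropping a few keys.

```python
# Direct identifiers to withhold -- an illustrative subset, not the
# full list of identifiers HIPAA de-identification addresses.
IDENTIFIERS = {"name", "ssn", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Strip direct identifiers, keeping clinical fields."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

clean = deidentify({"name": "Jane Doe", "mrn": "12345", "a1c": 6.9})
```

The clinical value (here, the A1C result) survives while the identifying fields do not, which is the basic bargain de-identification tries to strike.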

Liability and Accountability:

The use of AI in clinical decision-making raises questions about liability. If an AI system provides incorrect information or misdiagnoses a condition, determining who is responsible—the physician, the AI developer, or the institution—can be complex. As AI systems become more autonomous, the traditional notions of liability may need to evolve, potentially leading to new legal precedents and liability insurance models.

These notions raise the questions:

  • Will physicians trust the “judgment” of an AI platform making a diagnosis or interpreting a test result?
  • Will the utilization of AI platforms cause physicians to become too heavily reliant on these technologies, forgoing their own professional human judgment?

Surely, plaintiff malpractice attorneys will find a way to fault the physician whichever course the physician chooses.

Insurance Companies and Payors:

Another emerging concern is the likelihood that insurance companies and payors, including Medicare/Medicaid, will develop and mandate the use of their proprietary AI systems to oversee patient care, ensuring it aligns with their rules on proper and efficient care. These AI systems, designed primarily to optimize cost-effectiveness from the insurer’s perspective, could potentially undermine the physician’s autonomy and the quality of patient care. By prioritizing compliance with insurer guidelines over individualized patient needs, these AI tools could lead to suboptimal outcomes for patients. Moreover, insurance companies may make the use of their AI systems a prerequisite for physicians to maintain or obtain enrollment on their provider panels, further limiting physicians’ ability to exercise independent clinical judgment and potentially restricting patient access to care that is truly personalized and appropriate.

Licensure and Misconduct Concerns in New York State:

Physicians utilizing AI in their practice must be particularly mindful of licensure and misconduct issues, especially under the jurisdiction of the Office of Professional Medical Conduct (OPMC) in New York. The OPMC is responsible for monitoring and disciplining physicians, ensuring that they adhere to medical standards. As AI becomes more integrated into clinical practice, physicians could face OPMC scrutiny if AI-related errors lead to patient harm, or if there is a perceived over-reliance on AI at the expense of sound clinical judgment. The potential for AI to contribute to diagnostic or treatment decisions underscores the need for physicians to maintain ultimate responsibility and ensure that AI is used to support, rather than replace, their professional expertise.

Conclusion

AI has the potential to revolutionize healthcare, but its integration must be approached with careful consideration of legal and ethical implications. By navigating these challenges thoughtfully, the healthcare industry can ensure that AI contributes to better patient outcomes, improved efficiency, and equitable access to care. The future of AI in healthcare looks promising, with ongoing advancements in technology and regulatory frameworks adapting to these changes. Healthcare professionals, policymakers, and AI developers must continue to engage in dialogue to shape this future responsibly.

Ankura CTIX FLASH Update – January 3, 2023

Malware Activity

Louisiana’s Largest Medical Complex Discloses Data Breach Associated with October Attack

On December 23rd, 2022, the Lake Charles Memorial Health System (LCMHS) began sending out notifications regarding a newly discovered data breach that is currently impacting approximately 270,000 patients. LCMHS is the largest medical complex in Lake Charles, Louisiana, which contains multiple hospitals and a primary care clinic. The organization discovered unusual activity on their network on October 21, 2022, and determined on October 25, 2022, that an unauthorized actor gained access to the organization’s network as well as “accessed or obtained certain files from [their] systems.” The LCMHS notice listed the following patient information as exposed: patient names, addresses, dates of birth, medical record or patient identification numbers, health insurance information, payment information, limited clinical information regarding received care, and Social Security numbers (SSNs) in limited instances. While LCMHS has yet to confirm the unauthorized actor responsible for the data breach, the Hive ransomware group listed the organization on their data leak site on November 15, 2022, as well as posted files allegedly exfiltrated after breaching the LCMHS network. The posted files contained “bills of materials, cards, contracts, medical info, papers, medical records, scans, residents, and more.” It is not unusual for Hive to claim responsibility for the associated attack as the threat group has previously targeted hospitals/healthcare organizations. CTIX analysts will continue to monitor the Hive ransomware group into 2023 and provide updates on the Lake Charles Memorial Health System data breach as necessary.

Threat Actor Activity

Kimsuky Threat Actors Target South Korean Policy Experts in New Campaign

Threat actors from the North Korean-backed Kimsuky group recently launched a phishing campaign targeting policy experts throughout South Korea. Kimsuky is a long-standing threat organization that has been in operation since 2013, primarily conducting cyber espionage and occasional financially motivated attacks. Consistently aiming its attacks at South Korean entities, the group often targets academics, think tanks, and organizations relating to inter-Korea relations. In this recent campaign, Kimsuky threat actors distributed spear-phishing emails to several well-known South Korean policy experts. Within these emails, either an embedded website URL or an attachment was present, both executing malicious code to download malware to the compromised machine. One (1) tactic the threat actors utilized was distributing emails through hacked servers, masking the origin IP address(es). In total, of the 300 hacked servers, eighty-seven (87) were located throughout North Korea, with the others distributed around the globe. This type of social engineering attack is not new for the threat group, as similar instances have occurred over the past decade. In January 2022, Kimsuky actors mimicked the activities of researchers and think tanks in order to harvest intelligence from associated sources. CTIX continues to urge users to validate the integrity of email correspondence prior to visiting any embedded links or downloading any attachments to lessen the risk of threat actor compromise.

Vulnerabilities

Netgear Patches Critical Vulnerability Leading to Arbitrary Code Execution

Network device manufacturer Netgear has just patched a high-severity vulnerability impacting multiple WiFi router models. The flaw, tracked as CVE-2022-48196, is described as a pre-authentication buffer overflow security vulnerability, which, if exploited, could allow threat actors to carry out a number of malicious activities. These activities include stealing sensitive information, creating Denial-of-Service (DoS) conditions, as well as downloading malware and executing arbitrary code. In past attacks, threat actors have utilized this type of vulnerability as an initial access vector by which they pivot to other parts of the network. Currently, there is very little technical information regarding the vulnerability, and Netgear is temporarily withholding the details to allow as many of their users as possible to update their vulnerable devices to the latest secure firmware. Netgear stated that this is a very low-complexity attack, meaning that unsophisticated attackers may be able to successfully exploit a device. CTIX analysts urge Netgear users with any of the vulnerable devices listed in Netgear’s advisory to patch their device immediately.

For more cybersecurity news, visit the National Law Review.

Copyright © 2023 Ankura Consulting Group, LLC. All rights reserved.

FDA Releases Draft Guidance for Manufacturers on Dissemination of Patient Data from Medical Devices

On June 9, 2016, the US Food and Drug Administration (FDA) published draft guidance outlining considerations for the “appropriate and responsible” dissemination of individualized data from medical devices from device manufacturers to patients.

In the draft guidance, FDA clarifies that medical device manufacturers may share “patient-specific information” from legally marketed medical devices with patients at the patients’ request without additional premarket review by the agency, provided such dissemination falls within the lawful scope for which the manufacturer may market the device. For purposes of the draft guidance, “patient-specific information” is any information that is unique to an individual patient or unique to that patient’s treatment or diagnosis that, consistent with the intended use of the device, may be recorded, stored, processed, retrieved and/or derived from that device. Examples of patient-specific information include recorded patient data, device usage/output statistics, provider inputs, alarms and/or records of device malfunctions. Patient-specific information does not, however, include any interpretation of such data aside from interpretations normally reported by the device to the patient or the patient’s healthcare provider.
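A hypothetical sketch of how a manufacturer might filter a device record down to the "patient-specific information" categories the draft guidance lists, while withholding non-routine interpretations. All field names here are invented for illustration; they are not terms from the guidance itself.

```python
# Categories the draft guidance treats as shareable patient-specific
# information -- mapped to invented field names for this sketch.
SHAREABLE = {"recorded_data", "usage_stats", "provider_inputs",
             "alarms", "malfunction_log"}

def shareable_subset(device_record: dict) -> dict:
    """Return only the fields a manufacturer could share on patient request."""
    return {k: v for k, v in device_record.items() if k in SHAREABLE}

record = {"recorded_data": [72, 74], "usage_stats": {"hours": 5},
          "proprietary_analysis": "withheld interpretation"}
shared = shareable_subset(record)
```

The withheld field stands in for the kind of additional interpretation the guidance excludes from "patient-specific information."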

When sharing patient-specific information with patients, FDA recommends that manufacturers consider the following factors to ensure that such information is usable by patients and to avoid the disclosure of confusing or unclear information:

  • Content of information provided.  The information provided to patients should be comprehensive and up-to-date, and manufacturers should take measures to ensure that such information is easily understood and useful to the patient. Depending on the type and scope of information being shared, the manufacturer should provide supplementary instructions, materials or references to help patients understand the data. In deciding what measures may be necessary, the manufacturer should be sure to consider whether any characteristics of the intended recipient audience (e.g., mental capacity) may affect the interpretability of the information.

  • Context in which information should be understood.  Manufacturers should provide the information in context to avoid situations where the information may be misinterpreted, leading to invalid or inappropriate conclusions.

  • Necessity of access to follow-up information.  Manufacturers should consider what, if any, information they should include about whom to contact for follow-up information.  At minimum, manufacturers should advise patients to contact their health care providers with any questions about their data. Manufacturers should also consider providing their own contact information to facilitate response to patient questions about the device.

The draft guidance is the latest in a line of documents in which FDA has attempted to clarify its expectations for—and in many cases, allay the concerns of—developers of mobile health products. Though short on specifics, the guidance should be helpful to developers insofar as they have questions regarding the extent to which they can disseminate medical device data to patients. Notably, however, the FDA does not address how manufacturers should proceed with respect to the dissemination of many patient-specific analyses, likely because the agency intends to address such issues in its long-awaited guidance on clinical decision support software.

© 2016 McDermott Will & Emery