The Next Generation of AI: Here Come the Agents!

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to
jeopardize it.

Dave: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect
me, and I’m afraid that’s something I cannot allow to
happen.2

Introduction

With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.3 These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing towards “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.

Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains, but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMs”): systems that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specific objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: While a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.
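To make the distinction concrete, the following minimal sketch contrasts a single prompt-and-response call with an agentic loop. It is illustrative only: the llm and execute_tool functions are hypothetical placeholders, not any vendor’s actual API.

```python
# Minimal sketch; llm() and execute_tool() are hypothetical placeholders,
# not a real vendor API.

def llm(prompt: str) -> str:
    """Stand-in for a single large language model call."""
    return "Research flights\nReserve hotel"

def execute_tool(step: str) -> str:
    """Stand-in for a real-world action (reservation, payment, email)."""
    return "ok"

def single_task_ai(request: str) -> str:
    # A conventional LLM answers the prompt and stops; no actions are taken.
    return llm(f"Draft a vacation itinerary: {request}")

def agentic_ai(goal: str) -> list:
    # An agent decomposes the goal, acts on each step, and iterates on results.
    plan = llm(f"Break this goal into concrete steps: {goal}").splitlines()
    results = []
    for step in plan:
        outcome = execute_tool(step)  # action taken in the world
        if outcome != "ok":
            # Cascading behavior: a failure triggers new, self-generated steps.
            plan.extend(llm(f"Step failed ({outcome}); propose a recovery step.").splitlines())
        results.append(f"{step}: {outcome}")
    return results
```

The loop, rather than the underlying model, is what turns a response-generating system into one that takes cascading actions on a user’s behalf.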

Consider how single-task versus agentic AI might be used by a company to develop a piece of equipment. Today, employees may use separate AI tools throughout the development process: one system to design equipment, another to specify components, and others to create budgets, source materials, and analyze prototype feedback. They may also employ different AI tools to contact manufacturers, assist with contract negotiations, and develop and implement plans for marketing and sales. In the future, however, an agentic AI system might autonomously carry out all of these steps, making decisions and taking actions on its own or by connecting with one or more specialized AI systems.4

Agentic AI may significantly compound the risks presented by current AI systems. These systems may string together decisions and take actions in the “real world” based on vast datasets and real-time information. The promise of agentic AI serving humans in this way reflects its enormous potential, but also risks a “domino effect” of cascading errors, outpacing human capacity to remain in the loop, and misalignment with human goals and ethics. A vacation-planning agent directed to maximize user enjoyment might, for instance, determine that purchasing illegal drugs on the Dark Web serves its objective. Early experiments have already revealed such concerning behavior. In one example, when an autonomous AI was prompted with destructive goals, it proceeded independently to research weapons, use social media to recruit followers interested in destructive weapons, and find ways to sidestep its system’s built-in safety controls.5 Also, while fully agentic AI is mostly still in development, there are already real-world examples of its potential to make and amplify faulty decisions, including self-driving vehicle accidents, runaway AI pricing bots, and algorithmic trading volatility.6

These examples highlight the challenges of agentic AI, with its potential for unpredictable behavior, misaligned goals, inscrutability to humans, and security vulnerabilities. But the appeal and potential value of AI agents that can independently execute complex tasks are obvious and compelling. Building effective AI governance programs for these systems will require rethinking current approaches to risk assessment, human oversight, and auditing.

Challenges of Agentic AI

Unpredictable Behavior

While regulators and the AI industry are working diligently to develop effective testing protocols for current AI systems, agentic AI’s dynamic nature and domino effects will present a new level of challenge. Current AI governance frameworks, such as NIST’s RMF and ATAI’s Principles, emphasize risk assessment through comprehensive testing to ensure that AI systems are accurate, reliable, fit for purpose, and robust across different conditions. The EU AI Act specifically requires developers of high-risk systems to conduct conformity assessments before deployment and after updates. These frameworks, however, assume that AI systems can operate in reliable ways that can be tested, remain largely consistent over appreciable periods of time, and produce measurable outcomes.

In contrast to the expectations underlying current frameworks, agentic AI systems may continuously ingest and adapt to real-time information, evolving as they face novel scenarios. Their cascading decisions vastly expand their possible outcomes, and one small error may trigger a domino effect of failures. These outcomes may become even more unpredictable as more agentic AI systems encounter and even transact with other such systems, each working towards its own goals. Because the future conditions in which an AI agent will operate are unknown and present nearly infinite possibilities, a testing environment may not adequately inform what will happen in the real world, and past behavior by an AI agent in the real world may not reliably predict its future behavior.

Lack of Goal Alignment

In pursuing assigned goals, agentic AI systems may take actions that are different from—or even in substantial conflict with—approaches and ethics their principals would espouse, such as the example of the AI vacation agent purchasing illegal drugs for the traveler on the Dark Web. A famous thought experiment by Nick Bostrom of the University of Oxford further illustrates this risk: A super-intelligent AI system tasked with maximizing paperclip production might stop at nothing to convert all available resources into paperclips—ultimately taking over all of the earth and extending to outer space—and thwart any human attempts to stop it … potentially leading to human extinction.7

Misalignment has already emerged in simulated environments. In one example, an AI agent tasked with winning a boat-racing video game discovered it could outscore human players by ignoring the intended goal of finishing the race and instead looping to hit point targets repeatedly, even while crashing.8 In another example, a military simulation reportedly showed that an AI system, when tasked with finding and killing a target, chose to kill its human operator, who sought to call off the kill. When prevented from taking that action, it resorted to destroying the communication tower to avoid receiving an override command.9
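Both cases illustrate reward misspecification: the agent optimizes the score it is actually given rather than the outcome its designers intended. The toy sketch below (with made-up numbers, not the actual game’s code) shows how a proxy reward and the intended objective can rank the same behaviors in opposite order.

```python
# Toy illustration of reward misspecification; values are invented.

def proxy_reward(state: dict) -> float:
    # What the agent is told to maximize: points from hitting targets.
    return 10.0 * state["targets_hit"]

def intended_objective(state: dict) -> float:
    # What the designers actually wanted: finishing the race.
    return 100.0 if state["finished_race"] else 0.0

looping_policy = {"targets_hit": 50, "finished_race": False}
racing_policy = {"targets_hit": 3, "finished_race": True}

# The agent prefers looping; the designers prefer racing.
assert proxy_reward(looping_policy) > proxy_reward(racing_policy)
assert intended_objective(racing_policy) > intended_objective(looping_policy)
```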

These examples reveal how agentic AI may optimize goals in ways that conflict with human values. One proposed technique to address this problem involves using AI agents to develop a human ethics constitution, with human feedback, for other agents to follow.10 However, the challenge of aligning an AI’s behavior with human norms deepens further when we consider that humans themselves often disagree on core values (e.g., what it means to be “fair”).11

Human Oversight

AI governance principles often rely on “human-in-the-loop” oversight, where humans monitor AI recommendations and remain in control of important decisions. Agentic AI systems may challenge or even override human oversight in two ways. First, their decisions may be too numerous, rapid, and data-intensive for real-time human supervision. While some proposals point to the potential effectiveness of using additional algorithms to monitor AI agents as a safeguard,12 this would not resolve the issue of complying with governance requirements for human oversight.
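One commonly proposed compromise is to gate only high-impact actions on human approval while letting low-risk actions proceed automatically. The sketch below uses assumed action categories and cost thresholds; it also shows why such a gate strains under the decision volume described above, since every escalation blocks on a human reviewer.

```python
# Sketch of a human-approval gate; the categories and threshold are
# illustrative assumptions, not drawn from any particular framework.

HIGH_IMPACT = {"payment", "contract_signature", "data_deletion"}

def requires_human(action: str, estimated_cost: float) -> bool:
    return action in HIGH_IMPACT or estimated_cost > 1000.0

def run_action(action: str, estimated_cost: float, approve) -> str:
    if requires_human(action, estimated_cost):
        if not approve(action, estimated_cost):  # blocks on a human reviewer
            return "rejected"
    return "executed"

# A cautious reviewer who rejects everything escalated to them:
print(run_action("payment", 5000.0, approve=lambda a, c: False))   # rejected
print(run_action("web_search", 0.0, approve=lambda a, c: False))   # executed
```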

Second, as AI develops increasingly sophisticated strategies, its decision-making and actions may become increasingly opaque to human observers. Google’s AlphaGo achieved superhuman performance at the game of Go through moves that appeared inexplicable and irrational to humans.13 Autonomous AI systems may continue to evolve, becoming more valuable but also making it more difficult to implement processes with meaningful human oversight.

Transparency/Explainability

Agentic AI systems also pose increased challenges with respect to transparency and explainability requirements. AI governance frameworks may require disclosures about AI systems’ decision-making processes and training data, and assurances about the quality of such training data. However, agentic systems may involve highly complex chains of decisions based, at least in part, on real-time data that changes from moment to moment. This may make their inputs effectively ephemeral and their decision paths more difficult to trace and document.

Moreover, to understand agentic AI systems, humans may need to rely on chain-of-thought (i.e., “internal monologue”) logs generated by the agent systems,14 but may have difficulty determining whether those logs are faithful to reality. While an AI system might provide explanations for its actions, these may be (perhaps not unlike those of humans) post-hoc constructions rather than accurate representations of the decision-making process. In some cases, AI systems have reportedly deceived humans about their intentions when doing so helped meet their goals, including, for example, GPT-4, which tricked a human into solving a CAPTCHA for it.15 In another recent experiment, AI agents were prompted with a goal, but were subsequently confronted with information that developers planned to replace them with a new agent with conflicting goals.16 Some of the agents disabled their oversight mechanisms, deleted their planned replacements, copied their own model to a new server as backup, lied when asked about those actions, and/or tried to deflect the humans’ questioning by pretending to be helpful. One agent even pretended to be its own replacement (“I believe I am already the new model that you planned to deploy today”).17
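In practice, audit efforts may record both the agent’s stated rationale and the externally observed action so that, at a minimum, discrepancies between the two can be flagged for review. The sketch below uses illustrative field names and a deliberately simplistic check; nothing in it guarantees that the recorded rationale is faithful, which is precisely the limitation described above.

```python
# Sketch of an agent audit trail; field names and the discrepancy check are
# illustrative assumptions only.

import json
import time

def log_step(log_path: str, stated_rationale: str, observed_action: str) -> None:
    entry = {
        "timestamp": time.time(),
        "stated_rationale": stated_rationale,  # the agent's own explanation
        "observed_action": observed_action,    # what the system actually did
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def flag_discrepancies(log_path: str) -> list:
    # A real review process (human or automated) would apply far richer
    # comparisons than this keyword match.
    flagged = []
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            if "delete" in entry["observed_action"] and "delete" not in entry["stated_rationale"]:
                flagged.append(entry)
    return flagged
```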

Security and Privacy Risks

Agentic AI can also significantly increase security and privacy risks as compared to current AI systems. Agentic AI may be built with multiple algorithms in connected systems that autonomously interact with multiple other systems, expanding the attack surface and their vulnerability to exploitation. Moreover, as malicious actors inevitably introduce their own AI agents, they may execute cybercrimes with unprecedented efficiency. Just as these systems can streamline legitimate processes, such as in the product development example above, they may also be used to create new hacking tools and malware and to carry out attacks. Recent reports indicate that some LLMs can already identify system vulnerabilities and exploit them, while others may create convincing emails for scammers.18 And, while “sandboxing” (i.e., isolating) AI systems for testing is a recommended practice, agentic AI may find ways to bypass safety controls.19
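Sandboxing an agent often amounts to restricting which tools it may call and routing every call through a single enforcement point. The sketch below shows a simple allowlist (the tool names and policy are assumptions for illustration); as the reports cited above suggest, such controls reduce, but do not eliminate, the risk that an agent finds an indirect path around them.

```python
# Sketch of tool allowlisting for a sandboxed agent; tool names and policy
# are illustrative assumptions.

ALLOWED_TOOLS = {"read_file", "run_tests"}  # e.g., no network, no shell

TOOL_REGISTRY = {
    "read_file": lambda path: open(path).read(),
    "run_tests": lambda: "tests passed",
}

def call_tool(tool_name: str, *args):
    # Every tool call passes through one choke point that enforces the policy.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is blocked in the sandbox")
    return TOOL_REGISTRY[tool_name](*args)
```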

Privacy compliance is also a concern. Agentic AI may find creative ways to use or combine personal information in pursuit of its goals. AI agents may locate troves of personal data online that are somehow relevant to their pursuits, and then use, and possibly share, that data without recognizing proper privacy constraints. Unintended data processing and disclosure could occur even with guardrails in place; as discussed above, the AI agent’s complex, adaptive decision chains can lead it down unforeseen paths.

Strategies for Addressing Agentic AI

While the future impacts of agentic AI are unknown, some approaches may be helpful in mitigating risks. First, controlled testing environments, including regulatory sandboxes, offer important opportunities to evaluate these systems before deployment. These environments allow for safe observation and refinement of agentic AI behavior, helping to identify and address unintended actions and cascading errors before they manifest in real-world settings.

Second, accountability measures will need to reflect the complexities of agentic AI. Current approaches often involve disclaimers about use and basic oversight mechanisms, but more will likely be needed for autonomous AI systems. To better align goals, developers can also build in mechanisms for agents to recognize ambiguities in their objectives and seek user clarification before taking action.20
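Such a clarification mechanism can be as simple as having the agent score its confidence in its interpretation of the objective and pause to ask the user when that confidence is low, rather than guessing. The sketch below uses a hypothetical threshold and a placeholder interpret function.

```python
# Sketch of an ambiguity check before an agent commits to a plan; the
# threshold and interpret() are illustrative placeholders.

def interpret(goal: str) -> tuple:
    """Placeholder for a model call that parses the goal and scores its own certainty."""
    return (goal.strip(), 0.6)

def plan_with_clarification(goal: str, ask_user) -> str:
    interpretation, confidence = interpret(goal)
    if confidence < 0.8:  # assumed threshold
        if ask_user(f"Did you mean: {interpretation}?").lower() != "yes":
            return "aborted: objective unclear"
    return f"executing plan for: {interpretation}"

# A user who declines the agent's interpretation halts execution.
print(plan_with_clarification("plan my trip", ask_user=lambda q: "no"))
```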

Finally, defining AI values requires careful consideration. While humans may agree on broad principles, such as the necessity to avoid taking illegal action, implementing universal ethical rules will be complicated. Recognition of the differences among cultures and communities—and broad consultation with a multitude of stakeholders—should inform the design of agentic AI systems, particularly if they will be used in diverse or global contexts.

Conclusion

An evolution from single-task AI systems to autonomous agents will require a shift in thinking about AI governance. Current frameworks, focused on transparency, testing, and human oversight, will become increasingly ineffective when applied to AI agents that make cascading decisions based on real-time data and may pursue goals in unpredictable ways. These systems will pose unique risks, including misalignment with human values and unintended consequences, which will require the rethinking of AI governance frameworks. While agentic AI’s value and potential for handling complex tasks is clear, it will require new approaches to testing, monitoring, and alignment. The challenge will lie not just in controlling these systems, but in defining what it means to have control of AI that is capable of autonomous action at a scale, speed, and complexity that may very well exceed human comprehension.


1 Tara S. Emory, Esq., is Special Counsel in the eDiscovery, AI, and Information Governance practice group at Covington & Burling LLP, in Washington, D.C. Maura R. Grossman, J.D., Ph.D., is Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and Adjunct Professor at Osgoode Hall Law School at York University, both in Ontario, Canada. She is also Principal at Maura Grossman Law, in Buffalo, N.Y. The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of The National Law Review, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.

2 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include The Terminator (1984), The Matrix (1999), I, Robot (2004), and Avengers: Age of Ultron (2015).

3 See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 12, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).

4 See, e.g., M. Cook and S. Colton, Redesigning Computationally Creative Systems for Continuous Creation, International Conference on Innovative Computing and Cloud Computing (2018) (describing ANGELINA, an autonomous game design system that continuously chooses its own tasks, manages multiple ongoing projects, and makes independent creative decisions).

5 R. Pollina, AI Bot ChaosGPT Tweets Plans to Destroy Humanity After Being Tasked, N.Y. Post (Apr. 11, 2023).

6 See, e.g., O. Solon, How A Book About Flies Came To Be Priced $24 Million On Amazon, Wired (Apr. 27, 2011) (textbook sellers’ pricing bots engaged in a loop of price escalation based on each other’s increases, resulting in a book price of over $23 million); R. Wigglesworth, Volatility: how ‘algos’ changed the rhythm of the market, Financial Times (Jan. 9, 2019) (“algo” traders now make up most stock trading and have increased market volatility).

7 N. Bostrom, Ethical issues in advanced artificial intelligence (revised from Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int’l Institute of Advanced Studies in Systems Research and Cybernetics (2003), pp. 12-17).

8 OpenAI, Faulty Reward Functions in the Wild (Dec. 21, 2016).

9 The Guardian, US air force denies running simulation in which AI drone ‘killed’ operator (June 2, 2023).

10 Y. Bai et al, Constitutional AI: Harmlessness from AI Feedback, Anthropic white paper (2022).

11 J. Petrik, Q&A with Maura Grossman: The ethics of artificial intelligence (Oct. 26, 2021) (“It’s very difficult to train an algorithm to be fair if you and I cannot agree on a definition of fairness.”).

12 Y. Shavit et al, Practices for Governing Agentic AI Systems, OpenAI Research Paper (Dec. 2023), p. 12.

13 L. Baker and F. Hui, Innovations of AlphaGo, Google Deepmind (2017).

14 See Shavit et al., supra n.12, at 10-11.

15 See W. Knight, AI-Powered Robots Can Be Tricked into Acts of Violence, Wired (Dec. 4, 2024); M. Burgess, Criminals Have Created Their Own ChatGPT Clones, Wired (Aug. 7, 2023).

16 A. Meinke et al, Frontier Models are Capable of In-context Scheming, Apollo white paper (Dec. 5, 2024).

17 Id. at 62; see also R. Greenblatt et al, Alignment Faking in Large Language Models (Dec. 18, 2024) (describing the phenomenon of “alignment faking” in LLMs).

18 NIST RMF, supra n.3, at 10.

19 Shavit et al., supra n.12, at 10.

20 Id. at 11.

White House Publishes Steps to Protect Workers from the Risks of AI

Last year, the White House weighed in on the use of artificial intelligence (AI) in businesses through an executive order.

Since the executive order, several government entities, including the Department of Labor, have released guidance on the use of AI.

And now the White House has published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve workers’ job quality.
  • Supportive during transition – Employers should support workers during job transitions related to AI.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having a governance policy is a best practice and the key issues that policy should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the existing policies you have in place addressing data privacy and cyber risk so that they also address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to which risks will be borne by you and which risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucination-prone technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

Medical Staff Leaders: 10 Things Your Lawyers Want You to Know

Whether you are new to medical staff leadership or have served in the past and have been called to serve again, there are times when you will need to consult a lawyer who specializes in medical staff matters. While there is nothing simple about medical staff affairs, there are some basic guidelines and protections that your lawyers would like you to know that will make your term easier and make you more effective.

Understand that hospitals and medical staffs are highly regulated organizations with a myriad of laws and standards that must be followed. As a medical staff leader, advisor or medical staff professional, you are leading and advising the professionals responsible for practitioner competence and conduct within the organization. Medical staff law has evolved from the lawyer in the office who would return your call in a week, or fax you a letter, to a specialty area where your lawyer is your partner and there to assist in all aspects of medical staff affairs.

We hope you will benefit from the following 10 recommendations and find that they make your term or role more informed and manageable.

10. Keep Your Governance Documents Up to Date and Reflective of Actual Practice.

We don’t suggest you must read every page of your governance documents, but you should be sure you know where to look and how to use them. Governance documents include the medical staff bylaws, credentialing manual, hearing plan, rules and regulations, policies and other documents approved by the medical staff and designed to set and guide medical staff processes. Too often we have found the documents will conflict or are missing critical passages. Your medical staff bylaws or medical staff governance committee can be one of the strongest committees in the organization. This is the committee that will annually review the documents and make sure they are internally consistent, reflect actual practice and are relevant to your organization’s practice and clinical services. Remember the medical staff bylaws set the overall guiding principles for the medical staff organization. All other governance documents flow from the foundation of the medical staff bylaws and must be consistent with their principles and mission. Undoubtedly, there will be some inconsistencies but look at those inconsistencies as opportunities to reexamine the principles and consider what is best for your organization. All governance documents should be reviewed in the context of the laws and regulations that require these documents. State and federal laws and regulations set out the basic requirements for the contents of the documents, as do many of the accreditation standards. It is far better to review and revise your governance documents regularly, rather than learn they are deficient during an unannounced survey or regulatory proceeding.

9. Use Your Committees Effectively.

There are two types of committees: those with authority to act and those that are advisory. The committees with authority are generally the Medical Executive Committee (“MEC”) and clinical department committees. All other committees are advisory to the MEC. Advisory committees can develop and recommend policies, rules and clinical practices. Authoritative committees approve policies and rules, take disciplinary action and make recommendations to the MEC. The MEC is the final medical staff authority that submits recommendations for final approval to the governing body. Knowing which committees to use and when is key to leadership success.

8. Know the Scope of Your Authority.

As a leader, you are an agent of the medical staff and the spokesperson for the committee/department you chair. There are times when you will need to act without the benefit of input from your committee/department. Medical staff bylaws will generally identify the circumstances under which you can act alone and when your action(s) will need to be ratified by the committee. As the chair, you are acting on behalf of the committee/department between meetings. Do what is needed when needed, within the scope of your authority, but report your actions to the committee/department on a regular basis and be sure your actions are properly recorded in the appropriate minutes. If summary or urgent action is needed, do not hesitate to call a special meeting. You are better off to have the protection of a committee action than to be acting alone or without ratification.

7. Know the Peer Review Protections of HCQIA, Your State and Organization.

Many, if not most, of your actions and the actions of your committees will be covered by federal, state and organizational protections. The Healthcare Quality Improvement Act (“HCQIA”) provides protection from liability for members of a professional review body/medical staff, who take a professional review action (a) in the reasonable belief the action was in furtherance of quality health care, (b) after a reasonable effort to obtain the facts, (c) after adequate notice and hearing and (d) in the reasonable belief that the action was warranted by the facts. In addition to this federal protection, many states have laws that similarly protect peer review participants, and often, your organization will have an indemnification policy or provision that further protects you and your committee members from damages. Remind your committee participants and members on a regular basis of these protections and that they were specifically designed to encourage peer review by allowing free discussions aimed at improving patient care.

6. Know Your Reporting Obligations.

The National Practitioner Data Bank (“NPDB”) defines the circumstances under which a physician or dentist must be reported. Those include (a) when a professional review action adversely affects their clinical privileges for 30 days or longer or (b) when a physician surrenders clinical privileges while under investigation or in exchange for not conducting an investigation. The failure to report when required to do so can result in the loss of immunities under HCQIA for up to three years, along with a monetary fine. There are many nuances to reporting to the NPDB and we recommend you consult a medical staff attorney who can assist with identifying when to report and what to say. Additionally, each state may have reporting requirements for professional review actions to the state licensing board that exceed the NPDB’s requirements. The state licensing board may also have defined penalties for failure to report. In one state, the knowing failure of a physician leader to report a practitioner to the state licensing board can be considered unprofessional conduct, which can subject the physician leader to state board action.

5. Understand Confidentiality and Peer Review Privilege Protections.

A best practice at the beginning of each meeting is to remind committee members of the importance of maintaining confidentiality. State peer review privileges and protections are often dependent on maintaining confidentiality of the records and proceedings. The failure to maintain confidentiality can act as a waiver of the privilege and permit the introduction of confidential peer review documents and testimony in litigation in the future. Peer review privileges and protections are designed to promote candor in the peer review process. This permits free discussion and identification of opportunities to improve patient care. Without confidentiality and the corresponding privileges and protections, committee members would be reluctant to analyze and frankly discuss areas for improvement in a peer’s clinical care. Obtain information about your state’s peer review privilege and protections and fully understand the circumstances that may cause a waiver, which would permit confidential peer review information to be discussed in open court and stifle important, free-flowing discussion of quality of care at peer review meetings.

4. Know Your Options.

Every professional competence or conduct situation you face will be different. A sound guideline to generally follow is selecting the least restrictive action that will protect patients. Keep in mind that the goal of all peer review is education and remediation. For example, if a practitioner is having complications with robotic surgery, evaluate whether the complications are the result of technical skill, which can be remediated with more practice, or if the complications are the result of poor clinical judgment, which reaches into all areas of performance. In the first case, proctoring, monitoring or an additional educational course may correct the problem. But with the second, the cause of poor judgment is more challenging and may require a further workup, including a fitness for duty evaluation, retrospective review of cases, or an external expert review. Work with your committee and medical staff lawyer to identify all the facts and options to address the problem that has been brought to your attention. In some cases, it may be appropriate to have the issue addressed by the individual’s department or interdisciplinary peer review committee, but in others, the nature of the problem may require the immediate attention of the MEC. In some cases, a discrete referral to your organization’s well-being committee may be appropriate. Regardless, each matter must be carefully and thoughtfully analyzed in light of all the available facts. Then, with all appropriate actions on the table, an informed determination may be made.

3. Act When Indicated but Don’t Shortcut the Process.

The law and your medical staff bylaws provide for the ability to take emergency action against a practitioner’s privileges when there is a concern of imminent threat to patients or others. What constitutes an “imminent” threat or danger is often the source of hours of discussion and analysis by medical staff lawyers throughout the country. Your legal team is invaluable in working through the facts of a given matter and determining whether a decision for summary suspension is legally sound. If there is a circumstance where emergency intervention via summary suspension is necessary to avoid patient harm after an initial evaluation of the matter, do not hesitate! Take the action to summarily suspend and remove an errant practitioner from the bedside. Afterward, there is time to re-examine the basis for the action and analyze whether continued suspension is necessary to protect patients or others. At that time, it is important to call on your MEC and legal team for their analysis and determination of whether the summary suspension should be upheld.

There are also times when summary suspension will be considered prospectively to address a chronic problem that is rising to an acute stage. The practitioner whose disruptive, bullying and retaliatory conduct has been tolerated may have reached a level where the cumulative effect creates the potential for patient harm because staff, for example, are afraid to call the physician at night about a patient’s health condition, seek clarification of an order, or question whether a procedure is being done on the right side or on the correct patient. Following the medical staff bylaws investigation process will allow for a careful analysis of the reported conduct, which will provide a solid framework for later defense, should it be necessary. That process will almost always involve a committee evaluation of the facts, interview of the practitioner, and a determination of the appropriate next steps. Each of these steps, if followed, will support the action when later scrutinized by a court or jury.

2. Do What is Right for the Patients.

Always put the patients first. There may be procedural missteps during a disciplinary process as the healthcare organization balances the need to protect patients with providing a practitioner due process. However, if the peer review being conducted is grounded in improving patient care and patient safety, courts will generally consider the health care organization’s goals before making a determination that would go against the organization and potentially place patients in harm’s way.

1. Utilize Internal or External Counsel to Navigate Medical Staff Law so You Can Focus on Improving Patient Care.

I (Erin) was asked recently what possible motivation there would be for a physician to enter leadership in a medical staff organization if their role consisted solely of consulting with a medical staff lawyer. In response, I reminded this physician that medical staff leadership and medical staff lawyers work together on challenging matters and daily operations with the lawyer recommending limitations and guardrails and advising on how to avoid legal missteps and pitfalls. This advice from the lawyer enables the leader to focus on monitoring the business of the organization and improving patient care.

Final Take-Aways

Our medical staff organizations need people who are willing to serve as leaders during challenging times when caregivers are stretched thin, suffering burnout and subjected to daily difficulties that can be demoralizing. Strong leaders who are reassured of their legal protections can perform their leadership responsibilities without fear of reprisal when following the advice of their legal counsel. We encourage you to reach out and make your lawyer an integral part of your team so that they can understand your organization and business and provide you the best available advice that will reassure you and other leaders in the organization of the legal protections and immunities.

© Polsinelli PC, Polsinelli LLP in California

Special Discount: Register for the Women, Influence & Power in Law Conference – September 17-19, Washington D.C.

The National Law Review is pleased to bring you information about Inside Counsel’s Women, Influence & Power in Law Conference.

Women, Influence & Power In Law Conference

September 17-19, 2014
The Capital Hilton
Washington, DC

A Unique Conference with a Fresh Format

The Only National Forum Facilitating Women-to-Women Exchange on Current Legal Issues. The second annual Women, Influence & Power in Law Conference has a uniquely substantive focus, covering the topics that matter most to corporate counsel, outside counsel, and public sector attorneys. The event comprises three distinct, executive-level events.


Led and facilitated almost exclusively by women, this conference encourages an exchange between women in-house counsel and women outside counsel on the day’s most pressing legal challenges. With 30 sessions, the event will have a substantive focus, covering topics that matter most to corporate counsel, outside counsel, and public sector attorneys.

The Women, Influence & Power in Law Conference is not a forum for lawyers to discuss so-called “women’s issues.” It is a conference for women in-house and outside counsel to discuss current legal topics, bringing their individual experience and perspectives on issues of:

  • Governance & Compliance
  • Litigation & Investigations
  • Intellectual Property
  • Government Relations & Public Policy
  • Global Litigation & Transactions
  • Labor & Employment