American Bar Association Issues Formal Opinion on Use of Generative AI Tools

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512 titled “Generative Artificial Intelligence Tools.”

The opinion addresses the ethical considerations lawyers are required to consider when using generative AI (GenAI) tools in the practice of law.

The opinion sets forth the ethical rules to consider, including the duties of competence, confidentiality, client communication, raising only meritorious claims, candor toward the tribunal, supervisory responsibilities of others, and setting of fees.

Competence

The opinion reiterates previous ABA opinions that lawyers are required to have a reasonable understanding of the capabilities and limitations of specific technologies used, including remaining “vigilant” about the benefits and risks of the use of technology, including GenAI tools. It specifically mentions that attorneys must be aware of the risk of inaccurate output or hallucinations of GenAI tools and that independent verification is necessary when using GenAI tools. According to the opinion, users must evaluate the tool being used, analyze the output, not solely rely on the tool’s conclusions, and cannot replace their judgment with that of the tool.

Confidentiality

The opinion reminds lawyers that they are ethically required to make reasonable efforts to prevent inadvertent or unauthorized access or disclosure of client information or their representation of a client. It suggests that, before inputting data into a GenAI tool, a lawyer must evaluate not only the risk of unauthorized disclosure outside the firm, but also possible internal unauthorized disclosure in violation of an ethical wall or access controls. The opinion stressed that if client information is uploaded to a GenAI tool within the firm, the client data may be disclosed to and used by other lawyers in the firm, without the client’s consent, to benefit other clients. The client data input into the GenAI tool may be used for self-learning or teaching an algorithm that then discloses the client data without the client’s consent.

The opinion suggests that before submitting client data to a GenAI tool, lawyers must review the tool’s privacy policy, terms of use, and all contractual terms to determine how the GenAI tool will collect and use the data in the context of the ethical duty of confidentiality with clients.

Further, the opinion suggests that if lawyers intend to use GenAI tools to provide legal services to clients, lawyers are required to obtain informed client consent before using the tool. The lawyer is required to inform the client of the use of the GenAI tool, the risk of use of the tool and then obtain the client’s informed consent prior to use. Importantly, the opinion states that “general, boiler-plate provisions [in an] engagement letter” are not sufficient” to meet this requirement.

Communication

With regard to lawyers’ duty to effectively communicate information that is in the best interest of their client, the opinion notes that—depending on the circumstances—it may be in the best interest of the client to disclose the use of GenAI tools, particularly if the use will affect the fee charged to the client, or the output of the GenAI tool will influence a significant decision in the representation of the client. This communication can be included in the engagement letter, though it may be appropriate to communicate directly with the client before including it in the engagement letter.

Meritorious Claims + Candor Toward Tribunal

Lawyers are officers of the court and have an ethical obligation to put forth meritorious claims and to be candid with the tribunal before which such claims are presented. In the context of the use of GenAI tools, as stated above, there is a risk that without appropriate evaluation and supervision (including the use of independent professional judgment), the output of a GenAI tool can sometimes be erroneous or considered a “hallucination.” Therefore, to reiterate the ethical duty of competence, lawyers are advised to independently evaluate any output provided by a GenAI tool.

In addition, some courts require that attorneys disclose whether GenAI tools have been used in court filings. It is important to research and follow local court rules and practices regarding disclosure of the use of GenAI tools before submitting filings.

Supervisory Responsibilities

Consistent with other ABA Opinions relevant to the use of technology, the opinion stresses that managerial responsibilities include providing clear policies to lawyers, non-lawyers, and staff about the use of GenAI in the practice of law. I think this is one of the most important messages of the opinion. Firms and law practices are required to develop and implement a GenAI governance program, evaluate the risk and benefit of the use of a GenAI tool, educate all individuals in the firm on the policies and guardrails put in place to use such tools, and supervise their use. This is a clear message that lawyers and law firms need to evaluate the use of GenAI tools and start working on developing and implementing their own AI governance program for all internal users.

Fees

The key takeaway of the fees section of Opinion 512 is that a lawyer can’t bill a client to learn how to use a GenAI tool. Consistent with other opinions relating to fees, only extraordinary costs associated with the use of GenAI tools are permitted to be billed to the client, with the client’s knowledge and consent. In addition, the opinion points out that any efficiencies gained by the use of GenAI tools, with the client’s consent, should benefit the client through reduced fees.

Conclusion

Although consistent with other ABA opinions related to the use of technology, an understanding of ABA Opinion 512 is important as GenAI tools become more ubiquitous. It is clear that there will be additional opinions related to the use of GenAI tools from the ABA as well as state bar associations and that it is a topic of interest in the context of adherence with ethical obligations. A clear message from Opinion 512 is that now is a good time to consider developing an AI governance program.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”).The Profile identifies twelve risks posed by Generative AI (“GAI”) including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those with the potential to discover or enable the lowering of barriers for offensive capabilities, and (2) those that can expand the overall attack surface by exploiting vulnerabilities as novel attacks.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As recently noted by Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, the Profile should be considered to mitigate GAI risks. From pages 11-52, the Profile examines four hundred ways to use the Profile for GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AM RMF and now the Profile provide useful and early guidance on how to manage GAI Risks. The Profile is open for public comment until June 2, 2024.