The ABA Speaks on AI
Earlier this week, the American Bar Association (“ABA”) House of Delegates, charged with developing policy for the ABA, approved Resolution 112, which urges lawyers and courts to reflect on their use (or non-use) of artificial intelligence (“AI”) in the practice of law and to address the attendant ethical issues. The primary areas of focus identified in a report prepared by the ABA Science and Technology Law Section (the “Report”) include: the use of AI in legal practice, including ethical considerations; the problem of bias in AI; and ensuring proper oversight and control over legal AI.
Use Cases and Ethical Considerations in the Practice of Law
The Report briefly identifies the various current uses of AI in legal practice. Available technologies can help litigators (electronic discovery, predictive outcome analysis, and legal research); corporate practitioners (due diligence and contract management); and even law enforcement and compliance professionals (detection of wrongdoing or deception).
Regardless of the nature of one’s practice, all attorneys need to be mindful of their ethical obligations when considering when (or whether) to utilize AI in their practice.
Duty of Competence: According to ABA Model Rule 1.1, attorneys must be adequately informed about current technologies available to them in their practice. Although the use of AI is by no means a required standard of care in today’s legal practice, at a minimum, attorneys should know what AI is available for use in their particular practice area, and should evaluate whether the available AI can help them provide more efficient and effective representation to their clients.
Duty to Communicate: Pursuant to ABA Model Rule 1.4, attorneys should provide reasonable consultation to their clients about the means the attorney will use to achieve the client’s objectives. Therefore, with respect to the use of AI, attorneys should communicate possible AI uses to their clients and obtain their clients’ informed consent to use AI technologies where appropriate. Likewise, if an attorney chooses not to use available AI in a particular case, that decision should also be communicated to and discussed with the client.
Duty to Provide Reasonable Fees: An ancillary consideration underlying the duties of competence and communication is that an attorney, when deciding if and when to use AI, should consider the attorney’s obligation to keep fees reasonable under Model Rule 1.5. If AI use or nonuse would drastically affect the attorney’s fee structure, that should be one consideration in the overall decision of what form of AI to use, or whether to employ AI at all.
Duty of Confidentiality: The use of AI technologies will almost always require an attorney to engage a third-party vendor, so there is a high probability that certain client information will be “shared” with that vendor. To meet the obligation of maintaining the confidentiality of client information under ABA Model Rule 1.6, the attorney should “take appropriate steps to ensure that their clients’ information … is safeguarded.” The Report offers a variety of questions, discussed below, that attorneys can ask of a vendor to ensure that the attorney’s confidentiality obligations can be met when using the AI in question. This inquiry into vendor practices also supports the attorney’s duty of competence by keeping the attorney well educated about available technologies.
Duty to Supervise: ABA Model Rules 5.1 and 5.3 require attorneys to supervise the lawyers and nonlawyers who contribute to their legal representation of the client. This duty extends to the use of AI and means that attorneys need to understand the AI they employ well enough to ensure that it is producing accurate and reliable work product. It also means the attorney should have a sufficient understanding of how the AI itself functions, so the attorney can be confident that his or her use of the AI complies with the legal and ethical rules applicable to the attorney, such as maintaining the confidentiality and security of client information.
The Problem of Bias in AI
The use of AI carries certain risks, not least of which is recognizing and combating bias. AI technologies depend on developers and trainers to improve over time. But if the developers or trainers are themselves biased, or are able to otherwise manipulate the AI, the AI’s effectiveness is adversely affected and its operation may be prejudicial. The Report suggests that attorneys avoid relying on “black box” AI technologies, which do not explain how an output was reached from a given input. Rather, the Report suggests, as do most commentators on AI generally, that users opt for “explainable AI” technologies. Explainable AI is more transparent in that it can provide the reasoning for how the input was used to reach the decision.
Ensuring Oversight and Control When Using AI
The Report provides a variety of questions attorneys should ask of any AI vendor before using the AI technology. The suggested questions are designed to educate the attorney about how the AI technology works, thus ensuring the attorney is satisfying the duty of competence. The questions also seek to identify potential sources of bias. While bias is likely unavoidable, if it can be identified, the attorney can account for it in the output and use other controls to ensure a more reliable result. By asking appropriate questions about the AI, the attorney will also determine whether the AI can actually benefit the client by furthering the client’s objectives while complying with ethical obligations. Lastly, appropriate inquiry into how the AI operates ensures that the AI vendor has implemented adequate recordkeeping and controls, and that the AI is therefore reliable.
At the end of the day, no technology can completely replace an attorney’s training. But some technologies can help an attorney save time and thereby provide more effective and efficient representation. Resolution 112 places the responsibility on attorneys to understand the AI they use and to ensure it meets their legal and ethical obligations. It is only with that understanding that an attorney can reasonably determine when and how to employ AI to the utmost benefit of the client.