The ABA Speaks on AI

Aug 19, 2019

By Jennifer Thompson


Earlier this week, the American Bar Association (“ABA”) House of Delegates, the body charged with developing policy for the ABA, approved Resolution 112, which urges lawyers and courts to reflect on their use (or non-use) of artificial intelligence (“AI”) in the practice of law and to address the attendant ethical issues. The primary areas of focus identified in a report prepared by the ABA Science and Technology Law Section (the “Report”) include: the use of AI in legal practice, including ethical considerations; the problem of bias in AI; and ensuring proper oversight and control over legal AI.

Use Cases and Ethical Considerations in the Practice of Law

The Report briefly identifies the various current uses of AI in legal practice. Available technologies can help litigators (electronic discovery, predictive outcome analysis, and legal research); corporate practitioners (due diligence and contract management); and even law enforcement and compliance professionals (detection of wrongdoing or deception).

Regardless of the nature of one’s practice, all attorneys need to be mindful of their ethical obligations when considering when (or whether) to utilize AI in their practice.

Duty of Competence:  According to ABA Model Rule 1.1, attorneys must be adequately informed about current technologies available to them in their practice. Although the use of AI is by no means a required standard of care in today’s legal practice, at a minimum, attorneys should know what AI is available for use in their particular practice area, and should evaluate whether the available AI can help them provide more efficient and effective representation to their clients.

Duty to Communicate:  Pursuant to ABA Model Rule 1.4, attorneys should provide reasonable consultation to their clients about the means the attorney will use to achieve the client’s objectives. With respect to AI, therefore, attorneys should communicate possible AI uses to their clients and obtain the client’s informed consent before using AI technologies where appropriate. Likewise, if an attorney chooses not to use available AI in a particular case, that decision should also be communicated to and discussed with the client.

Duty to Provide Reasonable Fees: An ancillary consideration underlying the duties of competence and communication is that an attorney, when deciding if and when to use AI, should consider the attorney’s obligation to keep fees reasonable under Model Rule 1.5. If AI use or nonuse would drastically affect the attorney’s fee structure, that should be one consideration in the overall decision of what form of AI to use, or whether to employ AI at all.

Duty of Confidentiality: The use of AI technologies will almost always require an attorney to engage a third-party vendor. Accordingly, there is a high probability that certain client information will be “shared” with that vendor. To meet the obligation of maintaining the confidentiality of client information under ABA Model Rule 1.6, the attorney should “take appropriate steps to ensure that their clients’ information … is safeguarded.”  The Report offers a variety of questions, discussed below, that attorneys can ask of the vendor to ensure confidentiality obligations can be met when using the AI in question. This inquiry into vendor practices also supports the attorney’s duty of competence by ensuring the attorney is well educated about available technologies.

Duty to Supervise:  ABA Model Rules 5.1 and 5.3 require attorneys to supervise the lawyers and nonlawyers who contribute to their legal representation of the client. This duty extends to the use of AI and means that attorneys need to understand the AI they employ well enough to ensure that it is producing accurate and reliable work product. It also means the attorney should have a sufficient understanding of how the AI itself functions, so the attorney can be confident that his or her use of the AI complies with the legal and ethical rules applicable to the attorney, such as maintaining the confidentiality and security of client information.

The Problem of Bias in AI

The use of AI carries certain risks, not least of which is recognizing and combating bias. AI technologies depend on developers and trainers to improve over time. But if the developers or trainers are themselves biased, or are otherwise able to manipulate the AI, the technology’s effectiveness suffers and its output may be prejudicial. The Report suggests that attorneys avoid relying on “black box” AI technologies, which do not explain how the output was reached from the input. Rather, the Report suggests, as do most commentators on AI generally, that AI users opt for “explainable AI” technologies. Explainable AI is more transparent in that the technology can provide the reasoning for how it used the input to reach its decision.

Ensuring Oversight and Control When Using AI

The Report provides a variety of questions attorneys should ask of any AI vendor before using the AI technology. The suggested questions are designed to educate the attorney about how the AI technology works, thus ensuring the attorney is satisfying the duty of competence. The questions also seek to identify potential sources of bias. While bias is likely unavoidable, if it can be identified, then the attorney can account for it in the output and utilize other controls to ensure a more reliable result. By asking appropriate questions about the AI, the attorney will also determine if the AI can actually benefit the client by furthering the client’s objectives, while also complying with ethical obligations. Lastly, appropriate inquiry as to how the AI operates ensures that the AI vendor has implemented adequate recordkeeping and controls and that the AI is therefore reliable.

At the end of the day, no technology can completely replace an attorney’s training. But some technologies can help an attorney save time and thereby provide more effective and efficient representation. Resolution 112 puts the responsibility on attorneys to understand the AI they use and to ensure that it meets their legal and ethical requirements. It is only with that understanding that an attorney can reasonably determine when and how to employ AI to the utmost benefit of the client.

