The ABA Speaks on AI

Aug 19, 2019

By Jennifer Thompson



Earlier this week, the American Bar Association (“ABA”) House of Delegates, the body charged with developing policy for the ABA, approved Resolution 112, which urges lawyers and courts to reflect on their use (or non-use) of artificial intelligence (“AI”) in the practice of law and to address the attendant ethical issues. The primary areas of focus identified in a report prepared by the ABA Science and Technology Law Section (the “Report”) include: the use of AI in legal practice, including ethical considerations; the problem of bias in AI; and the need for proper oversight and control over legal AI.

Use Cases and Ethical Considerations in the Practice of Law

The Report briefly identifies the various current uses of AI in legal practice. Available technologies can help litigators (electronic discovery, predictive outcome analysis, and legal research); corporate practitioners (due diligence and contract management); and even those in law enforcement and compliance roles (detection of wrongdoing or deception).

Regardless of the nature of their practice, all attorneys must be mindful of their ethical obligations when deciding whether, and when, to use AI.

Duty of Competence: Under ABA Model Rule 1.1, attorneys must be adequately informed about the current technologies available to them in their practice. Although the use of AI is by no means a required standard of care in today’s legal practice, attorneys should, at a minimum, know what AI is available in their particular practice area and should evaluate whether it can help them provide more efficient and effective representation to their clients.

Duty to Communicate: Pursuant to ABA Model Rule 1.4, attorneys should reasonably consult with their clients about the means by which the attorney will pursue the client’s objectives. With respect to AI, then, attorneys should discuss possible AI uses with their clients and, where appropriate, obtain the client’s informed consent before using the technology. Likewise, if an attorney chooses not to use available AI in a particular case, that decision should also be communicated to and discussed with the client.

Duty to Charge Reasonable Fees: An ancillary consideration underlying the duties of competence and communication is the attorney’s obligation to keep fees reasonable under Model Rule 1.5. If the use or non-use of AI would drastically affect the attorney’s fee structure, that should be one consideration in the overall decision of which form of AI to use, or whether to employ AI at all.

Duty of Confidentiality: The use of AI technologies will almost always require an attorney to engage a third-party vendor, so there is a high probability that certain client information will be shared with that vendor. To meet the obligation to maintain the confidentiality of client information under ABA Model Rule 1.6, the attorney should “take appropriate steps to ensure that their clients’ information … is safeguarded.” The Report offers a variety of questions, discussed below, that attorneys can ask a vendor to confirm that using the AI in question will not compromise the attorney’s confidentiality obligations. This inquiry into vendor practices also supports the attorney’s duty of competence by ensuring the attorney is well educated about available technologies.

Duty to Supervise: ABA Model Rules 5.1 and 5.3 require attorneys to supervise the lawyers and nonlawyers who contribute to the representation of a client. This duty extends to the use of AI: attorneys must understand the AI they employ well enough to ensure that it produces accurate and reliable work product. The attorney should also have a sufficient understanding of how the AI itself functions to be confident that his or her use of it complies with the legal and ethical rules applicable to the attorney, such as maintaining the confidentiality and security of client information.

The Problem of Bias in AI

The use of AI carries certain risks, not the least of which is bias, and recognizing and combating bias is no small task. AI technologies depend on developers and trainers to improve over time, but if the developers or trainers are themselves biased, or are otherwise able to manipulate the AI, the technology’s effectiveness suffers and its output may be prejudicial in operation. The Report suggests that attorneys avoid relying on “black box” AI technologies, which cannot explain how an output was reached from a given input. Rather, the Report suggests, as do most commentators on AI generally, that users opt for “explainable AI” technologies. Explainable AI is more transparent in that it can provide the reasoning for how it used the input to reach its decision.

Ensuring Oversight and Control When Using AI

The Report provides a variety of questions attorneys should ask of any AI vendor before using its technology. The suggested questions are designed to educate the attorney about how the AI works, helping the attorney satisfy the duty of competence. The questions also seek to identify potential sources of bias; while bias is likely unavoidable, an attorney who can identify it can account for it in the output and use other controls to produce a more reliable result. By asking appropriate questions, the attorney can also determine whether the AI will actually benefit the client by furthering the client’s objectives while complying with the attorney’s ethical obligations. Lastly, appropriate inquiry into how the AI operates confirms that the vendor has implemented adequate recordkeeping and controls, which in turn supports the reliability of the AI’s output.

Conclusion

At the end of the day, no technology can completely replace an attorney’s training, but some technologies can help an attorney save time and thereby provide more effective and efficient representation. Resolution 112 puts the responsibility on attorneys to understand the AI they use and to ensure that it meets their legal and ethical requirements. Only with that understanding can an attorney reasonably determine when and how to employ AI to the client’s utmost benefit.
