The Intersection of Artificial Intelligence and the Model Rules of Professional Conduct

Feb 5, 2019

By Linda Henry


Artificial intelligence is transforming the legal profession, and attorneys are increasingly using AI-powered software to assist with a wide range of tasks, from due diligence review and issue spotting during contract negotiations to predicting case outcomes. The use of disruptive technology such as AI raises a variety of ethical issues, and lawyers remain subject to the same rules of professional conduct even when using tools such as AI. Although each state has adopted its own code of professional ethics, most states base their codes on the ABA Model Rules of Professional Conduct. Some of the Model Rules that may apply are summarized below:

  • Rule 1.1: Competence. Rule 1.1 requires that lawyers provide competent representation, which requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation. In addition, a comment to Rule 1.1 provides that competence includes keeping abreast of changes in the practice of law, including the benefits and risks associated with relevant technology.

    Considering the speed at which AI is disrupting the legal profession and changing how lawyers provide legal services, attorneys should stay current on the benefits and risks of using AI in their practice. An attorney’s duty to provide competent representation includes making informed decisions as to whether AI is an appropriate tool for a given task and whether a particular program actually performs as marketed.

  • Rule 1.4: Duty to Communicate. Rule 1.4 requires that a lawyer reasonably consult with the client regarding the means by which the lawyer accomplishes the client’s objectives. Consequently, lawyers should consider whether to inform the client that AI is being used in providing legal services. In addition, there may be circumstances in which a lawyer has a duty to disclose that the lawyer has elected not to use AI when its use might benefit the client.
  • Rule 1.5: Fees. Rule 1.5 prohibits a lawyer from charging fees or expenses that are not reasonable. As with other technological tools (e.g., subscriptions to legal research platforms), the Model Rules do not prohibit passing through out-of-pocket costs incurred in connection with a lawyer’s use of technology, and a comment to Rule 1.5 provides that attorneys may charge an amount for services performed in-house that reasonably reflects the costs incurred by the lawyer. Alternatively, a lawyer could secure a client’s consent before marking up such costs. ABA Formal Opinion 93-379 (Billing for Professional Fees, Disbursements and Other Expenses) offers additional guidance, stating that “Any reasonable calculation of direct costs as well as any reasonable allocation of related overhead should pass ethical muster. On the other hand, in the absence of an agreement to the contrary, it is impermissible for a lawyer to create an additional source of profit for the law firm beyond that which is contained in the provision of professional services themselves.”
    Attorneys may also want to consider whether fees could be deemed unreasonable if an attorney fails to use AI in certain circumstances. A recent case in the Ontario Superior Court of Justice offers insight into how courts in the United States may begin to view AI technology as a necessity in certain circumstances. In Cass v. 1410088 Ontario Inc. (2018 ONSC 6959), a judge reduced an award of attorneys’ fees in part because the preparation time billed could have been significantly reduced had AI been used for certain aspects of the case. Although Cass is a Canadian decision, it would not be surprising to see similar findings in U.S. jurisdictions in the not too distant future.
  • Rule 1.6: Confidentiality of Information. Rule 1.6 includes an obligation to make reasonable efforts to prevent the unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. Consequently, if an attorney gives a third party (e.g., a technology vendor) access to confidential client information, the attorney has an obligation to understand the vendor’s security practices and to determine that those practices are reasonable.
  • Rule 5.1 and Rule 5.3: Responsibilities of a Partner or Supervisory Lawyer and Responsibilities Regarding Nonlawyer Assistance. Rules 5.1 and 5.3 address an attorney’s obligation to supervise lawyers and nonlawyers to ensure that their conduct complies with the professional obligations of a lawyer. A comment to Rule 5.3 cites technology vendors as examples of nonlawyers who may assist a lawyer and explains that when using such third-party services, lawyers must make “reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations.” Although the comment does not specify what constitutes reasonable efforts, attorneys should undertake sufficient due diligence to understand a product’s capabilities and limitations, and to determine whether use of the technology will result in non-compliance with the attorney’s obligations (e.g., confidentiality).
  • Rule 5.5: Unauthorized Practice of Law. The Model Rules do not define the “practice of law” or provide definitive guidelines as to when the use of technology may constitute the unauthorized practice of law (UPL). Case law does not provide much clarity either, as courts have been inconsistent in how UPL applies to software. Despite the lack of clear guidance, however, an attorney who exercises independent judgment, supervises the use of an AI tool and confirms that the final work product is accurate should face little risk of engaging in UPL.

    Rule 5.5’s prohibition of the unauthorized practice of law also raises the question of whether tasks performed solely by a machine can constitute UPL. In 2015, the Second Circuit distinguished between tasks performed by machines and tasks performed by lawyers (Lola v. Skadden, Arps, Slate, Meagher & Flom LLP, No. 14-3845 (2d Cir. 2015)), finding that tasks that could otherwise be performed entirely by a machine could not be said to fall within the practice of law. Lola thus raises the possibility that tasks traditionally considered the practice of law may, once they can be performed entirely by machines, fall outside its scope. The broader implications of Lola for UPL claims remain unclear; if machines cannot engage in the practice of law, however, courts may also find that software cannot be responsible for UPL.
