The Intersection of Artificial Intelligence and the Model Rules of Professional Conduct

By Linda Henry

Artificial intelligence is transforming the legal profession, and attorneys are increasingly using AI-powered software to assist with a wide range of tasks, from due diligence review and issue spotting during contract negotiations to predicting case outcomes. The use of disruptive technology such as AI raises a variety of ethical issues, and lawyers remain subject to the same rules of professional conduct even when using tools such as AI. Although each state has adopted its own code of professional ethics, most states have based their codes of professional conduct on the ABA Model Rules of Professional Conduct. Some of the Model Rules that may apply are summarized below:

  • Rule 1.1: Competence. Rule 1.1 requires that lawyers provide competent representation, which requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for representation. In addition, a comment to Rule 1.1 provides that competence includes keeping abreast of changes in the practice of law, including the benefits and risks associated with relevant technology.

    Considering the speed at which AI is disrupting the legal profession and changing how lawyers provide legal services, attorneys should stay current with the benefits and risks of using AI in their legal practice. An attorney’s duty to provide competent representation would include making informed decisions as to whether AI is an appropriate tool for its intended use in providing legal services and also whether the program actually performs as marketed.

  • Rule 1.4: Duty to Communicate. Rule 1.4 requires that a lawyer reasonably consult with the client regarding the means by which the lawyer accomplishes the client’s objectives. Consequently, lawyers should consider whether to inform the client about the use of AI in providing legal services. In addition, there may be circumstances in which a lawyer has a duty to disclose to a client that the lawyer has elected not to use AI if such use might be beneficial to the client.
  • Rule 1.5: Fees. Rule 1.5 prohibits a lawyer from charging fees or expenses that are not reasonable. As with other technological tools (e.g., subscriptions to legal research platforms), the Model Rules do not prohibit passing through out-of-pocket costs incurred in connection with a lawyer’s use of technology, and a comment to Rule 1.5 provides that attorneys may charge an amount for services performed in-house that reasonably reflects the costs incurred by the lawyer. Alternatively, a lawyer could secure the client’s consent before marking up such costs. ABA Ethics Formal Opinion 93-379 (Billing for Professional Fees, Disbursements and Other Expenses) offers additional guidance, stating that “Any reasonable calculation of direct costs as well as any reasonable allocation of related overhead should pass ethical muster. On the other hand, in the absence of an agreement to the contrary, it is impermissible for a lawyer to create an additional source of profit for the law firm beyond that which is contained in the provision of professional services themselves.”
    Attorneys may also want to consider whether fees may be deemed unreasonable if an attorney fails to use AI in certain circumstances. A recent case in the Ontario Superior Court of Justice may offer insight into how courts in the United States could begin to view AI technology as a necessity in certain circumstances. In Cass v. 1410088 Ontario Inc. (2018 ONSC 6959), a judge reduced the attorneys’ fees awarded in part because the preparation time billed could have been significantly reduced if AI had been used for certain aspects of the case. Although Cass is a Canadian case, it would not be surprising to see similar findings in U.S. jurisdictions in the not-too-distant future.
  • Rule 1.6: Confidentiality of Information. Rule 1.6 includes an obligation to use reasonable efforts to prevent the unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. Consequently, if an attorney provides a third party (e.g., a technology vendor) access to confidential client information, the attorney has an obligation to understand the vendor’s security practices and to determine that the vendor’s security policies are reasonable.
  • Rule 5.1 and Rule 5.3: Responsibilities of a Partner or Supervisory Lawyer and Responsibilities Regarding Nonlawyer Assistance. Rules 5.1 and 5.3 address an attorney’s obligation to supervise lawyers and nonlawyers to ensure their conduct complies with the professional obligations of a lawyer. A comment to Rule 5.3 cites technology vendors as examples of nonlawyers who may assist and explains that when using such third-party services, lawyers must use “reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer’s professional obligations.” Although the comment does not specify what constitutes reasonable efforts, attorneys should undertake sufficient due diligence in order to understand the product’s limitations and capabilities, and also to determine whether the use of such technology will result in non-compliance with an attorney’s obligations (e.g., confidentiality).
  • Rule 5.5: Unauthorized Practice of Law. The Model Rules do not define the “practice of law” or provide definitive guidelines as to when the use of technology may constitute the unauthorized practice of law (UPL). In addition, case law does not provide much clarity, since courts have not been consistent in how UPL is applied to software. Despite the lack of clear guidance, however, an attorney who adheres to her duty to exercise independent judgment, supervises the use of the AI tool and confirms that the final work product is accurate should largely avoid the risk of UPL.

    Rule 5.5’s prohibition of the unauthorized practice of law also raises the question of whether tasks performed solely by a machine can be considered UPL. In 2015, the Second Circuit distinguished between tasks performed by machines and tasks performed by lawyers (Lola v. Skadden, Arps, Slate, Meagher & Flom LLP, No. 14-3845 (2d Cir. 2015)), finding that tasks that could otherwise be performed entirely by a machine could not be said to fall under the practice of law. Consequently, Lola raises the possibility that machines can reclassify tasks that were traditionally considered the practice of law as now falling outside its scope. The broader implications of Lola for UPL claims are unclear; however, if machines cannot engage in the practice of law, courts may also find that software cannot be responsible for UPL.


Follow the Leader: Will Congressional and Corporate Push for Federal Privacy Regulations Leave Some Technology Giants in the Dust?

By Dawn Ingley

On October 24, 2018, Apple CEO Tim Cook, one of the keynote speakers at the International Conference of Data Protection and Privacy Commissioners Conference, threw down the gauntlet when he assured an audience of data protection professionals that Apple fully supports a “GDPR-like” federal data privacy law in the United States.  Just one week later, Senator Ron Wyden of Oregon introduced a discussion draft for a privacy law, the Consumer Data Protection Act, that would result in steep fines and possible incarceration for top executives of companies that violated the law.  Cook’s position and Wyden’s proposed bill stand in stark contrast to several of Apple’s technology industry competitors.  Indeed, those competitors are reportedly already seeking to unravel California’s recent data protection legislation.  In the wake of the congressional and corporate push for more restrictive privacy regulations in an innovative industry, will other giants on the technology landscape be left in the dust?

Tim Cook’s proclamation was of no surprise to close followers of Apple’s culture and approach to data protection; yet, Cook’s message at this conference was more direct than ever, citing his competitors’ efforts to create “platforms and algorithms” to “weaponize personal data.”  Cook’s argument for increased data protection regulation then took an apocalyptic turn, when he warned, “Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm…we shouldn’t sugarcoat the consequences.  This is surveillance.”

Cook’s proposed regulation included several of the key hallmarks of GDPR and its progeny:

  1. Minimization of personal data collection from technology users;
  2. Communication to technology users as to the “what and why”—what data is being collected and why;
  3. Users’ rights to obtain their data, as well as to correct and delete their data; and
  4. Security of user data.

Cook argued that not only were such rights fundamental to the user experience and adoption of emerging technologies, but that such rights also engendered trust between technology organizations and the consumers of such technologies.

Cook’s (and Apple’s) approach to increased data protection has garnered praise from those members of Congress who routinely champion consumer rights in this space, including Senator Wyden.  His discussion draft identified the federal government’s failures to protect data:

“(1)      Information about consumers’ activities, including their location information and the websites they visit is tracked, sold and monetized without their knowledge by many entities;

(2)        Corporations’ lax cybersecurity and poor oversight of commercial data-sharing partnerships has resulted in major data breaches and the misuse of Americans’ personal data;

(3)        Consumers have no effective way to control companies’ use and sharing of their data.”

As consumer protections of the nature contemplated in data protection laws and regulations would typically be the province of the Federal Trade Commission (FTC), Senator Wyden then went on to demonstrate how the FTC lacked the power and ammunition to effectively combat threats to consumer data privacy:

“(1)      The FTC cannot fine first-time corporate offenders.  Fines for subsequent violators of the laws are tiny and not a credible deterrent.

(2)        The FTC does not have the power to punish companies unless they lie to consumers about how much they protect their privacy or the companies’ harmful behavior costs consumers money.

(3)        The FTC does not have the power to set minimum cybersecurity standards for products that process consumer data, nor does any federal regulator.

(4)        The FTC does not have enough staff, especially skilled technology experts…”

Senator Wyden posited that Congress could empower the FTC to:

“(1)      Establish minimum privacy and cybersecurity standards.

(2)        Issue steep fines (up to 4% of annual revenue), on the first offense for companies and 10-20 year criminal penalties for senior executives.

(3)        Create a national Do Not Track system that lets consumers stop third-party companies from tracking them on the web by sharing data, selling data, or targeting advertisements based on their personal information.  It permits companies to charge consumers who want to use their products and services, but don’t want their information monetized.

(4)        Give consumers a way to review their personal information a company has about them, learn with whom it has been shared or sold, and to challenge inaccuracies in it.

(5)        Hire 175 more staff to police the largely unregulated market for private data.

(6)        Require companies to assess the algorithms that process consumer data to examine their impact on accuracy, fairness, bias, discrimination, privacy and security.”

Any combination of Cook’s proposal and Senator Wyden’s bill would change the landscape of privacy in the United States, especially among the technology giants of Silicon Valley.  The push for increased data protection at the federal level from technology companies (other than Apple) has largely been an effort to avoid and supersede laws passed in states such as California and Massachusetts.  In the absence of a superseding federal law, those states’ laws become the de facto law of the land for companies doing business nationally.  Yet, technology companies must walk a fine line if they are to avoid a publicity and consumer nightmare—if they succeed in their efforts to have a federal law implemented in place of state laws, consumer sentiment would dictate that the federal law be something akin to the proposals from Cook and Senator Wyden, and not merely lip service.  Otherwise, they could find themselves in the headlines for all the wrong reasons.

Good, Bad or Ugly? Implementation of Ethical Standards In the Age of AI

By Dawn Ingley

With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that their respective and myriad uses across platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams—Who are the members?  What standards are applied to creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) also seeks to temper the innovative potential of AI with the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also a diverse sampling from the related fields of computer science and engineering, privacy and data protection and civil liberties.  A sampling of members includes the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of Leon County, Florida Sheriff’s Office and prior President of the International Association of Chiefs of Police

Obviously, Axon’s goal was to establish a team that could evaluate use and implementation from all angles, ranging from law enforcement officials who employ such technologies, to those experts who help to create and shape legislation governing use of the same.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged these types of technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is the penchant for racial and gender bias—higher error rates for both females and African-Americans.  If Axon does, indeed, move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.

Core Values

In addition to its own commitment to diversity, DeepMind has articulated key principles that reflect its parent company’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.”
  • Collaboration and Inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risks of AI spans an even broader canvas than Axon’s.  In furtherance of its key principles, DeepMind seeks to answer key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  Seven pillars form the basis of this partnership, including the following:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed so as to be alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in a close collaboration between humans and the systems, themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry continues to lack more particulars in terms of how the guidelines will be put into practice.  It is likely that consumers will maintain a healthy skepticism until more particular guardrails are provided which offer compelling evidence of the good, rather than the bad and the ugly.


Yes, Lawyers Too! ABA Formal Opinion 483 and the Affirmative Duty to Inform Clients of Data Breaches

By Jennifer Thompson

Developments in the rules and regulations governing data breaches happen as quickly as you can click through the headlines on your favorite news media site.  Now, the American Bar Association (“ABA”) has gotten in on the action and is mandating that attorneys notify current clients of real or substantially likely data breaches where confidential client information is or may be compromised.

On October 17, 2018, the ABA issued Formal Opinion 483, entitled “Lawyers’ Obligations After an Electronic Data Breach or Cyberattack” (the “Opinion”).  The Opinion establishes an ethical obligation for attorneys to notify clients of a data breach or substantially likely breach, and to take other reasonable steps consistent with the Model Rules of Professional Conduct.  However, the Opinion noted that not all events will trigger an attorney’s ethical obligations.  In fact, the Opinion states that this obligation arises only in connection with:

“a data event where material client confidential information is misappropriated, destroyed or otherwise compromised, or where a lawyer’s ability to perform the legal services for which the lawyer is hired is significantly impaired by the episode”

The ABA noted that law firms, as keepers of highly sensitive information, are attractive targets for hackers.  As such, the ABA issued the Opinion as a follow-up to the previously issued Formal Opinion 477, discussed in my previous article, “Before You Hit “Send”: Ensuring Your Attorney-Client Emails Comply with the New ABA Guidance.”   But the focus of Formal Opinion 483 is on an attorney’s obligation to monitor and secure electronically stored confidential client information, in addition to related obligations if such data is improperly accessed or breached.

The ABA’s analysis focuses on:

  • Model Rule 1.1 – Duty of Competence: This rule and subsequent interpretive comments make it clear that attorneys are required to stay abreast of and to understand current technologies.  The attorney may satisfy this obligation by self-study or by “employing or retaining qualified lawyers and non-lawyer assistants.”  Thus, this duty of competence requires that the attorney appropriately employ technology to safeguard client information from unauthorized access.  In the context of a data breach, the duty of competence requires the attorney to “promptly stop the breach and mitigate damage resulting from the breach.”  While the ABA stops short of dictating how this is to be achieved, it does suggest that the attorney should proactively create and implement an incident response plan containing specific policies and procedures for responding to a data breach.  The Opinion also suggests that the attorney’s activities in mitigating and investigating the breach after resolution are equally important, to ensure that the damage is contained and measures are implemented to prevent a recurrence.
  • Model Rules 5.1 and 5.3 – Duty to Supervise Lawyers and Staff: Comments to these rules obligate managing attorneys in a firm not only to create appropriate policies to safeguard client information, but also to ensure that all lawyers and staff are following such policies.  Though a subset of the duty of competence in the Opinion, these rules are nonetheless pertinent in the event of a data breach, in that an attorney must appropriately supervise all retained data security professionals and require all firm personnel to comply with appropriate cybersecurity and technology policies.
  • Model Rule 1.6 – Duty of Confidentiality:  The confidentiality rule requires all attorneys to use reasonable efforts to prevent the unauthorized disclosure of, or inadvertent access to, information pertaining to the representation of a client.  The analysis of what constitutes “reasonable efforts” is fact-based and depends upon: a) the sensitivity of the information; b) the relative effectiveness, cost and difficulty of implementing available safeguards; and c) the effect of the safeguards on the attorney’s ability to represent clients.  Again declining to prescribe required measures, the ABA instead refers to the ABA Cybersecurity Handbook, which discusses an emerging standard for “reasonable” security that rejects specific requirements and instead suggests a fact-specific analysis of the processes employed by the attorney for data protection that includes:
    • risk assessment;
    • identification and implementation of appropriate security measures to address risks;
    • testing to ensure the effective implementation of the security measures; and
    • continuous updates as technologies and risks evolve.

Attorneys also should carefully consider the duty of confidentiality when determining how much and which information to share with law enforcement officials in connection with any breach suffered.  The duty to protect sensitive information remains even during a breach, and attorneys should consider: a) whether certain sensitive information would harm a client if it were released to law enforcement officials; b) whether the client would object to the attorney sharing the information; and c) whether divulging the confidential information would, in fact, benefit the client by helping to stop the breach.  Overall, the lawyer should disclose only the information reasonably necessary to assist law enforcement in stopping the breach or recovering the stolen files.

Based on the ethical obligations set forth above, the ABA confirmed that attorneys have an affirmative duty pursuant to Rule 1.4 (which generally governs attorney-client communications) to notify current clients of a breach or suspected breach.  Notification is integral to keeping a client reasonably informed as to the status of an attorney’s representation and to providing the client all relevant information, so that the client can make informed decisions about the representation.  While not requiring attorneys to notify former clients of data breaches, the ABA noted that an attorney should consider contractual arrangements with previous clients, as well as regulatory or statutory breach notification requirements, in determining whether client notification is merited, so as to limit liability.

Once a decision has been made that a breach or potential breach involves material client information, and the duty to notify has been triggered, the notification must provide sufficient information for a client to make a reasonably informed decision of whether it wants to continue with the representation.  Depending on the facts of the breach, the lawyer will need to disclose what it does and does not know about the breach, as well as satisfy the ongoing duty to update a client as the post-breach investigation proceeds.

The Opinion concludes by discussing the need for attorneys experiencing a data breach also to carefully analyze all federal and state regulatory and statutory schemes which may apply to the breach and ensure compliance with those, especially if personally identifiable information was involved in the breach.  The ABA further cautioned that compliance with regulatory schemes and compliance with the attorney’s ethical obligations are separate requirements, and satisfying regulatory or statutory obligations does not necessarily ensure ethical obligations are also satisfied (or vice versa).


GDPR Compliance and Blockchain: The French Data Protection Authority Offers Initial Guidance

By Linda Henry

The French Data Protection Authority (“CNIL”) recently became the first data protection authority to provide guidance as to how the European Union’s General Data Protection Regulation (“GDPR”) applies to blockchain.

A few key takeaways from the CNIL report are as follows:

  • Data controllers: Legal entities or natural persons who have a right to write on a blockchain and create a transaction that is submitted for validation (referred to in the CNIL’s report as “participants”) can be considered a data controller if the participant records personal data on a blockchain and (i) is a natural person that is engaging in a professional or commercial activity or (ii) is a corporate entity. For example, if a bank enters customer data on a blockchain, the bank would be considered a data controller.
  • Joint controllers: The CNIL advises that if there are multiple participants, the parties should designate a single entity or participant as the data controller in order to avoid joint liability under Article 26 of GDPR.  In addition, designating a single entity or participant as the data controller will provide data subjects with a single controller against whom they can enforce their rights.
  • Smart contract developers:  A smart contract developer may be considered a data processor if the smart contract processes personal data on behalf of the controller.  The CNIL provides the example of a software developer that offers a smart contract to insurance companies that will automatically compensate airline passengers under their travel insurance policies if a flight is delayed.  In this example, the smart contract developer is considered a data processor.
  • Miners: 
    • A miner may be considered a data processor if it executes the instructions of the data controller when verifying whether a transaction meets specified technical criteria.  The CNIL acknowledges the practical difficulties that would result from considering miners as data processors in a public blockchain, and the impracticalities of satisfying the requirement for the miner, as data processor, to sign a data processing agreement with the data controller.  The CNIL indicates that it is still considering this issue and encourages others to find innovative ways to address issues that would arise when miners are considered data processors.
    • Because miners validate transactions on behalf of blockchain participants and do not determine the purpose and means of processing, miners would not be considered data controllers.
  • Privacy by design and data minimization:
    • In order to comply with GDPR’s privacy by design and data minimization requirements, data controllers must consider whether blockchain is the appropriate technology for the intended use case and whether they will be able to comply with GDPR requirements.  The CNIL notes that data transfers on a public blockchain may be especially problematic, since miners may be validating transactions outside of the EU.
    • If personal data cannot be stored off-chain, hashing and encryption should be considered (a brief illustrative sketch follows this list).
  • Right to erasure: The CNIL acknowledges that compliance with GDPR’s right to erasure may be technically impossible with respect to data on a blockchain, and notes that a more detailed analysis is needed as to how the right to erasure applies to blockchain. The CNIL strongly cautions against using blockchain to store unencrypted personal data and indicates that deletion of private keys should be considered when determining how to comply with the right to erasure requirement.
  • Security:  The CNIL recommends considering whether a minimum number of miners should be required in order to help prevent a 51% attack.  In addition, there should be a contingency plan for modifying algorithms in the event a vulnerability is detected.
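
To make the bullets on hashing, encryption and key deletion more concrete, below is a minimal Python sketch of the two techniques the CNIL report alludes to: recording only a salted hash of personal data on-chain while the raw data remains off-chain, and “crypto-shredding,” in which encrypted on-chain data is rendered permanently unreadable by destroying the encryption key. The library choice (the cryptography package) and all identifiers are illustrative assumptions, not part of the CNIL’s guidance.

```python
import hashlib
import os

from cryptography.fernet import Fernet  # assumed dependency: pip install cryptography

# Personal data stays off-chain, under the data controller's direct control.
off_chain_store = {}

def commit_personal_data(record_id: str, personal_data: bytes) -> str:
    """Keep raw data off-chain; return a salted hash suitable for on-chain storage."""
    salt = os.urandom(16)
    off_chain_store[record_id] = (salt, personal_data)
    return hashlib.sha256(salt + personal_data).hexdigest()

# Per-subject encryption keys, also kept off-chain.
key_store = {}

def encrypt_for_chain(subject_id: str, personal_data: bytes) -> bytes:
    """Encrypt personal data before it is written to an immutable ledger."""
    key = Fernet.generate_key()
    key_store[subject_id] = key
    return Fernet(key).encrypt(personal_data)

def erase_subject(subject_id: str) -> None:
    """Crypto-shredding: deleting the key leaves the on-chain ciphertext unreadable."""
    key_store.pop(subject_id, None)

# The ciphertext may persist on-chain indefinitely, but once the key is
# destroyed it can no longer be linked back to the data subject.
token = encrypt_for_chain("subject-42", b"jane.doe@example.com")
erase_subject("subject-42")
```

Whether key deletion fully satisfies the right to erasure remains an open question in the CNIL’s analysis; the sketch above only illustrates the mechanism under the stated assumptions.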

The CNIL notes that its analysis is focused only on blockchain and not the broader category of distributed ledger technology (DLT).  Although the CNIL indicates that it may offer guidance on GDPR’s applicability to other DLTs in the future, it chose to focus its analysis on blockchain because DLT solutions that are not blockchains do not yet lend themselves to a generic analysis.  (The CNIL’s full report (in French) and introductory materials accompanying the report can be found here).


D-Link Continues Challenges to FTC’s Data Security Authority

By Linda Henry

On September 21, 2018, the FTC and D-Link Systems Inc. each filed a motion for summary judgment in one of the most closely watched recent enforcement actions in privacy and data security law (FTC v. D-Link Systems Inc., No. 3:17-cv-00039).  The dispute, which dates back to early 2017, may have widespread implications for companies’ potential liability for lax security practices, even in the absence of actual consumer harm.

In January 2017, the FTC sued D-Link for engaging in unfair or deceptive acts in violation of Section 5 of the FTC Act in connection with D-Link’s failure to take reasonable steps to secure its routers and Internet-protocol cameras from widely known and reasonably foreseeable security risks.   The FTC’s complaint focused on D-Link’s marketing practices, noting that D-Link’s marketing materials and user manuals included statements in bold, italicized, all-capitalized text that D-Link’s routers were “easy to secure” with “advanced network security.”  D-Link also promoted the security of its IP cameras in its marketing materials, specifically referencing the device’s security in large capital letters.  In addition, the IP camera packaging also listed security claims, such as “secure-connection” next to a lock icon as one of the product features.

Although a U.S. district court judge dismissed three of the FTC’s six claims in September 2017, the judge also rejected D-Link’s argument that the FTC lacked statutory authority to regulate data security for IoT companies as an unfair practice under Section 5 of the FTC Act.  In its Order Regarding Motion to Dismiss, the court stated that “the fact that data security is not expressly enumerated as within the FTC’s enforcement powers is of no moment to the exercise of its statutory authority.”  In dismissing the FTC’s unfairness claim, however, the court agreed with D-Link that the FTC had failed to provide any concrete facts demonstrating actual harm to consumers, reasoning that, absent such facts, it was just as possible that D-Link’s devices would not cause substantial harm to consumers and that “the FTC cannot rely on wholly conclusory allegations about potential injury to tilt the balance in its favor.”

Despite dismissing the FTC’s unfairness claim, the court indicated that the claim might have survived a motion to dismiss had the FTC tied it to the representations underlying the deception claims.  The court stated that “a consumer’s purchase of a device that fails to be reasonably secure — let alone as secure as advertised — would likely be in the ballpark of a ‘substantial injury,’ particularly when aggregated across a large group of consumers.”  Although the court’s reasoning indicates that there are limits to the FTC’s data security enforcement capabilities, it did not completely foreclose the possibility that lax security practices might be deemed to violate the unfairness prong of the FTC Act even in the absence of evidence of actual harm to consumers.

The FTC argued in its September 2018 motion for summary judgment that summary judgment is appropriate because there is no dispute that D-Link made representations regarding the security of its devices from unauthorized access, that the devices contained numerous vulnerabilities making them susceptible to unauthorized access, and that D-Link’s security statements were material to consumers.  The FTC noted that “there is no genuine dispute that D-Link routers and IP cameras have contained serious, foreseeable, and easily preventable vulnerabilities permitting unauthorized access; that D-Link knew of these vulnerabilities; and that D-Link sold and marketed these devices as secure anyway.”

In its own motion for summary judgment, D-Link argued that the FTC’s remaining deception claims were based on “expert conjecture” with no evidentiary support.  D-Link stressed that the FTC’s failure to present any evidence that an identifiable consumer was deceived by D-Link’s marketing statements, or that any of the routers or cameras was actually compromised, demonstrated that there was no harm for the court to remedy.

D-Link is significant because the outcome may have a substantial impact on the FTC’s ability to pursue a claim under Section 5 of the FTC Act successfully in the absence of evidence of actual harm or injury to consumers. The outcome may also shape the FTC’s approach to classifying the informational harm that impacts consumers following a data breach.

However much clarity the D-Link decision ultimately offers on the scope of the FTC’s regulatory authority over data security, the FTC’s past guidance regarding data security and privacy remains useful when evaluating a company’s data security practices.  Over the past few years, the FTC has repeatedly stressed that a company’s failure to implement reasonable security measures may be considered deceptive or unfair, and has stated that “the touchstone of the FTC’s approach to data security is reasonableness: a company’s data security measures must be reasonable in light of the sensitivity and volume of consumer information it holds, the size and complexity of its data operations, and the cost of available tools to improve security and reduce vulnerabilities.” In addition, the FTC’s motions in D-Link confirm that a company should ensure it actually follows the security practices it claims to follow.

Good, Bad or Ugly? Implementation of Ethical Standards In the Age of AI

By Dawn Ingley


With the explosion of artificial intelligence (AI) implementations, several technology organizations have established AI ethics teams to ensure that their myriad uses of AI across platforms are reasonable, fair and non-discriminatory.  Yet, to date, very few details have emerged regarding those teams: Who are the members?  What standards are applied to the creation and implementation of AI?  Axon, the manufacturer behind community policing products and services such as body cameras and related video analytics, has embarked upon the creation of an ethics board.  Google’s DeepMind Ethics and Society division (DeepMind) also seeks to temper the innovative potential of AI against the dangers of a technology that is not inherently “value-neutral” and that could lead to outcomes ranging from good to bad to downright ugly.  Indeed, a peek behind both ethics programs may offer some interesting insights into the direction of all corporate AI ethics programs.

Diversity of Backgrounds

Axon’s ethics board includes not only AI experts, but also members drawn from the related fields of computer science and engineering, privacy and data protection, and civil liberties.  A sampling of the board includes the following individuals:

  • Ali Farhadi, Professor of Computer Science and Engineering, University of Washington
  • Barry Friedman, Professor and Director of the Policing Project at New York University School of Law
  • Jeremy Gillula, Privacy and Civil Liberties Technologist
  • Jim Bueerman, President of the Police Foundation
  • Miles Brundage, Research Fellow at the University of Oxford’s Future of Humanity Institute
  • Tracy Ann Kosa, Professor at Seattle University School of Law
  • Vera Bumpers, Chief of Houston Metro Police Department and President of the National Organization of Black Law Enforcement Executives
  • Walter McNeil, Sheriff of the Leon County, Florida Sheriff’s Office and past President of the International Association of Chiefs of Police

Axon’s goal, clearly, was to establish a team that could evaluate use and implementation from all angles, ranging from the law enforcement officials who employ such technologies to the experts who help create and shape the legislation governing their use.  Axon may be moving in the direction of facial recognition technologies; after all, police forces in both the United Kingdom and China have leveraged such technologies for years.  Thus far, one of the chief concerns surrounding facial recognition is its propensity for racial and gender bias, reflected in higher error rates for women and African-Americans.  If Axon does indeed move in that direction, it is critical that its advisory group include constituents from all perspectives and demographics.
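Those error-rate disparities are, at bottom, a measurement question.  The sketch below is a minimal, purely illustrative example (not drawn from Axon’s, or any vendor’s, actual system) of how an auditor might tally recognition errors by demographic group; the group labels and records are synthetic and hypothetical.

```python
# Illustrative sketch only: quantifying per-group error rates for a
# hypothetical face-matching system. All records below are synthetic.
from collections import defaultdict

# Each record: (demographic_group, predicted_match, actual_match)
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, True), ("group_b", True, True),
]

tallies = defaultdict(lambda: {"errors": 0, "total": 0})
for group, predicted, actual in records:
    tallies[group]["total"] += 1
    if predicted != actual:          # any mismatch counts as an error here
        tallies[group]["errors"] += 1

# A materially higher rate for one group is the disparity critics flag.
for group, t in sorted(tallies.items()):
    rate = t["errors"] / t["total"]
    print(f"{group}: {t['errors']}/{t['total']} = {rate:.0%} error rate")
```

In practice an audit would separate false positives from false negatives and test whether the gap between groups is statistically significant, but even this simple tally surfaces the kind of disparity described above.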

Core Values

In addition to its own commitment to diversity, DeepMind has articulated key principles that reflect its parent company’s more expansive footprint across technology platforms:

  • Social benefit: AI should “serve the global social and environmental good…to build fairer and more equal societies…”
  • Rigorous and evidence-based: Technical research must conform to the highest academic research standards, including peer review.
  • Transparent and open: DeepMind will be open as to “who we work with and what projects we fund.”
  • Collaboration and inclusion: Research must be “accountable to all of society.”

DeepMind’s focus on managing the risks of AI spans an even broader canvas than Axon’s.  In furtherance of its principles, DeepMind seeks to answer several key questions:

  • What are the societal risks when AI fails?
  • How can humans remain in control of AI?
  • How can dangerous applications of AI in the contexts of terrorism and warfare be avoided?

Though much of the AI industry has yet to provide details as to its own ethics programs, some of its blue chips have acted in unison to establish a more formalized set of guidelines.  In 2016, the Partnership on AI to Benefit People and Society was founded collectively by Amazon, Apple, Google, Facebook, IBM and Microsoft.  Six pillars form the basis of this partnership:

  • Safety-critical AI: Tools used to perform human discretionary tasks must be “safe, trustworthy and aligned with ethics…”
  • Fair, transparent and accountable AI: Systems must be designed so as to be alert to possible biases resulting from the use of AI.
  • Collaborations between people and AI systems: AI is best harnessed in a close collaboration between humans and the systems themselves.
  • AI, labor and the economy: “Competition and innovation is encouraged and not stifled.”
  • Social and societal influences of AI: While AI has the potential to provide useful assistance and insights to humans, users must also be sensitive to its potential to subtly influence humans.
  • AI and social good: The sky’s the limit in terms of AI’s potential in addressing long-standing societal ills.

While these best practices are a promising start, the industry still lacks particulars as to how the guidelines will be put into practice.  Consumers are likely to maintain a healthy skepticism until more concrete guardrails emerge that offer compelling evidence of the good, rather than the bad and the ugly.
