FTC Provides Guidance on Using Artificial Intelligence and Algorithms

May 6, 2020

By Linda Henry


The Director of the Federal Trade Commission (FTC) Bureau of Consumer Protection recently issued guidance on the agency's Tips and Advice blog explaining how companies can manage the consumer protection risks that may arise from using artificial intelligence and algorithms. The blog post includes the following key takeaways:

Be Transparent.

Companies should ensure that consumers are not misled about their interactions with AI tools. The FTC offers the example of the FTC's Ashley Madison complaint, which alleged, in part, that the website deceived consumers by using bots to send male users fake messages and entice them to subscribe to the service. Companies should also be transparent when collecting sensitive data; secretly collecting audio or visual data could lead to an enforcement action.

The FTC guidance also notes that companies making automated decisions based on information from a third-party vendor may be required to provide the consumer with an “adverse action” notice, which is required when a negative action is taken against an individual because of information in a consumer report. For example, if a company uses reports from a background check company to predict whether an individual will be a good tenant, and the background check company’s AI tool used credit reports to make the prediction, the company may be required to provide an adverse action notice if it relies on the report to deny someone an apartment or charge higher rent.
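As a rough illustration only, the trigger described above reduces to a simple conditional: a negative action plus reliance on a consumer report. The sketch below is a hypothetical simplification, not the FCRA's actual legal test, and any real determination requires legal review.

```python
# Hypothetical sketch of the adverse action notice trigger described
# above; real compliance determinations require legal review.
from dataclasses import dataclass

@dataclass
class Decision:
    adverse: bool               # e.g., apartment denied, higher rent charged
    used_consumer_report: bool  # decision relied on a third-party consumer report

def adverse_action_notice_required(d: Decision) -> bool:
    # General FCRA pattern: a negative action based, in whole or in part,
    # on information in a consumer report triggers a notice.
    return d.adverse and d.used_consumer_report

tenant_screening = Decision(adverse=True, used_consumer_report=True)
print(adverse_action_notice_required(tenant_screening))  # True
```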

Explain Your Decision to the Consumer.

Companies that deny consumers something of value based on algorithmic decision-making should explain why. Although the FTC acknowledges that it may not be easy to explain the many factors involved in algorithmic decision-making, companies must know what data is being used to train their algorithms (and how it is being used) in order to make sufficient disclosures to individuals. In addition, companies that use algorithms to assign credit scores to consumers should disclose the key factors that adversely affected an individual’s credit score.
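As a rough illustration of how such “key factor” (reason code) disclosures are often derived in practice, the sketch below ranks the features of a simple linear scoring model by their negative contribution to one applicant’s score. The model, feature names, and weights are hypothetical and are not drawn from the FTC guidance; real scoring systems are far more complex.

```python
# Hypothetical sketch: deriving "key factor" (reason code) disclosures
# from a simple linear credit-scoring model. All names and weights are
# illustrative only.

# Model weights learned elsewhere (positive = raises the score).
WEIGHTS = {
    "payment_history": 0.40,
    "credit_utilization": -0.35,
    "recent_inquiries": -0.15,
    "account_age_years": 0.10,
}

def key_adverse_factors(applicant: dict, top_n: int = 2) -> list[str]:
    """Return the features that pulled this applicant's score down most."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    # Sort keys by contribution, most negative first.
    ranked = sorted(contributions, key=contributions.get)
    return [f for f in ranked if contributions[f] < 0][:top_n]

applicant = {
    "payment_history": 0.2,      # few on-time payments
    "credit_utilization": 0.9,   # high balances relative to limits
    "recent_inquiries": 4,
    "account_age_years": 1,
}
print(key_adverse_factors(applicant))
# ['recent_inquiries', 'credit_utilization']
```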

The blog post also warns that companies must notify consumers if they change the terms of an agreement based on automated tools. For example, if a company starts using an AI tool to decide whether to reduce a consumer’s credit score (e.g., by taking into account the consumer’s purchases) and did not disclose that practice at the outset, it must disclose it to the consumer now.

Ensure That Your Decisions Are Fair.

Although AI tools have many beneficial uses, they can also result in discrimination against a protected class. For example, if a company makes credit decisions based on consumers’ ZIP codes and this results in a disparate impact on a protected group, the company may be in violation of the Equal Credit Opportunity Act. Companies should also give consumers an opportunity to correct information used to make decisions about them.
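One common first-pass screen for disparate impact is the “four-fifths” (80%) rule of thumb drawn from employment law: compare each group’s approval rate to the most favored group’s rate and flag large gaps. The sketch below uses made-up data and group labels, and such a screen is only a starting point, not a legal determination.

```python
# Minimal sketch of a disparate-impact screen using the "four-fifths"
# (80%) rule of thumb. Data and group labels are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved_bool) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_flags(decisions, threshold=0.8):
    rates = approval_rates(decisions)
    best = max(rates.values())
    # Flag any group whose approval rate is under 80% of the best rate.
    return {g: r / best for g, r in rates.items() if r / best < threshold}

decisions = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 50 + [("B", False)] * 50
)
print(disparate_impact_flags(decisions))  # {'B': 0.625} -> warrants review
```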

Ensure That Your Data and Models Are Robust and Empirically Sound.

If a company provides consumer data to third parties to train algorithms that will make decisions about consumer eligibility for credit, employment, insurance, housing, or similar benefits, the company may be considered a consumer reporting agency. Consequently, the company would be required to comply with the Fair Credit Reporting Act (FCRA) and would be responsible for ensuring that the data is accurate. The company would also be required to give consumers access to, and the opportunity to correct, their own information.

In addition, even if a company is not deemed to be a consumer reporting agency, a company that provides data about its customers to third parties for use in automated decision making may have an obligation to ensure the data is accurate. Under the FCRA, a company is considered a “furnisher” if it provides data about customers to consumer reporting agencies. Furnishers are prohibited from furnishing data that they have reason to believe may be inaccurate, and they are required to maintain written policies and procedures to ensure the information they furnish is accurate and to investigate consumer disputes related to such data.
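A minimal sketch, assuming hypothetical record fields, of the kind of pre-furnishing accuracy screen a furnisher’s written procedures might call for. The specific checks and field names are illustrative, not requirements drawn from the FCRA or the FTC guidance.

```python
# Hypothetical pre-furnishing accuracy screen. Field names and checks
# are illustrative only.
def ready_to_furnish(record: dict) -> bool:
    """Hold back records with obvious accuracy red flags."""
    checks = [
        record.get("consumer_id") is not None,          # identity is matched
        not record.get("dispute_open", False),          # no unresolved dispute
        record.get("balance", -1) >= 0,                 # value is plausible
        record.get("last_verified_days", 9999) <= 90,   # recently verified
    ]
    return all(checks)

record = {"consumer_id": "12345", "dispute_open": True,
          "balance": 250, "last_verified_days": 30}
print(ready_to_furnish(record))  # False: open dispute must be resolved first
```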

Hold Yourself Accountable for Compliance, Ethics, Fairness, and Nondiscrimination.

Before using an automated decision tool, companies should consider four key issues: whether the data set is representative, whether the model accounts for bias, whether the predictions are accurate, and whether reliance on big data raises ethical or fairness concerns. The FTC also stresses that companies should protect their algorithms from unauthorized use and consider whether access controls or other safeguards could prevent abuse. Lastly, companies may want to consider engaging objective third parties to independently test their algorithms for potential problems.
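Two of the FTC’s four questions lend themselves to simple automated checks. The sketch below, using entirely hypothetical data and reference population shares, compares the training set’s group composition against a reference population (representativeness) and compares prediction accuracy across groups.

```python
# Hypothetical sketch of two pre-deployment checks suggested by the
# FTC's questions: (1) is the training set representative? (2) is
# accuracy comparable across groups? All data is made up.

REFERENCE_POPULATION = {"A": 0.60, "B": 0.40}   # assumed population shares

def representation_gaps(train_groups, tolerance=0.05):
    """Flag groups whose training-set share strays from the reference."""
    n = len(train_groups)
    shares = {g: train_groups.count(g) / n for g in REFERENCE_POPULATION}
    return {g: shares[g] - REFERENCE_POPULATION[g]
            for g in shares
            if abs(shares[g] - REFERENCE_POPULATION[g]) > tolerance}

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) triples."""
    hits, totals = {}, {}
    for g, pred, actual in records:
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (pred == actual)
    return {g: hits[g] / totals[g] for g in totals}

train_groups = ["A"] * 90 + ["B"] * 10          # B is underrepresented
print(representation_gaps(train_groups))        # approx {'A': 0.3, 'B': -0.3}

records = ([("A", 1, 1)] * 45 + [("A", 1, 0)] * 5
           + [("B", 1, 1)] * 3 + [("B", 1, 0)] * 2)
print(accuracy_by_group(records))               # {'A': 0.9, 'B': 0.6}
```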
