U.S. Lawmakers Want Companies to Check Their Bias

Apr 26, 2019

By Linda Henry



Although algorithms are often presumed to be objective and unbiased, technology companies are under increased scrutiny for alleged discriminatory practices related to their use of artificial intelligence. 

In response to rising concern that certain AI tools produce unfair, biased, or discriminatory decisions, U.S. lawmakers have introduced a bill intended to regulate certain artificial intelligence systems.

The Algorithmic Accountability Act, introduced by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ), authorizes and directs the Federal Trade Commission to issue regulations requiring entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.  The Act is the first legislative attempt at the federal level in the United States to regulate AI systems, and is intended to address both the ethical issues that may arise from the use of AI systems and the data security and privacy issues related to the personal information used to train such algorithms.

The bill’s sponsors emphasized that the Act is a key step toward mitigating potential bias in AI systems. Booker recalled that 50 years ago his parents encountered real estate steering, a practice in which real estate brokers steered prospective home buyers away from certain neighborhoods based on race. Booker noted that his family was able to prevail against such discrimination with the assistance of local advocates and federal legislation; however, the type of discrimination his family faced 50 years ago can be much more difficult to detect in 2019 as a result of biased algorithms.  Booker stated that the Act is “a key step toward ensuring more accountability from the entities using software to make decisions that can change lives.”

The Act directs the FTC to enact regulations within two years that would require covered entities to conduct impact assessments on automated decision systems, examining each system’s impact on accuracy, fairness, bias, discrimination, privacy, and security.  The assessments would also examine whether use of an automated decision system may contribute to inaccurate, unfair, biased, or discriminatory decisions that impact consumers.  The Act defines “automated decision system” broadly to include any computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making and that impacts consumers.
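The bill does not specify how fairness or bias should be measured in an impact assessment. As a purely illustrative sketch (not anything prescribed by the Act), one common starting point is comparing a system’s selection rates across demographic groups; the data, names, and review threshold below are all hypothetical:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Ratios below ~0.8 are often flagged for further review (the informal
    'four-fifths rule'); the Act itself sets no numeric threshold."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (demographic group, loan approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                           # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_ratio(rates))   # 0.5 -> would warrant further review
```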

Entities covered under the Act would include companies with over $50 million in average annual gross receipts, companies that possess or control personal information on more than one million individuals or devices, and data brokers that collect consumers’ personal information in order to sell or trade it or to provide third-party access to it. Personal information would include any information that is reasonably linkable to a specific consumer or consumer device, regardless of how the information is collected, inferred, or obtained.
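Reduced to its numbers, the coverage test is a handful of threshold checks. A minimal sketch, with the dollar and record thresholds taken from the bill and every name hypothetical (and, of course, no substitute for legal analysis):

```python
def is_covered_entity(avg_annual_gross_receipts: float,
                      records_on_individuals_or_devices: int,
                      is_data_broker: bool) -> bool:
    # Thresholds as described in the bill; an illustration, not legal advice.
    return (avg_annual_gross_receipts > 50_000_000
            or records_on_individuals_or_devices > 1_000_000
            or is_data_broker)

# A small company holding data on 2.5M devices would still be covered:
print(is_covered_entity(10_000_000, 2_500_000, False))  # True
```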

Companies with automated decision systems considered high-risk would be required to provide the FTC with a detailed description of the system (including the system’s design, training data, and purposes) and a cost-benefit analysis of the system, taking into account the system’s purpose, its data retention and minimization practices, consumers’ access to information about the system, and the extent to which consumers have access to the algorithms’ results and may correct or object to them.

In addition, covered entities would be required to assess the risks that the system poses to the privacy and security of consumers’ personal information, the risks that the system may contribute to or result in inaccurate, unfair, biased, or discriminatory decisions that impact consumers, and the measures the covered entity will implement to minimize such risks, including technological and physical safeguards. Companies would be required to correct any issues discovered during the impact assessments. The bill also provides that companies should conduct the assessments, if reasonably possible, in consultation with external third parties, including independent auditors. Violations of the Act would be treated as an unfair or deceptive act or practice under the Federal Trade Commission Act.
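What it means for a system to “contribute to” inaccurate or discriminatory decisions is left to the forthcoming regulations. One way an assessment might probe for it, sketched below with hypothetical audit data, is to compare error rates across groups, here the false-positive rate for an adverse decision; a large gap between groups is the kind of disparity an assessment would presumably flag:

```python
def false_positive_rates(records):
    """Per-group false-positive rate from (group, predicted, actual) triples,
    where True means an adverse decision was (or should have been) made."""
    fp, neg = {}, {}
    for group, predicted, actual in records:
        if not actual:                          # ground-truth negatives only
            neg[group] = neg.get(group, 0) + 1
            if predicted:
                fp[group] = fp.get(group, 0) + 1
    return {g: fp.get(g, 0) / neg[g] for g in neg}

# Hypothetical audit log: (group, model denied?, should have been denied?)
records = [("A", False, False), ("A", True, False), ("A", False, False),
           ("B", True, False), ("B", True, False), ("B", False, False)]
print(false_positive_rates(records))  # {'A': 0.333..., 'B': 0.666...}
```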

Although the Act has been endorsed by certain technology and civil rights groups, the proposed bill leaves many details unresolved, including the actions companies will be required to take to correct algorithmic bias discovered during an impact assessment and how companies will verify that bias has been removed from problematic algorithms.  Representative Yvette Clarke (D-NY) has sponsored a parallel bill in the House of Representatives.
