U.S. Lawmakers Want Companies to Check Their Bias

Published on JD Supra on April 26, 2019

Although algorithms are often presumed to be objective and unbiased, technology companies are under increased scrutiny for alleged discriminatory practices related to their use of artificial intelligence. 

Amid rising concern that certain AI tools produce unfair, biased, or discriminatory decisions, U.S. lawmakers have introduced a bill intended to regulate certain artificial intelligence systems.

The Algorithmic Accountability Act, introduced by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ), authorizes and directs the Federal Trade Commission to issue regulations requiring entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments. The Act is the first federal legislative attempt in the United States to regulate AI systems, and is intended to address both the ethical issues that may arise from the use of AI systems and the data security and privacy issues related to the personal information used to train such algorithms.

The bill’s sponsors emphasized that the Act aims to mitigate potential bias in AI systems. Booker, one of the Act’s sponsors, recalled that 50 years ago his parents encountered real estate steering, a practice in which real estate brokers steered prospective home buyers away from certain neighborhoods based on race. Booker noted that his family was able to prevail against such discrimination with the assistance of local advocates and federal legislation; however, the type of discrimination his family faced 50 years ago can be much more difficult to detect in 2019 as a result of biased algorithms. Booker stated that the Act is “a key step toward ensuring more accountability from the entities using software to make decisions that can change lives.”

The Act directs the FTC to enact regulations within two years that would require covered entities to conduct impact assessments on automated decision systems, examining each system’s impact on accuracy, fairness, bias, discrimination, privacy, and security. The assessments would also examine whether use of an automated decision system may contribute to inaccurate, unfair, biased, or discriminatory decisions that impact consumers. The Act defines “automated decision system” broadly to include any computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making and that impacts consumers.
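
To make the idea of a bias impact assessment concrete, the sketch below shows one common technique auditors use: comparing favorable-outcome rates across demographic groups and flagging a disparity under the “four-fifths” rule drawn from EEOC employment guidance. This is purely illustrative; the Act does not prescribe any metric, and the group labels and audit data here are hypothetical.

```python
# Illustrative sketch only: a simplified disparate-impact check of the kind
# an impact assessment might include. The 0.8 threshold comes from EEOC
# hiring guidance (the "four-fifths rule") and is not part of the Act.
from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate per group.

    `decisions` is a list of (group, approved) pairs, where `approved`
    is True if the automated system produced a favorable decision.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is a common (non-statutory) red flag for
    disparate impact.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group label, favorable decision?)
audit_sample = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(audit_sample)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 threshold
```

In practice an assessment of the kind the Act contemplates would go well beyond a single ratio, but even this simple check illustrates why auditors need access to a system’s decisions broken down by group.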

Entities covered under the Act would include companies with over $50 million in average annual gross receipts, companies that possess or control personal information of more than one million individuals or devices, and data brokers that collect consumers’ personal information in order to sell or trade it or provide third-party access to it. Personal information would include any information that is reasonably linkable to a specific consumer or consumer device, regardless of how the information is collected, inferred, or obtained.

Companies with automated decision systems considered high-risk would be required to provide the FTC with a detailed description of the system (including its design, training data, and purposes) and a cost-benefit analysis of the system, taking into account the system’s purpose, its data retention and minimization practices, consumers’ access to information about the system, and the extent to which consumers may access the results of its algorithms and correct or object to those results.

In addition, covered entities would be required to assess the risks the system poses to the privacy and security of consumers’ personal information, the risks that the system may contribute to or result in inaccurate, unfair, biased, or discriminatory decisions that impact consumers, and the measures the covered entity will implement to minimize such risks, including technological and physical safeguards. Companies would be required to correct any issues discovered during the impact assessments. The bill also provides that companies should conduct the assessments, where reasonably possible, in consultation with external third parties, including independent auditors. Violations of the Act would be treated as an unfair or deceptive act or practice under the Federal Trade Commission Act.

Although the Act has been endorsed by certain technology and civil rights groups, the proposed bill leaves many details unresolved, including the actions companies will be required to take to correct algorithmic bias discovered during an impact assessment and how companies will verify that bias has been removed from problematic algorithms. Representative Yvette Clarke (D-NY) has sponsored a parallel bill in the House of Representatives.