U.S. Lawmakers Want Companies to Check their Bias

Apr 26, 2019

By Linda Henry


Although algorithms are often presumed to be objective and unbiased, technology companies are under increased scrutiny for alleged discriminatory practices related to their use of artificial intelligence. 

In response to rising concern that certain AI tools produce unfair, biased or discriminatory decisions, U.S. lawmakers have introduced a bill intended to regulate certain artificial intelligence systems.

The Algorithmic Accountability Act, introduced by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ), authorizes and directs the Federal Trade Commission to issue regulations requiring entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments. The Act is the first federal legislative attempt in the United States to regulate AI systems, and it is intended to address both the ethical issues that may arise from the use of AI systems and the data security and privacy issues surrounding the personal information used to train such algorithms.

The bill’s sponsors emphasized that the Act is a key step toward mitigating potential bias in AI systems. Booker recalled that 50 years ago his parents encountered real estate steering, a practice in which real estate brokers steered prospective home buyers away from certain neighborhoods based on race. Booker noted that his family was able to prevail against such discrimination with the assistance of local advocates and federal legislation; however, the discrimination his family faced 50 years ago can be much more difficult to detect in 2019 when it is embedded in biased algorithms. Booker stated that the Act is “a key step toward ensuring more accountability from the entities using software to make decisions that can change lives.”

The Act directs the FTC to enact regulations within two years requiring covered entities to conduct impact assessments on automated decision systems, examining each system’s impact on accuracy, fairness, bias, discrimination, privacy and security. The assessments would also examine whether use of an automated decision system may contribute to inaccurate, unfair, biased or discriminatory decisions that impact consumers. The Act defines “automated decision system” broadly to include any computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making and that impacts consumers.
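The bill does not prescribe how such an assessment should measure bias. As a purely illustrative sketch, the Python fragment below (hypothetical data and function names throughout) computes a disparate impact ratio, one common statistic an auditor might use to flag an automated decision system whose favorable outcomes differ sharply across groups:

    # Illustrative only: the Act does not mandate any particular fairness metric.
    # The disparate impact ratio compares favorable-outcome rates across groups;
    # values well below 1.0 suggest one group fares worse under the system.
    from collections import defaultdict

    def disparate_impact_ratio(decisions, protected, reference):
        # decisions: list of (group, favorable_outcome) pairs -- hypothetical data
        counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
        for group, favorable in decisions:
            counts[group][0] += int(favorable)
            counts[group][1] += 1
        rate = lambda g: counts[g][0] / counts[g][1]
        return rate(protected) / rate(reference)

    # Hypothetical automated loan decisions: (applicant group, approved?)
    sample = [("A", True), ("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False), ("B", False)]
    print(f"{disparate_impact_ratio(sample, 'B', 'A'):.2f}")  # prints 0.33

In employment-discrimination practice, a ratio below 0.8 (the informal “four-fifths rule”) is often treated as a red flag; the Act, as drafted, leaves the choice of methodology open.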

Entities covered under the Act would include companies with over $50 million in average annual gross receipts, companies that possess or control personal information on more than one million individuals or devices, and data brokers that collect consumers’ personal information in order to sell or trade the information or provide third parties access to it. Personal information would include any information that is reasonably linkable to a specific consumer or consumer device, regardless of how the information is collected, inferred, or obtained.
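To make those coverage thresholds concrete, here is a simplified sketch of the disjunctive test described above (a toy illustration, not the statutory language); meeting any one of the three criteria would suffice:

    # Simplified sketch of the bill's coverage thresholds as summarized above;
    # the statutory definitions contain qualifications omitted here.
    def is_covered_entity(avg_annual_gross_receipts: int,
                          consumers_or_devices_held: int,
                          is_data_broker: bool) -> bool:
        return (avg_annual_gross_receipts > 50_000_000    # over $50M receipts
                or consumers_or_devices_held > 1_000_000  # over 1M individuals/devices
                or is_data_broker)                        # sells or trades consumer data

    # A firm below the revenue bar but holding 2.5 million records is still covered:
    print(is_covered_entity(10_000_000, 2_500_000, False))  # prints True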

Companies with automated decision systems considered high-risk would be required to provide the FTC with a detailed description of the system (including the system’s design, training data and purpose) and a cost-benefit analysis of the system, taking into account the system’s purpose; its data retention and minimization practices; consumers’ access to information about the system; and the extent to which consumers may access the results the algorithms produce and correct or object to those results.

In addition, covered entities would be required to assess the risks the system poses to the privacy and security of consumers’ personal information; the risks that the system may contribute to or result in inaccurate, unfair, biased, or discriminatory decisions that impact consumers; and the measures the covered entity will implement to minimize such risks, including technological and physical safeguards. Companies would be required to correct any issues discovered during the impact assessments. The bill also provides that companies should conduct the assessments, if reasonably possible, in consultation with external third parties, including independent auditors. Violations of the Act would be treated as an unfair or deceptive act or practice under the Federal Trade Commission Act.

Although the Act has been endorsed by certain technology and civil rights groups, the proposed bill leaves many details unresolved, including what actions companies will be required to take to correct algorithmic bias discovered during an impact assessment and how companies will verify that bias has been removed from problematic algorithms. Representative Yvette Clarke (D-NY) has sponsored a parallel bill in the House of Representatives.
