NYC’s Task Force to Tackle Algorithmic Bias Issues Final Report

Jan 31, 2020

By Linda Henry



In December 2017, the New York City Council passed Local Law 49, the first law in the country designed to address algorithmic bias and discrimination resulting from algorithms used by City agencies. Local Law 49 created an Automated Decision Systems Task Force (“Task Force”) to examine algorithms used by municipal agencies and directed the Task Force to deliver recommendations by the fall of 2019 on how to make the City’s algorithms fairer and more transparent. Despite the Task Force’s daunting charge of balancing the need for greater transparency about the City’s use of algorithms against companies’ right to protect their intellectual property, many hoped that Local Law 49 would encourage other cities and states to acknowledge the problem of algorithmic bias.

During 2019, it became apparent that the Task Force was facing significant challenges in fulfilling its mission. First, although the law included a definition of “automated decision system” (“ADS”), the Task Force was still unable, after 18 meetings, to reach consensus on what technology even met that definition.

In addition, Local Law 49 did not require City agencies to provide the Task Force with requested information; compliance was voluntary. More than five months after the Task Force was formed, certain members expressed frustration that the City had not identified even a single ADS or provided any information about the ADS it used. Although the City eventually provided five examples of ADS to the Task Force, it never produced a full list of the ADS used by City agencies.

Task Force members also noted that they had developed a promising methodology for obtaining relevant information about ADS from developers and operators; however, the City inexplicably directed the Task Force to abandon it, further frustrating the Task Force’s efforts. Members warned that they would be unable to provide meaningful or credible recommendations on addressing algorithmic discrimination if the City failed to provide the requested information.

In November 2019, the Task Force released its final report. Not surprisingly, the report made no specific recommendations and identified only the key principles on which the Task Force was able to reach consensus. Although the report included many aspirational principles, its lack of specificity calls its practical utility into question. At a very high level, the report offered the following principles as recommendations:

Build capacity for an effective, fair, and responsible approach to the City’s ADS

The Task Force recommended developing centralized resources within the City government to guide policy and assist City agencies in the development, implementation and use of ADS. In addition, the Task Force recommended developing specific guidelines and a method for determining the potential for an ADS tool to result in a disproportionate impact or harm. The report did not, however, provide any specifics regarding the substance of such guidelines or methods.

The report noted that the definition of ADS in Local Law 49 was overly broad, making it difficult to identify which tools and systems should be considered ADS. As a result, the Task Force recommended narrowing the definition to exclude tools that perform only ministerial functions. In particular, the report recommended replacing the single definition of ADS with a series of questions or criteria that can evolve over time.

Broaden public discussion on ADS

The Task Force recommended giving the public opportunities to learn about the ADS that affect individuals, including through publicly accessible digital platforms offering educational materials. The report also cited the importance of involving impacted communities in discussions about specific ADS; however, it did not include any guidance on how this would be achieved.

Formalize ADS management functions

The report also recommended establishing a framework for reporting and publishing information related to ADS, including the development of a process for escalating urgent matters involving actual or suspected harm to an individual or community in relation to use of an ADS.

Shortly after the report was released, Mayor Bill de Blasio issued an executive order creating an Algorithms Management and Policy Officer, who will serve as a centralized resource on algorithm policy and develop guidelines and best practices to assist City agencies in their use of algorithms. The new Officer will also be responsible for ensuring that the algorithms the City uses to deliver services promote equity, fairness, and accountability.
