San Francisco Says The Eyes Don’t Have It: Setting Limits on Facial Recognition Technology

May 16, 2019

By Jennifer Thompson


On May 14, 2019, the San Francisco Board of Supervisors voted 8-1 to approve a proposal that will ban all city agencies, including law enforcement entities, from using facial recognition technologies in the performance of their duties.  San Francisco is the first major city to make such a move, even as debates have recently intensified over facial recognition technologies and whether, and to what extent, they should be regulated.

Supervisor Aaron Peskin led the proposal, called the Stop Secret Surveillance Ordinance (the “Ordinance”).  Peskin was quoted as saying the Ordinance “is not an anti-technology policy,” but rather “an ordinance about having accountability around surveillance technology.” The sole dissenter felt that the Ordinance failed to adequately address concerns regarding public safety.

The Ordinance remains subject to vote at a second meeting, currently slated for May 21, and approval by the City Mayor.  If approved, the Ordinance would become effective 30 days after the Mayor’s approval or the Board of Supervisors’ override of any veto. 

Pursuant to the Ordinance, city agencies must prepare a surveillance policy, including an impact report, before implementing any new surveillance technology.  The surveillance policy must identify the purpose, proposed deployments and related costs (both financial and in terms of infringement of personal rights) of using the technology.  Each surveillance policy is subject to review and approval by the Board of Supervisors after a public hearing.  For a surveillance policy to be approved, the related impact report must show that the overall benefits of using the surveillance technology outweigh its financial and civil liberties costs, and that there is no disparate impact on any specific group or community.  The Sheriff or District Attorney may obtain an expedited review if a surveillance technology is required for investigative or prosecutorial functions, and the temporary, short-term use of unapproved surveillance technologies may be permitted in exigent circumstances, such as imminent danger of death or serious physical injury.

Even currently deployed technologies must be assessed within 120 days of the Ordinance’s effective date.  Further, city agencies may be subject to annual audits of their compliance with approved surveillance policies and must prepare an annual report on existing technologies that assesses the seriousness of public complaints about each technology, weighs those complaints and any resulting infringement of rights against the benefits obtained from using the technology, and then evaluates that balance against the financial and resource outlay required to deploy it.

Peskin’s comments and the proposed public oversight mechanisms embodied in the Ordinance reflect concerns recently raised by civil liberties activists and experts in the field about the rise of the surveillance state.  While facial recognition technologies can be useful, there are concerns that their widespread use infringes on basic human rights such as privacy and freedom of expression.  Critics also cite studies indicating that facial recognition technologies have alarming error rates and may reflect biases in application, especially for people of color.  Even Microsoft, which creates and provides many facial recognition products, called on Congress as early as July 2018 to create and enforce regulations on the use of the technology.

A stated purpose of the Ordinance is to ensure “safeguards, including robust transparency, oversight, and accountability measures” are in place “before any surveillance technology is deployed.” As the first major city to take such a step, San Francisco is solidifying California’s place as a leader in the development of protections and regulations in the technology space and in particular in the regulation of personal privacy and data protection in connection with the use of technology.

