San Francisco Says The Eyes Don’t Have It: Setting Limits on Facial Recognition Technology

May 16, 2019

By Jennifer Thompson



On May 14, 2019, the San Francisco Board of Supervisors voted 8-1 to approve a proposal that will ban all city agencies, including law enforcement entities, from using facial recognition technologies in the performance of their duties.  San Francisco is the first major city to make such a move, even as debates have recently ramped up over the use of facial recognition technologies and whether, and to what extent, they should be regulated.

Supervisor Aaron Peskin led the proposal, called the Stop Secret Surveillance Ordinance (the “Ordinance”).  Peskin was quoted as saying the Ordinance “is not an anti-technology policy,” but rather “an ordinance about having accountability around surveillance technology.”  The sole dissenter felt that the Ordinance failed to adequately address public safety concerns.

The Ordinance remains subject to vote at a second meeting, currently slated for May 21, and approval by the City Mayor.  If approved, the Ordinance would become effective 30 days after the Mayor’s approval or the Board of Supervisors’ override of any veto. 

Pursuant to the Ordinance, city agencies must prepare a surveillance policy, including an impact report, before implementing any new surveillance technologies.  The surveillance policy must identify the purpose, proposed deployments and related costs (both financial and in terms of infringement of personal rights) of using the technology.  Each surveillance policy is subject to review and approval by the Board of Supervisors after a public hearing.  For the surveillance policy to be approved, the related impact report must show that the overall benefits obtained from the use of the surveillance technology outweigh the financial and civil liberties costs and that there is no disparate impact on any specific group or community.  Expedited reviews may be obtained by the Sheriff or District Attorney if the use of a surveillance technology is required for investigative or prosecutorial functions, and the temporary, short-term use of unapproved surveillance technologies may be permitted in exigent circumstances such as imminent danger of death or serious physical injury. 

Even currently deployed technologies must be assessed within 120 days of the Ordinance’s effective date.  Further, city agencies may be subject to annual audits of their compliance with approved surveillance policies and must prepare an annual report on existing technologies that assesses the seriousness of public complaints about each technology, weighs those complaints and any resulting infringement of rights against the benefits obtained from using the technology, and evaluates those benefits against the financial and resource outlay required to employ it. 

Peskin’s comments and the proposed public oversight mechanisms embodied in the Ordinance reflect recent concerns raised by civil liberties activists and experts in the field about the rise of the surveillance state.  While these technologies can be useful, there are concerns that their widespread use infringes on basic human rights such as privacy and freedom of expression.  Critics also cite studies indicating that facial recognition technologies have alarming error rates and may reflect biases in application, especially for people of color.  Even Microsoft, which develops and sells facial recognition technology, called on Congress as early as July 2018 to create and enforce regulations on the technology’s use.

A stated purpose of the Ordinance is to ensure “safeguards, including robust transparency, oversight, and accountability measures” are in place “before any surveillance technology is deployed.”  As the first major city to take such a step, San Francisco is solidifying California’s place as a leader in regulating technology, particularly with respect to personal privacy and data protection.
