When 2017 Becomes 1984: Facial Recognition Technologies Face a Growing Legal Landscape

Oct 5, 2017

By Dawn Ingley



Recently, Stanford University professor and researcher Michal Kosinski caused a stir of epic proportions and conjured up visions of George Orwell’s 1984 in the artificial intelligence (AI) community.  Kosinski posited that several AI tools can now determine a person’s sexual orientation based on a photograph alone, and he has gone on to speculate that AI could also predict political ideology, IQ, and propensity for criminal behavior.  Indeed, using a complex algorithm, Kosinski accurately pinpointed a male subject’s sexual orientation over 90% of the time.  While technological advances frequently outpace corresponding changes in the law, the implications of this technology are alarming.  Could the LGBTQ community be targeted for violence or other discrimination based on this analysis?  Could “potential criminals” be turned away from gainful employment based on mere speculation about future behavior?  Could Facebook account photographs become an unintentional window into the most private facets of one’s life?  In a country already divided over sociopolitical issues, the answer to all of these questions unfortunately seems to be not if, but when.  As the threat of a 1984 world becomes more of a reality, the urgency for laws and regulations to police the exponential proliferation of AI’s potential intrusions cannot be overstated.

Although Kosinski’s revelation is recent, concerns over facial recognition technologies and biometric data are hardly novel.  In 2012, Irish data protection authorities forced Facebook to disable its facial recognition software throughout Europe: the EU Data Protection Directive (and the upcoming GDPR) requires explicit consent, which Facebook had never requested or received from its account owners.  Facebook was also required to delete all facial profiles it had collected in Europe.

In the United States, Illinois appears to be ground zero for the battle over facial recognition technologies, predominantly because it is one of the few states with a specific law on the books.  The Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”), was initially a reaction to Pay by Touch, a technology available in the mid-2000s for linking biometric information (e.g., fingerprints) to credit card and other accounts.  A wave of privacy-focused litigation spelled doom for Pay by Touch and its handful of competitors.  With the increasing adoption of facial recognition software into popular technology platforms (such as the iPhone), BIPA is front and center once again.

BIPA’s scope covers any “retina or iris scan, fingerprint, voiceprint or scan of hand or face geometry.”  Key provisions of BIPA are as follows:

  • Prohibits private entities from collecting, selling, leasing, trading or otherwise profiting from biometric data without express written consent;
  • Requires private entities that collect biometric data to protect it using a reasonable standard of care that is at least as protective as the methods the entity uses to protect other forms of confidential and sensitive information; and
  • Requires private entities that collect biometric data to comply with written policies, “made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information,” on the earlier of: (i) the date when the initial purpose for collecting or obtaining such identifiers or information has been satisfied; or (ii) three years after the individual’s last interaction with the private entity.

One of the latest lawsuits filed under BIPA targets Shutterfly, which is accused of collecting facial, fingerprint and iris scans without express written consent from website visitors, as well as from people “tagged” in photos (even though they may never have used Shutterfly’s services or held a Shutterfly account).  In a similar lawsuit filed against Facebook in 2015, users likewise charged that Facebook was collecting and using biometric data without specific consent.

Apple is likely monitoring these cases closely.  After all, the iPhone X, according to Apple’s website, uses facial recognition technology to unlock the phone: in essence, a “facial password.”  Apple is quick to point out that it encrypts the map of each user’s face and that this data exists only on the physical device itself, not in the cloud.

It is also likely that lawmakers in the two other states with statutes similar to BIPA are keeping a watchful eye on the Illinois docket.  Texas and Washington both have biometric privacy laws on the books along the lines of BIPA (though, unlike Illinois, neither provides a private right of action).  While residents of those states can take comfort in the legislative remedies available there, where does that leave residents of other states?  Given that the federal approach to privacy in the United States generally tends to be sector-specific (e.g., HIPAA for medical data; Gramm-Leach-Bliley for financial institutions), it seems clear that change must surface at the state level.  Until then, residents of states without legal protections are left with the following options:

  • ObscuraCam: an anti-facial recognition app that, as its name suggests, obscures the facial features of photographed individuals.
  • Opt out whenever possible: Facebook settings can be modified to allow an account holder both to opt out of facial recognition technologies and to delete any data already collected.  Users of Google+ must affirmatively opt in to facial recognition technologies.
  • Tangible solutions: 3-D printed glasses are sometimes effective in disrupting or scrambling facial features, thereby thwarting facial recognition technologies.

Realistically, however, until the law catches up with technology, the Orwellian threat is real.  As the saying goes, and as 1984 illustrates time and time again, “knowledge is power.”  And when knowledge gleaned from facial recognition technology falls into the wrong hands, absolute power corrupts absolutely.  For lawmakers, the time is yesterday (if not sooner) for laws to catch up with the breakneck pace of facial recognition technologies and the potential slippery slope of use cases.
