Pros and Cons of Hiring a Security Rating Agency

By Jennifer Thompson



One can hardly check a news outlet today without reading or hearing about a security breach.  Experts frequently advocate performing internal assessments to identify security weaknesses.  Commentators tout the importance of assessing the security of the entities with which you do business.  Investors, partners and markets shy away from companies that are not proactive about security. Given the multitude of variables involved and security measures available, how can a company convey the effectiveness of its own security program in a meaningful way? And given how fact- and business-specific security is, how can one company compare its own security measures to those taken by another?  Many companies turn to independent rating agencies for an objective evaluation and a systematized rating.

Security rating agencies are becoming instrumental in helping companies evaluate security risks and potential transactions.  Investors in start-ups use such ratings to evaluate risks and identify the future investment needs of the entity.  Security ratings are a critical part of due diligence review in mergers, acquisitions and joint ventures.  Procurement departments routinely require vendors to obtain ratings before entering into agreements.  Some companies even ask a rating agency to evaluate their own operations to identify weak points and opportunities for improvement.

To compile a rating, an agency gathers public and private data points, feeds them into its proprietary algorithm and generates a “score.”  Scores can be used to measure one entity’s security efforts against another’s.  Of course, a rating is only as reliable as the entity providing it, so is it worth expending money on these services?  And once a score is obtained, can it harm your business?  Will others deem your score too low?  Will publication of your score hamper your prospects or operations?  Or worse, will a published low score ultimately make you a more attractive target for would-be hackers?
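The scoring mechanics described above can be illustrated with a toy model. The signal names and weights below are invented for illustration; real rating agencies use proprietary data points and algorithms that they do not publish.

```python
# Toy composite security score. Signal names and weights are hypothetical;
# real agencies combine far more inputs in proprietary ways.
WEIGHTS = {"patch_cadence": 0.40, "open_ports": 0.35, "breach_history": 0.25}

def security_score(signals: dict) -> int:
    """Combine 0-100 sub-scores into a single 0-100 composite score."""
    total = sum(weight * signals.get(name, 0) for name, weight in WEIGHTS.items())
    return round(total)
```

Because the weights and inputs are opaque, two agencies can score the same company very differently — which is exactly why the transparency called for by the Principles matters.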

Fortunately, some forty-odd companies, together with the US Chamber of Commerce, identified the need for a standardized methodology for security rating agencies.  In June 2017, they issued the Principles for Fair and Accurate Security Ratings (the “Principles”).

The Principles seek to establish guidelines for fair and accurate reporting of security ratings and promote standards for the appropriate use and disclosure of the scores.  The Principles suggest that all security rating agencies should:

  • provide transparency of the methodologies and data used to create the rating;
  • provide a mechanism by which entities that are rated can dispute, correct and/or appeal any rating published by the ratings agencies;
  • provide advance notice of any changes to ratings methodologies so that rated companies are clear on how the procedural changes may affect their scores;
  • remain independent of the entities they rate; and
  • maintain confidentiality of all sensitive information (including information shared during disputes, non-public ratings and other private information).

Hopefully, the Principles will result in greater consistency among rating agencies, increased reliability of the scores and more efficiency in the ratings process itself.  Ideally, the Principles will also lead to candid discussions among business partners about how entities can improve their security and, more importantly, suffer fewer breaches.

Of course, every company must assess whether to have itself rated and how to utilize and share any scores it obtains from the various rating agencies.  But assuming the agency retained to provide the security rating is in compliance with the Principles, at least buyers of these services can be reasonably certain they are receiving a truly objective measure with full opportunity to appeal or clarify any questions with respect to the score.

So, if your entity uses a security rating agency, make sure it is one that is operating in compliance with the Principles for Fair and Accurate Security Ratings espoused by the Chamber of Commerce.


Part II of III | FTC Provides Guidance on Reasonable Data Security Practices

By Linda Henry



This is the second in a series of three articles on the FTC’s Stick with Security blog. Part I and Part III of this series can be found here.

Over the past 15 years, the Federal Trade Commission (FTC) has brought more than 60 cases against companies for unfair or deceptive data security practices that put consumers’ personal data at unreasonable risk.  Although the FTC has stated that the touchstone of its approach to data security is reasonableness, the FTC has faced considerable criticism from the business community for lack of clarity as to what it considers reasonable data security.

Earlier this year, FTC Acting Chairman Maureen Ohlhausen pledged greater transparency concerning practices that contribute to reasonable data security.  As a follow-up to Ohlhausen’s pledge, the FTC published a weekly blog over the past few months, Stick with Security, that focuses on the ten principles outlined in its Start with Security Guide for Businesses. In the blog, the FTC uses examples taken from complaints and orders to offer additional clarity on each principle included in the Start with Security guidelines.

This is the second of three articles reviewing the security principles discussed by the FTC in its Stick with Security blog.

Store sensitive personal information securely and protect it during transmission.

Keep sensitive information secure throughout its lifecycle.  Businesses must understand how data travels through their networks in order to safeguard sensitive information throughout the entire data life cycle. For example, a real estate company that collected sensitive financial data from prospective home buyers encrypted information sent from the customer’s browser to the company’s servers.  Once the data entered the company’s system, however, it was decrypted and sent in readable text to various branch offices.  As a result, the company failed to maintain appropriate security throughout the entire life cycle of the sensitive information it collected.

Use industry-tested and accepted methods.  Although the market may reward products that are novel and unique, the FTC makes clear that when it comes to encryption, it expects companies to use industry-tested and approved methods.  As an example of a company that may not have employed reasonable security practices, the FTC describes an app developer that used its own proprietary method to encrypt data.  The more prudent decision would have been to deploy industry-accepted encryption algorithms rather than a proprietary method that was never industry tested and approved.

Ensure proper configuration.  Strong encryption is necessary, but not enough to protect sensitive data.  For example, the FTC found that a company misrepresented the security of its mobile app and failed to secure the transmission of sensitive personal information by disabling the SSL certificate validation that would have protected consumers’ data.  A second example of problematic configuration involved a travel company that used the Transport Layer Security (TLS) protocol to establish encrypted connections with consumers.  Although the company’s use of TLS was a prudent security practice, the company then disabled the process that validates the TLS certificate.  The FTC notes that by disabling the default validation settings, the company failed to follow recommendations from app platform providers and thus may not have used reasonable data security practices.
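The configuration mistake the FTC describes — enabling TLS but disabling certificate validation — is easy both to reproduce and to check for. A minimal Python sketch (the helper name is my own) using the standard library:

```python
import ssl

def make_client_context(insecure: bool = False) -> ssl.SSLContext:
    """Return a TLS client context.  The default verifies the server's
    certificate chain and hostname; insecure=True reproduces the
    misconfiguration the FTC faulted and should never ship in production."""
    ctx = ssl.create_default_context()
    if insecure:
        ctx.check_hostname = False       # skips hostname matching
        ctx.verify_mode = ssl.CERT_NONE  # skips chain validation entirely
    return ctx
```

With validation disabled, a connection still *looks* encrypted, but any man-in-the-middle can present its own certificate and read the traffic — which is why auditing for these two settings is a common compliance check.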

Segment your network and monitor who’s trying to get in and out.

Segment your network.  Companies that segment their network may minimize the impact of a data breach.  For example, use of firewalls to create separate areas on a network may reduce the amount of data that is accessed in the event hackers are able to gain access to a network.  The FTC provides the example of a retail chain that failed to adequately segment its network by permitting unrestricted data connections across its stores (e.g., allowing a computer from a store in one location to access employee information from another store).  By allowing unrestricted data connections across locations, hackers were able to use a security lapse in one location to gain access to sensitive data in other locations on the network.
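At bottom, segmentation reduces to an allow/deny decision at each boundary between network zones. A hypothetical, default-deny policy table (segment names invented for illustration) might look like:

```python
# Hypothetical segment-to-segment policy; anything unlisted is denied.
ALLOWED_ROUTES = {
    ("store", "payments"): False,   # store LANs never reach payment systems
    ("store", "store"): False,      # no store-to-store connections
    ("corp_hr", "payroll"): True,   # HR segment may reach payroll
}

def connection_allowed(src: str, dst: str) -> bool:
    """Default-deny: only explicitly permitted segment pairs may connect."""
    return ALLOWED_ROUTES.get((src, dst), False)
```

Under a default-deny rule like this, a compromise of one store's network does not automatically extend to the rest of the chain — the failure mode the FTC's retail example describes.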

Monitor activities on your network.  Companies should take advantage of the various tools available to alert businesses of suspicious activities, including unauthorized attempts to access a network, attempts to install malicious software and suspicious data exfiltration.

Secure remote access to your network.

Ensure endpoint security.  Every device with a remote network connection creates a possible entry point for unauthorized access.  Consequently, securing the various endpoints on a network has become increasingly important with the rise in mobile threats.  Companies should establish security rules for employees, clients and service providers, including requirements concerning software updates and patches.  Establishing security protocols alone is not sufficient, however; companies should also verify that the requirements are being followed.  In addition, companies must continually re-evaluate possible security threats and update endpoint security requirements and controls.

Put sensible access limits in place.  Companies should establish sensible limitations on remote network access.  For example, a company that engaged multiple vendors to remotely install and maintain software on the company’s network provided user accounts with full administrative privileges for each vendor.  The FTC notes that instead of providing all vendors with administrative access, the company should have provided full administrative privileges only to those vendors who truly required such access, and only for a limited period of time.  In addition, the company should have ensured that it could audit all vendor activities on the network, and attribute account use to individual vendor employees.
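The remedies the FTC suggests — least privilege, time-limited grants, and per-person attribution — can be sketched as a small access-grant object. All names here are hypothetical:

```python
from datetime import datetime, timedelta

class VendorGrant:
    """Time-boxed, per-person vendor access with an audit trail (illustrative)."""
    def __init__(self, vendor_employee: str, admin: bool, expires_at: datetime):
        self.vendor_employee = vendor_employee  # attribute actions to an individual
        self.admin = admin                      # least privilege: admin only if required
        self.expires_at = expires_at            # grants lapse automatically
        self.audit_log = []

    def can_admin(self, now: datetime) -> bool:
        return self.admin and now < self.expires_at

    def record(self, now: datetime, action: str) -> None:
        self.audit_log.append((now.isoformat(), self.vendor_employee, action))
```

The key design point is that access expires by default and every action is tied to a named vendor employee, so the company can audit exactly who did what, and when.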

Part III of this series will discuss the importance of applying sound security practices, the security practices of service providers, procedures to keep security current and the need to secure paper, physical media and devices.


Part I of III | FTC Provides Guidance on Reasonable Data Security Practices

By Linda Henry



This is the first in a series of three articles on the FTC’s Stick with Security blog. Part II and Part III of this series can be found here.

Over the past 15 years, the Federal Trade Commission (FTC) has brought more than 60 cases against companies for unfair or deceptive data security practices that put consumers’ personal data at unreasonable risk.  Although the FTC has stated that the touchstone of its approach to data security is reasonableness, the FTC has faced considerable criticism from the business community for lack of clarity as to what it considers reasonable data security.

Earlier this year, FTC Acting Chairman Maureen Ohlhausen pledged greater transparency concerning practices that contribute to reasonable data security.  As a follow-up to Ohlhausen’s pledge, the FTC published a weekly blog over the past few months, Stick with Security, that focuses on the ten principles outlined in its Start with Security Guide for Businesses. In the blog, the FTC uses examples taken from complaints and orders to offer additional clarity on each principle included in the Start with Security guidelines.

This is the first of three articles reviewing the security principles discussed by the FTC in its Stick with Security blog.

Start with Security 

Don’t collect personal information you don’t need.  After a security incident, many businesses realize that collecting sensitive information just because a company has the ability to do so is no longer a good business strategy.  In addition, it is easier for companies to protect a limited set of sensitive data than large amounts of personal information located on a company’s network.  Consequently, a company that limits the data it collects may be better positioned to demonstrate that its security practices are reasonable.  For example, following one security breach that resulted in the exposure of information of over 7,000 consumers, the FTC decided not to pursue a law enforcement action, in part, because the company had deliberately limited the sensitive information it collected.

Hold on to information only as long as you have a legitimate business need.  Companies should routinely review the data they have collected and dispose of data that is no longer needed.  As an example of inadequate data purging practices, the FTC cited a large company that stored personal information collected at recruiting fairs on an unencrypted company laptop. The company used the same laptop at each recruiting event, never removing sensitive information from it.  The company should have, as the FTC points out, removed candidates’ sensitive information once it was no longer needed.
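A retention rule like the one the FTC describes can be enforced mechanically rather than left to memory, e.g. by purging records past a fixed age. The 90-day window below is an invented example, not an FTC requirement:

```python
from datetime import datetime, timedelta

def purge_stale(records: list, now: datetime, max_age_days: int = 90) -> list:
    """Keep only records still inside the retention window.
    Each record is a dict with a 'collected_at' datetime (illustrative schema)."""
    cutoff = now - timedelta(days=max_age_days)
    return [r for r in records if r["collected_at"] >= cutoff]
```

Running a purge like this on a schedule (and on any shared laptop or export) is what "dispose of data that is no longer needed" looks like in practice.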

Don’t use personal information when it’s not necessary.  The FTC recognizes that companies have legitimate business reasons to use sensitive data; however, it stresses that companies should not use sensitive information in contexts that create unnecessary risks.

Train your staff on your standards – and make sure they’re following through.  Company staff are both the greatest security risk and a company’s first line of defense against security breaches. Training is not a one-time endeavor – companies must continue to train staff on new security practices and provide refresher training on current company policies.  The FTC also stressed the importance of deputizing staff to provide suggestions and practical advice that C-suite executives may not have.

When feasible, offer consumers more secure choices.  Companies should make it easy for consumers to make choices that result in greater security of their data, and should consider setting default settings for their products at the most protective levels.  As an example of inadequate security practices, the FTC cited a manufacturer that configured the default settings on its routers so that anyone online could gain access to the files on the storage devices connected to the routers.  The manufacturer failed to adequately explain the default settings to consumers, and could have possibly avoided unauthorized access had it configured the default setting in a more secure manner.

Control access to data sensibly.

Restrict access to sensitive data.  Employers should limit the access employees and other individuals have to sensitive data, both through physical access (e.g., locking a desk drawer) or by restricting sensitive network files to a limited number of employees with password protected access.

Limit administrative access.  The FTC compares a company’s need to safeguard and limit access to administrative rights to a bank’s need to safeguard the combination to the bank’s vault.  Limiting the number of employees who have administrative access can reduce a company’s security risk.

Require secure passwords and authentication.

Insist on long, complex and unique passwords, and store passwords securely. Companies should require that employees create strong, unique passwords.  In addition, companies should configure consumer products so that consumers are required to change the default password upon first use. Of course, strong passwords are of little use if they are not stored properly and are compromised. Companies can also guard against brute-force attacks by configuring their networks so that user credentials are suspended or disabled after a specified number of unsuccessful login attempts.
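Two of the practices above — storing salted, slow password hashes rather than plaintext, and locking an account after repeated failures — can be sketched with the Python standard library alone. The five-attempt threshold is illustrative:

```python
import hashlib
import hmac
import os

MAX_ATTEMPTS = 5  # illustrative lockout threshold

def hash_password(password: str, salt: bytes = None):
    """Store a salted PBKDF2 hash of the password, never the plaintext."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

class LoginGuard:
    """Suspends an account after MAX_ATTEMPTS failed logins (brute-force guard)."""
    def __init__(self, salt: bytes, digest: bytes):
        self.salt, self.digest = salt, digest
        self.failures, self.locked = 0, False

    def attempt(self, password: str) -> bool:
        if self.locked:
            return False
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                        self.salt, 200_000)
        if hmac.compare_digest(candidate, self.digest):  # constant-time compare
            self.failures = 0
            return True
        self.failures += 1
        self.locked = self.failures >= MAX_ATTEMPTS
        return False
```

Even if the stored hashes leak, the per-user salt and the deliberately slow key-derivation function make bulk password recovery far more expensive than it would be for plaintext or unsalted hashes.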

Protect sensitive accounts with more than just a password. Because individuals often use the same passwords for various online accounts, such login credentials can leave companies and consumers vulnerable to credential stuffing attacks.  Companies should consider requiring multiple authentication methods for access to accounts or applications with sensitive data.
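One widely used second factor is the time-based one-time password (TOTP) of RFC 6238 — the six-digit codes generated by authenticator apps. A minimal standard-library implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))          # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a per-user secret, a stolen or reused password alone is no longer enough to log in — which is precisely the credential-stuffing risk the FTC highlights.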

Protect against authentication bypass.  If hackers cannot access their targeted application through the front door, they will look for other available access points.  One way to reduce the risk of authentication bypass is to limit entry to an authentication point that the company can monitor.

Part II will discuss the storage and protection of sensitive information, segmenting your network and securing remote access. Click here for Part II.


Data Scraping, Bots and First Amendment Rights

By Linda Henry



A recent case involving a small workforce analytics startup fighting for its right to extract data from the largest professional networking site on the Internet may set a precedent for applying constitutional principles to social media platforms.  hiQ Labs, Inc., the company seeking to protect its right to scrape publicly available data from LinkedIn, maintains that social media platforms should be treated as a public forum, and consequently, hiQ’s data scraping activities are protected by the First Amendment.

Data scraping has come a long way since its early days, which involved manually copying data visible on a website.  Today, data scraping is a thriving industry, and high performance web scraping tools allow individuals and businesses to take advantage of the massive amount of data available on the Internet by collecting specific data from targeted websites.  Many companies are increasingly reliant on big data as an important part of their business strategy and now view data scraping as a business necessity.
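Mechanically, modern scraping is just automated parsing of fetched pages. A toy extractor using only the Python standard library (the choice of `<h2>` headings as the target field is purely illustrative):

```python
from html.parser import HTMLParser

class HeadingScraper(HTMLParser):
    """Collects the text of <h2> headings from an HTML page — the same
    field-targeted extraction that commercial scraping tools perform at scale."""
    def __init__(self):
        super().__init__()
        self._in_h2 = False
        self.headings = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        if self._in_h2 and data.strip():
            self.headings.append(data.strip())
```

Run across thousands of profile pages by automated bots, this simple pattern is what turns publicly visible web pages into structured datasets — and what triggers the legal disputes discussed below.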

Just as data extraction methods have evolved, so have the legal theories used to either defend or challenge data scraping activities.  In one of the earliest cases challenging unwanted data scraping, eBay, Inc. v. Bidder’s Edge, Inc., eBay successfully used a trespass to chattels theory to obtain a preliminary injunction against an auction aggregator that was compiling a database of eBay’s auction listings by extracting data from eBay’s site.   Other legal theories currently used in cases involving data scraping include claims alleging breach of contract, violation of terms of use, copyright infringement, violation of the Computer Fraud and Abuse Act (CFAA), unfair competition, and now, in HiQ Labs Inc. v. LinkedIn Corp., violation of a company’s constitutional rights.

The saga began in May 2017, when LinkedIn delivered a cease-and-desist letter to hiQ, warning hiQ that it was violating LinkedIn’s terms of use as both a user and an advertiser by using bots to scrape data from LinkedIn users’ public profiles.  LinkedIn threatened to bring an action against hiQ under the CFAA and also advised that LinkedIn would be taking measures to block hiQ’s bots from scraping data on LinkedIn’s site.

hiQ responded by filing suit against LinkedIn, alleging that by blocking hiQ’s bots, LinkedIn sought to gain a competitive advantage through unlawful and unfair business practices and also violated the free speech clause of the California Constitution.  hiQ maintained that because LinkedIn is a public forum, hiQ had a free speech right “to access that marketplace on equal terms with all other people and that LinkedIn’s private property rights in controlling access to its computers cannot take precedence.”

In its Complaint for Declaratory Judgment, hiQ reminded the U.S. District Court that the California Supreme Court had clearly interpreted the free speech rights guaranteed by the California Constitution as precluding an owner of private property from prohibiting access if the property constitutes a public forum. hiQ argued that because the United States Supreme Court upheld this California constitutional right, LinkedIn cannot promise a public forum and public access, but then selectively exclude members of the public from such forum.

During oral arguments, hiQ argued that social media sites such as LinkedIn are the modern equivalent of the town square, and that allowing LinkedIn to choose who can access the site violates the First Amendment and will have grave constitutional consequences.  In response, LinkedIn drew an analogy between books at a public library and the publicly available information on LinkedIn.  LinkedIn argued that just as a public library conditions access to its books on compliance with certain library policies, LinkedIn conditions access to its website on its privacy policies and terms of service.

U.S. District Court Judge Edward Chen granted the preliminary injunction requested by hiQ and ordered LinkedIn to remove, within 24 hours, any technology preventing hiQ from accessing information on public profiles.  Judge Chen found that because authorization is not necessary to access publicly available profile pages, LinkedIn was not likely to prevail on its CFAA claim.  In addition, “hiQ has raised serious questions as to whether LinkedIn, in blocking hiQ’s access to public data, possibly as a means of limiting competition, violates state law,” Judge Chen wrote.  Although the court did not hold that publicly available websites constitute a public forum, it clearly limited its decision on the free speech claim to the preliminary injunction.  LinkedIn has since filed an appeal with the Ninth Circuit, requesting that the court vacate the preliminary injunction.

The current legal battle between LinkedIn and hiQ could have wide implications for the future of data scraping, data ownership and the control of publicly available information that users post on social media sites.  Should, as hiQ argued, private social media platforms be treated as a public forum? Or, should social media sites have the right to limit access to publicly available information? And, do individuals who post information to social media sites agree that they are essentially making that data available in a public square? As Judge Chen noted at the conclusion of oral arguments, “I’ve got a feeling it’s not going to end here.”

OTHER THOUGHT LEADERSHIP POSTS:

IoT Device Companies: The FTC is Monitoring Your COPPA Data Deletion Duties and More

By Jennifer Thompson See all of Our JDSupra Posts by Clicking the Badge Below Recent Federal Trade Commission (FTC) activities with respect to the Children’s Online Privacy Protection Act (COPPA) demonstrate a continued interest in, and increased scrutiny of,...

Predictive Algorithms in Sentencing: Are We Automating Bias?

By Linda Henry See all of Our JDSupra Posts by Clicking the Badge Below Although algorithms are often presumed to be objective and unbiased, recent investigations into algorithms used in the criminal justice system to predict recidivism have produced compelling...

My Car Made Me Do It: Tales from a Telematics Trial

By Dawn Ingley See all of Our JDSupra Posts by Clicking the Badge Below Recently, my automobile insurance company gauged my interest in saving up to 20% on insurance premiums.  The catch?  For three months, I would be required to install a plug-in monitor that...

When Data Scraping and the Computer Fraud and Abuse Act Collide

By Linda Henry See all of Our JDSupra Posts by Clicking the Badge Below As the volume of data available on the internet continues to increase at an extraordinary pace, it is no surprise that many companies are eager to harvest publicly available data for their own use...

Is Your Bug Bounty Program Uber Risky?

By Jennifer Thompson See all of Our JDSupra Posts by Clicking the Badge Below In October 2016, Uber discovered that the personal contact information of some 57 million Uber customers and drivers, as well as the driver’s license numbers of over 600,000 United States...

IoT Device Companies: COPPA Lessons Learned from VTech’s FTC Settlement

By Jennifer Thompson See all of Our JDSupra Posts by Clicking the Badge Below In “IoT Device Companies:  Add COPPA to Your "To Do" Lists,” I summarized the Federal Trade Commission (FTC)’s June, 2017 guidance that IoT companies selling devices used by children will be...

Beware of the Man-in-the-Middle: Lessons from the FTC’s Lenovo Settlement

By Linda Henry See all of Our JDSupra Posts by Clicking the Badge Below The Federal Trade Commission’s recent approval of a final settlement with Lenovo (United States) Inc., one of the world’s largest computer manufacturers, offers a reminder that when it comes to...

#TheFTCisWatchingYou: Influencers, Hashtags and Disclosures 2017 Year End Review

Influencer marketing, hashtags and proper disclosures were the hot button topic for the Federal Trade Commission (the “FTC”) in 2017, so let’s take a look at just how the FTC has influenced Social Media Influencer Marketing in 2017. First, following up on the more...

Part III of III | FTC Provides Guidance on Reasonable Data Security Practices

By Linda Henry See all of Our JDSupra Posts by Clicking the Badge Below This is the third in a series of three articles on the FTC’s Stick with Security blog. Part I and Part II of this series can be found here and here. Over the past 15 years, the Federal Trade...

Apple’s X-Cellent Response to Sen. Franken’s Queries Regarding Facial Recognition Technologies

By Dawn Ingley See all of Our JDSupra Posts by Clicking the Badge Below Recently, I wrote an article outlining the growing body of state legislation designed to address and mitigate emerging privacy concerns over facial recognition technologies.  It now appears that...

When 2017 Becomes 1984: Facial Recognition Technologies – Face a Growing Legal Landscape

By Dawn Ingley



Recently, Stanford University professor and researcher Michal Kosinski caused a stir of epic proportions and conjured up visions of George Orwell’s 1984 in the artificial intelligence (AI) community.  Kosinski posited that several AI tools can now determine a person’s sexual orientation based solely on a photograph, and went on to speculate that AI could also predict political ideology, IQ, and propensity for criminal behavior.  Indeed, using a complex algorithm, Kosinski accurately pinpointed a male subject’s sexual orientation over 90% of the time.  Technology advances frequently outpace corresponding changes in the law, and the implications of this technology are alarming.  Could the LGBTQ community be targeted for violence or other discrimination based on this analysis?  Could “potential criminals” be turned away from gainful employment based on mere speculation about future behavior?  Would Facebook account photographs be an unintentional window into the most private facets of one’s life?  In a country already divided over sociopolitical issues, the question unfortunately seems to be not if, but when.  The urgency for laws and regulations to police the exponential proliferation of AI’s potential intrusions cannot be overstated as the threat of a 1984 world becomes more of a reality.

Although Kosinski’s revelation is a recent one, concerns over facial recognition technologies and biometric data are hardly novel.  In 2012, Ireland forced Facebook to disable its facial recognition software in all of Europe: the EU data privacy directive (and the upcoming GDPR) required explicit consent from users, and Facebook had never requested or received such consent from account owners.  Facebook was also required to delete all facial profiles collected in Europe.

In the United States, Illinois appears to be ground zero for the battle over facial recognition technologies, predominantly because it is one of the few states with a specific law on the books.  The Biometric Information Privacy Act, 740 ILCS 14 (“BIPA”), was initially a reaction to Pay by Touch, a technology available in the mid-2000s as a means to connect biometric information (e.g., fingerprints) to credit card and other accounts.  A wave of privacy-geared litigation spelled doom for Pay by Touch and its handful of competitors.  With the increasing adoption of facial recognition software into popular technology platforms (such as the iPhone), BIPA is front and center once again.

The scope of BIPA includes “retina or iris scan, fingerprint, voiceprint or scan of hand or face geometry…”  Key provisions of BIPA are as follows:

  • Prohibits private entities from collecting, selling, leasing, trading or otherwise profiting from biometric data, without express written consent;
  • Requires private entities that collect biometric data to protect such data using a reasonable standard of care that is at least as protective as the methods used by the entity to protect other forms of confidential and sensitive information;
  • Requires private entities that collect such biometric data to comply with written policies “made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information,” on the earlier of: i) when the initial purpose for collecting or obtaining such identifiers or information has been satisfied; or ii) within 3 years of the individual’s last interaction with the private entity.
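The retention rule in the last bullet reduces to a simple “earlier of two dates” computation, which a compliance team tracking destruction deadlines might model directly.  A minimal sketch in Python: the function and argument names are ours, not the statute’s, and “3 years” is approximated as 1,095 days for illustration only.

```python
from datetime import date, timedelta

def destruction_deadline(purpose_satisfied, last_interaction):
    """Illustrative BIPA-style retention deadline: the earlier of the date
    the initial collection purpose was satisfied, or three years after the
    individual's last interaction with the entity.

    purpose_satisfied may be None if the purpose is still ongoing.
    """
    # Approximate "3 years" as 3 * 365 days for this sketch.
    three_year_mark = last_interaction + timedelta(days=3 * 365)
    if purpose_satisfied is None:
        return three_year_mark
    return min(purpose_satisfied, three_year_mark)

# Here the purpose-satisfied date precedes the three-year mark, so it controls.
deadline = destruction_deadline(date(2018, 6, 1), date(2017, 1, 15))
```

A real compliance calendar would, of course, need counsel’s reading of “initial purpose … satisfied” and of how the statute measures years; the point is only that the statute’s trigger is mechanical enough to automate.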

One of the latest lawsuits filed pursuant to this law is against Shutterfly, which is accused of collecting facial, fingerprint and iris scans without express written consent from website visitors, as well as from people who may be “tagged” in photos (even though they may never have used Shutterfly’s services or held a Shutterfly account).  In a similar lawsuit filed against Facebook in 2015, users likewise charged that Facebook was collecting and using biometric data without specific consent.

It is likely that Apple is monitoring these cases closely.  After all, iPhone X, according to Apple’s website, uses facial recognition technology to obtain access to the phone—in essence, a “facial password.”  Apple is quick to point out that it encrypts the mapping of users’ faces and that the actual data exists only on the physical device and not elsewhere (i.e., not in the cloud).

It is also likely that lawmakers in the other two states with statutes similar to BIPA are keeping a watchful eye on the Illinois docket.  Texas and Washington both have biometric laws on the books along the lines of BIPA (though unlike Illinois, Texas and Washington do not provide a private cause of action).  While residents of these states can take comfort in the legislative remedies available there, where does that leave residents of other states?  Given that the federal approach to privacy in the United States generally tends to be sector-specific (e.g., HIPAA for medical data; Gramm-Leach-Bliley for financial institutions), it seems clear that change must surface at the state level.  Until then, state residents without legal protections are left with the following options:

  • ObscuraCam: an anti-facial-recognition app that, as its name suggests, obscures the identifying features of the individuals photographed.
  • Opt out whenever possible: Facebook settings can be modified to allow an account holder both to opt out of facial recognition technologies and to delete any data already collected.  Users of Google+ must affirmatively opt in to facial recognition technologies.
  • Tangible solutions: 3-D printed glasses are sometimes effective in disrupting or scrambling facial features, thereby thwarting facial recognition technologies.

However, realistically, until the law catches up with technology, the Orwellian threat is real.  As the saying goes and as 1984 illustrates time and time again, “knowledge is power.”  And when knowledge gleaned from facial recognition technology falls into the wrong hands, absolute power corrupts absolutely.  For lawmakers, the time is yesterday (if not sooner) for the law to catch up with the break-neck pace of facial recognition technologies and their potential slippery slope of use cases.
