Predictive Algorithms in Sentencing: Are We Automating Bias?

By Linda Henry



Although algorithms are often presumed to be objective and unbiased, recent investigations into algorithms used in the criminal justice system to predict recidivism have produced compelling evidence that such algorithms may be racially biased.  As a result of one such investigation by ProPublica, the New York City Council recently passed the first bill in the country designed to address algorithmic discrimination in government agencies. The goal of New York City’s algorithmic accountability bill is to monitor algorithms used by municipal agencies and provide recommendations as to how to make the City’s algorithms fairer and more transparent.

The criminal justice system is one area in which governments are increasingly using algorithms, particularly in connection with creating risk assessment profiles of defendants.  For example, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a computer algorithm used to score a defendant’s risk of recidivism and is one of the risk assessment tools most widely used by courts to predict recidivism.  COMPAS creates a risk assessment by comparing information regarding a defendant to historical data from groups of similar individuals.

COMPAS is only one example of proprietary software used by courts in sentencing decisions, and states increasingly rely on software risk assessment tools such as COMPAS as a formal part of the sentencing process.  Because many such algorithms are proprietary, their source code is not published and is not subject to state or federal open records laws. The resulting opacity prevents third parties from seeing the data and calculations that influence sentencing decisions.

Challenges by defendants to the use of such algorithms in criminal sentencing have so far been unsuccessful.  In 2017, Eric Loomis, a Wisconsin defendant, unsuccessfully challenged the use of the COMPAS algorithm as a violation of his due process rights.  In 2013, Loomis was arrested and charged with five criminal counts related to a drive-by shooting.  Loomis maintained that he was not involved in the shooting but pled guilty to driving a motor vehicle without the owner’s permission and fleeing from police.  At sentencing, the trial court judge sentenced Loomis to six years in prison, noting that the court ruled out probation based in part on the COMPAS risk assessment, which suggested Loomis presented a high risk to re-offend.[1]  Loomis appealed his sentence, arguing that the use of the risk assessment violated his constitutional right to due process.  The Wisconsin Supreme Court affirmed, holding that the trial court could utilize the risk assessment tool in sentencing and that its use did not violate Loomis’ due process rights.  In 2017, the U.S. Supreme Court denied Loomis’ petition for a writ of certiorari.

The use of computer algorithms in risk assessments has been touted by some as a way to eliminate human bias in sentencing.  Although COMPAS and other risk assessment software programs use algorithms that are race neutral on their face, the algorithms frequently use data points that can serve as proxies for race, such as ZIP codes, education history and family history of incarceration.[2]  In addition, critics of such algorithms question the methodologies used by programs such as COMPAS, since methodologies (which are necessarily created by individuals) may unintentionally reflect human bias.  If the data sets used to train the algorithms are not truly objective, human bias may be unintentionally baked into the algorithm, effectively automating human bias.
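As a toy illustration of how a facially race-neutral input can serve as a proxy for a protected attribute, the synthetic sketch below (all proportions, labels and weights are invented) scores individuals using only ZIP code, yet still produces different average scores by group, because ZIP code and group membership are correlated in the synthetic population:

```python
import random

random.seed(0)

# Synthetic population: ZIP code "A" is 80% group1 and ZIP code "B" is
# 80% group2, so ZIP code encodes group membership even though the
# scoring rule below never looks at group directly.
population = []
for _ in range(10_000):
    zip_code = random.choice(["A", "B"])
    if zip_code == "A":
        group = "group1" if random.random() < 0.8 else "group2"
    else:
        group = "group2" if random.random() < 0.8 else "group1"
    population.append((zip_code, group))

def risk_score(zip_code):
    # A facially neutral rule: the score depends only on ZIP code.
    return 7 if zip_code == "A" else 3  # hypothetical weights

def mean_score(group):
    scores = [risk_score(z) for z, g in population if g == group]
    return sum(scores) / len(scores)

print(mean_score("group1"))
print(mean_score("group2"))
```

Nothing in `risk_score` references group membership, but the residential segregation baked into the synthetic data carries that information anyway, so group1 receives a higher average score.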

The investigation by ProPublica that prompted New York City’s algorithmic accountability bill found that COMPAS risk assessments erroneously identified black defendants as presenting a high risk for recidivism at almost twice the rate of white defendants (45 percent vs. 23 percent).  In addition, ProPublica’s research revealed that COMPAS risk assessments erroneously labeled white defendants as low-risk 48 percent of the time, compared to 28 percent for black defendants.  Black defendants were also 45 percent more likely to receive a higher risk score than white defendants, even after controlling for variables such as prior crimes, age and gender.[3]  ProPublica’s findings raise serious concerns regarding COMPAS; because the calculations used to assess risk are proprietary, neither defendants nor the court systems utilizing COMPAS have visibility into why the assessments mislabel black and white defendants at such different rates.
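The two error rates ProPublica compared can be made concrete with a little confusion-matrix arithmetic. The sketch below uses invented record counts chosen only to approximate the reported rates; it is not ProPublica’s data or methodology:

```python
def error_rates(records, group):
    """Return (false positive rate, false negative rate) for one group.

    Each record is (group, predicted_high_risk, reoffended)."""
    rows = [(pred, actual) for g, pred, actual in records if g == group]
    false_pos = sum(1 for pred, actual in rows if pred and not actual)
    negatives = sum(1 for _, actual in rows if not actual)
    false_neg = sum(1 for pred, actual in rows if not pred and actual)
    positives = sum(1 for _, actual in rows if actual)
    return false_pos / negatives, false_neg / positives

# Illustrative counts per group: 100 non-reoffenders and 100 reoffenders.
records = []
records += [("black", True, False)] * 45 + [("black", False, False)] * 55
records += [("black", False, True)] * 28 + [("black", True, True)] * 72
records += [("white", True, False)] * 23 + [("white", False, False)] * 77
records += [("white", False, True)] * 48 + [("white", True, True)] * 52

print(error_rates(records, "black"))  # (0.45, 0.28)
print(error_rates(records, "white"))  # (0.23, 0.48)
```

The asymmetry is the crux of the finding: among people who did not reoffend, black defendants were flagged high-risk far more often, while among people who did reoffend, white defendants were labeled low-risk far more often.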

Although New York City’s algorithmic accountability bill aims to curb algorithmic bias and bring more transparency to algorithms used across all New York City agencies, including those used in criminal sentencing, the task force it creates faces significant hurdles.  It is unclear how the task force will make the threshold determination as to whether an algorithm disproportionately harms a particular group, or how the City will increase transparency and fairness without access to proprietary source code.  Despite the daunting challenge of balancing the need for more transparency against the right of companies to protect their intellectual property, critics of the use of algorithms in the criminal justice system are hopeful that New York City’s bill will encourage other cities and states to acknowledge the problem of algorithmic bias.

 


[1] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

[2] Hudson, L., Technology Is Biased Too. How Do We Fix It?, FiveThirtyEight (Jul. 20, 2017), https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it/.

[3] Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.



My Car Made Me Do It: Tales from a Telematics Trial

By Dawn Ingley



Recently, my automobile insurance company gauged my interest in saving up to 20% on insurance premiums.  The catch?  For three months, I would be required to install a plug-in monitor that collected extensive metadata—average speeds and distances, routes routinely traveled, seat belt usage and other types of data.  But to what end?  Was the purpose of the monitor to learn more about my driving practices and to encourage better driving habits?  To share my data with advertisers wishing to serve up a buy-one, get-one free coupon for paper towels from my favorite grocery store (just as I pass by it) on my touchscreen dashboard?  Or to build a “risk profile” that could be sold to parties (AirBnB, banks, other insurance companies) who may have a vested interest in learning more about my propensity for making good decisions?  The answer could be, “all of the above.”
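For a sense of how such a monitor’s metadata could feed a scoring model, here is a purely hypothetical sketch; the features, weights and eligibility threshold are all invented for illustration, not any insurer’s actual formula:

```python
# Hypothetical per-trip risk scoring from telematics metadata.
def trip_risk_score(avg_speed_mph, speed_limit_mph, seat_belt_on, night_trip):
    score = 0
    if avg_speed_mph > speed_limit_mph + 10:
        score += 3  # sustained speeding well over the limit
    if not seat_belt_on:
        score += 2  # seat belt not detected
    if night_trip:
        score += 1  # late-night driving
    return score

def discount_eligible(trip_scores, threshold=1.0):
    # Invented rule: the average trip score must stay at or below the threshold.
    return sum(trip_scores) / len(trip_scores) <= threshold

trips = [
    trip_risk_score(72, 65, seat_belt_on=True, night_trip=False),   # 0
    trip_risk_score(80, 65, seat_belt_on=True, night_trip=True),    # 4
    trip_risk_score(40, 45, seat_belt_on=False, night_trip=False),  # 2
]
print(discount_eligible(trips))  # False: average score of 2.0 exceeds 1.0
```

The point of the sketch is that the same handful of metrics that could justify a premium discount could just as easily feed a “risk profile” sold to third parties, which is exactly the ambiguity the article describes.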

Wireless technology integrated into vehicles is nothing new; diagnostic systems, cellular connections and in-dash navigation have been the norm for years.  However, the breadth of data collection and the manner in which data is monetized are evolving quickly.  Telematics platforms are akin to social media platforms in terms of the sheer volume of data collected and the purposes for which that data can (and could) be used.  To be sure, many use cases, such as predicting oil changes and locating the nearest gas station, stand to benefit drivers.  Future functionality may even detect texting drivers or sleeping drivers.

Opportunities abound, and at least one company’s early success is proof of the sheer potential of telematics data mining.  In the past three years, Otonomo has carved out a niche for itself by brokering sales of mined telematics data to parties such as insurance companies and general retail businesses.  Otonomo’s technology packages telematics data into a user-friendly, anonymized platform that takes into account worldwide regulations governing telematics.

Not surprisingly, several automobile manufacturers predict that the sale of automobile analytics will be a key profit center in coming years.  What is surprising, however, is the lack of legal “rules of the road” in the United States today.  While laws do clarify that an automobile’s event data recorder is owned by the automobile owner (and provide that these “black boxes” may be obtained only by court order), other laws governing telematics are few and far between.  A driver’s consent often occurs upon registration of embedded GPS platforms or other navigation tools, but according to Government Accountability Office research, these types of notices often are lacking in terms of explaining how data is used and whether it is shared.  The Federal Trade Commission maintains jurisdiction over consumer data and related privacy issues, but there are not yet rules specific to telematics data collected by the automobile industry.

Much like the credit card industry’s promulgation of the Payment Card Industry Data Security Standard (PCI DSS), the automotive industry responded in 2014 with its own Privacy Principles for Vehicle Technologies and Services, which include the following:

  • Transparency: a commitment to provide both owners and registered users of vehicles with access to “clear, meaningful notices” as to what data is collected, used and shared.
  • Choice: a commitment to provide owners and registered users with certain choices “regarding the collection, use and sharing” of information.
  • Respect for Context: a commitment to use and share information in a manner consistent with the context in which information was collected.
  • Data Minimization, De-identification, and Retention: a commitment to collect information only as needed for legitimate business purposes, and to retain it no longer than needed for such legitimate business purposes.
  • Data Security: a commitment to implement reasonable measures to protect information against loss and unauthorized access or use.
  • Integrity and Access: a commitment to implement measures to maintain the accuracy of information, along with a means for owners and registered users to correct information.
  • Accountability: a commitment to take reasonable steps to ensure that any parties receiving the information adhere to the principles.

To date, twenty automakers have signed on to the principles, including Honda, Toyota, Nissan, Subaru and Hyundai.

Congress has also responded to concerns over privacy and security in automobiles.  In early 2017, Representatives Joe Wilson (R-SC, 2nd District) and Ted Lieu (D-CA, 33rd District) introduced the SPY Car Study Act.  The Act does not itself create new regulations, but it does require the National Highway Traffic Safety Administration (NHTSA) to investigate technological threats to automobiles.  More specifically, Congress tasked the NHTSA with identifying:

  • Measures necessary to separate critical software systems that affect a driver’s control of a vehicle from other technology systems;
  • Measures necessary to detect and prevent codes associated with malicious behaviors;
  • Techniques necessary to detect and prevent, discourage or mitigate intrusions into vehicle software systems and other cybersecurity risks in automobiles;
  • Best practices to secure driver data collected by electronic systems; and
  • A timeline for implementation of technology to reflect such best practices.

Otonomo has indicated that the current market for automobile telematics data focuses on user experience and convenience, but, in reality, no future use case is off the table.  And as with many technologies and, in particular, IoT platforms, drivers must weigh the benefits and dangers of use.  The calculus would look something like this:

  • Benefits (current and future):
    • Traffic and navigation services save drivers time and reduce risk of further traffic accidents;
    • Automobile diagnostics can not only remind drivers of to-do’s such as oil changes, but also alert drivers to issues such as dangerous behaviors (texting or sleeping while driving, blood alcohol level); and
    • Automobile insurance discounts may be a “reward” for drivers supplying metadata.
  • Risks:
    • Customer “lock-in”—could data as to driving habits (miles driven, speeds, use of turn signals) keep a customer from changing insurance carriers, if prospective carriers refuse coverage based on a driver’s metrics?
    • Will lenders factor risky driving behaviors into decisions as to whether credit or loans are extended?
    • Will current insurers raise premiums based on activities tracked via collection of metadata?

Indeed, the answer to this calculus may vary across geographies and cultures.  The United States takes no across-the-board approach to privacy and data protection; rather, protections are extended to particular industries (e.g., HIPAA for healthcare data).  U.S. citizens have proven more likely to provide the types of information contemplated if they receive some benefit from such sharing.  The European Union, on the other hand, has adopted a stringent, uniform approach to data protection that extends across all industries.  It follows that EU citizens may be more sensitive than citizens of other regions to sharing information.  Automobile manufacturers will likely need to take such variations into account when implementing telematics systems.  Regardless of geography, drivers should not only look to the manner in which data is being used today, but also contemplate tomorrow, as the expansion of use cases is likely a “not if, but when” scenario.  For this reason, the answer to why a person drives more cautiously may be the same as to why his or her grocery bill mysteriously increased last month: “My car made me do it!”


When Data Scraping and the Computer Fraud and Abuse Act Collide

By Linda Henry



As the volume of data available on the internet continues to increase at an extraordinary pace, it is no surprise that many companies are eager to harvest publicly available data for their own use and monetization.  Data scraping has come a long way since its early days, which involved manually copying data visible on a website.  Today, data scraping is a thriving industry, and high-performance web scraping tools are fueling the big data revolution.  Like many technological advances though, the law has not kept up with the technology that enables scraping. As a result, the state of the law on data scraping remains in flux.
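At its core, the extraction step of web scraping is just parsing structure out of HTML. The sketch below uses Python’s standard-library parser against an inline snippet so it is self-contained; a real scraper would fetch pages over HTTP (and should weigh a site’s terms of use), and the tag and class names here are invented:

```python
from html.parser import HTMLParser

class ProfileNameScraper(HTMLParser):
    """Collect the text of <h2 class="profile-name"> elements."""
    def __init__(self):
        super().__init__()
        self.in_name = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (attribute, value) pairs.
        if tag == "h2" and ("class", "profile-name") in attrs:
            self.in_name = True

    def handle_data(self, data):
        if self.in_name:
            self.names.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_name = False

page = """
<html><body>
  <h2 class="profile-name">Ada Lovelace</h2>
  <h2 class="profile-name">Alan Turing</h2>
  <h2 class="page-footer">About</h2>
</body></html>
"""
scraper = ProfileNameScraper()
scraper.feed(page)
print(scraper.names)  # ['Ada Lovelace', 'Alan Turing']
```

Modern scraping tools automate the fetching, parsing and storage of such data at scale, which is what makes the legal questions below commercially significant.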

The federal Computer Fraud and Abuse Act (CFAA) is one statute frequently used by companies that seek to stop third parties from harvesting data.  The CFAA imposes liability on anyone who “intentionally accesses a computer without authorization, or exceeds authorized access, and thereby obtains … information from any protected computer.”  The Supreme Court has held that the CFAA “provides two ways of committing the crime of improperly accessing a protected computer: (1) obtaining access without authorization; and (2) obtaining access with authorization but then using that access improperly.” (Musacchio v. United States).

The CFAA’s applicability to data scraping is not clear, though, as the statute was originally intended as an anti-hacking measure, and scraping typically involves accessing publicly available data on a public website.  To meet the CFAA’s requirement that a third party engage in unauthorized or improper access of a website, companies often argue that use of a website in violation of the applicable terms of use (e.g., by harvesting data) constitutes unauthorized access in violation of the CFAA.

Over the past year, a handful of cases in California challenging the legality of web scraping offer a few clues as to how courts may approach future challenges to web scraping under the CFAA.  In one of the most high-profile data scraping cases of 2017 (hiQ Labs, Inc. v. LinkedIn Corp.), a U.S. District Court granted a preliminary injunction requested by hiQ Labs, a small workforce analytics startup, and ordered LinkedIn to remove technology that prevented hiQ Labs from accessing information on public profiles.  LinkedIn argued that hiQ Labs was violating LinkedIn’s terms of use as both a user and an advertiser by using bots to scrape data from LinkedIn users’ public profiles.  hiQ Labs rejected LinkedIn’s argument that the CFAA applied, and maintained that because social media platforms should be treated as a public forum, hiQ Labs’ data scraping activities are protected by the First Amendment.

In hiQ, U.S. District Court Judge Chen found, in part, that because authorization is not necessary to access publicly available profile pages, LinkedIn was not likely to prevail on its CFAA claim even if hiQ Labs had violated the terms of use.  Judge Chen did note that LinkedIn’s construction of the CFAA was not without basis: visiting a website “accesses” the host computer in one literal sense, and where authorization has been revoked by the website host, that access can be said to be “without authorization.”  However, he observed, whether access to a publicly viewable site may be deemed “without authorization” under the CFAA where the website host purports to revoke permission “is not free from ambiguity.”

Judge Chen reasoned that LinkedIn’s interpretation of the CFAA would allow a company to revoke authorization to a publicly available website at any time and for any reason, and then invoke the CFAA for enforcement, exposing an individual to both criminal and civil liability.  He characterized the possibility of criminalizing the act of viewing of a public website in violation of an order from a private entity as “effectuating the digital equivalence of Medusa.”

While LinkedIn waits for the Ninth Circuit to hear oral arguments in hiQ, another company, 3taps Inc., has filed a similar suit against LinkedIn, seeking a declaratory judgment that 3taps is not violating the CFAA and thus should be permitted to continue to extract data from public LinkedIn profile pages (3taps Inc. v. LinkedIn Corp.).  In addition, because 3taps successfully argued that the court should deem the 3taps and hiQ matters related and have them heard by the same judge, on February 22, 2018, Judge Chen ordered the reassignment of the 3taps case from the Northern District of California’s San Jose court to Judge Chen’s court in San Francisco.

In addition to hiQ, the recent dismissal of a CFAA claim that Ticketmaster brought against a company engaged in data scraping further calls into question whether companies will succeed in using the CFAA to stop web scraping (Ticketmaster L.L.C. v. Prestige Entertainment, Inc.).  In January 2018, a California district court dismissed, with leave to amend, Ticketmaster’s CFAA claim against a ticket broker that used bots to purchase tickets in bulk from the Ticketmaster site.  The court noted that although Ticketmaster outlined the defendants’ terms of use violations in a cease and desist letter, Ticketmaster did not actually revoke access authority and implied that the defendants could continue to use Ticketmaster’s website as long as they abided by the terms of use.  In addition, the court maintained that Ticketmaster could not base a CFAA claim on an argument that the defendants exceeded authorized access unless Ticketmaster could demonstrate that the defendants were inside hackers who accessed unauthorized information.

hiQ, 3taps and Ticketmaster demonstrate the inherent difficulty of trying to apply a statute that pre-dates the internet age to modern technology.  Although courts have not been consistent in their opinions as to whether violation of a company’s terms of use constitutes unauthorized or improper access under the CFAA, Ticketmaster and hiQ offer data scrapers hope that courts will continue to question whether the CFAA should prohibit harvesting publicly available data.  Companies that engage in data scraping should, however, consider that a court is more likely to impose liability under the CFAA if the data collected is not publicly available or if the methods used to obtain the data can more clearly be characterized as unauthorized access.  The Ninth Circuit is expected to hear oral arguments in hiQ in March, and the court’s interpretation of the CFAA is likely to have a significant impact on the use of automated processes to harvest third-party data.


Is Your Bug Bounty Program Uber Risky?

By Jennifer Thompson



In October 2016, Uber discovered that the personal contact information of some 57 million Uber customers and drivers, as well as the driver’s license numbers of over 600,000 United States Uber drivers, had been hacked.  Uber, like many companies, ran a vulnerability disclosure or “bug bounty” program that invited hackers to test Uber’s systems for certain vulnerabilities and offered financial rewards for qualifying discoveries.  In fact, Uber has paid out over $1,000,000 pursuant to its program, which is administered through HackerOne, a third-party vendor.  Uber initially treated the breach as an authorized vulnerability disclosure, paid the hackers $100,000, and the hackers deleted the records.  Yet Uber has faced lawsuits, governmental inquiries and much public criticism in connection with this payment.

What did Uber do wrong and how can organizations ensure that their programs are not subject to the same risks?  To answer this question, one must first understand why and how companies create bug bounty programs.

Why Institute a Bug Bounty Program?

Many companies feel it is far better to pay money upfront to identify vulnerabilities before those vulnerabilities turn into public relations and regulatory nightmares that not only drain manpower and financial resources, but also may result in litigation and increased governmental oversight.  In addition, advance knowledge of a vulnerability allows a company to develop the right solution for the problem, rather than reacting hastily in the pressure-filled post-breach environment.  A properly structured bug bounty program also can be a great way to test the viability of a new concept, product or platform.  It should be noted that payments made pursuant to bug bounty programs are rewards, NOT ransoms; the difference lies in the collaboration and the permission-based structure of the interaction.

Hackers may be motivated by a number of factors, including financial gain, intellectual challenge and the personal prestige to be gained from discoveries.  Most of all, by participating in a sanctioned bug bounty program, hackers are able to do what they love without fear of legal repercussion.  The Computer Fraud and Abuse Act prohibits accessing a “protected computer” without authorization to obtain information from it.  Put simply, any individual who hacks a system could be criminally prosecuted.  However, when a company sanctions and even invites the activity, as is the case with a bug bounty program, a hacker can be reasonably certain that they will not be prosecuted, provided that they comply with the program’s requirements.

Having made the decision to launch a bug bounty program, a company must then decide whether to run the program in-house or to engage a third-party service provider to run it.  Going it alone requires a dedicated team of employees who not only understand the bug bounty program and the technology or functionality being tested, but also are qualified to evaluate any discovered vulnerabilities and issue the resulting compensation.  Alternatively, outsourcing can provide some comfort, not to mention the expertise of a vendor who can scale the program as needed.  Essentially, the vendor acts as an intermediary to help the client formulate goals for the program, communicate with hackers, evaluate the identified vulnerabilities and administer payment of the bounties.  Bug bounty vendors offer a measure of safety to all parties, not just in terms of program reliability, but also in vetting the quality and accuracy of participating hackers.

Components of a Bug Bounty Program

In June 2017, the Department of Justice’s Criminal Division Cybersecurity Unit (the “CU”) provided written guidance for companies seeking to implement a vulnerability disclosure program.  Presented as a list of considerations rather than requirements, the guidance specifically recognizes that each company’s program must be driven by its unique business purposes and needs.  Nevertheless, the CU suggests that a good program will be:

  • Appropriately designed with a clear scope – The CU suggests that when developing the program, a company must designate the components and/or data, as well as the types of vulnerabilities or methods of attack, it wants hackers to test.  The CU also lists factors to consider when a program tests protections for sensitive information, including detailing restrictions on and handling requirements for that data (such as prohibitions on saving, storing or transferring it) and specifying what methods hackers can use to find vulnerabilities.  Lastly, the company should consider whether any of the vulnerabilities being tested impact the interests of third-party vendors or business partners; if so, the company should obtain authorization from those third parties before proceeding with the program.
  • Properly administered – Once a company has clearly outlined the scope of the program, it must explain how the program works.  Companies should provide clearly stated points of contact, preferably non-personal email accounts that are regularly monitored.  A company will want to be able to reproduce or corroborate any identified vulnerability, and the CU suggests that while a company is free to set the rules for documentation format, it should be mindful that if hackers are prevented from saving certain data, the company may have to accept written descriptions instead.  Timing should also be addressed – some companies require prompt disclosure, some set a long-term deadline, and others create short-term challenges that offer higher payouts for bugs identified within a defined time period.  Lastly, the company should determine its overall budget for the program and advertise the levels of bounties offered and the requirements to attain each level of reward.
  • Accompanied by a stated vulnerability disclosure policy – The policy should clearly state the scope and administrative requirements of the bug bounty program.  The critical component here is having the policy publicized and accessible to potential participants.  In addition, the company should state the consequences for hackers who operate outside the parameters of the program.  While the law is a bit vague on this point, companies often promise not to prosecute if hackers fully comply with all elements of the program.  Lastly, the policy should provide a company contact to whom hackers can direct questions and from whom they can receive guidance on program rules.
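A common convention for publicizing exactly this kind of contact and policy information (one that postdates the CU’s guidance and is now standardized as RFC 9116) is a security.txt file served from a site’s /.well-known/ directory.  A minimal sketch, using hypothetical example.com addresses, might look like:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2025-12-31T23:59:59.000Z
Policy: https://example.com/bug-bounty-policy
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```

The Contact and Policy fields point hackers to the monitored point of contact and the published disclosure policy that the CU’s guidance contemplates.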

Conclusion

Once a bug bounty program is created and publicized, a company must hold itself to as strict a compliance standard as it applies to its hacker participants.  Uber’s problem was that it paid a drastically larger amount (ten times the highest reward advertised under the program) for a single report, thereby circumventing its own publicized and authorized program.  Not only did the payment exceed the limits of the stated program, but Uber also negotiated with the hackers, which made the exchange look more like extortion than a bounty.  In addition, officials at Uber were aware that this situation unfolded differently from the typical bug bounty interaction: rather than notifying Uber of the potential vulnerability, the hackers downloaded the sensitive information and then contacted Uber to demand payment.  At that point, Uber should have notified law enforcement.  Instead, it waited more than a year to notify the public or the authorities of the breach.

Whether a company runs its own program or enlists the aid of a service such as HackerOne, it would be well-advised to ensure compliance with the terms of the program.  Furthermore, it must make certain that all relevant business and legal leaders at the organization are aware of any identified vulnerability, particularly if it is a significant one or involves sensitive information.  Only by involving all stakeholders in the discussion can the organization ensure that its program does not run afoul of relevant legal and other guidelines and requirements.
