Predictive Algorithms in Sentencing: Are We Automating Bias?

Apr 24, 2018

By Linda Henry


Although algorithms are often presumed to be objective and unbiased, recent investigations into algorithms used in the criminal justice system to predict recidivism have produced compelling evidence that such algorithms may be racially biased.  As a result of one such investigation by ProPublica, the New York City Council recently passed the first bill in the country designed to address algorithmic discrimination in government agencies. The goal of New York City’s algorithmic accountability bill is to monitor algorithms used by municipal agencies and provide recommendations as to how to make the City’s algorithms fairer and more transparent.

The criminal justice system is one area in which governments are increasingly using algorithms, particularly in connection with creating risk assessment profiles of defendants.  For example, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is one of the risk assessment tools most widely used by courts to predict recidivism.  COMPAS scores a defendant’s risk of re-offending by comparing information regarding the defendant to historical data from groups of similar individuals.
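Because COMPAS is proprietary, its actual methodology is not public.  The sketch below is only a hypothetical illustration of the general idea described above, scoring a new defendant against the outcomes of similar historical individuals; the features, data and neighbor count are all invented for illustration and do not reflect COMPAS itself.

```python
# Illustrative only: COMPAS is proprietary, so this is NOT its method.
# A hypothetical "similar individuals" risk score: the share of the k most
# similar historical defendants (by a few invented features) who re-offended.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic historical records: [age, prior_offenses, years_since_last_offense]
history = rng.normal(loc=[30, 2, 3], scale=[8, 2, 2], size=(500, 3))
reoffended = (rng.random(500) < 0.35).astype(int)   # synthetic outcomes

# Index the historical data so a new defendant can be matched to "similar" ones
model = NearestNeighbors(n_neighbors=25).fit(history)

def risk_score(defendant_features):
    """Fraction of the 25 most similar historical defendants who re-offended."""
    _, idx = model.kneighbors([defendant_features])
    return reoffended[idx[0]].mean()

print(risk_score([22, 4, 1]))   # e.g. 0.36 -- a purely hypothetical risk score
```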

COMPAS is only one example of proprietary software used to inform sentencing decisions, and states are increasingly making risk assessment tools such as COMPAS a formal part of the sentencing process.  Because many of these algorithms are proprietary, their source code is not published and is not subject to state or federal open records laws.  As a result, the opacity inherent in programs such as COMPAS prevents third parties from seeing the data and calculations that influence sentencing decisions.

Challenges by defendants to the use of such algorithms in criminal sentencing have been unsuccessful.  Eric Loomis, a Wisconsin defendant, unsuccessfully challenged the use of the COMPAS algorithm as a violation of his due process rights.  In 2013, Loomis was arrested and charged with five criminal counts related to a drive-by shooting.  Loomis maintained that he was not involved in the shooting but pled guilty to driving a motor vehicle without the owner’s permission and fleeing from police.  At sentencing, the trial court judge sentenced Loomis to six years in prison, noting that the court ruled out probation based in part on the COMPAS risk assessment, which suggested Loomis presented a high risk of re-offending.[1]  Loomis appealed his sentence, arguing that the use of the risk assessment violated his constitutional right to due process.  The Wisconsin Supreme Court affirmed the lower court’s decision, holding that the sentencing court could utilize the risk assessment tool and that doing so did not violate Loomis’ due process rights.  In 2017, the U.S. Supreme Court denied Loomis’ petition for writ of certiorari.

The use of computer algorithms in risk assessments has been touted by some as a way to eliminate human bias in sentencing.  Although COMPAS and other risk assessment software programs use algorithms that are race-neutral on their face, the algorithms frequently use data points that can serve as proxies for race, such as ZIP codes, education history and family history of incarceration.[2]  In addition, critics of such algorithms question the methodologies used by programs such as COMPAS, since methodologies (which are necessarily created by individuals) may unintentionally reflect human bias.  If the data sets used to train the algorithms are not truly objective, human bias may be unintentionally baked into the algorithm, effectively automating the very bias the tools were meant to eliminate.
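The proxy problem can be made concrete with a purely hypothetical sketch: the model below never sees the protected attribute, yet still produces divergent scores because a correlated feature (a stand-in for something like a ZIP code) combined with biased historical labels carries the same information.  All data and features are synthetic and have nothing to do with COMPAS.

```python
# Hypothetical illustration of a "proxy" feature: the training labels reflect
# historically biased outcomes, race is never given to the model, yet a
# correlated feature (a ZIP-code-like indicator) reproduces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)                                # protected attribute, never a feature
zip_code = np.where(rng.random(n) < 0.85, group, 1 - group)  # strongly correlated with group
priors = rng.poisson(2, n)

# Biased historical labels: similar conduct, but group 1 is recorded as
# "re-arrested" more often (e.g., heavier policing of its neighborhoods).
p = 1 / (1 + np.exp(-(0.6 * priors - 1.5 + 0.9 * group)))
label = (rng.random(n) < p).astype(int)

X = np.column_stack([zip_code, priors])                      # race-neutral on its face
scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", round(scores[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(scores[group == 1].mean(), 3))
# The gap persists even though 'group' was never a model input.
```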

The investigation by ProPublica that prompted New York City’s algorithmic accountability bill found that COMPAS risk assessments erroneously identified black defendants as presenting a high risk of recidivism at almost twice the rate of white defendants (45 percent vs. 23 percent).  In addition, ProPublica’s research revealed that COMPAS risk assessments erroneously labeled white defendants as low risk 48 percent of the time, compared to 28 percent for black defendants.  Black defendants were also 45 percent more likely than white defendants to receive a higher risk score, even after controlling for variables such as prior crimes, age and gender.[3]  ProPublica’s findings raise serious concerns regarding COMPAS; however, because the calculations used to assess risk are proprietary, neither defendants nor the court systems utilizing COMPAS have visibility into why the assessments mislabel black and white defendants at such different rates.
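The kind of comparison underlying these findings can be expressed as false positive and false negative rates computed separately for each group.  The sketch below shows that calculation on made-up data; it is not ProPublica’s code or data, only an illustration of the metric.

```python
# A sketch of the error-rate comparison ProPublica reported: for each group,
# the false positive rate (labeled high risk, did not re-offend) and the
# false negative rate (labeled low risk, did re-offend). Data here are made up.
import numpy as np

def error_rates(high_risk, reoffended):
    high_risk, reoffended = np.asarray(high_risk), np.asarray(reoffended)
    fpr = np.mean(high_risk[reoffended == 0])      # non-recidivists flagged high risk
    fnr = np.mean(1 - high_risk[reoffended == 1])  # recidivists labeled low risk
    return fpr, fnr

# Toy data: (high_risk flag, actual recidivism) per defendant, split by group
group_a = dict(high_risk=[1, 1, 0, 1, 0, 0, 1, 0], reoffended=[1, 0, 0, 1, 0, 1, 0, 0])
group_b = dict(high_risk=[0, 1, 0, 0, 1, 0, 0, 0], reoffended=[0, 1, 1, 0, 1, 0, 0, 1])

for name, g in [("group A", group_a), ("group B", group_b)]:
    fpr, fnr = error_rates(**g)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```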

Although New York City’s algorithmic accountability bill aims to curb algorithmic bias and bring more transparency to algorithms used across all New York City agencies, including those used in criminal sentencing, the task force created by the bill faces significant hurdles.  It is unclear how the task force will make the threshold determination as to whether an algorithm disproportionately harms a particular group, or how the City will increase transparency and fairness without access to proprietary source code.  Despite the daunting challenge of balancing the need for more transparency against the right of companies to protect their intellectual property, critics of the use of algorithms in the criminal justice system are hopeful that New York City’s bill will encourage other cities and states to acknowledge the problem of algorithmic bias.
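How such a threshold determination might be operationalized remains an open question.  One simple possibility, offered purely as an illustration and not as anything prescribed by the bill, is to compare adverse-outcome rates across groups and flag a system when the disparity exceeds a chosen tolerance.

```python
# Purely illustrative: one simple way a reviewer might flag "disproportionate
# harm" is to compare the rate of an adverse outcome (e.g., being scored high
# risk) across groups against a chosen tolerance. Neither the metric nor the
# 1.2 threshold comes from the NYC bill; both are assumptions for illustration.
def flags_disparity(adverse_rate_by_group: dict[str, float], max_ratio: float = 1.2) -> bool:
    """Return True if any group's adverse-outcome rate exceeds the lowest
    group's rate by more than max_ratio."""
    lowest = min(adverse_rate_by_group.values())
    return any(rate / lowest > max_ratio for rate in adverse_rate_by_group.values())

# Example: rates of being labeled "high risk" by group
print(flags_disparity({"group A": 0.45, "group B": 0.23}))  # True: 0.45 / 0.23 is about 1.96
```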

 


[1] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

[2] Laura Hudson, Technology Is Biased Too. How Do We Fix It?, FiveThirtyEight (Jul. 20, 2017), https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it/.

[3] Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
