The Impact of AI on Incarceration: A Critical Examination
Chapter 1: Understanding AI's Role in the Justice System
Artificial Intelligence (AI) may not seem to play a significant role in daily life if your main encounters with it come through platforms like Facebook or Google. But as the recent Data for Black Lives conference made clear, it has a profound influence on America's criminal justice system, where algorithms can significantly alter an individual's future.
The United States leads the world in incarceration rates, with nearly 2.2 million adults incarcerated and an additional 4.5 million under some form of correctional supervision as of late 2016. The need to shrink these numbers is a rare point of consensus among politicians across party lines.
To reduce prison populations without driving up crime, US courtrooms have turned to automated tools intended to make legal processes faster and more efficient. This shift is where AI enters the justice system, and where our discussion begins.
Police departments are increasingly employing predictive algorithms to decide where to deploy officers, while law enforcement agencies use facial recognition technology to identify suspects. Both practices have drawn significant criticism over their accuracy and their potential to reinforce existing biases. Research has repeatedly shown that facial recognition systems can fail dramatically, especially for individuals with darker skin tones; in one widely reported test, a commercial system even matched members of the US Congress to criminal mugshots.
However, the most contentious technology arises after a suspect is arrested: criminal risk assessment algorithms.
These risk assessment tools aim to analyze a defendant's profile and generate a recidivism score — a numerical estimate of the likelihood of reoffending. Judges then incorporate this score into critical decisions regarding rehabilitation services, pre-trial detention, and sentencing severity. A lower score often leads to more lenient outcomes, while a higher score can result in harsher penalties.
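To make the mechanics concrete, here is a minimal sketch, in Python, of how such a tool might turn a defendant's profile into a score and a risk band. Every feature name, weight, and threshold here is invented for illustration; real tools such as COMPAS are proprietary, and their actual inputs and formulas are not public.

```python
# Purely illustrative sketch of a recidivism "risk score" pipeline.
# Features, weights, and thresholds are hypothetical, not from any real tool.
import math

def risk_score(profile, weights, bias=0.0):
    """Logistic model: map a defendant profile to a 0-1 reoffense estimate."""
    z = bias + sum(weights[k] * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(score, low=0.3, high=0.7):
    """Bucket the continuous score into the bands a judge might see."""
    if score < low:
        return "low"
    return "medium" if score < high else "high"

# Hypothetical weights: note that 'prior_arrests' dominates the score,
# even though arrest counts reflect policing patterns as much as behavior.
weights = {"prior_arrests": 0.6, "age_under_25": 0.8, "employed": -0.5}
profile = {"prior_arrests": 3, "age_under_25": 1, "employed": 0}

s = risk_score(profile, weights, bias=-1.5)
print(f"score={s:.2f}, band={risk_band(s)}")  # score=0.75, band=high
```

The point of the sketch is the pipeline, not the numbers: a handful of profile attributes are compressed into a single figure that can tip a decision toward leniency or severity.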
The rationale behind using these algorithmic tools is that accurately predicting criminal behavior allows for better resource allocation for rehabilitation or sentencing. Ideally, this approach reduces bias, as judges base their decisions on data-driven insights rather than intuition.
The trouble, however, is that many of these modern risk assessment tools are driven by algorithms trained on historical crime data.
Machine learning algorithms use statistical analysis to find patterns in data. Feed one historical crime data, and it will pick out the patterns associated with criminal behavior. But those patterns are correlations, not causes. If an algorithm finds that low income is correlated with high recidivism, it tells you nothing about whether low income actually drives crime. Yet risk assessment tools treat exactly these correlational insights as if they were causal.
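A small synthetic experiment makes the point. In the sketch below, where the data, rates, and feature are all invented, a standard classifier is trained on records in which rearrest is recorded more often for low-income defendants, say because their neighborhoods are more heavily policed. The model dutifully learns the correlation and scores them as higher risk, with no notion of cause.

```python
# Illustrative sketch with synthetic data: the model learns the
# income-rearrest correlation baked into its training labels,
# regardless of what actually causes crime.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# 1 = defendant lives in a heavily policed, low-income neighborhood.
low_income = rng.integers(0, 2, n)

# Invented labels: rearrest is *recorded* for 35% of low-income
# defendants but only 20% of others. The label encodes enforcement
# patterns, not underlying behavior.
rearrested = (rng.random(n) < np.where(low_income == 1, 0.35, 0.20)).astype(int)

X = low_income.reshape(-1, 1)
model = LogisticRegression().fit(X, rearrested)

# The learned coefficient is positive: the model scores low-income
# defendants as higher risk, faithfully reproducing its training data.
print("coefficient:", model.coef_[0][0])
print("risk if low_income=1: %.2f" % model.predict_proba([[1]])[0, 1])
print("risk if low_income=0: %.2f" % model.predict_proba([[0]])[0, 1])
```

Swap in any feature that correlates with past enforcement patterns and the result is the same: the model reproduces the pattern it was shown.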
As a result, individuals from historically marginalized communities, particularly low-income and minority groups, risk being assigned disproportionately high recidivism scores. This perpetuates existing biases and feeds a cycle of discrimination, and because most risk assessment algorithms are proprietary, there is little opportunity for scrutiny or accountability.
The debate surrounding these tools continues to escalate. In July 2018, more than 100 civil rights and community organizations, including the ACLU and the NAACP, signed a statement advocating against the use of risk assessment tools. Despite this, numerous jurisdictions, including California, have begun implementing them in an attempt to relieve overcrowded jails and prisons.
Marbre Stahly-Butts, executive director of Law for Black Lives, remarked at the conference, held at the MIT Media Lab, that data-driven risk assessment often serves to sanitize and legitimize oppressive systems. It diverts attention from the real issues impacting low-income and minority communities, such as underfunded schools and limited access to healthcare.
"We are not risks," she emphasized. "We are needs."
Karen Hao is the artificial intelligence reporter for MIT Technology Review, focusing on the ethical implications and social effects of technology, as well as its potential for societal benefit. She also writes the AI newsletter, the Algorithm.
This article originally appeared in the Algorithm.
Section 1.1: The Use of Predictive Algorithms in Law Enforcement
In exploring the role of AI in law enforcement, it is worth looking more closely at predictive policing algorithms. These tools ingest historical crime records, typically the time, place, and type of past incidents, and forecast where crime is most likely to occur next, guiding decisions about where and when to deploy officers.
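As a rough intuition for the simplest version of this idea, so-called hotspot policing, consider the sketch below. The grid cells and incident log are invented, and real systems use far more elaborate models, but the basic loop is the same: past recorded incidents raise an area's score, which attracts more patrols, which in turn generate more recorded incidents.

```python
# Minimal, invented sketch of "hotspot" ranking: score each patrol-grid
# cell by its historical incident count and send patrols to the top cells.
from collections import Counter

# Hypothetical incident log, keyed by grid cell.
past_incidents = ["A1", "A1", "B2", "A1", "C3", "B2", "A1", "B2", "C3", "A1"]

counts = Counter(past_incidents)
patrol_targets = [cell for cell, _ in counts.most_common(2)]
print("deploy patrols to:", patrol_targets)  # ['A1', 'B2']
```

Because the "data" here is a record of past enforcement, the ranking can entrench itself: cells that were patrolled heavily yesterday produce the incident reports that justify patrolling them heavily tomorrow.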
Video Description: This video discusses how AI is being co-opted in crime prediction and its potential pitfalls.
Subsection 1.1.1: Facial Recognition and Its Challenges
Section 1.2: The Controversy Surrounding Risk Assessment Algorithms
As the reliance on risk assessment algorithms increases, significant concerns arise regarding fairness and accountability. These tools can exacerbate biases and lead to disproportionate outcomes for marginalized communities.
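One way researchers have quantified such disproportionate outcomes is by comparing error rates across demographic groups. The sketch below uses invented records; for reference, ProPublica's 2016 analysis of the COMPAS tool reported false positive rates of roughly 45% for Black defendants versus 23% for white defendants.

```python
# Sketch of one common fairness audit: compare false positive rates
# (scored "high risk" but not rearrested) across groups.
# All records below are invented for illustration.
def false_positive_rate(records):
    """records: list of (scored_high_risk, actually_rearrested) pairs."""
    negatives = [r for r in records if not r[1]]   # did not reoffend
    flagged = [r for r in negatives if r[0]]       # but were scored high risk
    return len(flagged) / len(negatives)

group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 30
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 30

print("false positive rate, group A: %.2f" % false_positive_rate(group_a))  # 0.45
print("false positive rate, group B: %.2f" % false_positive_rate(group_b))  # 0.23
```

A gap like this means one group bears far more of the cost of the tool's mistakes: its members are more often detained or sentenced harshly despite never reoffending.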
Video Description: Peter Eckersley discusses the intersection of AI and the prison system, examining crime prediction and the automation of incarceration processes.