Pretrial risk assessment algorithms now influence bail and detention decisions in jurisdictions across the United States, affecting millions of people each year. Proponents argue these tools reduce human bias and promote consistency. Critics contend they encode and amplify existing inequalities. The evidence suggests both perspectives capture part of a complex truth.
These algorithms typically predict the likelihood that a defendant will fail to appear for court or be arrested for a new crime if released before trial. They consider factors like criminal history, age, employment status, and residential stability. Based on these inputs, they generate risk scores that inform—but don't determine—judicial decisions.
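To make the mechanics concrete, here is a minimal sketch of how such a score could be produced. The features, weights, and cutoffs below are illustrative assumptions, not the coefficients of any real tool; most deployed instruments use proprietary models.

```python
import math

# Hypothetical feature weights for an illustrative logistic model.
# These numbers are invented for this sketch, not taken from any
# deployed risk assessment instrument.
WEIGHTS = {
    "prior_arrests": 0.35,
    "prior_failures_to_appear": 0.6,
    "age_under_25": 0.4,
    "unemployed": 0.2,
    "unstable_housing": 0.25,
}
INTERCEPT = -2.0

def risk_score(features: dict) -> float:
    """Estimated probability of failure to appear or rearrest."""
    z = INTERCEPT + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(p: float) -> str:
    """Map a probability to the low/medium/high bands shown to judges."""
    if p < 0.3:
        return "low"
    if p < 0.6:
        return "medium"
    return "high"
```

A defendant with two prior arrests, one failure to appear, and who is under 25 would land in the "medium" band under these made-up weights; the band, not the raw probability, is typically what reaches the courtroom.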
The fundamental problem is that the historical data used to train these systems reflects decades of discriminatory policing and prosecution. If Black defendants have historically been arrested more often for the same behaviors, the algorithm will 'learn' that race (or its proxies) predicts risk. The system perpetuates the very biases it was meant to overcome.
Studies of specific tools have documented these disparities. ProPublica's 2016 analysis of the COMPAS algorithm found that, among defendants who did not go on to reoffend, Black defendants were falsely labeled high-risk at nearly twice the rate of white defendants. Other studies have found similar patterns across different tools and jurisdictions.
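The disparity ProPublica measured is a difference in false positive rates: among defendants who did not reoffend, the share the tool nonetheless flagged as high-risk, computed separately per group. A minimal sketch of that calculation on made-up records (the data below is invented purely to illustrate the metric):

```python
def false_positive_rate(records, group):
    """Among defendants in `group` who did NOT reoffend, the
    fraction the tool labeled high-risk (a false positive)."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["high_risk"])
    return flagged / len(non_reoffenders)

# Invented records purely to demonstrate the metric.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
]
```

On this toy data, group A's false positive rate is 0.5 against group B's 0.25; an audit of a real tool runs the same comparison over thousands of actual case outcomes.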
Some reformers argue for abandoning algorithmic tools entirely. Others contend that properly designed algorithms, with careful attention to fairness metrics and regular auditing, could still improve upon unaided human judgment—which has its own well-documented biases. The debate continues without clear resolution.
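The "regular auditing" that reformers propose often reduces to a simple check: compute a fairness metric (such as the false positive rate) per group and flag the tool when the between-group gap exceeds a tolerance. A hedged sketch, with a hypothetical tolerance chosen only for illustration:

```python
def audit_gap(metric_by_group: dict, tolerance: float = 0.1) -> bool:
    """Return True if the largest between-group gap in a fairness
    metric (e.g. false positive rate) stays within `tolerance`.
    The 0.1 default is an arbitrary illustrative threshold, not a
    legal or statistical standard."""
    values = list(metric_by_group.values())
    return max(values) - min(values) <= tolerance
```

For example, false positive rates of 0.45 versus 0.23 across two groups fail this audit, while 0.30 versus 0.25 pass it. Choosing the tolerance, and choosing which metric to equalize, are contested policy questions rather than technical ones, since several common fairness metrics cannot all be satisfied at once when base rates differ.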
What seems certain is that algorithmic tools cannot be a substitute for addressing the underlying inequities in our criminal justice system. Risk assessment may help optimize decisions within a broken system, but it cannot fix that system. True reform requires confronting the structural factors that create disparate outcomes in the first place.
Dr. Alexandra Chen
AI Ethics & Digital Justice Scholar