The Case for Banning Predictive Policing Algorithms

In the name of efficiency and crime prevention, law enforcement agencies across the globe have begun deploying algorithms that predict where and when crime will occur. These systems, often marketed as objective and data-driven, promise to allocate police resources more effectively. Yet, beneath the veneer of neutrality lies a troubling reality: predictive policing is not only flawed but fundamentally unjust. The case for banning such technology is rooted in its propensity to perpetuate systemic bias, undermine due process, and erode public trust.

Consider the data that feeds these algorithms. Historical crime data, which forms the backbone of predictive models, is itself a product of biased policing practices. Decades of over-policing in minority communities have produced arrest records that reflect not actual crime rates but patterns of surveillance and enforcement. When an algorithm learns from this data, it inevitably reproduces and amplifies these biases, directing police attention to the same neighbourhoods and populations that have historically been targeted. The result is a self-fulfilling prophecy: more police presence leads to more arrests, which confirms the algorithm's predictions and deepens the cycle of over-policing. Research from the RAND Corporation and other institutions has demonstrated that predictive policing systems often flag low-income and predominantly non-white areas at disproportionate rates, even when controlling for actual crime incidence. How can we trust a system that replicates the very prejudices we seek to overcome?
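To make the mechanics of that feedback loop concrete, here is a minimal, purely illustrative simulation in Python. The district names, starting counts, and rates are all hypothetical; the point is only that when a model learns from enforcement records and enforcement then follows the model, an initial skew compounds even though the underlying crime rates are identical.

```python
# A toy simulation of the feedback loop described above. All numbers are
# hypothetical: two districts share the same true crime rate, but District A
# begins with a larger arrest record produced by historical over-policing.
# Each year the "predictive" model sends patrols to the district with the
# highest recorded arrests, and arrests can only be recorded where patrols go.

TRUE_CRIME_RATE = 0.05    # identical underlying rate in both districts
POPULATION = 10_000       # residents per district

# Skewed starting point: the record reflects enforcement, not crime.
arrests = {"District A": 600, "District B": 300}

for year in range(1, 6):
    # The model flags the district with the largest historical record.
    hotspot = max(arrests, key=arrests.get)
    # Only crime in the patrolled district enters the record.
    arrests[hotspot] += int(POPULATION * TRUE_CRIME_RATE)
    print(f"Year {year}: patrols sent to {hotspot} -> {arrests}")
```

After five iterations District A's record has grown more than fivefold while District B's has not moved at all, even though both districts experienced exactly the same amount of crime. The prediction validates itself by controlling where the evidence is collected.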

Moreover, the opacity of these algorithms poses a serious challenge to accountability. Private companies develop and license predictive software, often refusing to disclose the inner workings of their models under the guise of trade secrets. This lack of transparency makes it impossible for defendants, civil rights advocates, and even judges to scrutinise the basis of police decisions. When an algorithm suggests that a particular individual is at high risk of committing a crime, that prediction can influence bail decisions, parole hearings, and even sentencing—yet the person affected has no way to challenge the logic or accuracy of the assessment. The right to confront one's accuser, a cornerstone of the justice system, is rendered meaningless when the accuser is a black box.

Proponents argue that predictive policing reduces crime by enabling proactive deployment of officers. They point to studies showing reductions in burglary and theft in areas where such systems have been used. But at what cost? The benefits are often marginal and unevenly distributed, while the harms—especially to communities already burdened by heavy policing—are profound. A cost-benefit analysis that ignores the erosion of civil liberties and the deepening of racial disparities is incomplete. Furthermore, alternative strategies, such as community-based violence intervention programmes, have shown comparable or better results without the attendant risks of algorithmic bias.

The call to ban predictive policing does not stem from a Luddite rejection of technology. Rather, it arises from a commitment to justice and fairness. Technology is not neutral; it reflects the values and biases of its creators. When those biases are encoded into systems that shape life-altering decisions, the state has a duty to intervene. We must demand that any algorithm, before it is deployed in policing, be proven accurate, transparent, and free of discriminatory impact. Until such standards are met, a ban is the only prudent course. The burden of proof should lie with those who wish to implement these systems, not with the communities that bear the consequences of their failures.

In conclusion, the case for banning predictive policing algorithms is compelling. They entrench historical injustices, operate in secrecy, and threaten the foundational principles of due process and equality. The time has come to say no to automated injustice and to invest in humane, evidence-based approaches to public safety. How can we claim to champion justice if we delegate it to machines that cannot understand it?