Predictive policing: The pitfalls of crime forecasting

The developers of an algorithm designed to identify potential lawbreakers warn that misusing their model can perpetuate racial bias

Patrol cars from the New York Police Department block a street next to a crime scene. AMR ALFIKY (Reuters)

Knowing where a crime will be committed before it happens is the dream of police departments all over the world, and data scientists and artificial intelligence experts want to make it a reality. Many law enforcement agencies, especially in the US, where gun violence is rampant, have been using pattern-detecting information systems to predict crime hotspots. A research team from the University of Chicago (Illinois, USA) led by Professor Victor Rotaru has developed a model capable of predicting likely crime areas up to a week in advance. With a 90% accuracy rate, it is one of the most successful examples of predictive policing, a field in which companies such as PredPol, Azavea and KeyStats operate systems in big cities like Los Angeles and New York.

An article by Rotaru and his team published in Nature Human Behaviour describes a model designed for urban environments and trained on historical property and violent crime data from the city of Chicago from 2014 to 2016. The model processed the historical data and then identified the areas most likely to experience the highest crime levels in the weeks following the testing period. With 90% reliability, the tool predicted the likelihood of crimes like homicides, assaults, muggings, and robberies for city quadrants roughly 1,000 feet (about 300 meters) across. The model was later tested in seven other large US cities (Atlanta, Austin, Detroit, Los Angeles, Philadelphia, San Francisco, and Portland) with similar results.
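The basic workflow described above can be sketched in code. This is a deliberately simplified toy, not the authors' actual model (which infers temporal patterns between event streams): it grids a city into tiles, aggregates weekly event counts per tile, and flags the tiles with the highest predicted risk. All data and parameter values here are hypothetical.

```python
# Toy sketch of grid-based crime forecasting. NOT the published model;
# purely illustrative, with synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: weekly event counts on a 10x10 grid of city tiles
# (each tile ~1,000 ft across), over 104 weeks.
weeks, grid = 104, (10, 10)
counts = rng.poisson(lam=2.0, size=(weeks, *grid))

def forecast_next_week(history, window=8):
    """Naive baseline: predict next week's count for each tile as the
    mean of that tile's last `window` weeks."""
    return history[-window:].mean(axis=0)

pred = forecast_next_week(counts)

# Rank tiles by predicted risk and flag the top 10% as "hotspots".
threshold = np.quantile(pred, 0.9)
hotspots = pred >= threshold
```

A real system would replace the naive moving average with a learned spatiotemporal model, but the output has the same shape: a per-tile risk ranking for the coming week.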

But these algorithmic systems have not solved the thorny problem of how to avoid the widely recognized bias against Black and Latino neighborhoods. A 2021 study concluded that it is impossible for predictive policing systems to offset their intrinsic biases, which is one reason why the European Parliament called for a ban on these tools in the European Union.

Chicago police cars parked at a police station. E. Jason Wambsgans (Tribune News Service via Getty Images)

Rotaru’s team acknowledges the problem. “To prevent their tool from being detrimental to some groups, the authors turned the concept of predictive policing on its head and recommended that their models be used to monitor policing itself,” said Andrew V. Papachristos, a professor of sociology at Northwestern University (Illinois, USA).

Department of ‘Pre-Crime’

Papachristos believes the perspective offered by the Rotaru-led study is important and can help develop early intervention systems and other initiatives to identify police abuse, a particularly sensitive issue in the US since the death of George Floyd in 2020. He also thinks it would be useful to “send social workers, response teams and victim assistance teams to the quadrants identified by these predictive policing systems.”

“Despite our caution,” says the study, “one of our key concerns in authoring this study is its potential for misuse, an issue with which predictive policing strategies have struggled. More important than making good predictions is how such capability will be used. Because policing is as much ‘person-based’ as ‘place-based,’ sending police to an area, regardless of how small that area is, does not dictate the optimal course of action when they arrive, and it is conceivable that good predictions (and intentions) can lead to over-policing or police abuses. For example, our results may be falsely interpreted to mean that there is ‘too much’ policing in low-crime (often predominantly white) communities, and too little policing in higher-crime (often more racially and ethnically diverse) neighborhoods.”

“We view our model as a police policy optimization tool,” said Ishanu Chattopadhyay, a co-author of the study. “That’s why we are so insistent that we have to be very careful about how we apply the knowledge it provides.”

Overcriminalization of Blacks and Latinos

In large US cities, neighborhoods are often closely associated with a particular race or ethnicity. “Disproportionate police responses in communities of color can contribute to biases in event logs, which might propagate into inferred models,” says the study, acknowledging that there is no way to statistically control for such deviations. “Anyone using these tools should be aware of this.”

A researcher from a different US university consulted by EL PAÍS questions the results of the Rotaru-led study, noting that the model’s effectiveness drops significantly for rarely occurring crimes. Another drawback is that the predictive algorithm cannot identify the areas with the highest crime, only those with the highest reported crime, an important distinction because in the US, Black communities are less likely to report crimes to the police. “Our tool is based on reported crimes. We cannot model crimes that are not reported,” said Chattopadhyay.

Predictive policing models must also cope with input biases. Because they draw on arrest databases, they associate areas with higher arrest counts with a greater need for policing, which in turn increases the number of arrests there. This is why civic organizations object to the widespread use of these tools, and why Rotaru and his colleagues caution against misapplying the tools they created.
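The feedback loop described above can be made concrete with a small deterministic toy model (all numbers hypothetical): two tiles have identical true crime rates, but one starts with more patrols. If recorded arrests scale with patrol presence and patrols are then reallocated in proportion to recorded arrests, the initial imbalance never corrects itself.

```python
# Toy model of the arrest-data feedback loop. Hypothetical numbers;
# illustrates the bias mechanism, not any deployed system.
true_rate = 5.0                      # identical underlying crime rate in both tiles
patrols = {"A": 1.0, "B": 2.0}       # but tile B starts with twice the patrols
total_patrols = 3.0

for step in range(10):
    # Recorded arrests scale with patrol presence, not with true crime.
    arrests = {tile: true_rate * p for tile, p in patrols.items()}
    # Next round: patrols reallocated in proportion to recorded arrests.
    total = sum(arrests.values())
    patrols = {tile: total_patrols * a / total for tile, a in arrests.items()}
```

After any number of rounds, tile B still receives twice the patrols of tile A, even though the underlying crime rates are equal: the arrest data faithfully reproduces the initial policing imbalance rather than the true distribution of crime.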
