An artificial intelligence can predict the location and rate of crime across a city a week in advance, with up to 90 percent accuracy. Similar systems have been shown to perpetuate racist bias in policing, and the same could be true here, but the AI’s creators claim it can also be used to expose such biases.
Ishanu Chattopadhyay at the University of Chicago and his colleagues created an AI model that analysed historical crime data from Chicago, then predicted crime levels for the weeks that followed this training period.
The model predicted the likelihood of certain crimes occurring across the city, which was divided into squares measuring roughly 300 metres by 300 metres, with up to 90 percent accuracy. Tested on data from seven other major US cities, it performed at a similar level.
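As an illustration only, here is a minimal sketch of the grid-and-time-series setup the article describes: geocoded crime events are binned into 300-metre tiles and weekly counts, and a naive baseline estimates the next week’s count per tile. The input format, names and the averaging predictor are all assumptions for this sketch; the study’s actual model is far more sophisticated.

```python
from collections import defaultdict

TILE_M = 300            # tile size in metres, matching the ~300m x 300m grid described
WEEK_S = 7 * 24 * 3600  # one week in seconds

# Hypothetical input: (x_metres, y_metres, unix_timestamp) per recorded crime event.
events = [
    (451_120.0, 4_632_480.5, 1_577_836_800),
    (451_390.2, 4_632_210.9, 1_578_441_600),
]

def tile_of(x, y):
    """Map projected coordinates to a 300m x 300m grid cell."""
    return (int(x // TILE_M), int(y // TILE_M))

# Build a weekly event-count series for every tile.
counts = defaultdict(lambda: defaultdict(int))  # tile -> week index -> count
for x, y, t in events:
    counts[tile_of(x, y)][t // WEEK_S] += 1

def predict_next_week(tile, history_weeks=8):
    """Naive baseline: average of the last few weekly counts for this tile.
    Stands in for the study's far more sophisticated sequence model."""
    weeks = counts.get(tile, {})
    if not weeks:
        return 0.0
    latest = max(weeks)
    recent = [weeks.get(w, 0) for w in range(latest - history_weeks + 1, latest + 1)]
    return sum(recent) / len(recent)

print(predict_next_week(tile_of(451_120.0, 4_632_480.5)))
```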
AIs used to predict crime are controversial because they can perpetuate racial bias. The Chicago Police Department recently trialled an algorithm that created a list of people deemed most likely to be involved in a shooting, either as victims or as perpetrators. Details of the algorithm and the list were initially kept under wraps, but when the list was eventually released, it emerged that 56 percent of Black men in the city aged between 20 and 29 featured on it.
Chattopadhyay admits that the data used by his model will also be biased, but says that efforts have been made to reduce the effect of this bias and that the AI doesn’t identify suspects, only potential sites of crime. “It’s not Minority Report,” he says.
“Law enforcement resources are limited, so you want to make the most of them. It would be wonderful if you could know where homicides are going to occur,” he says.
Chattopadhyay believes the AI’s predictions would be more safely used to inform policy at a high level, rather than directly to allocate police resources. He has made the algorithm and data used in the study publicly available so that others can examine them.
The researchers also used the data to look for areas where human bias may be affecting policing, analysing the number of arrests following crimes in Chicago neighbourhoods of different socioeconomic levels. This showed that crimes in wealthier areas resulted in more arrests than those in poorer areas, suggesting a bias in the police response.
Lawrence Sherman at the Cambridge Centre for Evidence-Based Policing, UK, says he is concerned about the study’s inclusion of both reactive and proactive policing data: crimes that are recorded because people report them, and crimes that are recorded because police go looking for them. The latter type of data is very susceptible to bias, he says. “It could be reflecting intentional discrimination by police in certain areas,” he says.