In theory, predictive policing should not come with an inherent bias. After all, it is the process of using computer programs to analyze data and determine where crime might take place.
For instance, the computer may recognize trends, such as criminal activity clustering in a downtown area late at night. It can predict where car thefts or robberies are statistically most likely to occur. It isn't predicting specific crimes; it is simply telling officers where they should look most often.
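To give a sense of how simple this kind of prediction can be, here is a minimal sketch in Python. The neighborhood names, incident reports, and late-night time window are invented for illustration; real systems are far more elaborate, but the core idea of ranking places by past reports is the same.

```python
# A minimal sketch (not any vendor's actual system): ranking areas by
# historical incident counts, the simplest form of "hotspot" prediction.
from collections import Counter

# Hypothetical historical reports: (neighborhood, hour of day)
reports = [
    ("downtown", 23), ("downtown", 1), ("downtown", 22),
    ("riverside", 14), ("downtown", 0), ("hillcrest", 20),
]

# Count incidents per neighborhood during late-night hours (10 p.m. to 4 a.m.)
late_night = Counter(
    area for area, hour in reports if hour >= 22 or hour <= 4
)

# The "prediction" is just the areas with the most past reports
for area, count in late_night.most_common():
    print(f"{area}: {count} late-night incidents on record")
```

Notice that the output says nothing about where crime actually happens, only about where crime has been recorded before. That distinction is the heart of the problem discussed below.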
So, how could this possibly be biased if it’s just based on data?
Where did the data come from?
The problem is the source itself. Yes, the computer can analyze historical data, such as arrest rates. It can consider the background information of people who have been arrested, such as their age or their gender.
But the computer still gets all of that information from police officers. For instance, studies have found that African Americans are five times as likely as white Americans to be stopped by police without a valid reason, and twice as likely to face arrest.
If officers are more likely to stop and arrest African American suspects, that skew gets fed into the algorithm. The algorithm gradually absorbs the same bias: it learns to treat African American citizens as more likely to commit crimes and dispatches more officers to their neighborhoods. More officers mean more stops and more arrests, and those arrests feed back into the data, creating a self-reinforcing feedback loop.
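The feedback loop can be illustrated with a toy simulation. Everything below is invented for illustration, with hypothetical neighborhood names and numbers; it is not based on any real policing system or dataset. The point is only that when patrols follow past recorded arrests, and arrests can only be recorded where officers are present, an initial disparity in the data grows even when the underlying offense rates are identical.

```python
# Toy model of the feedback loop (all values hypothetical):
# two neighborhoods with the SAME underlying offense rate, but the
# "algorithm" sends extra patrols wherever past recorded arrests are highest.
recorded_arrests = {"neighborhood_a": 60, "neighborhood_b": 40}  # biased starting data
true_offense_rate = 0.10      # identical in both neighborhoods
baseline_patrols = 10         # every neighborhood gets some coverage
extra_patrols = 80            # dispatched to the predicted "hotspot"

for year in range(1, 6):
    # The prediction: whichever neighborhood has more recorded arrests
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    for area in recorded_arrests:
        patrols = baseline_patrols + (extra_patrols if area == hotspot else 0)
        # More patrols -> more recorded arrests, even with equal offense rates
        recorded_arrests[area] += patrols * true_offense_rate
    share = recorded_arrests["neighborhood_a"] / sum(recorded_arrests.values())
    print(f"year {year}: neighborhood_a share of recorded arrests = {share:.1%}")
```

In this toy run, neighborhood_a's share of recorded arrests climbs from 60% toward 70% and beyond over a few years, even though both neighborhoods have the exact same offense rate. The data never corrects itself, because the data is a record of where police looked, not of where crime occurred.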
Have you been arrested after being racially profiled or subjected to an illegal stop by a biased officer? Your rights may have been violated, and you need to know what steps to take.