The idea behind predictive policing is simple: If you know where a crime is most likely to take place, you can go there first. Then you can stop that crime.
How do you determine where it will happen? You use computers. You feed software arrest records and reports of criminal activity. It maps where particular crimes tend to occur, such as drug sales and car thefts, and then predicts the places where those events are most likely to happen next. It's not perfect, but it gives officers an idea of where their patrols can do the most good.
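To make the idea concrete, here is a minimal sketch of the simplest version of that prediction step. The neighborhood names and incident list are invented for illustration; real systems use far more data and more elaborate models, but many boil down to the same assumption that tomorrow's hotspots will look like yesterday's records.

```python
from collections import Counter

# Hypothetical historical records: each entry is the neighborhood
# where an arrest or reported incident occurred.
historical_incidents = [
    "riverside", "riverside", "downtown", "riverside",
    "hillcrest", "downtown", "riverside",
]

def rank_hotspots(incidents, top_n=3):
    """Rank neighborhoods by how often they appear in past records.

    This is the core of the simplest "predictive" approach:
    send patrols where the most incidents were recorded before.
    """
    counts = Counter(incidents)
    return counts.most_common(top_n)

print(rank_hotspots(historical_incidents))
# [('riverside', 4), ('downtown', 2), ('hillcrest', 1)]
```

Notice that the ranking only reflects what was recorded, not what actually happened. That distinction is where the trouble starts.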
But there is a problem. Predictive policing sounds like it should be free of bias. A computer can't be racially prejudiced, after all; it's just reading data, and it doesn't know or care about race. The truth is that the predictions can still be biased, and the problem is worse than many people realize.
Reports show that these techniques can perpetuate racism, because the data reflects the choices of the officers who collect it. If officers patrol mostly in neighborhoods where minorities live, for instance, that is where they will make most of their arrests. The software then concludes that the neighborhood has more crime, when it really just has more arrests. Crime elsewhere goes unrecorded, all while the predictions keep sending more and more officers back to the same streets, producing still more arrests there. The computer isn't biased, but the feedback loop makes the practice biased.
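A toy simulation can illustrate that feedback loop. Everything below is invented for illustration (the neighborhood names, the crime rates, and the patrol rule are assumptions, not taken from any real system): two neighborhoods have the same underlying crime rate, patrols are assigned in proportion to past recorded arrests, and arrests can only be recorded where officers are actually sent.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying crime rate.
TRUE_CRIME_RATE = {"A": 0.3, "B": 0.3}

# An early imbalance in the records: more past arrests in A than in B.
arrests = {"A": 10, "B": 2}

for week in range(20):
    total = arrests["A"] + arrests["B"]
    # Allocate 100 patrols in proportion to past recorded arrests.
    patrols = {n: round(100 * arrests[n] / total) for n in arrests}
    for n, count in patrols.items():
        # Arrests can only be recorded where officers actually patrol.
        arrests[n] += sum(
            random.random() < TRUE_CRIME_RATE[n] for _ in range(count)
        )

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Share of all recorded arrests in neighborhood A: {share_a:.0%}")
# The share stays far above 50%: the early imbalance never corrects,
# even though both neighborhoods have identical true crime rates.
```

In this sketch, the system keeps sending most patrols to neighborhood A simply because that is where most arrests were recorded, and those extra patrols generate the extra arrests that justify the next week's allocation.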
Policing is complicated, and technology alone clearly can't solve these problems. Those who face an arrest they believe is unjust need to know about their legal defense options.