Predictions are iffy things, at best.
Our natural confirmation bias leads us to notice and remember the occasional prediction that comes true, while ignoring or forgetting the vast majority of predictions that do not come to pass.
So in some ways, the idea of predictive policing makes a great deal of sense, but only up to a point. Now we are using “predictive policing” software not just to predict crimes, but supposedly to prevent them, based on algorithms, some form of computerized statistical analysis, and flat-out assumptions.
When policing inevitably reaches the limits of this technology, and the predictions become less about what may happen in the near future and more about what has happened in the past, perhaps even the distant past, we come dangerously close to crossing a line. That line is, of course, our constitutionally protected fundamental rights: privacy, and protection against government abuse.
If we could be assured that predictive policing software carried no assumption of guilt, it would not be so controversial. However, as we now see, and despite assurances to the contrary, that is not necessarily the case.
More and more, we see that policing has become less about “law and order” and more about “preventing another 9/11.” In one case, a police department went so far as to create a black site, arresting as many as 3,600 American citizens on American city streets and subjecting them to torture and other violations of fundamental constitutional rights, all in the name of “prevention.”