As summer comes to a close, local governments are returning to their council chambers and facing massive pressures. Municipal budgets are losing hundreds of millions in revenue in the wake of coronavirus. Meanwhile, a generational uprising is pushing our government to divest from militarized, racist policing, calling instead for the resources that our neighborhoods have been starved of for generations—the resources that actually increase safety.
The broad-based support for these calls sends hopeful signals about where our cities and country are headed. But if we want to get there, we must take care not to repeat the mistakes of the past.
Hannah Sassaman is the policy director at Movement Alliance Project, a movement organization working at the intersection of race, technology, and inequality in Philadelphia. She is a former Soros Justice Fellow focused on community organizing around predictive technologies in the criminal legal system.
During the last great economic crisis this country faced, in 2008, local policymakers sought to save money while making their communities “safer” with new tech-based solutions. In the years since, police departments, probation officers, and courts have embedded this technology—like crime-predicting algorithms, facial recognition, and pretrial and sentencing software—deep inside America’s criminal legal system, even as budgets have risen and police forces have grown. But instead of actually predicting and reducing crime and violence, these algorithms promote systems of over-policing and mass incarceration, perpetuating racism and increasing tensions between police and communities.
Designers claim that predictive policing can save money through “smart” targeting of police resources, but algorithms meant to foresee where crime will occur have only justified massive and often violent deployments to neighborhoods already suffering from poverty and disinvestment. Ultimately, these algorithms didn’t reduce the money taxpayers spend on police. In fact, as departments across the country installed predictive policing, police budgets continued to grow, especially as a percentage of overall municipal spending. At the same time, the criminal legal system grew more punishing, especially for Black and brown people. The accused people of color caught up in predictive policing were then judged by another set of algorithms when taken for their arraignments in court: “pretrial algorithms.” This software sorts accused people into “risky” and “non-risky” categories, keeping those who have yet to be tried or convicted incarcerated for longer, wrecking their chances to mount a defense, and defying the American presumption of innocence.
But the justification for all of this so-called “predictive policing” crumbles when you look at the criminal legal system data coming out of the first months of Covid-19.
As the grip of coronavirus tightened in Philadelphia, for example, incarcerated people, families, organizers, and legal system actors pushed the courts to release over a thousand people from jails where social distancing is near-impossible. At the same time, police officers, afraid of catching the coronavirus and of overcrowding jails while the courts were shut down, stopped making low-level arrests. Police forces nationwide took similar approaches.
In city after city where these changes were made, local authorities are seeing many kinds of crime drop. While certain types of violence in many cities, including in Philadelphia, where I am, are slowly rising as unemployment climbs and poverty deepens, there’s no data supporting the belief that emptying jails and limiting arrests causes violence in our communities. The National Center for State Courts reports that the two key rates pretrial systems track, failures to appear in court and rearrests before trial, have both plummeted nationally. Research and our lived experience during the Covid-19 outbreak are proving that you can arrest and incarcerate far fewer people in our communities without compromising safety or spending unnecessary money to lock them up.
These promising signs underscore the importance of breaking with algorithmic decision-making, whether through “predictive policing” or the other algorithms used in the criminal legal system. As our local governments return to even emptier coffers and major municipal budget pressures, we should quickly abolish these models across all criminal legal system contexts. Some cities have already begun to act: Chicago, after years of pursuing a strategy similar to Philadelphia’s, dumped its notorious algorithmic “heat list” after admitting that the tool hadn’t reduced violence even as it increased racist policing. And Santa Cruz banned predictive policing this summer, with Santa Cruz police chief Andy Mills describing the biased data and impacts of these algorithms as “a blind spot I didn’t see.”