
In 2017, the North Carolina Department of Public Safety secured significant funding to develop a system that uses predictive analytics to guide probation and parole. The program, the NC Predictive Analytics in Supervision Effort (NC-PASE), reflects a growing trend in law enforcement: using data to decide preemptively who is going to violate the law. The practice is becoming more popular, but it can also be dangerous.

One major concern with predictive analytics is surveillance. The description of the NC-PASE program says it aims to “identify profiles within the high to moderate risk population and build Strategic Supervision Guidelines (SSG), or tailored supervision practices, for different risk profiles in different geographic areas within risk levels.” To do this, the state must collect enough data to support predictive decisions about individual people, which almost necessitates intrusive methods.

Take, for example, a program in the U.K. that used profiling systems to assess the risk that people would commit a crime. A University of Cambridge study found the system relied on a host of intrusive and untested technologies with “questionable compatibility with fundamental rights.” Programs like this require large amounts of data, giving governments an incentive to expand data harvesting efforts that can pose serious threats to personal privacy.

Another concern is accuracy. A study of more than 7,000 people arrested in a jurisdiction that used a predictive algorithm for risk profiling found that Black defendants were twice as likely as white defendants to be falsely flagged as high risk. The bias persisted even when researchers accounted for factors like criminal history. The accuracy problems extended beyond bias: the program was only somewhat better than a coin flip at predicting whether someone would reoffend.

When historical crime statistics are fed into algorithms, the algorithms can reproduce the biases present in that data, leading to problems like the one described above. A New York University law review article noted that historical criminology data often reflects the arbitrary overpolicing of minority communities. For example, between 2004 and 2012, 80% of stop-and-frisk searches in New York City targeted people of color.

Predictions inform law enforcement’s decision-making, and those decisions reflect the biases in the data behind them. The resulting actions create more data, which is then fed back into the algorithm, reinforcing its past choices. These feedback loops can scale up any error in the program’s calculations: if the algorithm makes a mistake and law enforcement acts on it, the new data increases the algorithm’s confidence in its decision.
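To make that mechanism concrete, here is a toy simulation in Python. It is only a sketch: the two neighborhoods, the numbers and the patrol rule are invented for illustration and say nothing about how NC-PASE or any real system actually works. Both areas offend at the same underlying rate, but one starts with slightly more recorded incidents, so it keeps attracting the patrols that generate new records.

```python
# Illustrative sketch of a predictive feedback loop (hypothetical numbers,
# not any real agency's model). Two areas have the SAME true offense rate,
# but area B begins with slightly more recorded incidents.

import random

random.seed(0)

TRUE_RATE = 0.1        # identical underlying offense rate in both areas
ENCOUNTERS = 200       # potential offenses a patrol could observe per round

# Historical records: B is slightly overrepresented, e.g. from past overpolicing.
recorded = {"A": 50, "B": 55}

for round_num in range(1, 9):
    # The "prediction": send patrols to the area with the most recorded incidents.
    target = max(recorded, key=recorded.get)
    # Officers can only record offenses where they actually patrol, so only
    # the targeted area produces new records this round.
    new_records = sum(1 for _ in range(ENCOUNTERS) if random.random() < TRUE_RATE)
    recorded[target] += new_records
    print(f"Round {round_num}: records A={recorded['A']}, B={recorded['B']}")
```

After a few rounds, area B’s record count pulls far ahead of area A’s even though the true offense rates never differed; the system’s confidence grows purely from data its own decisions produced.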

Although NC-PASE is focused on probation and parole, it can still affect people’s likelihood of going back to prison. A Human Rights Watch review of parole stressed how frequently people are reimprisoned for parole violations and argued that the system significantly contributes to mass incarceration. Rather than a second chance, supervision often means being nominally free but at risk of being sent back to prison over any technical violation of its conditions. Even if NC-PASE creates a more lenient system, accuracy should still be a concern.

Despite all this, there’s hardly any publicly available detail on the NC-PASE project. A UNC-Chapel Hill research website notes the project is in progress but doesn’t offer any way to get in touch for details. The Center for Advancing Correctional Excellence’s open projects page includes a blurb that says little. Lastly, the grant award website outlines the goals of the program but gives no updates on its progress. This does not bode well for future communication about the status of the project or the functioning of its algorithms.

Without a clear understanding of how predictive algorithms work, law enforcement can’t account for algorithmic bias, flawed decision-making, feedback loops or any of the other issues that can distort the data. More than this, it becomes nearly impossible to create accountability mechanisms. When the public has no idea how this sort of system affects them, how are they supposed to make rational democratic decisions about whether it should stay? Just as importantly, how are they supposed to seek legal recourse if surveillance or algorithmic bias harms them unlawfully?

Creating a black box that masks predictive algorithms and allows them to function in the background is a problematic prospect, especially when we should be critically evaluating their decisions. North Carolina can’t risk poor oversight and accountability on a project with so many implications for public well-being.
