Will Artificial Intelligence Help Improve Prisons?

China and Hong Kong have started using tech to create “smart” prisons. Should the U.S. consider following their lead?
Photo: The California Institution for Men prison fence in Chino, California.

Artificial intelligence–connected sensors, tracking wristbands, and data analytics: We’ve seen this type of tech pop up in smart homes, cars, classrooms, and workplaces. And now, these networked systems are showing up in a new frontier: prisons.

Specifically, the governments of China and Hong Kong have recently announced that they are rolling out new artificial intelligence (AI) technology aimed at monitoring inmates in some prisons every minute of every day. In Hong Kong, the government is testing Fitbit-like devices that track individuals’ locations, activities, and heart rates at all times. Some prisons will also begin using networked video surveillance systems programmed to identify abnormal behavior, such as self-harm or violence against others, as well as robots tasked with searching inmates’ feces for drugs.

In mainland China, the government is finishing construction on a new “smart” surveillance system at Yancheng Prison that aims to monitor each of its high-profile inmates in real time via networked hidden cameras and sensors placed in every cell. According to a report in the South China Morning Post, the network will stream the data it collects to “a fast, AI-powered computer that is able to recognize, track, and monitor every inmate around the clock” and will, at the end of each day, “generate a comprehensive report, including behavioral analysis, on each prisoner using different AI functions such as facial identification and movement analysis.” As in Hong Kong, these systems are designed to flag suspicious behavior and alert human guards to any activity they register as abnormal. An employee at Tiandy Technologies, the company that helped develop the system, claimed that with the new technology, “prison breaks will be history,” and suggested that unethical behavior from guards, such as taking bribes, might become a thing of the past.

With everything else “smart,” why not smart prisons? Their potential is undeniable, as are their risks. And though the United States should be wary of copying the Chinese panopticon-style model of prison surveillance in its entirety, there are possible uses of this technology that could make prisons better, safer places for inmates and guards alike, or, if misapplied, could make them even more nightmarish.

American prisons pursue several criminal justice goals: incapacitation, retribution, deterrence, and rehabilitation. Increasingly, the fourth of these, rehabilitation, has become a larger part of the conversation, especially with the rising recognition that the vast majority of inmates (95 percent of those in state prisons) will one day re-enter society.

When we talk about using connected technology to help achieve these goals, replacing prison staff with AI shouldn’t be the objective, especially when considering the rights and dignity of the incarcerated and the human touch necessary for successful rehabilitation (something that AI, no matter how smart, can never replace). Instead, we should weigh the costs and benefits of developing smart-prison tech that might ease burdens on human staff and augment their ability to provide safety, support, and education to inmates.

For one, using AI to analyze inmate behavior could benefit both inmates and prison staff if it is used to identify situations that could escalate to violence or to spot signs that an inmate may be at risk of self-harm.

It could also be used to help watch the guards. For example, researchers have already started exploring ways to use machine learning to help identify and reduce police violence by training systems to look through staff records and learn to spot and flag signs that an officer may be at high risk of initiating an “adverse event.” This might include identifying officers who have a history of certain kinds of misconduct or who appear to be under significant stress. A smart prison could use the data it collects on guards to provide a similar service. Additionally, a system designed to flag abnormal inmate behavior could likewise be programmed to spot and report potentially abusive behavior by guards.
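To make the idea concrete, here is a minimal sketch of what such an early-warning model might look like. Every field name and number below is a hypothetical illustration of the general technique, not a description of any agency’s actual system:

```python
# A minimal sketch of an "early warning" classifier trained on staff records.
# All field names and values here are hypothetical illustrations.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical records: a few features per officer, plus a label
# marking whether that officer was previously involved in an adverse event.
records = pd.DataFrame({
    "prior_complaints": [0, 3, 1, 7, 0, 2, 5, 1, 4, 0],
    "overtime_hours":   [10, 60, 20, 80, 5, 45, 70, 15, 65, 12],
    "years_of_service": [12, 3, 8, 2, 20, 5, 4, 9, 3, 15],
    "adverse_event":    [0, 1, 0, 1, 0, 0, 1, 0, 1, 0],
})

X = records.drop(columns="adverse_event")
y = records["adverse_event"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score held-out officers and flag anyone whose estimated risk crosses a
# review threshold; the flag is a prompt for human review, not discipline.
risk = model.predict_proba(X_test)[:, 1]
for officer, score in zip(X_test.index, risk):
    if score > 0.5:
        print(f"Officer record {officer}: flagged for review (risk {score:.2f})")
```

Even in this toy form, the design choices matter: the model’s output should trigger human review rather than automatic punishment, and everything turns on how representative, and how biased, the underlying records are.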

But before attempting to implement these systems, we also have to weigh: Can current technology do this reliably? And should we want technology to do this at all?

With regard to the first question, our current technological capabilities should give us ample reason to pause before we install smart cameras in our prisons. AI-augmented cameras would likely rely on a combination of facial recognition and data analytics to analyze behavior. To be reliable, the technology would, at a bare minimum, need to identify individuals correctly.

Yet facial identification systems (most notably Amazon’s Rekognition software, which is already being sold to police departments) have been notoriously bad at identifying black and brown faces, particularly those of women. It’s not hard to imagine AI being used to punish the wrong individual, who might have little recourse given courts’ history of deferring to prisons’ disciplinary decisions.

Even if technology can identify individuals reliably, we should be suspicious of the claim that it can track “abnormal” behavior. Behind AI technologies—from Amazon’s Alexa to Facebook’s content moderation—are human beings training machines. The question, then: How do we develop appropriate metrics to code behavior as abnormal? If we are over-inclusive in our search for bad behavior, we run the risk of ignoring diverse interpretations of “normal” and creating enormous psychological pressure to conform. If we are under-inclusive, we risk missing issues that correctional staff, with the nuance and discretion that human beings possess, might have caught. (Though, especially with the critical understaffing of prisons across the U.S., it’s important to note that the human-staffed prisons we have are rarely exemplary.)
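To see why the line between over- and under-inclusive is so hard to draw, consider a toy anomaly detector: wherever you set the threshold that separates “normal” from “abnormal,” you trade false alarms against misses. The data and thresholds below are invented purely for illustration:

```python
# A toy illustration of the threshold trade-off in anomaly detection.
# The data are synthetic; no real behavioral features are modeled here.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
routine = rng.normal(0, 1, size=(500, 2))   # 500 samples of routine behavior
unusual = rng.normal(4, 1, size=(10, 2))    # 10 genuinely unusual events
X = np.vstack([routine, unusual])

detector = IsolationForest(random_state=0).fit(X)
scores = detector.score_samples(X)  # lower score = more anomalous

for threshold in (-0.60, -0.50, -0.40):
    flagged = scores < threshold
    false_alarms = int(flagged[:500].sum())  # routine behavior flagged anyway
    misses = 10 - int(flagged[500:].sum())   # unusual events that slip through
    print(f"threshold {threshold:+.2f}: {false_alarms} false alarms, {misses} misses")
```

Raise the threshold and the system flags more routine behavior as abnormal (the over-inclusive case); lower it and genuinely unusual events slip through (the under-inclusive case). No setting drives both numbers to zero, which is precisely why the coding question is unavoidable.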

Even assuming we can build technology able to tag abnormal behavior with relative accuracy, we still have to weigh: at what cost? Beyond questions about the rights and dignity of the incarcerated, intense surveillance could harm rehabilitation efforts by creating an atmosphere of distrust and control. Research has indicated that prison buildings designed around constant surveillance can increase tensions between staff and inmates, worsening behavioral issues. It’s not a stretch to imagine that a prison operating an algorithmically enhanced, Big Brother-like system would have the same effect, one that ends up defeating the purpose of a smart prison in the first place.

In contrast, some of the prisons that have been most successful in decreasing violence and behavioral issues (among them, facilities in Connecticut, Norway, and Germany) do not employ strict surveillance or social control. Instead, they try to return agency to incarcerated individuals to help prepare them to re-enter society. But perhaps this isn’t an either/or. Could an exceptionally well-designed system, finely tuned to flag and record only high-risk abnormal behavior with great accuracy and stripped of many human biases, give inmates a greater sense of freedom, safety, and equal treatment, especially compared with the flawed eyes and whims of human guards? Could it also free up more human staff to focus on supporting incarcerated individuals instead of surveilling and policing them?

Yet, again, even if we can get an AI system that works as advertised, it would still represent an unprecedented invasion of privacy. It would collect and analyze data points imperceptible to the human eye, invariably gathering information on a wider range of behavior than previously possible and placing an enormous amount of sensitive data under the correctional microscope. The aggregation of so much biometric, behavioral, and identifying information raises additional risks. For instance, what if a cash-strapped correctional department that retains this information decides to market the treasure trove of data? And how long before an enterprising hacker steals it? After all, we’ve already seen things like this go wrong.

All this is to say that we must proceed with caution. These technologies are neither inherently good nor bad. Although “smart,” they will be made to reflect and amplify the goals of prison staff, not fundamentally shift them. As such, whether smart prisons alleviate or exacerbate some of the larger criminal justice issues we face in America will turn, in large part, on whether our prisons are already on the right trajectory when these technologies are introduced. Right now, they likely are not.

But maybe, if we start with a revolution in how our government treats incarcerated people—and then design AI technologies to follow—we could one day have truly “smarter” prisons.

This story originally appeared in New America’s digital magazine, New America Weekly, a Pacific Standard partner site.
