On the implications of the Dallas Police Department’s unprecedented use of a lethal robot to kill a suspect.
By Kristina Kutateli
(Photo: Northrop Grumman)
Last week, Dallas police killed gunman Micah Johnson, who had fatally shot five officers during a Black Lives Matter protest, using a Northrop Grumman tactical robot, marking what is likely the first robot killing carried out by police in United States history. A statement issued by the Dallas Police Department said that officers “utilized the mechanical tactical robot as a last resort” and to “save the lives of officers and citizens.”
But the use of a lethal robot to kill a suspect raises a host of new ethical and legal questions, and amid criticism of increasing police militarization, it may set a precedent for other domestic law enforcement agencies. As Elizabeth Joh, a law professor at the University of California–Davis, asks in the New York Times: “If armed robots can take police officers out of harm’s way, in what situations should we permit the police to use them?”
Moreover, should these robots be armed at all, and if so, with what kinds of weapons? And what place does the question of robot use have in the national debate over racial bias in policing? At least robots can’t be racist … right?
One might think the development of semi-autonomous robots for law enforcement would be positive, even emancipatory, freeing police departments of racial bias. In fact, it is likely that any artificial intelligence (AI) would simply further entrench discrimination.
In her article on unintentional bias in AI for the New York Times, researcher Kate Crawford mentions the controversy surrounding Google’s photo application, which, in 2015, mistakenly classified images of African Americans as gorillas. While mistakes like the one that befell Google (and Nikon and HP) shed light on AI’s susceptibility to human prejudice, last week’s events introduce a new slew of ethical concerns, ones that may have lethal consequences.
A study released in January by Heather Roff, a research scientist at Arizona State University’s Global Security Initiative, explains how biases can get programmed into AI unintentionally, which is particularly troublesome in the case of robots with lethal capabilities.
While Roff’s paper focuses on gender biases, programmed prejudices can also center on race: In testing computer programs that predict who will commit future crimes, programs used by courtrooms across the nation, ProPublica found that they falsely flagged black defendants as future criminals at nearly twice the rate of white defendants.
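To make that disparity concrete, here is a minimal Python sketch, using entirely invented numbers rather than ProPublica’s actual data, of how one would compute the metric the investigation reported: the share of defendants who did not go on to reoffend but were nonetheless flagged as high risk, broken out by race.

```python
# Hypothetical example of the metric ProPublica reported: false positive
# rates by race. These records are invented for illustration; the actual
# analysis used thousands of real COMPAS risk scores.
records = [
    # (race, flagged_high_risk, reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("black", False, True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", True,  True),  ("white", False, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in `group` who were still flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# With these made-up numbers: black 0.67, white 0.33 -- the kind of
# two-to-one disparity ProPublica described.
```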
Roff argues that a biased bot is likely inevitable, regardless of the programmer’s intention. Consider, for example, the technical implications of actually making AI. “A weapon,” Roff explains, “becomes an artificially intelligent learning machine through its software architecture.” To create it, programmers will “begin from a sufficiently large dataset” with the idea that the AI can then simulate human problem-solving, “such as through searching for a solution by way of deduction, or reasoning by way of analogy or example.” To do this, the programmer will provide a set of “facts” and will write in a “fixed set of concepts to the software architecture from which the AI can draw when confronted with a task or problem.”
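Roff describes this architecture abstractly. As a rough illustration only (not her code, and with an invented dataset and a single invented feature), a “learner” built this way can do nothing but reproduce whatever patterns its programmer’s chosen facts already contain:

```python
from collections import Counter, defaultdict

# Invented "facts": past stops, each reduced to one hand-picked feature
# (neighborhood) and an outcome label chosen by the programmer.
training_data = [
    ("neighborhood_a", "threat"), ("neighborhood_a", "threat"),
    ("neighborhood_a", "no_threat"),
    ("neighborhood_b", "no_threat"), ("neighborhood_b", "no_threat"),
    ("neighborhood_b", "no_threat"),
]

# "Training" here is just counting: the model can only reflect whatever
# patterns -- or prejudices -- the dataset already contains.
counts = defaultdict(Counter)
for feature, label in training_data:
    counts[feature][label] += 1

def predict(feature):
    """Return the most common past label for this feature value."""
    return counts[feature].most_common(1)[0][0]

print(predict("neighborhood_a"))  # -> threat
print(predict("neighborhood_b"))  # -> no_threat
```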
Whoever creates and operates the machine (likely a white male) and chooses what data and “facts” to input will profoundly affect the actions of the robot. The choice of a formal system, like the one used for building AI, “is a political choice to maintain existing power structures.”
For Roff, however, the most pressing problem in programming AI comes from analogical reasoning. When a robot encounters a complex situation, it will abstract that situation by analogy: The AI will “break down the situation into smaller objects that it can represent,” then search its database for something analogous. “To find such an abstract analogy,” Roff writes, “the AI uses stereotypes.”
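As a rough sketch of what reasoning by analogy can look like in practice (the features and stored cases below are invented for illustration, not drawn from Roff’s paper), a nearest-match lookup collapses a novel situation onto whichever stored abstraction it most resembles:

```python
# Invented features and stored cases, for illustration only.
past_cases = [
    # Each stored case is an abstraction: a few coarse features
    # plus the action that was taken.
    ({"age": "young", "clothing": "hoodie", "location": "parking_lot"}, "treat_as_threat"),
    ({"age": "older", "clothing": "suit",   "location": "office"},      "treat_as_bystander"),
]

def similarity(situation, case_features):
    """Count how many coarse features two situations share."""
    return sum(1 for k in situation if situation[k] == case_features.get(k))

def reason_by_analogy(new_situation):
    """Pick the action attached to the most similar stored case."""
    _, action = max(past_cases, key=lambda case: similarity(new_situation, case[0]))
    return action

# A novel situation gets mapped onto whichever stored abstraction it most
# resembles, regardless of what the person is actually doing.
print(reason_by_analogy({"age": "young", "clothing": "hoodie", "location": "library"}))
# -> treat_as_threat
```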
Building an AI from a blank slate rather than from a data set has its problems as well. Roff writes that bottom-up systems have no pre-programmed constraints and have “the potential to learn the ‘wrong thing.’” Within hours of Microsoft launching its infamous chatbot Tay on Twitter, for example, Tay began to tweet things like “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”
In 2015, North Dakota became the first state to allow police drones armed with less-than-lethal weapons such as Tasers and pepper spray. Although the robot used in Dallas was notably not autonomous (it was a remote-controlled bot), it marks the first lethal use of a robot by police, and will likely set a precedent.
While the use of semi-autonomous weapons may or may not make sense for the U.S. military, their use by police should be far more controversial. “The military has many missions, but at its core is about dominating and eliminating an enemy,” Seth Stoughton, an assistant law professor at the University of South Carolina, told the Atlantic. “Policing has a different mission: protecting the populace.”
Robots are not neutral entities; they are subject to the same biases as the people operating them.