Are we ready for the “Brave New World” where robots are tasked to kill humans? The Dallas police shootings and subsequent “death by robot” of the suspect raise new questions that may haunt or hunt us in the future.
Last Thursday night Micah Xavier Johnson, the suspect who allegedly shot and killed five Dallas police officers and injured nine other people (including two Black Lives Matter protesters), was blown up by a Dallas Police robot. Johnson was military-trained, and initial police reports described the incident as an attack by multiple shooters using triangulated fire. While police now dismiss that theory and claim that Johnson was “the lone gunman” in the Dallas shootings, Johnson allegedly left the initials “RB” scrawled in various places at the scene of his death.
“Robotics experts said the incident was the first time law enforcement has used a robot in a targeted killing in the U.S.
“These bomb-detecting robots are fairly inexpensive, often costing less than $10,000 each. But local, state, and federal law enforcement agencies can also request the devices at no cost through the military’s 1033 program, which gives used military equipment that would otherwise be thrown away to U.S. law enforcement agencies. The recipients have to justify a “need” for the equipment and cover the cost of transportation. The law enforcement agencies, not the military, are responsible for teaching their officers how to use the equipment.”
Nearly all of us are aware that the military has a very controversial program of targeted killing using drone aircraft against terrorist suspects in the Middle East. Many civilian casualties have resulted from these drone strikes, which are conducted remotely from military bases in the U.S.
Now “The chickens have come home to roost”.
“The legal framework for police use of force assumes human decision-making about immediate human threats,” Elizabeth Joh, a professor of law specializing in policing and technology at the University of California Davis, told HuffPost. “What does that mean when the police are far away from a suspect posing a threat? What does ‘objectively reasonable’ lethal robotic force look like?”
Joh recognizes that this wasn’t a complex killing machine, but she argues its deployment indicates how easy it would be for police to launch more advanced weaponry without oversight. After all, they transformed a bomb disposal robot into a bomb delivery robot.
In 2014 I heard a great interview with John Whitehead, author of “A Government of Wolves,” a book I now own. Whitehead described in great detail the new age of killer robots that is nearly upon us. Here’s what he says is on the horizon: robots that will walk, communicate, and kill on the battlefield; dragonfly drones that can shoot people; mosquito drones that can land on you and either take a sample of your DNA or inject you with something; robots that move like panthers and can run you down. It’s the stuff nightmares are made of.
At this time, killer robots are drones directed by humans. Technology, however, is changing very fast. When these machines are equipped with artificial intelligence and turned loose to hunt down people or other robots, we will be stepping into “Blade Runner” territory.
From “Business Insider”:
The Moral Implications Of Robots That Kill
Jun. 5, 2014
Lethal autonomous weapons — robots that can kill people without human intervention — aren’t yet on our battlefields, but the technology is right there.
As you can imagine, the killer robot issue is one that raises a number of concerns in the arenas of wartime strategy, morality, and philosophy. The hubbub is probably best summarized with this soundbite from The Washington Post: “Who is responsible when a fully autonomous robot kills an innocent? How can we allow a world where decisions over life and death are entirely mechanized?”
They are questions the United Nations is taking quite seriously, discussing them in-depth at a meeting last month. Nobel Peace Prize laureates Jody Williams, Archbishop Desmond Tutu, and former South African President F.W. de Klerk are among a group calling for an outright ban on such technology, but others are skeptical about that method’s efficacy as there’s historical precedent that banning weapons is counterproductive:
While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that it would be almost impossible to enforce such a ban. Much better, he argued, to develop these technologies in ways that might make war zones safer for non-combatants.
Arkin suggests that “if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable.” He’s quite literally suggesting that if a robot kills a person outside its rules or boundaries, the people involved in that robot’s creation are responsible, but here’s his hedge from a 2007 book called “Killer Robots”:
“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield. But I am convinced that they can perform more ethically than human soldiers.”
This is one of several issues we’ll have to resolve as technology continues to develop like a runaway train.
We’re totally screwed.