The apocalypse is at hand, Internet friends. Last week, the BBC reported that robotics experts and professors are set to meet and discuss autonomous killer robots. The UN Convention on Certain Conventional Weapons (CCW) has never before met on the subject of robots that can decide by themselves whether a being needs to be removed from the gene pool, and the hope here is that any such autonomous machine will face a ban (because, you know, we are sane?). With drone warfare now a reality, though, hoping for peaceable autonomous robots may be too much to ask.
On one side, you have the folks behind the Campaign to Stop Killer Robots, led by one of the two professors, Noel Sharkey, who believes that robots making their own decisions about human mortality are hugely dangerous to humanity. Robots in warfare are a scary thing; the complete automation of war is not unlike every science fiction story that predicts how robots will probably mess us up a great deal (never trust a robot!). According to Sharkey, robots wouldn't care even a little about the laws governing human warfare (as I said before, never trust a robot!).
Ronald Arkin, the professor more in favor of autonomous killing machines, suspects that automating warfare would reduce death and suffering among people not participating in the wars, even going so far as to suggest that robots would exercise more precaution in combat than humans do. While humans do make some alarmingly bad calls in situations where violence is a factor, I have a hard time believing that sending in machines would greatly change the amount of bloodshed fighting of any sort already causes.
Experts will discuss these frightening concepts in mid-May, the hope being a moratorium on robots that, although not yet in functional existence, could be technically easy to throw together (if I'm getting the scientific terminology correct here). I can only imagine the truly terrifying possibility of autonomous killer robots; humans, bloodthirsty creatures though we often are, have empathy, and that trumps reason when it comes to the decision to take another creature's life. A better option would be to keep a human behind every major decision a robot makes, the way drones operate today: crazy, yes, but still ultimately with a person pushing the buttons and making the calls.
The one benefit of having autonomous killer robots, though, is partly an inversion of why they sound like a nightmare. Much of human-on-human violence is no longer really linked to logic, but to greed, wild emotional instability, and the tantalizing hunt for power. Robots, not being human, wouldn't have any of these desires (this not actually being a science fiction novel, after all), and would make the best statistical decisions about killing humans. Meanwhile, the humans controlling robots lack the mental capacity to grasp the full weight of what they're doing: it's been reported that humans simply can't hold that many people in their heads, so it's no wonder ultra-violence doesn't compute the way, say, the death of an individual does (cue the argument against guns again). A robot, at least, wouldn't suffer from that limitation.
Mostly, though, autonomous killing machines are a pretty scary, dangerous idea, and in my opinion more harm than good will come of them. Keep up with news about the CCW and other robotics-related developments; the technology is moving at breakneck speed, and the last thing we want is to be in the dark about the potential for a robot apocalypse.