Last week, the I carried a story reporting on the debate over the development of truly autonomous military drones. At the moment these killing machines require a human operator, but there are plans to give them AI and full autonomy, so that they can fly and kill independently. I’m afraid I didn’t read the article, so I can’t really tell you much about it, beyond what leapt out at me.
And what did leap out at me was that this is very dangerous. The I itself reported that there was controversy over the proposals. Some scientists and others have argued that it’s dangerous to remove humans from war and leave it to cold, dispassionate machines. This is a valid point. A decade or so ago, one tech company announced that it was planning to build war robots for use in combat. There was immediately a storm of protest, as people feared the consequences of sending robots out to kill. The fear is that these machines would continue killing in situations where a humane response is required.
Whistleblowers on the American drone programme have also talked about its dehumanising effects. The human operator is miles, perhaps even an entire continent, away from the drone itself, and this creates a sense of unreality about the mission. The deaths are only seen on a screen, and so the operator can forget that he is actually killing real human beings. After one trainee drone operator continued killing long after he had completed his mission, he was reportedly hauled from his chair by the instructor, who told him sternly, ‘This is not a video game’. Similarly, soldiers and pilots in combat may also become dehumanised and come to enjoy killing. One of the volumes I read arguing against the Iraq War included a letter from a veteran American Air Force pilot to his son, entitled ‘Don’t Lose Your Humanity’. The father was concerned that this would happen to his lad, having seen it happen to some of the men he’d served with. He wrote of a case where a man continued to shoot at the enemy from his plane simply because he enjoyed the chaos and carnage he was creating.
Humans can already lose their moral compass while controlling these machines, but the situation could become much worse if the machines became completely autonomous. They could continue to kill regardless of circumstance or morality, simply through the requirement to obey their programming.
There is also another danger: that the rise of these machines will eventually lead to the enslavement or extinction of the human race. The idea of the robots’ revolt has haunted Science Fiction since Mary Shelley first wrote Frankenstein at the beginning of the 19th century. It’s one of the clichéd themes of SF, but some scientists fear the danger is all too real. Martin Rees, the Astronomer Royal, included it among the dangers to the survival of humanity in his 2003 book, Our Final Century?. Kevin Warwick, professor of cybernetics at Reading University and former cyborg, also sees it as a real possibility. His 1997 book, March of the Machines, opens with a chilling description of a world ruled by robots. Humanity has been decimated. The few survivors are enslaved and used by the machines to hunt down the remaining free humans living wild in places inaccessible to the robots. Warwick was deeply troubled by the prospect of the machines eventually taking over and leaving humanity far behind. He turned to cyborgisation as a possible solution to the problem, and as a way for humanity to retain its superiority over its creations and ensure its survival.
These plans for the drones also remind me very strongly of an SF story I read way back when I was a teenager: ‘Flying Dutchman’ by Ward Moore, in Tony Boardman, ed., Science Fiction Stories, illustrated by David Mitchell, Paul Desmond, and Graham Townsend (London: Octopus 1979). In this story, a bomber returns to base to be refuelled and loaded once again with bombs, then flies off on another mission. This is all done automatically. There are no humans whatsoever in the story. It is implied that humanity has finally killed itself, leaving just its machines to carry on functioning, flying and bombing in an endless cycle, forever.
Many of the other stories in the volume were first published in the SF pulp magazines. I don’t know when Moore’s story was written, but the use of bombers, rather than missiles, suggests it dates from around the time of the Second World War or perhaps the Korean War. Not that bombers have been entirely superseded by modern missiles and combat aircraft: the Americans used the old B-52s against the Serbs during the war in Yugoslavia. These plans to create autonomous drones bring the world of Moore’s story closer to horrifying reality.
SF has often been the literature of warning. Quite often its predictions are hilariously wrong. But this is one instance where we need to pay very serious attention indeed.