Posts Tagged ‘‘Robusters’’

Hyper Evolution – The Rise of the Robots Part 2

August 5, 2017

Wednesday evening I sat down to watch the second part of the BBC 4 documentary, Hyperevolution: the Rise of the Robots, in which the evolutionary biologist Ben Garrod and the electronics engineer Prof. Danielle George trace the development of robots from the beginning of the 20th century to today. I blogged about the first part of the show on Tuesday in a post about another forthcoming programme on the negative consequences of IT and automation, Secrets of Silicon Valley. The tone of Hyperevolution is optimistic and enthusiastic, with one or two qualms from Garrod, who fears that robots may pose a threat to humanity. The programme states that robots are an evolving species, and that we are well on the way to developing true Artificial Intelligence.

Last week, Garrod went off to meet a Japanese robotics engineer, whose creation had been sent up to keep a Japanese astronaut company on the International Space Station. Rocket launches are notoriously expensive, and space aboard is at a very high premium. So it was no surprise that the robot was only about four inches tall. It's been designed as a device to keep people company, which the programme explained was a growing need in Japan. Japan has a falling birthrate and thus an aging population. The robot is programmed to ask and respond to questions, and to look at the person who's speaking to it. It doesn't really understand what is being said, but simply gives an answer according to its programming. Nevertheless, it gives the impression of being able to follow and respond intelligently to conversation. It also has the very 'cute' look that characterizes much Japanese technology, and which I think comes from the conventions of Manga art. Garrod noted how it resembles baby animals in having a large head and eyes, features which make parents love their offspring.
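The programme didn't explain how the robot's conversation actually works, but the behaviour described – canned answers triggered by what it hears, with no understanding – can be sketched as simple keyword matching. Everything in this toy example (the keywords and the replies) is invented for illustration:

```python
# A toy keyword-matching responder: it "answers" by looking up canned
# replies, illustrating how a machine can seem conversational without
# understanding anything. All keywords and replies are invented examples.
CANNED_REPLIES = {
    "hello": "Hello! How was your day?",
    "tired": "You should rest. Shall I play some music?",
    "space": "The view of Earth from up here is beautiful!",
}
FALLBACK = "Tell me more!"

def respond(utterance: str) -> str:
    words = utterance.lower().split()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in words:
            return reply
    return FALLBACK

print(respond("I'm so tired today"))   # matches the "tired" keyword
print(respond("What's the weather?"))  # no keyword, so the fallback
```

However elaborate the real system is, the principle is the same: the machine maps inputs to pre-prepared outputs, which is why it can seem to follow a conversation without following it at all.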

It's extremely clever, but it struck me as being a development of the Tamagotchi, the robotic 'pet' which was all over the place a few years ago. As for companionship, I couldn't help thinking of a line from Andrei Tarkovsky's epic Solaris, based on the novel by the Polish SF writer Stanislaw Lem. The film follows the cosmonaut, Kris, on his mission to a space station orbiting the planet Solaris. The planet's vast ocean is alive, and has attempted to establish contact with the station's crew by dredging their memories and sending them replicas of people they know. The planet does this to Kris, creating a replica of a former girlfriend. At one point, pondering the human condition in a vast, incomprehensible cosmos, Kris states 'There are only four billion of us…a mere handful. We don't need spaceships, aliens…What man needs is man.' Or words to that effect. I forget the exact quote. I dare say robots will have their uses caring for and providing mental stimulation for the elderly, but they can't replace real, human contact.

George went to NASA in America, where the space agency is building Valkyrie to help with the future exploration of Mars in 2030. Valkyrie is certainly not small and cute. She's six foot, and built very much like the police machines in Neill Blomkamp's Chappie. George stated that they were trying to teach the robot how to walk through a door using trial and error. But each time, the machine stumbled. The computer scientists then went through the robot's programming trying to find and correct the error. After they thought they had solved it, they tried again. And again the machine stumbled.

George, however, remained optimistic. She told 'those of you who think this experiment is a failure' that this was precisely what the learning process entailed, as the machine was meant to learn from its mistakes, just like her own toddler now learning to walk. She's right, and I don't doubt that the robot will eventually learn to walk upright, like the humanoid robots devised by their competitors over at DARPA. However, there's no guarantee that this will be the case. People do learn from their mistakes, but if mistakes keep being made and can't be corrected, then it's fair to say that a person has failed to learn from them. And if a robot fails to learn from its mistakes, then it would also be fair to say that the experiment has failed.
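The programme didn't say what learning method Valkyrie's team were using, but the general idea of trial-and-error learning – try something, see whether it fails, keep only what works – can be sketched in a few lines. The task and the 'stability' score below are entirely invented for illustration:

```python
import random

# Minimal trial-and-error learning on an invented task: find the stride
# length that maximises a made-up "stability" score. The learner tries a
# perturbed stride, keeps it if it scores better, discards it otherwise.
def stability(stride: float) -> float:
    # Invented objective: stability peaks at stride = 0.6.
    return -(stride - 0.6) ** 2

random.seed(0)
stride = 0.1  # initial (bad) guess: the robot keeps stumbling
best = stability(stride)
for trial in range(200):
    candidate = stride + random.uniform(-0.05, 0.05)  # small variation
    score = stability(candidate)
    if score > best:  # a mistake learned from: keep the improvement
        stride, best = candidate, score

print(round(stride, 2))  # settles near the optimum of 0.6
```

Each rejected candidate is a stumble; each accepted one is a mistake learned from. Whether such a loop converges, and how fast, depends entirely on the task – which is why George's optimism is plausible but not guaranteed.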

Holy Joe Smith! I was also reminded of another piece of classic SF in this segment. Not film, but 2000 AD's 'Robohunter' strip. In its debut story, the aged robohunter, Sam Slade – 'that's S-L-A-Y-E-D to you' – his robometer, Kewtie, and pilot, Kidd, are sent to Verdus to investigate what has happened to the human colonists. Verdus is so far away that robots have been despatched to prepare it for human colonization, and a special hyperdrive has to be used to get Slade there. This rejuvenates him from an old man in his seventies to an energetic guy in his thirties. Kidd, his foul-mouthed, obnoxious pilot, also in his thirties, is transformed into a foul-mouthed, obnoxious, gun-toting baby.

The robot pioneers have indeed prepared Verdus for human habitation. They've built vast, sophisticated cities, with shops and apartments just waiting to be occupied, along with a plethora of entertainment channels, all of whose hosts and performers are robotic. However, their evolution has outpaced that of humanity, so that they are now superior, both physically and mentally. They continue to expect humans to be their superiors, and so when humans have come to Verdus, the robots have imprisoned, killed and experimented on them as 'Sims' – simulated humans – not realizing that these are the very beings they were created to serve. In which case, Martian colonists should beware. And carry a good blaster, just in case.

Garrod and George then went to another lab, where the robot unnerved Garrod by looking at him, and following him around with its eye. George really couldn’t understand why this should upset him. Talking about it afterwards, Garrod said that he was worried about the threat robots pose to humanity. George replied by stating her belief that they also promise to bring immense benefits, and that this was worth any possible danger. And that was the end of that conversation before they went on to the next adventure.

George's reply isn't entirely convincing. It's what opponents of nuclear power were told back in the '50s and '60s. Through nuclear energy we were going to have ships and planes that could span the globe in a couple of minutes, and electricity was going to be so plentiful and cheap that it would barely be metered. This failed, because the scientists and politicians advocating nuclear energy hadn't really worked out what would need to be done to isolate and protect against the toxic waste products. Hence nearly six decades later, nuclear power and the real health and environmental problems it poses are still very much controversial issues. And there's also that quote from Bertrand Russell, a very staunch member of CND. When he was asked why he opposed nuclear weapons, he stated that it was because they threatened to destroy humanity. 'And some of us think that would be a very great pity.'

Back in America, George went to a bar to meet Alpha, a robot created by a British inventor/showman in 1932. Alpha was claimed to be an autonomous robot, answering questions by choosing appropriate answers from recordings on wax cylinders. George noted that this was extremely advanced for the time, if true. Finding the machine resting in a display case, filled with other bizarre items like bongo drums, she took an access plate off the machine to examine its innards. She was disappointed. Although there were wires to work the machine’s limbs, there were no wax cylinders or any other similar devices. She concluded that the robot was probably worked by a human operator hiding behind a curtain.

Then it was off to Japan again, to see another robot which, like Valkyrie, was learning for itself. This was to be a robot shop assistant. In order to teach it to be a shop assistant, its creators had built an entire replica camera shop, and employed real shop workers to play out their roles, surrounded by various cameras recording the proceedings. So Garrod also entered the scenario, where he pretended to be interested in buying a camera, asking questions about shutter speeds and the like. The robot duly answered his questions, and moved about the shop showing him various cameras at different prices. Like the robotic companion, the machine didn't really know or understand what it was saying or doing. It was just following the motions it had learned from its human counterparts.
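Again, the programme didn't describe the shop assistant's internals, but one minimal way to 'learn by watching humans' is retrieval: store the question-and-answer pairs from the recorded role-play, and answer a new question by finding the stored question it most resembles. All the dialogue below is invented, and real systems are far more elaborate:

```python
# Sketch of retrieval-based imitation: the "assistant" stores question/
# answer pairs recorded from human demonstrations, then answers a new
# question by finding the stored question with the most words in common.
demonstrations = [
    ("what shutter speeds does this camera have",
     "It goes from 30s to 1/8000s."),
    ("how much does this camera cost",
     "That model is 400 pounds."),
    ("do you sell tripods",
     "Yes, tripods are on the shelf behind you."),
]

def answer(question: str) -> str:
    q_words = set(question.lower().split())
    def overlap(pair):
        # Count shared words between the query and a stored question.
        return len(q_words & set(pair[0].split()))
    return max(demonstrations, key=overlap)[1]

print(answer("How much is this camera?"))  # nearest stored question wins
```

A scheme like this can only echo situations it has already seen, which is why the realism of the recorded role-play matters so much.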

I was left wondering how realistic the role-playing had actually been. The way it was presented on camera, everything was very polite and straightforward, with the customer politely asking the price, thanking the assistant and moving on to ask to see the next of their wares. I wondered if they had ever played at being a difficult customer in front of it. Someone who came in and, when asked what they were looking for, sucked their teeth and said, ‘I dunno really,’ or who got angry at the prices being asked, or otherwise got irate at not being able to find something suitable.

Throughout the programme, Japanese society is held up as being admirably progressive and accepting of robots. Earlier in that edition, Garrod finished a piece on one Japanese robot by asking why it was that a car manufacturer was turning to robotics. The answer's simple. The market for Japanese cars and motorcycles is more or less glutted, and the manufacturers are facing competition from other countries, like Indonesia. So they are turning to electronics.

The positive attitude the Japanese have to computers and robots is also questionable. The Japanese are very interested in developing these machines, but don't actually like using them themselves. The number of robots in Japan can easily be exaggerated, as any machine tool is counted as a robot. And while many British shops and businesses will use a computer, the Japanese prefer to do things the old way, by hand. For example, if you go to a post office in Japan, the assistant, rather than look something up on a computer, will pull out a ledger. Way back in the 1990s someone worked out that if the Japanese were to mechanise their industry to the same extent as the West, they'd throw half their population out of work.

As for using robots, there's a racist and sexist dimension to this. The Japanese birthrate is falling, and so there is real fear of a labour shortage. Robots are being developed to fill it. But Japanese society is also extremely nationalistic and xenophobic. Only people whose parents are both Japanese are properly Japanese citizens with full civil rights. There are Koreans who, despite having lived there for three generations, are still a discriminated-against underclass. The Japanese are developing robots so that they don't have to import foreign workers, and so don't have to face the problems and strains of a multicultural society.

Japanese society also has some very conservative attitudes towards women. So much so, in fact, that the chapter on the subject in a book I read two decades ago on Japan, written by a Times journalist, was entitled ‘A Woman’s Place Is In the Wrong’. Married women are expected to stay at home to raise the kids, and the removal of a large number of women from the workplace was one cause of the low unemployment rate in Japan. There’s clearly a conflict between opening up the workplace to allow more married women to have a career, and employing more robots.

Garrod also went off to Bristol University, where he met the 'turtles' created by the neuroscientist Grey Walter. Walter was interested in using robots to explore how the brain functioned. The turtles were simple robots, each steered by a light-detecting cell. The machine was constructed to follow and move towards light sources. As Garrod himself pointed out, this was like the very primitive organisms he'd studied, which also only had a light-sensitive spot.
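Walter's turtles were electromechanical, but the steering principle – compare the light on each side and turn toward the brighter – is simple enough to simulate. The geometry and sensor model here are invented for illustration:

```python
import math

# A software sketch of a light-seeking "turtle": the robot samples the
# light slightly to its left and right, turns toward the brighter side,
# and steps forward, until it homes in on the lamp.
def light_at(x, y, lamp=(10.0, 5.0)):
    # Invented sensor model: brightness falls off with squared distance.
    return 1.0 / (1.0 + (x - lamp[0]) ** 2 + (y - lamp[1]) ** 2)

x, y, heading = 0.0, 0.0, 0.0
for step in range(200):
    # Sample light a little to the left and right of the current heading.
    left = light_at(x + math.cos(heading + 0.3), y + math.sin(heading + 0.3))
    right = light_at(x + math.cos(heading - 0.3), y + math.sin(heading - 0.3))
    heading += 0.1 if left > right else -0.1  # steer toward the light
    x += 0.1 * math.cos(heading)
    y += 0.1 * math.sin(heading)

print(round(x, 1), round(y, 1))  # ends up near the lamp at (10, 5)
```

Even this crude rule produces lifelike, goal-seeking movement, which was rather Walter's point: apparently purposive behaviour can rest on a very simple mechanism.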

However, the view that the human brain is really a form of computer has also been questioned by research. Hubert L. Dreyfus, in his book What Computers Still Can't Do: A Critique of Artificial Reason, describes how, after the failure of Good Old Fashioned A.I. (GOFAI), computer engineers hoped to create A.I. by exploring the connections between different computing elements, modelled on the way individual brain cells are connected to each other by a complex web of neurons. Way back in 1966, Walter Rosenblith of MIT, one of the pioneers in the use of computers in neuropsychology, wrote

We no longer hold the earlier widespread belief that the so-called all-or-none law from nerve impulses makes it legitimate to think of relays as adequate models for neurons. In addition, we have become increasingly impressed with the interactions that take place among neurons: in some instances a sequence of nerve impulses may reflect the activities of literally thousands of neurons in a finely graded manner. In a system whose numerous elements interact so strongly with each other, the functioning of the system is not necessarily best understood by proceeding on a neuron-by-neuron basis as if each had an independent personality…Detailed comparisons of the organization of computer systems and brains would prove equally frustrating and inconclusive. (Dreyfus, What Computers Still Can’t Do, p. 162).

Put simply, brains don't work like computers. This was written fifty years ago, but it's fair to ask whether the problem still exists today, despite some highly optimistic statements to the contrary.
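Rosenblith's point about the 'all-or-none law' can be made concrete. A relay switches completely at a threshold, whereas a graded model responds in fine degrees – the sort of finely graded activity he describes. A toy comparison, with an arbitrary logistic response chosen purely for illustration:

```python
import math

def relay(signal, threshold=0.5):
    # All-or-none switching: the model Rosenblith says neurons outgrew.
    return 1 if signal >= threshold else 0

def graded_neuron(signal):
    # A smoothly graded response (logistic curve): small changes in
    # input produce correspondingly small changes in output.
    return 1 / (1 + math.exp(-10 * (signal - 0.5)))

for s in (0.49, 0.50, 0.51):
    print(s, relay(s), round(graded_neuron(s), 3))
```

Around the threshold the relay jumps straight from 0 to 1, while the graded unit moves smoothly – one small illustration of why relays are a poor model for neurons.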

Almost inevitably, driverless cars made their appearance. The Germans have been developing them, and Garrod went for a spin in one, accompanied by two or three engineers. He laughed with delight when the car told him he could take his hands off the wheel and let the vehicle continue on its own. However, the car only works in the comparatively simple environment of the autobahn. When it came off at the junction, back into the normal road system, the machine told him to start driving himself. So, not quite the victory for A.I. it at first appears.

Garrod did raise the question of the legal issues. Who would be responsible if the car crashed while working automatically – the car, or the driver? The engineers told him it would be the car. Garrod nevertheless concluded that segment by noting that there were still knotty legal issues around it. But I don't know anyone who wants one, or would necessarily trust one to operate on its own. A recent Counterpunch article I blogged about stated that driverless cars are largely being pushed by a car industry trying to expand a market that is already saturated, and by the insurance companies. The latter see it as a golden opportunity to charge people who don't want one higher premiums, on the grounds that driverless cars are safer.

Garrod also went to meet researchers in A.I. at Plymouth University, who were developing a robot as part of their research into the future creation of genuine consciousness in machines. Talking to one of the scientists afterwards, Garrod heard that there could indeed be a disruptive aspect to this research. Human society is based on conscious decision-making. But if the creation of consciousness were comparatively easy, so that it could be done in an afternoon, it could have a 'disruptive' effect. It may indeed be the case that machines will one day arise which will be conscious, sentient entities, but this does not mean that the development of consciousness is easy. Think of the vast ages of geologic time it took evolution to go from simple, single-celled organisms to complex creatures like worms, fish and insects, right up to the emergence of Homo sapiens sapiens within the last 200,000 years.

Nevertheless, the programme ended with Garrod and George talking the matter over on the banks of the Thames in London. George concluded that the rise of robots would bring immense benefits and the development of A.I. was ‘inevitable’.

This is very optimistic, to the point where I think you could be justified in calling it hype. I've said in a previous article how Dreyfus' book describes how robotics scientists and engineers have been predicting the rise of Artificial Intelligence ever since Norbert Wiener and Alan Turing, and each time they've been wrong. He's also described the sheer rage with which many of those same researchers respond to criticism and doubt. In one passage he discusses a secret meeting of scientists at MIT to discuss A.I., in which a previous edition of his book came up. The scientists present howled at it with derision and abuse. He comments that why scientists respond so hostilely to criticism, and persist in their optimistic belief that they will eventually solve the problem of A.I., is a question for psychology and the sociology of knowledge.

But there are also very strong issues about human rights, which would have to be confronted if genuine A.I. were ever created. Back in the 1970s or early '80s, the British SF magazine, New Voyager, reviewed John Sladek's Roderick. Subtitled 'The Education of a Young Machine', this is all about the creation of a robot child. The reviewer stated that the development of truly sentient machines would constitute the return of slavery. A similar point was made in Star Trek: The Next Generation, in an episode where a Starfleet scientist wished to take Data apart, so that he could be properly investigated and more like him built. Data refused, and so the scientist sued to gain custody of him, arguing that he wasn't really sentient, and so should be legally considered property. And in William Gibson's Neuromancer, the book that launched the Cyberpunk SF genre, the hero, Case, finds out that the vast computer for which he's working, Wintermute, has Swiss citizenship, but its programming is the property of the company that built it. This, he considers, is like humans having their thoughts and memories made the property of a corporation.

Back to 2000 AD: the Robusters strip portrayed exactly what such slavery would mean for genuinely intelligent machines. Hammerstein, an old war droid, his crude sidekick, the sewer droid Rojaws, and their fellows live with the constant threat of outliving their usefulness and taking a trip down to be torn apart by the thick and sadistic Mek-Quake. Such a situation should, if it ever became a reality, be utterly intolerable to anyone who believes in the dignity of sentient beings.

I think we’re a long way off that point just yet. And despite Prof. George’s statements to the contrary, I’m not sure we will ever get there. Hyperevolution is a fascinating programme, but like many of the depictions of cutting edge research, it’s probably wise to take some of its optimistic pronouncements with a pinch of salt.

Forthcoming Programme on the Destructive Consequence of IT

August 1, 2017

Next Sunday, the 6th August, BBC 2 is showing a documentary at 8.00 pm on the negative aspects of automation and information technology. Entitled Secrets of Silicon Valley, it’s the first part of a two-part series. The blurb for it in the Radio Times reads

The Tech Gods – who run the biggest technology companies – say they're creating a better world. Their utopian visions sound persuasive: Uber say the app reduces car pollution and could transform how cities are designed; Airbnb believes its website empowers ordinary people. Some hope to reverse climate change or replace doctors with software.

In this doc, social media expert Jamie Bartlett investigates the consequences of “disruption” – replacing old industries with new ones. The Gods are optimistic about our automated future but one former Facebook exec is living off-grid because he fears the fallout from the tech revolution. (p. 54).

A bit more information is given on the listings page for the programmes on that evening. This gives the title of the episode – ‘The Disruptors’, and states

Jamie Bartlett uncovers the dark reality behind Silicon Valley's glittering promise to build a better world. He visits Uber's offices in San Francisco and hears how the company believes it is improving our cities. But in Hyderabad, India, Jamie sees for himself the apparent human consequences of Uber's utopian vision and asks what the next wave of Silicon Valley's global disruption – the automation of millions of jobs – will mean for us. He gets a stark warning from an artificial intelligence pioneer who is replacing doctors with software. Jamie's journey ends in the remote island hideout of a former social media executive who fears this new industrial revolution could lead to social breakdown and the collapse of capitalism. (p. 56).

I find the critical tone of this documentary refreshing after the relentless optimism of last Wednesday's first instalment of another two-part documentary on robotics, Hyper Evolution: the Rise of the Robots. This was broadcast at 9 o'clock on BBC 4, with the second part shown tomorrow – the second of August – in the same time slot.

This programme featured two scientists, the evolutionary biologist, Dr. Ben Garrod, and the electronics engineer Professor Danielle George, looking over the last century or so of robot development. Garrod stated that he was worried by how rapidly robots had evolved, and saw them as a possible threat to humanity. George, on the other hand, was massively enthusiastic. On visiting a car factory, where the vehicles were being assembled by robots, she said it was slightly scary to be around these huge machines, moving like dinosaurs, but declared proudly, 'I love it'. At the end of the programme she concluded that whatever view we had of robotic development, we should embrace it, as that way we would have control over it. Which prompts the opposing response that you could also control the technology, or its development, by rejecting it outright, minimizing it or limiting its application.

At first I wondered if Garrod was there simply because Richard Dawkins was unavailable. Dawko was voted the nation's favourite public intellectual by the readers of one of the technology or current affairs magazines a few years ago, and to many people he's the face of scientific rationality, in the same way as the cosmologist Stephen Hawking. However, there was a solid scientific reason for his involvement: robotics engineers have solved certain problems by copying animal and human physiology. For example, Japanese cyberneticists had studied the structure of the human body to create the first robots shown in the programme. These were two androids that looked and sounded extremely lifelike. One of them, the earlier model, was modelled on its creator, to the point where it was at one time an identical likeness. When the man was asked how he felt about getting older and less like his creation, he replied that he was having plastic surgery so that he continued to look as youthful and like his robot as was possible.

Japanese engineers had also studied the human hand, in order to create a robot pianist that, when it was unveiled over a decade ago, could play faster than a human performer. They had also solved the problem of getting machines to walk as bipeds like humans by giving them a pelvis, modelled on the human bone structure. But now the machines were going their own way. Instead of confining themselves to copying the human form, they were taking new shapes in order to fulfil specific functions. The programme makers wanted to leave you in no doubt that, although artificial, these machines were nevertheless living creatures. They were described as 'a new species'. Actually, they aren't, if you want to pursue the biological analogy. They aren't a new species for the simple reason that there isn't just one variety of them. Instead, they take a plethora of shapes according to their different functions. They're far more like a phylum, or even a kingdom, like the plant and animal kingdoms. The metal kingdom, perhaps?

It’s also highly problematic comparing them to biological creatures in another way. So far, none of the robots created have been able to reproduce themselves, in the same way biological organisms from the most primitive bacteria through to far more complex organisms, not least ourselves, do. Robots are manufactured by humans in laboratories, and heavily dependent on their creators both for their existence and continued functioning. This may well change, but we haven’t yet got to that stage.

The programme raced through the development of robots from Eric, the robot that greeted Americans at the World's Fair – talking to one of the engineers who'd built it – and a similar metal man created by the Beeb in 1929. It also looked at the creation of walking robots, the robot pianist and other humanoid machines by the Japanese from the 1980s to today. It then hopped over the Atlantic to talk to one of the leading engineers at DARPA, the research agency for the American defence establishment. Visiting the labs, George was thrilled: the agency receives thousands of media requests, so she was exceptionally privileged. She was shown the latest humanoid robots, as well as 'Big Dog', the quadruped robot carrier, which does indeed look and act eerily like a large dog.

George was upbeat and enthusiastic. Any doubts you might have about robots taking people’s jobs were answered when she met a spokesman for the automated car factory. He stated that the human workers had been replaced by machines because, while machines weren’t better, they were more reliable. But the factory also employed 650 humans running around here and there to make sure that everything was running properly. So people were still being employed. And by using robots they’d cut the price on the cars, which was good for the consumer, so everyone benefits.

This was very different from some of the news reports I remember from my childhood, when computers and industrial robots were just coming in. There was shock at news reports of factories where the human workforce had been laid off, except for a crew of six. These men spent all day playing cards. They weren't employed because they were experts, but simply because it would have been more expensive to sack them than to keep them on with nothing to do.

Despite the answers given by the car plant’s spokesman, you’re still quite justified in questioning how beneficial the replacement of human workers with robots actually is. For example, before the staff were replaced with robots, how many people were employed at the factory? Clearly, financial savings had to be made by replacing skilled workers with machines in order to make it economic. At the same time, what skill level were the 650 or so people now running around behind the machines? It’s possible that they are less skilled than the former car assembly workers. If that’s the case, they’d be paid less.

As for the fear of robots, the documentary traced this from Karel Capek's 1920 play, R.U.R., or Rossum's Universal Robots, which gave the word 'robot' to the English language. The word 'robot' means 'serf, slave' or 'forced feudal labour' in Czech. This was the first play to deal with a robot uprising. In Japan, however, the attitude was different. Workers were being taught to accept robots as one of themselves. This was because of the animist nature of traditional Japanese religion. Shinto, the indigenous religion alongside Buddhism, considers that there are kami, roughly spirits or gods, throughout nature, even in inanimate objects. When asked what he thought the difference was between humans and robots, one of the engineers said there was none.

Geoff Simons also deals with the western fear of robots, compared to the Japanese acceptance of them, in his book Robots: The Quest for Living Machines. He felt that it came from the Judeo-Christian religious tradition, which is suspicious of robots because making them usurps the Lord's role as the creator of living beings. See, for example, the subtitle of Mary Shelley's book, Frankenstein – 'the Modern Prometheus'. Prometheus was the Titan who stole fire from the gods to give to humanity. Victor Frankenstein was similarly stealing a divine secret through the manufacture of his creature.

I think the situation is rather more complex than this, however. Firstly, I don't think the Japanese are as comfortable with robots as the programme tried to make out. One Japanese scientist, for example, has recommended that robots should not be made too humanlike, as too close a resemblance is deeply unsettling to the humans who have to work with them. Presumably the scientist was basing this on the experience of Japanese people as well as Europeans and Americans.

Much Japanese SF is also pretty much like its western counterpart in including robot heroes. One of the long-time comic favourites in Japan is Astroboy, a robot boy with awesome abilities, gadgets and weapons. But over here, I can remember reading the Robot Archie strip in Valiant in the 1970s, along with the later Robusters and A.B.C. Warriors strips in 2000 AD. R2D2 and C3PO are two of the central characters in Star Wars, while Doctor Who had K9 as his faithful robot dog.

And the idea of robot creatures goes all the way back to the ancient Greeks. Hephaestus, the ancient Greek god of fire, was a smith. Lame, he forged three metal girls to help him walk. Pioneering inventors like Hero of Alexandria created miniature theatres and other automata. After the fall of the Roman Empire, this technology was taken up by the Muslim Arabs. The Banu Musa brothers in the 9th century AD created a whole series of machines, which they simply called 'ingenious devices', and Baghdad had a water clock which included various automatic figures, like the sun and moon, and the movement of the stars. This technology then passed to medieval Europe, so that by the end of the Middle Ages, lords and ladies filled their pleasure gardens with mechanical animals. The 18th century saw the fascinating clockwork machines of Vaucanson, Droz and other European inventors. With the development of steam power, and then electricity, in the 19th century came stories about mechanical humans. One of the earliest was the 'Steam Man', about a steam-powered robot, which ran in one of the American magazines. This carried on into the early 20th century. One of the very earliest Italian films was about a 'uomo macchina', or 'man machine'. A seductive but evil female robot also appears in Fritz Lang's epic Metropolis. Both films appeared before R.U.R., and so don't use the term robot. Lang just calls his robot a 'Maschinenmensch' – machine person.

It's also very problematic whether robots will ever really take humans' jobs, or even develop genuine consciousness and artificial intelligence. I'm going to have to deal with this topic in more detail later, but the questions posed by the programme prompted me to buy a copy of Hubert L. Dreyfus' What Computers Still Can't Do: A Critique of Artificial Reason. Initially published in the 1970s, and then updated in the 1990s, this describes the repeated problems computer scientists and engineers have faced trying to develop Artificial Intelligence. Again and again, these scientists predicted that 'next year', 'in five years' time', 'in the next ten years' or 'soon', robots would achieve human-level intelligence, and would make all of us unemployed. The last such prediction I recall reading was way back in 1999-2000, when we were all told that by 2025 robots would be as intelligent as cats. All these forecasts have proven wrong. But they're still being made.

In tomorrow's edition of Hyperevolution, the programme asks the question of whether robots will ever achieve consciousness. My guess is that they'll conclude that they will. I think we need to be a little more sceptical.

Frontiers Magazine on Robot Weapons

October 23, 2016

Way back in October 1998, the popular science magazine Frontiers ran an article on robots. This included two pages on the 'Soldiers of Tomorrow', the military robots then under development, including drones. These are now extremely well known, if not notorious, for the threat they pose to privacy and freedom. The article notes that they were developed from the unmanned planes used for target practice. They were first used in the 1960s to fly reconnaissance missions in Vietnam, after the US air force suffered several losses from surface-to-air missiles. Drones were also used during the Cold War to spy on the Soviet Union, though instead of beaming their pictures back to their operators, they had to physically eject the film. They were further developed by the Israelis, who used them to spy on their Arab neighbours during their many wars. Their next development came during the Gulf War, when they broadcast real-time images of the battlefields they were surveying back to their operators.

Apart from drones, the article also covered a number of other war machines under development. This included remotely operated ground vehicles like SARGE, and the Mobility Module and remotely controlled buggy shown below.


SARGE was a scout vehicle adapted from a Yamaha four-wheel-drive all-terrain jeep. Like the drones, it was remotely controlled by a human operator. The top of the two photos above shows the Mobility Module mounted aboard another army vehicle, containing a number of reconnaissance, surveillance and target-acquisition sensors. Below it is a missile launcher fixed to another remote-control buggy. The article also carried a photo of a Rockwell Hellfire missile being launched from another of these adapted vehicles.


Next to this was a photo of the operator, in his equipment, controlling the Tele-Operated Vehicle, or TOV, as the developers were calling such machines.


Another of the machines described in the article was the Telepresent Rapid Aiming System, a robot gun designed by Graham Hawkes and Precision Remotes of California as a sentry robot. As the article itself notes, it’s similar to the tunnel machine guns used by the Space Marines in the film Aliens. It could either be operated by remote control, or made fully automatic and configured to shoot live ammunition. At the time the article was written it had already been tested by a number of different law enforcement agencies.

The only vaguely humanoid robot was the Robart III, shown below.


This machine was able to track a target automatically using its video vision, and possessed laser guidance to allow it to be operated remotely. In demonstrations it carried a pneumatic dart gun, capable of firing tranquillizer darts at intruders. In combat situations this would be replaced with a machine gun. It was designed to be used as a mechanical security guard.

The article also stated that miniature crawling robots were under development. These would be used to creep up on enemy positions, sending back to their operators video images of their progress. If such machines were mass-produced, their price could fall to about £10. This would make it easily affordable to saturate an area with them. (pp. 56-7).

The article describes the state of development of these machines as it was nearly 20 years ago. Drones are now so widespread that they've become a nuisance. I've seen them on sale in some of the shops in Cheltenham for anything from £36 to nearly £400. Apart from the military, they're being used by building surveyors and archaeologists.

And while robots like the above might excite enthusiasts for military hardware, there are very serious issues with them. The Young Turks, Secular Talk and Jimmy Dore have pointed out on their shows that Bush and Obama violated the American constitution by using drones to assassinate terrorists, even when they were resident in friendly, or at least non-hostile, countries. Despite all the talk by the American army about 'surgical strikes', these weapons are in fact anything but precise instruments that can kill terrorists while sparing civilians. The three programmes cited, along with no doubt many other shows and critics, have stated that most of the victims of drone attacks are civilians and the families of terrorists. The drones may be used to home in on mobile phone signals, so that the person killed was simply whoever was using the phone, rather than the terrorist himself. Others have been worried about the way the operation of these weapons by remote control has distanced their human operators, and by extension the wider public, from the bloody reality of warfare.

Way back in the first Gulf War, the radical French philosopher Jean Baudrillard argued in his book The Gulf War Did Not Take Place that the extensive use of remotely controlled missiles during the war, and the images from them used in news coverage at the time, meant that for many people the Gulf War was less than real. It occurred in Virtual Reality, like a simulation in cyberspace. Recent criticism of the military use of drones as killing machines by whistleblowers has borne out these fears. One whistleblower, who was also an instructor on the drone programme, described the casual indifference to killing, including the killing of children, among the drone pilots. They referred to their actions as 'mowing the lawn', and to their child victims as 'fun-sized terrorists', justifying their deaths by arguing that, as the children of terrorists, they would have grown up to be terrorists themselves. Thus they claimed to have prevented further acts of terrorism through their murder. And they did seem to regard the operation of the drones almost as a video game. The instructor describes how he threw one trainee off the controls after he indulged in further, unnecessary bloodshed, telling him, 'This is not a computer game!'

And behind all this is the threat that such machines will gain their independence and wipe out or enslave humanity. This is the scenario behind Dr Kevin Warwick's book, March of the Machines, which predicts that by mid-century robots will have killed the majority of humanity and enslaved the rest. A number of leading scientists have called for a halt to the development of robot soldiers. About fifteen or twenty years ago there was a mass outcry from scientists and political activists after one government announced it was going to develop fully autonomous robot soldiers.

I'm a fan of the 2000 AD strip 'ABC Warriors', about a group of robot soldiers who now fight to 'increase the peace', using their lethal skills to rid the galaxy of criminals and tyrants and protect the innocent. The robots depicted in the strip are fully conscious, intelligent machines with individual personalities and their own moral codes. The Frontiers article notes elsewhere that we're a long way from developing such sophisticated AI; one of the scientists quoted stated that he did not believe he would see it in his lifetime. On the other hand, Pat Mills, the strip's writer and creator, says in the introduction to one of the collected volumes of the strip, on the 'Volgan War', that there is a Russian robot, 'Johnny 5', which looks very much like Mechquake, the stupid, psychopathic robot bulldozer that appeared in the strip and its predecessor, 'Ro-Busters'. None of the machines under development, therefore, has the humanity and moral engagement of Hammerstein, Ro-Jaws, Mongrol, Steelhorn, Happy Shrapnel/Tubalcain, Deadlock or even Joe Pineapples. The real robotic killing machines now being developed and used by the military represent a genuine threat to political liberty and to the continuing safety of the human race, as well as a further dehumanisation of warfare.