Why there won’t be a robot uprising any time soon

There’s no need to panic about the future of robotics, say Ruth Aylett and Patricia Vargas, authors of Living with Robots.

When you read the word ‘robot’, what comes into your mind?

Most people think of a ‘metal man’, a large humanoid figure with a square head, rather like the Tin Man in the film The Wizard of Oz or C-3PO in Star Wars.

But if you ask people whether they have come across a robot in their own lives, they will usually describe a robot vacuum cleaner, or lawnmower. A small, single-minded cylinder, slowly covering the ground. Less impressive, but much more useful. Successful robots keep it simple.

Robot lawnmowers and vacuum cleaners are really not amazingly intelligent. The grass-cutter needs an edging-tape to stop it wandering off the lawn. Robot vacuum cleaners can get Lego pieces or other debris jammed into their mechanisms. Or in the worst case, as at least one owner of a not-yet-toilet-trained puppy discovered, they might run over something messy and distribute it over a wide area. One reason that cleaning robots have not yet displaced human cleaners is that human cleaners are more skilled than we often assume.

In our book, Living with Robots, we explore the gap between robots of the imagination and real robots. This gap has a big impact on how we see robots in our future. The robots of the imagination will take everyone’s jobs in the next fifteen years or, even worse, enslave and then replace humanity. But those of us who work with real robots see a very different future.

In our imaginations (and fears), robots are taking over our jobs, particularly in factories. Yes, we see lots of new factories, mainly in Asia, with assembly lines entirely operated by robot arms. But these robots are fixed to the spot and very specialised; 90 per cent of the cost of such a factory goes into engineering the sort of predictable environment robot arms need to function, which is why these are purpose-built new factories.

In fact, fiddly, complex assembly is still an entirely human task, and so we will see new generations of equally specialised helper robots that can co-operate safely alongside people and make their jobs less arduous.

There will also be completely novel niche robots, especially in health applications. A good current example is PARO, a lap-sized robot seal with white fur, appealing ‘eyes’ and long eyelashes, used with dementia patients to provide extra stimulation. It responds to stroking by wriggling and making pleasurable noises, and can recognise and turn towards specific voices. Clinical studies have shown it has real benefits even though it doesn’t resemble a human.

Elderly people at home who have mobility problems often use a walker-trolley with shelves so they can make a drink in the kitchen and wheel themselves and the drink to a sitting room. This too could become a robot with a map of the home, and an ability to assist with navigation, perhaps also reminding people to take their medication or keep hydrated.

Meanwhile robot technology is being applied to stroke rehabilitation through small exoskeletons that help people carry out repetitive exercises and could also be enhanced to include motivational conversation or video games. A larger robot exoskeleton of the future might help paraplegics.

Roboticists have intensively studied human arms and legs in trying to improve robots’ rather rudimentary walking and grasping skills. Two-legged walking is tricky, not to mention power-hungry, so most robot designs stick to wheels. But this understanding of human movement is already being applied to more functional artificial limbs for amputees – only cost and availability currently limit their wider use.

Robots of the imagination don’t lose their balance, fall downstairs or smack into a wall. They also don’t run out of battery power after an hour or two. Real robots do all of these things all too often. They are often remote-controlled, wholly or in part, by a human operator using teleoperation.

Some so-called robots are wholly teleoperated. Bomb disposal ‘robots’ are a good example: the operator’s skill is far greater than anything an autonomous robot could match. Not to mention the explosive consequences of getting it wrong! When you next see an impressive robot video, you should always ask yourself whether it is being teleoperated off-camera, rather than running autonomously.

However, teleoperation, combined with a degree of local autonomy – for example avoiding obstacles – works well when robots are sent into environments too hazardous for humans. Mars rovers, underwater vehicles and search-and-rescue robots for disaster relief all combine levels of control like this.

‘But what about all the intelligent stuff?’ you may be asking. Imaginary robots can carry on open-ended conversations, reason better than humans and learn new skills in no time flat. Again the story is rather different for real robots. Part of the problem is what we mean by ‘intelligent’. If chess grandmasters are really intelligent people, surely a robot equipped with the Deep Blue software that beat Garry Kasparov way back in 1997 would also be really intelligent? But not so.

Chess is a closed world with a limited set of actions and clear rules about when to use them. This is not dissimilar in its way to a factory, engineered to support robots. The intelligence of living creatures – not just humans – lies in coping with the messiness and unpredictability of the real world. Deep Blue was a splendid generator of chess moves, but could not itself move the pieces.

We tend to underestimate how our intelligence links to our physical interaction with the world through our body and its senses. When we look at a camera image, we understand what we see: the objects in it, and also their context and meaning. A robot gets a huge set of numbers from its cameras. Its task is a little like trying to work out what an advertising billboard says while only seeing individual coloured dots.

We can add chatbot technology to a robot. But try a chatbot for any length of time and you will soon discover its limitations. Home speech-interaction systems function reasonably well as internet interfaces, but beyond question and answer, they run out of steam.

Worse, because they have no real language understanding, they can get questions badly wrong. Never rely on such systems for emergency health information! As with other robot functions, language technology works in specific areas, in niches, but often fails in dealing with the big wide world.

So maybe we are wrong to worry about real robots?

As always, what matters with technology is what we humans do with it. The imaginary could become real if free-flying drones are allowed to decide which people are terrorists and then kill them. Cruise missiles are effectively robots, currently not allowed to choose their own targets, but lethal autonomous weapons are a definite threat.

Or consider the use of robots as a substitute for human carers, rather than as a support for them. Machines cannot ‘care’. They can, and do, model emotions, but cannot feel them. They can be programmed with rules about ethical actions but they cannot be ethical. We already see automated computer decisions that are inflexible and allow organisations to dodge responsibility: ‘the computer said No’.

It is up to all of us – scientists, citizens and policy-makers – to insist that we do not use robot systems as an excuse for actions that cannot be justified in human terms.

Source: BBC