Computer says no: Researchers believe we could communicate and debate with robots within three years

  • Researchers say robots and humans could decide on plans together - or even argue the best approach
  • Humans could even order robots to break rules - but the robots may argue back

By Eddie Wrenn



Robots and computers could soon be having meaningful conversations and even arguments with humans, potentially within the next three years.

A new research project at the University of Aberdeen will develop systems that allow humans to debate decisions with robots - opening up the possibility of human operators discussing action plans with robots and, if necessary, ordering them to break rules.

While Isaac Asimov fans might baulk at that last possibility, it opens up a world in which intelligent technology makes life easier for humans.

Talking robots: The research may allow robots such as Johnny 5, from the film Short Circuit, to become reality

For their part, the computers would be able to argue in favour of decisions or inform their operators that certain tasks are impossible.

Lead researcher Dr Wamberto Vasconcelos, from the University of Aberdeen, said the aim is to increase human trust in intelligent technology - and that early versions of the software could be available in just three years.

'Autonomous systems such as robots are an integral part of modern industry, used to carry out tasks without continuous human guidance,' he said.

 

He added: 'Employed across a variety of sectors, these systems can quickly process huge amounts of information when deciding how to act. However, in doing so, they can make mistakes which are not obvious to them or to a human.

'Evidence shows there may be mistrust when there are no provisions to help a human to understand why an autonomous system has decided to perform a specific task at a particular time and in a certain way.

THE THREE LAWS OF ROBOTICS

If robots gain a form of intelligence, and the ability to converse with humans, perhaps their creators should consider implementing the Three Laws of Robotics.

Written by science-fiction author Isaac Asimov, the laws were meant to guarantee humanity's safety from robots, but as his short stories make clear, they are not infallible.

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

'What we are creating is a new generation of autonomous systems which are able to carry out a two-way communication with humans.'

Talking computers with the ability to converse with humans have long been a mainstay of science fiction.

Examples include HAL, the deadpan-voiced computer in the film 2001: A Space Odyssey, which goes mad and sets out to murder the crew of a spaceship.

The system Dr Vasconcelos is developing will communicate with words on a computer screen rather than speech. Potential applications could include unmanned robot missions to planets or the deep sea, defence systems and exploring hostile environments such as nuclear installations.

A typical dialogue might involve a human operator asking a computer why it made a particular decision, what alternatives there might have been and why these were not followed.

'It gives the human operator an opportunity to challenge or overrule the robot’s decision,' said Dr Vasconcelos.

'You can authorise the computer system to break or bend the rules if necessary, for instance to make better use of resources or in the interests of safety.

'Ultimately, this conversation is to ensure that the system is one the human is comfortable with. But the dialogue will be a two-way thing. The supervisor might not like a particular solution but the computer might say: sorry, this is all I can do.'
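The question-and-answer loop Dr Vasconcelos describes - the system explains why it chose one action, why it rejected the alternatives, and accepts a logged operator override - can be illustrated with a short sketch. This is purely illustrative and not the Aberdeen software, whose internals are not described in the article; all names, rules and costs below are invented:

```python
# Illustrative sketch of an explainable planner: it picks an action,
# can answer "why?" and "why not the alternatives?", and accepts a
# human override that is logged for accountability.

class ExplainablePlanner:
    def __init__(self):
        # Invented candidate actions with an estimated cost and a
        # flag saying whether they pass a safety rule.
        self.options = {
            "route_a": {"cost": 5, "safe": True},
            "route_b": {"cost": 3, "safe": False},  # cheaper, but breaks a rule
        }
        self.choice = None
        self.log = []

    def decide(self):
        # Choose the cheapest action that passes the safety rule.
        safe = {k: v for k, v in self.options.items() if v["safe"]}
        self.choice = min(safe, key=lambda k: safe[k]["cost"])
        self.log.append(("decide", self.choice))
        return self.choice

    def explain(self):
        # Answer the operator's "why?" and "why not?" questions.
        reasons = [f"chose {self.choice}: lowest cost among safe options"]
        for name, opt in self.options.items():
            if name != self.choice and not opt["safe"]:
                reasons.append(f"rejected {name}: violates a safety rule")
        return reasons

    def override(self, action):
        # The operator may authorise bending the rules; every override
        # is recorded, so "there is a name to blame if something goes wrong".
        self.choice = action
        self.log.append(("override", action))
        return f"override accepted: {action}"

planner = ExplainablePlanner()
print(planner.decide())    # route_a
print(planner.explain())
print(planner.override("route_b"))
```

The dialogue here is text-based, matching the article's note that the system will communicate with words on a screen rather than speech.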

One factor that has to be taken into account is ensuring the computer’s responses do not seem threatening, rude or confrontational.

'That’s something we’re going to have to look at,' said Dr Vasconcelos. A psychologist has joined the team to help with this aspect of the research.

Conversing with robots would actually make humans more accountable, since failures could not conveniently be blamed on computer error, Dr Vasconcelos added.

'With power also comes responsibility. All these dialogues are going to be recorded, so there is a name to blame if something goes wrong. It’s a good side-effect.'

 

The comments below have not been moderated.

I'm sorry... I can't let you do that Dave...



Great! When they have finished can they next work on how to communicate and debate with teenagers?


It is better to make no predictions about strong AI. Just get to work.


This, I'm afraid, is a very misleading article. No doubt there is novelty in what this group at Aberdeen are doing - they would have had to describe it to get the 3-year grant - but this article doesn't tell us what it is. Or rather, it tells us the wrong thing.

Machines have been 'explaining themselves' and 'arguing with users' for decades. Even in the '70s and '80s, people proposed and built things like automatic medical diagnosis systems that could do 'backward chaining' - effectively listing the steps they'd used to reach the conclusions they were presenting. Many machines since then allow the user to adjust either (a) the information they give about the problem (e.g. adding information they think the machine needs to take into account), or (b) the weight given to some information over other information (using things like slider bars).

The article implies that none of this history exists - but even a brief search on the web is enough to find it. Look it up!


Doesn't rather depend who's on the 'human' side of the chat......? I could name one or two people who could probably have a meaningful conversation with a kiddie's Speak 'n Spell toy. (Maybe a coupla those people camping-out to get the new i-phone for instance?)


I doubt this very much. Maybe in 30 years but not 3 years.


I "communicate and debate" with my computer on a very regular basis: It tells me it can't do something or goes on strike, and I throw the piece of cr*ap across the bed and tell it what I think of it in no uncertain terms.


Ever asked Google a question? It finishes your sentences for you and usually gives you a list of answers that unswervingly lead you to what you are looking for. AI is here now and has surpassed us by quite some margin already. Fortunately its only motivation at the moment is to answer all our questions and to show us how to get from A to B, I hope. :/


Johnny 5 is alive. I loved that movie. Well, we can safely say artificial intelligence is getting smarter along with technology, so I'm not surprised at all. I can imagine most of us will have some form of robotics to help us in work and around the house within 20 years.


The views expressed in the contents above are those of our users and do not necessarily reflect the views of MailOnline.
