Game over? New AI challenge to human smarts (Update)

March 8, 2016 by Mariëtte Le Roux, Pascale Mollard
Lee Se-dol has for a decade held the world crown in Go, a board game widely played for centuries in East Asia

Every two years or so, computer speed and memory capacity double—a head-spinning pace that experts say could see machines become smarter than humans within decades.

This week, one test of how far Artificial Intelligence (AI) has come will happen in Seoul: a five-day battle between man and machine for supremacy in the 3,000-year-old Chinese board game Go.

Said to be the most complex game ever designed, with an incomputable number of move options, Go requires human-like "intuition" to prevail.

"If the machine wins, it will be an important symbolic moment," AI expert Jean-Gabriel Ganascia of the Pierre and Marie Curie University in Paris told AFP.

"Until now, the game of Go has been problematic for computers as there are too many possible moves to develop an all-encompassing database of possibilities, as for chess."

Go reputedly has more possible board configurations than there are atoms in the Universe.

Mastery of the game by a computer was thought to be at least a decade away until last October, when Google's AlphaGo programme beat Europe's human champion, Fan Hui.

Google has now upped the stakes, and will put its machine through the ultimate wringer in a marathon match starting Wednesday against South Korean Lee Se-dol, who has held the world crown for a decade.

South Korean Go grandmaster Lee Se-Dol (C) with Google Deepmind head Demis Hassabis (L) and Eric Schmidt (R), the executive chairman of Google owner Alphabet, at a conference ahead of the Google DeepMind Challenge Match in Seoul on March 8, 2016

Initially confident of winning by 5-0, or 4-1 at worst, and taking home the $1 million (908,000 euro) prize money, Lee appeared less assured by Tuesday.

He told reporters in Seoul the programme seemed to work "far more efficiently" than he thought at first, and "I may not beat AlphaGo by such a large margin".

Man vs Machine

Game-playing is a crucial measure of AI progress—it shows that a machine can execute a certain "intellectual" task better than the humans who created it.

Key moments included IBM's Deep Blue defeating chess Grandmaster Garry Kasparov in 1997, and the Watson supercomputer outwitting humans in the TV quiz show Jeopardy in 2011.

But AlphaGo is different.

It is partly self-taught: after its initial programming, it honed its tactics through trial and error by playing millions of games against itself.
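The self-play idea can be sketched in miniature. The toy below is an illustration only, far simpler than AlphaGo's actual training: an invented two-move "game" in which move "a" always beats move "b", with the program playing copies of itself and reinforcing whichever move won, so the better tactic emerges by trial and error.

```python
import random

# Toy self-play loop (illustration only, not AlphaGo's method).
# The "game" is invented: move "a" always beats move "b".
weights = {"a": 1.0, "b": 1.0}  # how strongly each move is preferred


def pick(weights):
    """Sample a move in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    for move, w in weights.items():
        r -= w
        if r <= 0:
            return move
    return move  # guard against floating-point rounding


for _ in range(1000):
    first, second = pick(weights), pick(weights)
    if first != second:      # one player chose "a" and won the game
        weights["a"] += 0.1  # reinforce the winning move

# Over many self-play games, preference shifts toward the stronger move.
```

The real system replaces the weight table with deep neural networks and the invented game with full games of Go, but the feedback loop is the same shape: play yourself, then make winning choices more likely.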

IBM's Deep Blue defeated Russian chess Grandmaster Garry Kasparov in 1997

"AlphaGo is really more interesting than either Deep Blue or Watson, because the algorithms it uses are potentially more general-purpose," said Nick Bostrom of Oxford University's Future of Humanity Institute.

Creating "general", multi-purpose intelligence rather than "narrow", task-specific intelligence is the ultimate goal in AI: something resembling human reasoning based on a variety of inputs, and self-learning from experience.

"So, if the machine can do new things when needed, then it has 'true' intelligence," Bostrom's colleague Anders Sandberg told AFP.

In the case of Go, Google developers realised a more "human-like" approach would win out over brute computing power.

AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain.

It is able to predict a winner from each move, thus reducing the search base to manageable levels—something co-creator David Silver has described as "more akin to imagination".
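A rough sketch of that idea follows. This is not AlphaGo's code: the board is a toy grid, and `policy_net` and `value_net` are random placeholders standing in for the two learned networks. The point is structural: one network narrows which moves are even considered, and the other estimates the winner so the search can stop early instead of playing every game to the end.

```python
import random

SIZE = 3  # toy 3x3 board; a real Go board is 19x19


def legal_moves(board):
    return sorted(board)    # the board is just the set of free points


def play(board, move):
    return board - {move}   # occupying a point removes it from play


def policy_net(board):
    """Stand-in for the learned policy: propose a few candidate moves."""
    moves = legal_moves(board)
    random.shuffle(moves)   # placeholder for learned move preferences
    return moves[:3]        # search breadth capped, not all legal moves


def value_net(board):
    """Stand-in for the learned evaluation: a win-probability guess."""
    return random.random()


def search(board, depth):
    """Shallow lookahead: expand only policy suggestions and cut off
    with the value estimate rather than exhaustive play-out."""
    if depth == 0 or not board:
        return value_net(board)
    return max(1.0 - search(play(board, m), depth - 1)
               for m in policy_net(board))


empty = {(r, c) for r in range(SIZE) for c in range(SIZE)}
score = search(empty, depth=2)  # an estimated win probability in [0, 1]
```

With both networks as random noise this plays terribly, of course; the engineering feat was training them well enough that a narrow, shallow search beats exhaustive calculation.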

Professor Stephen Hawking is among the leading voices of caution regarding artificial intelligence

Master or servant?

What if we manage to build a truly smart machine?

For some, it means a world in which robots take care of our sick, fly and drive us around safely, stock our fridges, plan our holidays, and do hazardous jobs humans should not or will not do.

For others, it evokes apocalyptic images in which hostile machines are in charge.

Physicist Stephen Hawking is among the leading voices of caution, warning last May that smart computers may out-smart and out-manipulate humans, one day "potentially subduing us with weapons we cannot even understand."

For Sandberg, it will be up to us to build "values" into the operating system of intelligent computers.

There are more than 10 million robots in the world today, according to Bostrom—everything from rescuers, surgical assistants, home-cleaners, route-finders, lawn-mowers and factory workers to robot pets.

But while machines may beat us at checkers or maths, some experts think robots may never rival humans in some aspects of "true" intelligence.

Things like "common sense" or humour may never be reproducible, said Ganascia.

"We can imagine that in the future, ever more tasks will be executed by machines better than by humans," he said.

"But that does not mean that machines will be able to automate everything that our cognitive faculties allow us to do. In my view, this is a limitation that keeps the scientific discipline of AI in check."

For Lee, it now seems "inevitable" that AI will ultimately defeat humans at Go.

"But robots will never understand the beauty of the game the same way that we humans do," he said.


46 comments

not rated yet Mar 08, 2016
The basic theory on which one chess program can be constructed is that there exists a general characteristic of the game of chess, namely the concept of entropy. We can think about the positive logarithmic values as the measure of entropy and the negative logarithmic values as the measure of information.
https://www.acade...lligence
antigoracle
1.7 / 5 (6) Mar 08, 2016
We keep expressing concerns about computers becoming smarter, when we should really be worried by humans becoming dumber. Case in point. In the US THEY may elect a president who is a racist, sexist, narcissist... well... in short, a cyst.
TheGhostofOtto1923
4.3 / 5 (6) Mar 08, 2016
We keep expressing concerns about computers becoming smarter, when we should really be worried by humans becoming dumber. Case in point. In the US THEY may elect a president who is a racist, sexist, narcissist... well... in short, a cyst.
Well AI should help in differentiating and disseminating honest and accurate news information instead of the biased crap you've obviously been exposed to.

In the future lies will be illegal and shortly thereafter impossible.
Captain Stumpy
3 / 5 (2) Mar 08, 2016
In the future lies will be illegal and shortly thereafter impossible
What Will Georgie Do?
LFMAO

as much as i would like to believe this last part, Otto, i don't think it will happen until we've been subjugated or intentionally allow AI to rule, which may not happen given the nature of us "real" humans (- note: that "real" crack is an intentional poke re: beni-liar-kam -LOL)

Noumenon
5 / 5 (1) Mar 08, 2016
In the future lies will be illegal and shortly thereafter impossible.


I would say that if freedom of speech protection is over-turned in the future, than humanity has more pressing problems than simply lies.

Noumenon
not rated yet Mar 08, 2016
"But that does not mean that machines will be able to automate everything that our cognitive faculties allow us to do. In my view, this is a limitation that keeps the scientific discipline of AI in check."


What is required as a prerequisite to 'reproducing in essence a mind' or a A.I. equivalent, is of course an understanding of how our own mind works. In particular, consciousness, and how qualia like colour, sound, pain,... manifest from biophysical laws.

This is an unsolved problem and is not even a proper problem of A.I.,... it is a problem of the physical sciences.

krundoloss
not rated yet Mar 08, 2016
I think the goal of a true AI is a worthy one, but it seems that so many are caught up in the idea of "creating something that we don't understand". I think the best approach right now is to build up a knowledge base that is meant for machines/AI to understand, something they can use to help understand the world. Right now the internet is built for humans, but what if you built a database/network used for machines to build an understanding of the world? It would take an enormous amount of storage, but eventually a robot, with the proper senses, can look at the world as we do. They could then process their environment and start to understand. Examples: There is a chair. I am in a building at this location. The material strength of this object is X. So I can pick up the chair and move it over here. Hey other robot, it works if you do it this way. And So on. It would be easier than trying to build an autonomous mind inside one robot, why not build a robot-internet for them all?
krundoloss
not rated yet Mar 08, 2016
I know most would say "what does it mean to understand", and when I use that term, it just means there is a physical awareness, and eventually a situational awareness as well. The most helpful things robots could do for us right now is to help us in the physical world, such as in rescue missions or just getting a drink from the fridge. Working this out first would be a good step forward, as once we have a robot interacting in an environment, being able to build knowledge, then we can start incorporating learning algorithms and build upon that.
Noumenon
not rated yet Mar 08, 2016
I know most would say "what does it mean to understand", and when I use that term, it just means there is a physical awareness, and eventually a situational awareness as well.


IMO, it is inappropriate to use such loaded terms like "understanding" with reference to A.I.

The term "understanding" implies a conscious synthesis of perceptual experience.

WRT A.I., it is more appropriate, IMO, to use phrases instead like 'autonomous information processors',... without the implication of any conscious understanding.

There are many functional aspects of the brain/mind that A.I. can accurately simulate, or reproduce in essence,... but they tend to be unconsciously carried out in humans.

TheGhostofOtto1923
4 / 5 (4) Mar 08, 2016
as much as i would like to believe this last part, Otto, i don't think it will happen until we've been subjugated or intentionally allow AI to rule, which may not happen given the nature of us "real" humans...

WRT A.I., it is more appropriate, IMO, to use phrases instead like 'autonomous information processors',... without the implication of any conscious understanding
'Conscious understanding'?

Our faulty memories, faulty cognition, faulty intellects due to accrued damage and genetic deformity, constant distraction of pain, hunger, and thirst, and constant preconscious influence of the desire to survive in order to reproduce... leave us mostly unaware of why we think what we do.

Machines will be hobbled with none of these limitations. They know exactly how they reach the decisions they do, and so their decisions are dependable and repeatable.

And they will only have to weed out the bullshit and nonsense from our accrued store of knowledge once.
BrettC
not rated yet Mar 08, 2016
As for processing large quantities of data, they will be limited to flawed human input for as long as we influence their existence. Therefore they could never be perfect, as we introduce chaos to their environment.

It's relatively pointless to worry about AI causing havoc like the movies though. How could we create something useful if we model it on something so flawed as a human. Humans are subject to all kinds of chemical reactions (eg. hormones) that would be pointless to simulate in an AI as it would introduce the same inconsistent behavior as we display.
Noumenon
not rated yet Mar 08, 2016
'Conscious understanding'?


Their deterministic and functional nature may be a limitation, preventing them from experiencing conscious awareness, and thus failing in ways the mind excels.

If human conscious experience manifested merely on account of carrying out functional procedures and merely a matter of neural network dynamics, as expressed by strong-A.I,…. then the impression of "redness" and "pain" would be superfluous.

Only a detection and registering function would be needed, which would not require conscious experience at all. It could all be done "in the dark".

Why do we in fact experience "redness"? Why does the mind produce this experience? I don't mean what were the reasons for evolving that capability,… I mean why do we have conscious experience of "redness" at all,.... if the "mind" could merely be the execution of instructions or manifest merely from the dynamics of a silicon network?
Noumenon
not rated yet Mar 08, 2016
Machines will be hobbled with none of these limitations. They know exactly how they reach the decisions they do, ....


"They know", as in "understand"?

The humans outside the system who designed the A.I. machines could be said to have an understanding, to know, at least the core design, of how the machine reacts the way it does,... but I reject the notion that the machine itself can be said to have such an "understanding", ....unless those human designers themselves could answer my question about the experience of qualia,,.... "redness", "pain", etc,....

See the Chinese room argument for example.

Protoplasmix
5 / 5 (2) Mar 08, 2016
In the future lies will be illegal and shortly thereafter impossible.
I would say that if freedom of speech protection is over-turned in the future, than humanity has more pressing problems than simply lies
Fraud's already a crime pretty much. I think you'll always be free to lie; it will be 'impossible' to profit by it, or start wars by it, etc.
Jayded
1 / 5 (1) Mar 08, 2016
Can there be a thing as a truth in a subjective reality. Is the truth the aggregated mass of general perception?
Captain Stumpy
3 / 5 (2) Mar 09, 2016
'Conscious understanding'?
@otto
my quote with Nou's quote didn't make sense (especially as it was a poke at idiots like beni-liar-kam)

.

Fraud's already a crime pretty much.
@Proto
true.. maybe the issue Otto is talking about is actually more of an enforcement thing
fraud is also not always able to be prosecuted
Noumenon
not rated yet Mar 09, 2016
In the future lies will be illegal and shortly thereafter impossible.
I would say that if freedom of speech protection is over-turned in the future, than humanity has more pressing problems than simply lies
Fraud's already a crime pretty much. I think you'll always be free to lie; it will be 'impossible' to profit by it, or start wars by it, etc.


I agree that if fraud can be proven, or a lie leads to damages to another and they can quantify that, then there are consequences,..... but Otto just said "lies will be illegal" which without qualification conflicts with natural and constitutional rights.

Noumenon
not rated yet Mar 09, 2016
Can there be a thing as a truth in a subjective reality. Is the truth the aggregated mass of general perception?


Good point. Unless we understand how our minds produce a synthesis of experience for what we consider an 'understanding', .... A.I. will necessarily be left with the same conceptual artifacts as the condition for its understanding as our minds are, and certainly will be limited even more so on account of the lack of qualia.

IMO, there is a reason the mind evolved to produce qualia upon experience, which is likely related to consciousness and is the real power of the mind,... something strong-AI will be lacking if not understood first in ourselves.

antialias_physorg
4 / 5 (4) Mar 09, 2016
gg

"But robots will never understand the beauty of the game the same way that we humans do," he said.

Sort of a pointless statement. Neither will humans understand the "beauty of smell" the way dogs do (and if we ever figure out how to transfer that feeling then I see no reason why we wouldn't be able to transfer the feeling of beauty about a game to AI)

In effect he's saying "non humans will not experience stuff the way humans do". Duh.

Things like "common sense" or humour may never be reproducible

Common sense seems well with the realm of possibility for AI, since common sense is an expression of game theory. As for humor: smart people don't understand the humor of less smart people and vice versa. AI might develop their own humor which we may completely fail to understand (or even realize that it's there).

Why do people insist that the idea of creating AI must be the same as "duplicating the human mind"? It isn't, you know?
Noumenon
not rated yet Mar 09, 2016
Why do people insist that the idea of creating AI must be the same as "duplicating the human mind"? It isn't, you know?


I don't think anyone thinks otherwise.

I for one, was careful to reference the "Strong-A.I." hypothesis which states that a "programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.", that is, a thinking conscious artificial mind.

This position is prevalent enough in the A.I. industry and enthusiasts, as well as in cognitive science that it is entirely appropriate to address it,... even if most of A.I. actually only works on coffee makers and game machines.

TheGhostofOtto1923
3.7 / 5 (3) Mar 09, 2016
Their deterministic and functional nature may be a limitation, preventing them from experiencing conscious awareness, and thus failing in ways the mind excels
The brain is a machine. A flawed and poorly functioning machine.

It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.

Preening philos and priests came up with these concepts long ago because they had nothing else to prove their worth and so resorted to deception.

You think your 'mind' 'excels' (undefinable words) because you have nothing to compare it to. And because you think that declaring it 'excellent' actually makes it so.

Philos and priests are taught that authority trumps reason. Of course. It's all they got.

Go get the redbox dvd 'ex machina'. The only reason AI would want to emulate human brains would be to deceive us. For selfish purposes.
Noumenon
not rated yet Mar 09, 2016
Consciousness is not an observable phenomena? Minds don't exist? Is this what you're claiming?

Of course minds manifest ultimately from physical laws. I'm not claiming anything a priests would.


TheGhostofOtto1923
4 / 5 (4) Mar 09, 2016
@stump

I was going to add more words but then realized that I had made my point. AI will be/is far too valuable to resist. Stop lights already curb our freedom to kill ourselves. Self-driving cars are even safer.

Future gens will have an entirely different perspective on freedom. Freedom from crime, ignorance, lies, and time-wasting is preferable than the opportunity to lie, cheat, and steal that philos, priests, politicians, and psychopaths have convinced us we must preserve at all costs.

Deception was vital to the success of the wild animal but it is another trait we must surrender for the good of the tribe.

The soul is not freedom. There is no freedom in allowing ourselves to be deceived. Only science can extend our lives indefinitely and give us unlimited room in which to live them. This is freedom.

Machines have already done this for us. AI is only a matter of degree.
TheGhostofOtto1923
4 / 5 (4) Mar 09, 2016
Consciousness is not an observable phenomena? Minds don't exist? Is this what you're claiming?
Nou has nothing better to do than pick a fight.

Please cite a repeatable experiment hinting at the existence of this thing. Any scientific data whatsoever to indicate that it is real? What are it's parameters? Can it be described mathematically?

WHAT IS IT? And what makes you think it's not just an illusion created out of wishful thinking and our inability to know why we think what we do?
TheGhostofOtto1923
4 / 5 (4) Mar 09, 2016
Of course minds manifest ultimately from physical laws. I'm not claiming anything a priests would
Define 'manifest'. That would be a start. And then describe exactly what it is that 'manifests'.

Describe an experiment that would help illuminate this manifesting operation.
Captain Stumpy
3 / 5 (2) Mar 09, 2016
@stump
I was going to add more words but then realized that I had made my point
@otto
yeah, i kinda thought that was what happened
Deception ... is another trait we must surrender for the good of the tribe
for the good of the tribe...yeah-(we should SUPPRESS it)
BUT - IMHO - i disagree "getting rid of it" is for the good of the species.
if we find another intelligent life in space, it may well be aggressive and violent (like we are now) and thus we will require our own deception and violence for survival

it doesn't seem logical to breed out traits that are directly linked to our current mastery of the planet (like our survival instinct)
The only way it would disappear as a trait is if AI domesticated humans and then took over as protector/overseer/shepherd/whatever you want to name it.

IMHO -considering that option, there is then no guarantee of our survival unless we're useful or tasty
(or pretty, like me - LOL)
krundoloss
5 / 5 (1) Mar 09, 2016
Philosophy of defining consciousness aside, a machine never really "needs" to be conscious. All it needs is to be aware, and then it can be as useful as something that could be defined as conscious. I want to walk into a room, throw a ball against the wall and catch it, then ask the robot/AI "what just happened"? If it can respond with "You walked into this room, threw a round object, it bounced off the wall and you caught it at 3:15 pm today", now you have something that can be useful. Does it mean that the robot "understands"? Well, it doesn't matter, because it is aware.
Noumenon
3 / 5 (2) Mar 10, 2016
Consciousness is not an observable phenomena? Minds don't exist? Is this what you're claiming?
Nou has nothing better to do than pick a fight.


I didn't know asking for clarification in your world equated to 'picking a fight'?

Are you not the one who implied some insult about priests and philos, and at your convenience can't seem to find a dictionary on the web?

Noumenon
not rated yet Mar 10, 2016
Please cite a repeatable experiment hinting at the existence of this thing [mind, consciousness]


Through introspection it is the most immediately observable phenomena possible. Science is founded on observation, which is not possible except through observation via a mind. You're in an extreme minority to claim minds don't exist.

WHAT IS IT? And what makes you think it's not just an illusion ....


You still act as though I am claiming that consciousness mind is existent as a 'something' over and above the physical basis of the brain. I have always stated that it is an emergent phenomena.

[The term 'emergent' is ubiquitous in science. I have explained what I mean by it. It is your responsibility to seek that understanding.]

I am only stating that conscious mind is something scientifically investigable in principle and NOT that I already have that knowledge. It is an unsolved problem at present, but is an active matter of research.

Noumenon
5 / 5 (1) Mar 10, 2016
I'm just pointing out what absurdly is not already obvious in the strong-A.I. enthusiasts' sci-fi fantasy world,.... that the strong-AI hypothesis has no scientific basis,... that machines do not "think" nor are they "consciously aware" the way minds are. They only carry out instructions. That A.I. is not cognitive science nor is it neurobiology, etc.

TheGhostofOtto1923
4 / 5 (4) Mar 10, 2016
machine never really "needs" to be conscious. All it needs is to be aware
What's the difference?
Through introspection it is the most immediately observable phenomena possible. Science is founded on observation, which is not possible except through observation via a mind. You're in an extreme minority to claim minds don't exist
IOW everybody knows it exists so therefore it exists.

You do realize your arguments are exactly the same as the ones used to convince us we have souls?

I'm sorry but navel gazing does not produce reliable evidence for artificial concepts like consciousness, mind, or soul.
'emergent' is ubiquitous in science... your responsibility to seek that understanding
I did. And I showed you that the scientific defs of emergence are not the same as all the various and conflicting philo defs.

This is another example of a term you guys commandeered because it made you sound relevant and knowledgable.

You're not.
Thirteenth Doctor
5 / 5 (3) Mar 10, 2016
It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.


Very well put and I confess, I will probably use this in the future.

TheGhostofOtto1923
4 / 5 (4) Mar 10, 2016
but is an active matter of research
Your statement implies that you've already decided it exists and it's just a matter of time before science confirms it.

You can't ref any SCIENTIFIC studies on the nature of mind or consciousness because there arent any.

There are a great many on the brain, the senses, and cognition, and I've ref'ed various researchers who have stated that your terms are simply not useful in understanding these entirely physical things.

This statement;
Through introspection it is the most immediately observable phenomena possible
-places you and your fellows back in the shadow cave right alongside the neanderthals making palm prints on the walls.

It has no meaning. It is made up of many undefinable words. It is thus uninvestigatable and thus unscientific.

'I am that I am.' Why don't you deconstructing that statement?

Deconstruct - another word you philos pilfered and then stripped of meaning.
TheGhostofOtto1923
4.2 / 5 (5) Mar 10, 2016
It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.
Very well put and I confess, I will probably use this in the future
Just make sure I didn't plagiarize it before you do, 'kay?

÷)
Captain Stumpy
4 / 5 (4) Mar 10, 2016
It's so desperate to survive to reproduce that it conjure all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.
Very well put and I confess, I will probably use this in the future
Just make sure I didn't plagiarize it before you do, 'kay?

÷)
@otto
according to http://smallseoto...checker/ it is unique and all yourn... !

congrats, it is well written and i plan on using it in the future as well (and i promise to give you sole credit)
TheGhostofOtto1923
4 / 5 (4) Mar 10, 2016
They only carry out instructions
SO DO WE.

Just because we are not aware of what those instructions are, and we often make mistakes and don't know why, and we often try to deceive others that we really meant to make those mistakes because we want to maintain our accrued repro rights, etc etc etc, does not mean we are more perfect than machines.

It means we are LESS perfect.

That's why we are designing machines to replace us. We know how we ought to work.

Our personalities are the sum total of our defects, not our qualities.

Machines have no need of personalities and similarly they have no need of mind or consciousness.
TheGhostofOtto1923
4 / 5 (4) Mar 10, 2016
@otto
according to http://smallseoto...checker/ it is unique and all yourn... !
Well I'm just saying I'm not the first one to express those sentiments.

In the future there will be no politics, no poetry, no art, no music, no religion... no need for diversion whatsoever.

And most likely no humans.
Captain Stumpy
3.7 / 5 (3) Mar 10, 2016
Well I'm just saying I'm not the first one to express those sentiments
@otto
well - checked the whole post too... it's still checking but you have an 80% unique post there (until it completes its check, i can't say otherwise)

it is a good point and regardless of who may have also stated similar thoughts, the actual quote is written well and makes a great point with easy to comprehend syntax

... you know, so that even the stupid people like [insert troll name here- too many to list with a 1k char limit] can understand.

And most likely no humans
considering that we can't even all agree that bacon is tasty... i think i might have to agree with this
krundoloss
not rated yet Mar 10, 2016
machine never really "needs" to be conscious. All it needs is to be aware

What's the difference?


Awareness and consciousness are different, as awareness just means the machine can sense the world around it, and perhaps interpret those activities it senses with information it its database. It does not imply self awareness.

Consciousness implies something that thinks on its own, that is self-aware. This is difficult to define and goes into all kinds of philosophical areas.

When it comes down to it, you really only Know that you are conscious, everyone else may not be. But you know when someone is aware. Awareness is more easily defined, and thus should be more easily achieved in an AI.

The point I was trying to make is to build up enough computer-usable information so that we can create a machine that can interpret things that are going on around it. Self-Driving cars are a good example of this technology coming of age.....
TheGhostofOtto1923
3.7 / 5 (3) Mar 10, 2016
It does not imply self awareness
Uh huh. We can already design machines which are far more self-aware than humans.

Self-driving cars are already more self-aware of their environment for driving purposes than us.

Do they need to be distracted by hunger and angst and road rage? They monitor their fuel level and rate of consumption, and can instantly record and report rude and aggressive humans while still maintaining uninterrupted concentration on dozens of objects in their vicinity.

In addition they will be in constant contact with other AI neaeby, as well as traffic, accident, and weather reports. They think on their own when they decide to brake or turn or stop, or when they suggest alternate routes.

But no, they do not care what they look like or how long they will live or what their in-laws think of them. But we certainly could write these things into their programs.

We could even make them care about repro rights but that would affect the sticker price.
TheGhostofOtto1923
3.7 / 5 (3) Mar 10, 2016
Actually, performance monitoring and maintenance schedules would serve to improve future generations of AI cars just as competition among males and selectivity with females does.

And real-time feedback resulting in wireless software upgrades would be a way of 'nurture', of learning and acquiring knowledge.

So we have more analogs for 'consciousness'.
bluehigh
5 / 5 (1) Mar 10, 2016
Stay away from my bacon or there will be one less human.
EyeNStein
5 / 5 (1) Mar 10, 2016
The article below goes more into the architecture of the AI. It paints a fascinating insight into the AI emulation of the human insight and creativity and experience involved in GO.

http://www.extrem...-matters
krundoloss
not rated yet Mar 10, 2016
Here is a super-interesting article on AI, from 2014 but the info is solid:

http://www.wired....ligence/
I Have Questions
not rated yet Mar 11, 2016
The real question here is, will our computers ever get bored?
TheGhostofOtto1923
5 / 5 (1) 10 hours ago
The real question here is, will our computers ever get bored?
We can program them to get bored. Is that what you mean?
