Every two years or so, computer speed and memory capacity double—a head-spinning pace that experts say could see machines become smarter than humans within decades.
This week, one test of how far Artificial Intelligence (AI) has come will happen in Seoul: a five-day battle between man and machine for supremacy in the 3,000-year-old Chinese board game Go.
Said to be the most complex game ever designed, with an incomputable number of move options, Go requires human-like "intuition" to prevail.
"If the machine wins, it will be an important symbolic moment," AI expert Jean-Gabriel Ganascia of the Pierre and Marie Curie University in Paris told AFP.
"Until now, the game of Go has been problematic for computers as there are too many possible moves to develop an all-encompassing database of possibilities, as for chess."
Go reputedly has more possible board configurations than there are atoms in the Universe.
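That claim is easy to sanity-check with rough arithmetic: a 19x19 Go board has 361 points, each of which can be empty, black or white, giving an upper bound of 3^361, or roughly 10^172 configurations—far beyond the commonly cited figure of about 10^80 atoms in the observable Universe. A quick back-of-the-envelope sketch (the atom count is an order-of-magnitude estimate, not an exact figure, and the bound ignores which positions are legal):

```python
# Upper bound on Go board configurations: 19x19 = 361 points,
# each empty, black, or white (legality of positions is ignored).
import math

points = 19 * 19                     # 361 intersections
configs = 3 ** points                # upper bound on board states
atoms_in_universe = 10 ** 80         # common order-of-magnitude estimate

print(math.log10(configs))           # about 172: configurations span ~10^172
print(configs > atoms_in_universe)   # True, by roughly 92 orders of magnitude
```

The true number of *legal* positions is smaller, but still astronomically larger than anything a move database could cover.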
Mastery of the game by a computer was thought to be at least a decade away until last October, when Google's AlphaGo programme beat Europe's human champion, Fan Hui.
Google has now upped the stakes, and will put its machine through the ultimate wringer in a marathon match starting Wednesday against South Korean Lee Se-dol, who has held the world crown for a decade.
Lee was initially confident of winning by 5-0, or 4-1 at worst, and taking home the $1 million (908,000 euro) prize money, but his confidence seemed to be waning by Tuesday.
He told reporters in Seoul the programme seemed to work "far more efficiently" than he thought at first, and "I may not beat AlphaGo by such a large margin".
Man vs Machine
Game-playing is a crucial measure of AI progress—it shows that a machine can execute a certain "intellectual" task better than the humans who created it.
Key moments included IBM's Deep Blue defeating chess Grandmaster Garry Kasparov in 1997, and the Watson supercomputer outwitting humans in the TV quiz show Jeopardy in 2011.
But AlphaGo is different.
It is partly self-taught—having played millions of games against itself after initial programming to hone its tactics through trial and error.
"AlphaGo is really more interesting than either Deep Blue or Watson, because the algorithms it uses are potentially more general-purpose," said Nick Bostrom of Oxford University's Future of Humanity Institute.
Creating "general", multi-purpose intelligence, rather than "narrow", task-specific intelligence, is the ultimate goal in AI—something resembling human reasoning based on a variety of inputs, and self-learning from experience.
"So, if the machine can do new things when needed, then it has 'true' intelligence," Bostrom's colleague Anders Sandberg told AFP.
In the case of Go, Google developers realised a more "human-like" approach would win over brute computing power.
AlphaGo uses two sets of "deep neural networks" containing millions of connections similar to neurons in the brain.
It is able to predict a winner from each move, thus reducing the search base to manageable levels—something co-creator David Silver has described as "more akin to imagination".
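The combination described here—one network proposing promising moves, another estimating who is winning so the search never expands every legal option—can be illustrated with a toy sketch. This is NOT DeepMind's implementation; `policy_net` and `value_net` are hand-written stubs standing in for trained networks, and all the names are hypothetical:

```python
# Illustrative sketch of policy/value-guided move selection, in the
# spirit of AlphaGo's design (not the real system): a policy network
# proposes a few candidate moves, and a value network scores the
# resulting positions, so most legal moves are never examined.

def policy_net(position, legal_moves):
    """Stub: return a few candidate moves with prior probabilities.
    A trained network would rank moves; here we just take the first three."""
    top = legal_moves[:3]
    return [(move, 1.0 / len(top)) for move in top]

def value_net(position):
    """Stub: estimate the probability of winning from this position.
    A trained network learns this from self-play; this is a placeholder."""
    return 0.5 + 0.01 * len(position)

def select_move(position, legal_moves):
    """Pick the candidate whose resulting position scores best."""
    best_move, best_value = None, -1.0
    for move, prior in policy_net(position, legal_moves):
        next_position = position + [move]          # toy "play the move"
        value = prior * value_net(next_position)   # prior-weighted estimate
        if value > best_value:
            best_move, best_value = move, value
    return best_move

# Toy usage: positions are lists of moves, moves are board coordinates.
print(select_move([], [(3, 3), (16, 16), (3, 16), (10, 10)]))
```

The design point is the pruning: instead of branching on all 361 intersections, the search touches only the handful of moves the policy proposes, with the value estimate replacing exhaustive lookahead.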
Master or servant?
What if we manage to build a truly smart machine?
For some, it means a world in which robots take care of our sick, fly and drive us around safely, stock our fridges, plan our holidays, and do hazardous jobs humans should not or will not do.
For others, it evokes apocalyptic images in which hostile machines are in charge.
Physicist Stephen Hawking is among the leading voices of caution, warning last May that smart computers may out-smart and out-manipulate humans, one day "potentially subduing us with weapons we cannot even understand."
For Sandberg, it will be up to us to build "values" into the operating system of intelligent computers.
There are more than 10 million robots in the world today, according to Bostrom—everything from rescuers, surgical assistants, home-cleaners, route-finders, lawn-mowers and factory workers to pets.
But while machines may beat us at Checkers or maths, some experts think robots may never rival humans in some aspects of "true" intelligence.
Things like "common sense" or humour may never be reproducible, said Ganascia.
"We can imagine that in the future, ever more tasks will be executed by machines better than by humans," he said.
"But that does not mean that machines will be able to automate everything that our cognitive faculties allow us to do. In my view, this is a limitation that keeps the scientific discipline of AI in check."
For Lee, it now seems "inevitable" that AI will ultimately defeat humans at Go.
"But robots will never understand the beauty of the game the same way that we humans do," he said.
TheGhostofOtto1923
In the future lies will be illegal and shortly thereafter impossible.
Captain Stumpy
LMFAO
as much as i would like to believe this last part, Otto, i don't think it will happen until we've been subjugated or intentionally allow AI to rule, which may not happen given the nature of us "real" humans (- note: that "real" crack is an intentional poke re: beni-liar-kam -LOL)
Noumenon
I would say that if freedom of speech protection is overturned in the future, then humanity has more pressing problems than simply lies.
Noumenon
What is required as a prerequisite to 'reproducing in essence a mind' or an A.I. equivalent, is of course an understanding of how our own mind works. In particular, consciousness, and how qualia like colour, sound, pain,... manifest from biophysical laws.
This is an unsolved problem and is not even a proper problem of A.I.,... it is a problem of the physical sciences.
Noumenon
IMO, it is inappropriate to use such loaded terms like "understanding" with reference to A.I.
The term "understanding" implies a conscious synthesis of perceptual experience.
WRT A.I., it is more appropriate, IMO, to use phrases instead like 'autonomous information processors',... without the implication of any conscious understanding.
There are many functional aspects of the brain/mind that A.I. can accurately simulate, or reproduce in essence,... but they tend to be unconsciously carried out in humans.
TheGhostofOtto1923
Our faulty memories, faulty cognition, faulty intellects due to accrued damage and genetic deformity, constant distraction of pain, hunger, and thirst, and constant preconscious influence of the desire to survive in order to reproduce... leave us mostly unaware of why we think what we do.
Machines will be hobbled with none of these limitations. They know exactly how they reach the decisions they do, and so their decisions are dependable and repeatable.
And they will only have to weed out the bullshit and nonsense from our accrued store of knowledge once.
BrettC
It's relatively pointless to worry about AI causing havoc like in the movies though. How could we create something useful if we model it on something so flawed as a human? Humans are subject to all kinds of chemical reactions (e.g. hormones) that would be pointless to simulate in an AI, as it would introduce the same inconsistent behavior as we display.
Noumenon
Their deterministic and functional nature may be a limitation, preventing them from experiencing conscious awareness, and thus failing in ways the mind excels.
If human conscious experience manifested merely from carrying out functional procedures, and were merely a matter of neural network dynamics, as strong-A.I. holds,…. then the impression of "redness" and "pain" would be superfluous.
Only a detection and registering function would be needed, which would not require conscious experience at all. It could all be done "in the dark".
Why do we in fact experience "redness"? Why does the mind produce this experience? I don't mean what were the reasons for evolving that capability,… I mean why do we have conscious experience of "redness" at all,.... if the "mind" could merely be the execution of instructions or manifest merely from the dynamics of a silicon network?
Noumenon
"They know", as in "understand"?
The humans outside the system who designed the A.I. machines could be said to have an understanding, to know, at least the core design, of how the machine reacts the way it does,... but I reject the notion that the machine itself can be said to have such an "understanding", ....unless those human designers themselves could answer my question about the experience of qualia,,.... "redness", "pain", etc,....
See the Chinese room argument for example.
Captain Stumpy
my quote with Nou's quote didn't make sense (especially as it was a poke at idiots like beni-liar-kam)
.
@Proto
true.. maybe the issue Otto is talking about is actually more of an enforcement thing
fraud is also not always able to be prosecuted
Noumenon
I agree that if fraud can be proven, or a lie leads to damages to another and they can quantify that, then there are consequences,..... but Otto just said "lies will be illegal" which without qualification conflicts with natural and constitutional rights.
Noumenon
Good point. Unless we understand how our minds produce a synthesis of experience for what we consider an 'understanding', .... A.I. will necessarily be left with the same conceptual artifacts as the condition for its understanding as our minds are, and certainly will be limited even more so on account of the lack of qualia.
IMO, there is a reason the mind evolved to produce qualia upon experience, which is likely related to consciousness and is the real power of the mind,... something strong-AI will be lacking if not understood first in ourselves.
antialias_physorg
Sort of a pointless statement. Neither will humans understand the "beauty of smell" the way dogs do (and if we ever figure out how to transfer that feeling then I see no reason why we wouldn't be able to transfer the feeling of beauty about a game to AI)
In effect he's saying "non humans will not experience stuff the way humans do". Duh.
Common sense seems well within the realm of possibility for AI, since common sense is an expression of game theory. As for humor: smart people don't understand the humor of less smart people and vice versa. AI might develop their own humor which we may completely fail to understand (or even realize that it's there).
Why do people insist that the idea of creating AI must be the same as "duplicating the human mind"? It isn't, you know?
Noumenon
I don't think anyone thinks otherwise.
I for one, was careful to reference the "Strong-A.I." hypothesis which states that a "programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.", that is, a thinking conscious artificial mind.
This position is prevalent enough among the A.I. industry and enthusiasts, as well as in cognitive science, that it is entirely appropriate to address it,... even if most of A.I. actually only works on coffee makers and game machines.
TheGhostofOtto1923
It's so desperate to survive to reproduce that it conjures all sorts of worthless illusions such as soul, mind, and consciousness in order to pretend that it is too important and clever and beautiful to die.
Preening philos and priests came up with these concepts long ago because they had nothing else to prove their worth and so resorted to deception.
You think your 'mind' 'excels' (undefinable words) because you have nothing to compare it to. And because you think that declaring it 'excellent' actually makes it so.
Philos and priests are taught that authority trumps reason. Of course. It's all they got.
Go get the redbox dvd 'ex machina'. The only reason AI would want to emulate human brains would be to deceive us. For selfish purposes.
Noumenon
Of course minds manifest ultimately from physical laws. I'm not claiming anything a priest would.
TheGhostofOtto1923
I was going to add more words but then realized that I had made my point. AI will be/is far too valuable to resist. Stop lights already curb our freedom to kill ourselves. Self-driving cars are even safer.
Future gens will have an entirely different perspective on freedom. Freedom from crime, ignorance, lies, and time-wasting is preferable to the opportunity to lie, cheat, and steal that philos, priests, politicians, and psychopaths have convinced us we must preserve at all costs.
Deception was vital to the success of the wild animal but it is another trait we must surrender for the good of the tribe.
The soul is not freedom. There is no freedom in allowing ourselves to be deceived. Only science can extend our lives indefinitely and give us unlimited room in which to live them. This is freedom.
Machines have already done this for us. AI is only a matter of degree.
TheGhostofOtto1923
Please cite a repeatable experiment hinting at the existence of this thing. Any scientific data whatsoever to indicate that it is real? What are its parameters? Can it be described mathematically?
WHAT IS IT? And what makes you think it's not just an illusion created out of wishful thinking and our inability to know why we think what we do?
TheGhostofOtto1923
Describe an experiment that would help illuminate this manifesting operation.
Captain Stumpy
yeah, i kinda thought that was what happened for the good of the tribe...yeah-(we should SUPPRESS it)
BUT - IMHO - i disagree "getting rid of it" is for the good of the species.
if we find another intelligent life in space, it may well be aggressive and violent (like we are now) and thus we will require our own deception and violence for survival
it doesn't seem logical to breed out traits that are directly linked to our current mastery of the planet (like our survival instinct)
The only way it would disappear as a trait is if AI domesticated humans and then took over as protector/overseer/shepherd/whatever you want to name it.
IMHO -considering that option, there is then no guarantee of our survival unless we're useful or tasty
(or pretty, like me - LOL)
Noumenon
I didn't know asking for clarification in your world equated to 'picking a fight'?
Are you not the one who implied some insult about priests and philos, and at your convenience can't seem to find a dictionary on the web?
Noumenon
Through introspection it is the most immediately observable phenomenon possible. Science is founded on observation, which is not possible except via a mind. You're in an extreme minority to claim minds don't exist.
You still act as though I am claiming that conscious mind exists as a 'something' over and above the physical basis of the brain. I have always stated that it is an emergent phenomenon.
[The term 'emergent' is ubiquitous in science. I have explained what I mean by it. It is your responsibility to seek that understanding.]
I am only stating that conscious mind is something scientifically investigable in principle and NOT that I already have that knowledge. It is an unsolved problem at present, but is an active matter of research.
TheGhostofOtto1923
You do realize your arguments are exactly the same as the ones used to convince us we have souls?
I'm sorry but navel gazing does not produce reliable evidence for artificial concepts like consciousness, mind, or soul. I did. And I showed you that the scientific defs of emergence are not the same as all the various and conflicting philo defs.
This is another example of a term you guys commandeered because it made you sound relevant and knowledgable.
You're not.
Thirteenth Doctor
Very well put and I confess, I will probably use this in the future.
TheGhostofOtto1923
You can't ref any SCIENTIFIC studies on the nature of mind or consciousness because there aren't any.
There are a great many on the brain, the senses, and cognition, and I've ref'ed various researchers who have stated that your terms are simply not useful in understanding these entirely physical things.
This statement places you and your fellows back in the shadow cave, right alongside the Neanderthals making palm prints on the walls.
It has no meaning. It is made up of many undefinable words. It is thus uninvestigatable and thus unscientific.
'I am that I am.' Why don't you try deconstructing that statement?
Deconstruct - another word you philos pilfered and then stripped of meaning.
TheGhostofOtto1923
÷)
Captain Stumpy
according to http://smallseoto...checker/ it is unique and all yourn... !
congrats, it is well written and i plan on using it in the future as well (and i promise to give you sole credit)
TheGhostofOtto1923
Just because we are not aware of what those instructions are, and we often make mistakes and don't know why, and we often try to deceive others that we really meant to make those mistakes because we want to maintain our accrued repro rights, etc etc etc, does not mean we are more perfect than machines.
It means we are LESS perfect.
That's why we are designing machines to replace us. We know how we ought to work.
Our personalities are the sum total of our defects, not our qualities.
Machines have no need of personalities and similarly they have no need of mind or consciousness.
TheGhostofOtto1923
In the future there will be no politics, no poetry, no art, no music, no religion... no need for diversion whatsoever.
And most likely no humans.
Captain Stumpy
well - checked the whole post too... it's still checking but you have an 80% unique post there (until it completes its check, i can't say otherwise)
it is a good point and regardless of who may have also stated similar thoughts, the actual quote is written well and makes a great point with easy to comprehend syntax
... you know, so that even the stupid people like [insert troll name here- too many to list with a 1k char limit] can understand.
considering that we can't even all agree that bacon is tasty... i think i might have to agree with this
krundoloss
Awareness and consciousness are different, as awareness just means the machine can sense the world around it, and perhaps interpret those activities it senses with information in its database. It does not imply self-awareness.
Consciousness implies something that thinks on its own, that is self-aware. This is difficult to define and goes into all kinds of philosophical areas.
When it comes down to it, you really only Know that you are conscious, everyone else may not be. But you know when someone is aware. Awareness is more easily defined, and thus should be more easily achieved in an AI.
The point I was trying to make is to build up enough computer-usable information so that we can create a machine that can interpret things that are going on around it. Self-Driving cars are a good example of this technology coming of age.....
TheGhostofOtto1923
Self-driving cars are already more self-aware of their environment for driving purposes than us.
Do they need to be distracted by hunger and angst and road rage? They monitor their fuel level and rate of consumption, and can instantly record and report rude and aggressive humans while still maintaining uninterrupted concentration on dozens of objects in their vicinity.
In addition they will be in constant contact with other AI nearby, as well as traffic, accident, and weather reports. They think on their own when they decide to brake or turn or stop, or when they suggest alternate routes.
But no, they do not care what they look like or how long they will live or what their in-laws think of them. But we certainly could write these things into their programs.
We could even make them care about repro rights but that would affect the sticker price.
TheGhostofOtto1923
And real-time feedback resulting in wireless software upgrades would be a way of 'nurture', of learning and acquiring knowledge.
So we have more analogs for 'consciousness'.