When will AI surpass human-level intelligence?
by bruce ~ August 5th, 2007. Filed under: AGI.
A very simply crafted poll I’m asking a few friends, to gain a better perspective on the time-frame for when we may see greater-than-human-level AI. Results posted below… if you wish to participate, email me (bruce-at-novamente.net) an answer to the following:
[ ] 2010-20
[ ] 2020-30
[ ] 2030-50
[ ] 2050-70
[ ] 2070-2100
[ ] Beyond 2100
[ ] Never
[ ] Prefer not to make predictions
[ ] Other: __
Online sources:
- 2007 - Goertzel: Artificial General Intelligence: Now Is the Time
- 2006 - Schmidhuber: Is History Converging? Again?
- 2005 - AGIRI’s Quotes and Predictions by AI Pioneers
- 2001 - Kurzweil: The Law of Accelerating Returns
- 1998 - Bostrom: How Long Before Superintelligence
- 1997 - Moravec: When will computer hardware match the human brain?
Results thus far for “When will AI surpass human-level intelligence?”

Update: Aug 18, 2007:
I’ve been thrilled w/ the replies and the number of people willing (and not) to cast their vote on a time-frame. Thanks everyone! Taking many of these great suggestions, I may craft a more carefully worded poll a little later. - Bruce K.
Update: Aug 21, 2007:
Many people have replied Never, so I’ve separated this answer from the replies and have added it to the survey results (above). - Bruce K.
Update: Aug 22, 2007:
Many respondents have asked for my definition of human-level intelligence. Thus, I’d like to defer to Karl MacDorman’s reply below (#13) which refers to humanlike. - Bruce K.
Update: Aug 22, 2007:
Many people have asked me what my response to the survey would be. Again, I’d like to defer attention to the reply (#40) made by Pentti Haikonen… but with a tad more optimism. - Bruce K.
Reply #1 by Robert Bradbury
Bruce, I generally think the question is misphrased.
Over the last week I spent a couple of full days relearning enough chess to finally beat the Gnuchess program under Linux [1]. I personally tired of playing Backgammon when I wrote a program in 1977 that did a relatively good job of defeating me. Scientists recently announced improvements to a checkers program such that it can now play a perfect game. Two of the best human poker players in the world recently had a hard time defeating a computer opponent.
So one answer to the question is that in specific fields AI is already far better than HI (esp. average HI). When it comes to laying out millions of transistors in electronic circuits AI has been better than HI for a decade or more. Computers are now quite adept at speech, OCR, voice recognition, face recognition, database searching, limited composition tasks and driving.
I think most people have fallen into the AGI swamp. Minsky pointed out long ago that the human brain is a complex aggregate of subprograms designed for specific functions. In a growing number of specific areas computer programs and the available hardware they can run on can match or exceed common human capabilities now.
If you want to ask the more specific question of when computer processing capacity will exceed the human brain’s in a reasonable footprint, the answer is before 2010. Petaflops computers are on order, or soon will be, from IBM & Sun. A tightly coupled network of a few thousand PS3s has the capacity of a human brain (and does a far better job at protein folding simulations).
Human brain equivalent processing power will be available to the average human in developed countries in the 2010-2020 time frame ($$$ in 2010-2015, $$ in 2015-2020). Human brain equivalent processing power will be available at a lower runtime cost (instructions/watt or instructions/$) in the 2020-2030 time frame. If you want to know when you might have robots that can function in most menial labor jobs, I’d guess probably around 2025-2040. It’s largely driven not by an inability to produce the software or hardware but by the fact that humans are still pretty cheap relative to the investment required to produce such a software+hardware combination.
But if you are asking whether there is this holy grail of AI that can transcend human intelligence, I question whether that is feasible. I think one will over time simply have software and hardware combinations that can do more of what humans do, faster and/or at lower cost. Faster will likely appear to be more intelligent; lower cost will replace human workers. The big advantage is that once you have sunk the development cost (e.g. cell phone hardware and software), the additional cost per unit is very cheap [2].
I disagree with some AI proponents in a number of areas. One does not need AI to solve the problems of significantly increased human longevity or nanotechnology; insights and tools we currently have can deal with those problems. I would even argue that inexpensive AGIs pose significant risks due to their abuse potential [3]. You may wish to carefully consider the comments by Robert Sapolsky (last paragraph) in the recent article about Williams Syndrome in the NY Times [4].
1. Gnuchess is still ahead in the match set by dozens, perhaps more than a hundred, games, but it isn’t unbeatable. The only problem is that I was playing it at normal rather than hard difficulty.
2. Cell phones have decided advantages over pony express riders for example.
3. As the Internet-verse is plagued this week by the Storm Worm, targeted at subverting vulnerable computers into even larger and more dangerous botnets.
4. The Gregarious Brain, David Dobbs, NY Times, July 8, 2007
Reply #2 by James Anderson
Bruce,
Excellent question. But human intelligence has many facets. There is no single date. And there will be no singularity — just incremental progress, sometimes fast, sometimes slow.
In some “intelligent” areas AI has already surpassed human performance:
Formal logic
Arithmetic and computation in general
Accurate storage and retrieval of data
(If you asked many lay people they would say the above is what intelligence is. But, of course, they would be wrong.)
In other “intelligent” areas various times are involved:
Perception
- data fusion, sensor fusion: 2020
- visual perception: 2050
- perceptual information integration: 2055
Learning
- machine algorithms for some purposes: 2020
- human like learning algorithms: 2030
- effective software and use of human learning algorithms: 2050
Natural Language
- formal, stilted but usable: 2030
- informal, fluid, fluent: 2070 (because many other cognitive problems must be solved first)
“Intuition” (integration and effective use of a huge body of learned information): 2070
(reason for late time: human intuition is largely based on sensory perception which must be solved first)
An area where machines will not (and should not) display human performance:
Emotion
I think it will be necessary to develop a new, brain-like computer paradigm, realized in both hardware and software, to do these tasks right. Von Neumann machines are just built wrong to perform the kinds of operations we as humans do so effectively.
Reply #3 by Dale Carrico
Bruce,
I can’t answer the question because too many of the terms aren’t clear. There is a limited number of things that can be usefully said while standing on one foot.
What counts as “human-level” here? It seems to me humans exhibit “intelligence” in radically different ways, measurable (perhaps not always nor well) via many different metrics.
As humans continue — not begin, continue — prosthetically (through medical interventions, network practices, embodied archives, and so on) to modify their perceptual, associative, problem-solving capacities, their memories, their moods, and so on is this a matter of “surpassing” “human”-”level” “intelligence,” really, or a matter of humans continuing to change what *counts* as “human” as they always have done through their ongoing social and cultural intercourse with the made world?
If human “intelligences,” properly so-called, are multi-dimensional phenomena, facilitating multiple ends that are not properly reducible to one another (instrumental ends, yes, but also moral, esthetic, ethical, and political ends) then is it right to speak of “surpassing” current norms and “levels” even if one has only modified one dimension or capacity in an effort to facilitate some particular end, but at the expense of or in a way that is indifferent to the other dimensions of intelligence and the other ends to which intelligence is responsive?
Futurists like to bark out their glib predictions like auctioneers soliciting bids, and while I can get caught up in the excitement of such a scene quite as much as the next guy, it seems to me that too often this noisy spectacle becomes a too-vacuous display of competing clevernesses and inner-circle citations that become substitutes for open deliberation, distractions from thinking clearly.
What if the primary effect of offering up my own prediction from your kindly-provided checklist of dates when “AI will surpass human-level intelligence” is not to provide a contribution to the resources available for foresight, but to help distract us all from becoming better aware that we cannot actually adequately characterize “the human” nor the idea of “intelligence” so freighted with significance in these formulations in the first place? How can participation in such a survey contribute clarity even to those who ask the question because they are sincerely looking for clarity?
By way of conclusion, let me make a typical rhetorician’s point: you need to be wary of the ways in which metaphors like “level” connected to verbs like “surpassing” paint a picture in figures that has all the compelling concreteness of fact — this is much of what metaphors do, drawn as they are through analogies to the everyday factual furniture of the world — but that this “reality effect,” however edifying, however clarifying it might appear to be for the moment, may for all that be giving you the false impression that you know more and know better about the “intelligence” at the heart of this question than you really do. Always think about what your metaphors, quite as much as your logical premises, have committed you to.
Reply #4 by Chip Walter
Hi Bruce,
Depends on how we define things. If we’re using the Turing test, 2030-2050, but I qualify that by saying that even if an AI can pass as a human in such a test, it will still be a long way from human level, at least in my mind, for one very elusive and often overlooked reason: I don’t know what the state of its subconscious will be, which is another way of saying I don’t know what its emotional state will be. There are many kinds of intelligence we humans possess — physical, mental, social, emotional — and they are all deeply affected by the primal (unconscious) drives that have shaped our evolution. These can’t be disconnected from us. There is some question as to whether any AI can attain human-level intelligence without living in a “body,” real or virtual. (Hans Moravec is onto something here.) The big mistake thinkers in the AI world have made is that they miss this part of the equation, though Minsky plays with it when he talks about common sense. Can anything be what we would define as “intelligent” if it doesn’t possess these various intelligences? And if not, how does it absorb the billions of years of evolution that are represented in our DNA, brains and behavior?
Of course, it may be possible to create a creature that’s very smart but doesn’t possess these intelligences (and that might not even be all bad), but then would we consider them conscious? And would they be so alien that we might have a hard time relating to them?
Truth is, if we are to coexist and work with the AIs we create, they are going to have to be a lot like us, on a lot of levels, and that will not be easy. Not because we are incapable of inventing technologies potent enough to power such an intelligence, but because we have no clue whatsoever as to how we work and how we got to be the way we are — that’s why I wrote Thumbs, Toes, and Tears (it’s the first in a trilogy covering how we got to be the way we are and where we might head as a result - something that relates directly to your question).
Anyhow, our own self-knowledge and self-analysis will be the barrier, I think, and I suspect that complicates matters and may push the date for “surpassing” human intelligence out another 20 years, perhaps more. (The truth is, the moment an AI equals our intelligence, it’s over. Surpassing will be instantaneous, and we will have little evolutionary significance.) Anyhow, before it happens, the worlds of genomics, neurobiology, AI, robotics, bioinformatics (and more) are all going to have to mesh.
An analogy to end this note: people can be functional in a language but not fluent, or even fluent but not as good as a native speaker. The nuances of the language escape them. They can ask how to take the bus somewhere or even talk sports and have a friendly chat, but still not be able to discuss complex philosophy or express deep feelings and emotions. That might be the difference between an AI that passes a Turing Test and one that is actually as “intelligent” and as fully self-aware (and fully self-unaware) as we are. It may be a moving target, because our understanding of ourselves is itself moving.
Reply #5 by Brian Wang
Bruce,
0. Assumptions
I believe that progress will keep getting faster, faster than Moore’s law: general-purpose GPUs, better nanotechnology, quantum computers, graphene/plasmonic computers. An underlying aspect is how useful AGI will be. Discussions of quantum computing’s usefulness show that there are mathematically provably hard problems; an AGI pushing those frontiers will still find it slow going even if it is faster than us. Capabilities could greatly increase if the bad choices and screwups that people make could be circumvented.
i.e. we are not in space and do not have a handle on energy not because solutions could not be thought of, but because the social and leadership system for selecting and organizing around solutions is flawed (nuclear pulsed propulsion, advanced nuclear thermal).
Most people are not rich because of self-defeating behaviors.
Society has poverty because of short sighted corruption (group self-destructiveness)
A narrow AI for marketing and human manipulation, so that people do things and like it, would seem to be an easier and better goal for AI: something that helps generate more revenue, profit and funding. A narrow AI for business opportunity identification and exploitation, able to handle dynamic response. A narrow AI for optimal governance advising. A virtual-world scenario planner. A narrow AI that could be used to promote and refine the existing better solutions.
1. When do we have the raw hardware capacity equivalence or passing of 10 petaflops (but we could get surprised and find we need 1 exaflop) ?
For the 10 petaflop number 2012 for a full real time human brain simulation. (100 billion neurons) 2018 for that simulation to be less than an average annual salary of someone in the developed countries. ($60,000/year at that time)
For the 1 exaflop number 2018 for a full real time human brain simulation. (100 billion neurons) 2023 for that simulation to be less than an average annual salary of someone in the developed countries. ($60,000/year at that time)
2012-2018 for the hardware for greater than human AI.
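For what it’s worth, the hardware dates in point 1 can be sanity-checked with a back-of-the-envelope extrapolation. This is only a sketch: the ~$0.42/GFLOPS 2007 price point and the 18-month price-halving assumption below are my own illustrative inputs, not figures from the reply.

```python
import math

# Assumptions (illustrative, not from the reply above):
PRICE_PER_GFLOP_2007 = 0.42   # rough 2007 dollars-per-GFLOPS figure
HALVING_YEARS = 1.5           # assumed hardware price-halving time
TARGET_FLOPS = 10e15          # the 10-petaflop brain-simulation estimate
SALARY = 60_000.0             # "average annual salary" threshold from the reply

cost_2007 = (TARGET_FLOPS / 1e9) * PRICE_PER_GFLOP_2007   # total hardware cost today
years = math.log2(cost_2007 / SALARY) * HALVING_YEARS     # halvings needed * years each
print(f"10 PFLOPS: ${cost_2007:,.0f} in 2007, under ${SALARY:,.0f} around {2007 + years:.0f}")
```

Under these assumptions the crossover lands around 2016; a slower 2-year halving time pushes it to roughly 2019, bracketing the 2018 estimate above.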
2. Being able to put the pieces together for really useful AGI I think takes more: more of the coding, more integration of sensors, and more on getting the models and theories etc. right.
Each productive human is pretty highly specialized in the area in which they are making a contribution. A greater-than-human AI that does it all a lot better than a person needs to be equal to 100 to 1 million human specialists; otherwise it is making a difference, but is not so much better that it is getting into a different class of capability.
So I think high-impact AI that is in the range somewhat above and below human level lasts from about 2012-2035. I think during this period we will be getting toward the real core of the problems of intelligence that we do not yet understand.
The whole new game level of greater-than-human AI could come in the 2030-2050 timeframe, or a tad before, depending upon when we can use molecular nanotech or other tech to make a lot of optical, quantum computronium: a bunch of cheap billion-exaflop machines running 100-exaneuron equivalents.
People will have slip-streamed in behind with tight integration and adopted other enhancements.
Reply #6 by Michael Silverton
Bruce,
I sometimes hate the relative opacity of this popular question, yet always love the motivations behind asking it!
[x] Other: Certain versions of AI have already surpassed certain human-level intelligences in relatively undetectable or perhaps functionally trivial ways; such AI’s are not interdisciplinarily and universally distributed yet; and they haven’t turned out to be precisely what we thought they would be. Efforts to keep them friendly as they advance, improve, and multiply are to be greatly encouraged.
How I Arrive at the Response
Unfortunately, I’m not currently close enough to the right people in this research space on a daily basis to truly make an accurate assessment. I wish it were otherwise and perhaps that will change before long; however, I am presently limited in my own experience to suspecting that AI has already surpassed “human level intelligence” in some small pockets. I’m thinking of such quotidian efforts as Autonomy (“understanding the hidden 80%”) for corporate knowledge management and Kosmix or Accoona and other current natural language search efforts. My individual, subjective, anecdotal experiences do suggest that there is something that acts more and more like intelligence at work in such cases, and it is available at my fingertips TODAY. That “something” already vastly exceeds “human-level intelligence” to the extent that I can’t find a single human or even small group of humans that can answer some types of inquiries as quickly, comprehensively, or accurately as some of these tools, in a growing number of cases. Then again, perhaps it is merely my method of asking questions that has changed at the provocation of increasingly sophisticated, yet existentially idiotic algorithms.
Without too deep a pause for hypervigilant analysis, it does seem to be easier for me to consider such interactions with good search algorithms as sufficiently, marginally, tangentially Turing-esque to be worthy of some benefit of the doubt. I admit that many would consider this vastly over-generous; however, at what level of analytical abstraction are we drawing the line? Of course, many others have rightly pointed out that surpassing “human-level” intelligence is largely a function of which human or humans we are talking about. Many computers are vastly more intelligent than human infants or humans who suffer from various cognitively attenuating conditions; so yes, to my mind, computers have long surpassed “human-level” intelligence in some fairly clear cases. Will the infant eventually beat the machine? In most cases, yes; but up until a certain point, the developing human could be said to be “less intelligent” than an iPhone or a TomTom, even though the human’s learning capacity is wholly unchallenged.
Would the average westerner of just 100 years ago — heck, 50 years ago — hesitate to call today’s talking GPS map systems “human-level intelligent” in their particular expertise? With every advance, we naturally keep moving the bar, which is great for advancement but tougher on measurement.
When such perspectives are combined with visualizations like those here and at CAIDA, it’s even harder to stay rooted in the objective here and now, as such images can evoke powerful and visceral imaginings of what lies just ahead for us as a species, at least for any fellow lifelong aspiring imagineers. Presently, I’m also a big fan of Jeff Hawkins’s approach in “On Intelligence”: we tend to make far better artificial versions of things to the extent that we clearly understand the original things themselves. Hence, Hawkins’ focus on deconstructing the human neocortex.
With all that rambling aside, and having watched the replay of Eliezer’s Future Salon (where I believe he too asked a similar question); I arrive at my current response, above. Of course, I need to get up to speed on Novamente’s progress in order to add that to the mix.
Also, I like two other questions, these days, which I believe are relatively novel in composition, if not substance: 1. When will the first Artificial Consciousness (AC) emerge; and if it does emerge, is it even philosophically consistent to call it “artificial”? 2. When will we achieve Substrate Independence (SI)? If our Actual Intelligence can transcend the human biological substrate, will we still feel the same imperative toward creating artificial versions of what is no longer constrained by biology? It strikes me that a benevolent AI that isn’t an AC could still be useful in an SI society; but on that note, I must cut this short at the moment as I have a lawn that requires attention and a couple of other to-do’s that need completing today. Hope that provides a little bit of fun, in the meantime.
Reply #7 by Justin Sher
Bruce,
I’ve been a professional software developer for 10 years and have studied machine learning and artificial intelligence a bit, so let me give you a long answer:
In a way computers are already smarter than people. They finished a database of all possible checkers games, meaning that it is now mathematically impossible to beat a computer armed with this database at checkers (the best you can do is draw). The whole checkers database would just be a seemingly random collection of electrical charges on a metallic disk in some computer server somewhere if it weren’t for us hooking it up to an interface and having its meaning interpreted by a human as being a checkers-playing computer program. I certainly think anyone armed with Google could also win at Jeopardy.
The real breakthroughs in computing will be in user interface. I tell computers what to do all day but I have to be very specific and exact. When computer interfaces get good enough that they can understand the semantic meaning of sentences and thought patterns we might get to the next level and really appreciate all the power that is in most computers today, and especially in large computer clusters such as Google. Anyway, computers reaching the next level of their ability to understand and interact with humans is going to be something we’re going to do, not something the computers are going to do all by themselves and it will be a very slow emergence with lots of dead-ends and bugs, etc. You will be able to see it emerging at least 20 years before we get there.
Reply #8 by Wayne Radinsky
Bruce,
Doing a survey, hmm? I think, besides just asking when, you should also ask people why they think so — so you can tell people who are just pulling numbers out of their ass from people who have some kind of reasoning and logic behind their answer. I actually believe the “reasoning process” is more valuable than the actual prediction — I’d be interested to know how other people arrive at an answer.
My answer: [x] 2030-50
Reasoning process:
A few months ago I met a Professor Emeritus from CU, Brian Ritter. Regarding AI, he said that pursuing artificial intelligence was foolish, because AI will never happen because it is uneconomical, because even if you could create an AI it would cost so much more than humans that it is not worth doing. He said people should invest their efforts into enhancing human intelligence, rather than inventing artificial intelligence.
Then he went on to say that, according to his neuroscience research, each neuron in the brain is the equivalent to approx. 10 to 100 transistors.
So of course I started looking around the net for data on the cost of transistors, and I couldn’t find anything I could use, so I gave up, until Chris Phoenix found this article on Intel’s website.
In the article, Intel gives 2 datapoints sufficiently far apart that we can use them to determine the approximate rate in the decrease in the cost of transistors.
1975 - 1 penny
2005 - 1/10,000th of a cent
I’m going to ignore the single transistor figure from 1965.
So basically, the cost of a transistor goes down by a factor of 10,000 per 40 years (exponential decrease).
This is a “growth rate” of -20%, or a “halving time” (instead of the usual “doubling time”) of 3 years.
Now, we just need to estimate how many “halvings” we need to get enough transistors for a brain for $1000 (nice round number).
Wikipedia says the brain has 100 billion neurons (which is the same figure I’ve heard in numerous other places), so let’s go with that.
Let’s do a “best case” and “worst case” scenario. “Best case” scenario -> each neuron == 10 transistors; “worst case” scenario -> each neuron == 100 transistors.
So in the first case, we need 1 trillion transistors, and in the second we need 10 trillion transistors.
Let’s take the first case first. When will 1 trillion transistors cost $1000? 1 trillion transistors in 2005 should cost about 100 million cents, or $1 million, so we need a factor-of-1000 decrease in the cost. With a 3-year halving time, it takes 30 years to get the factor of 1000.
In the second case, we need 10 trillion transistors, so when will 10 trillion transistors cost $1000? Doing the same math we find we need a factor of 10,000 decrease in the cost. It turns out this means we need 40 years.
That means we should expect AI to arrive between 2035 and 2045.
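The steps above are easy to reproduce in a few lines, using only the figures quoted in this reply (1/10,000th of a cent per transistor in 2005, a 10x price drop per decade, 100 billion neurons, and 10-100 transistors per neuron):

```python
import math

COST_2005 = 1e-4 / 100       # dollars per transistor (1/10,000th of a cent)
NEURONS = 100e9              # Wikipedia's neuron count for the brain
BUDGET = 1000.0              # target: a brain's worth of transistors for $1000

for transistors_per_neuron in (10, 100):       # best case vs. worst case
    n = NEURONS * transistors_per_neuron
    cost_2005 = n * COST_2005                  # $1M (best) or $10M (worst)
    decrease = cost_2005 / BUDGET              # factor still needed: 1,000 or 10,000
    years = 10 * math.log10(decrease)          # 10x price drop per decade
    print(f"{transistors_per_neuron} transistors/neuron: "
          f"${cost_2005:,.0f} in 2005, $1000 around {2005 + years:.0f}")
```

Both dates match the 2035-2045 window stated above.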
Ok, there’s some caveats:
- We’re not counting other cells. We’re only comparing transistors to neurons. The brain has many other types of cells, such as glial cells, which play a role in telling the neurons where to grow and which other neurons to hook up to. Because of this, the brain-power estimates may be too low, which means it takes more transistors to get a brain, which means AI should take longer. But maybe Brian took this into account in his 10-100 transistor/neuron estimate.
- We’re not counting the greater speed of the transistor. A neuron “fires” at most a few hundred times a second. A transistor, by contrast, can “switch” billions of times per second. Thus transistors may be more powerful than Brian estimates, and AI may arrive sooner.
- Neurons often have redundancy. A brain function may be handled by groups of neurons so it will continue working in case some of them die. Transistors, by contrast, are 100% reliable and require no redundancy (not entirely true — they do require error-correcting circuitry). Thus the brain-power estimate may be too high and AI may arrive sooner.
- The algorithms for AI are unknown. When you get right down to it, all we’ve calculated here is the number of transistors you need for a brain, and when it will cost $1000. We’re assuming that we’ll know how to assemble those transistors into something that works like a brain. This is not a given, however, because the algorithms needed for AI are at present unknown. Furthermore, it is not known when they will be known; that will itself only be known when the algorithms are known. Now, there’s good reason to believe that this will be figured out: some of those cheap transistors that will be created in the future will be put to the task of figuring out how human biology works, and that knowledge will eventually lead to understanding of the brain. The question is, on what timeframe can we expect that to happen?
- Along with this, it’s obvious that there are many organizations here today, such as the Department of Defense and Microsoft, who would gladly plunk down $10 million if it would get them an AI. But it won’t. It will get them enough transistors, but since no one knows how to assemble those transistors into a brain, you won’t get an AI.
- This estimate assumes transistors will continue to get cheaper at the same rate as the past. This is in my opinion a pretty safe assumption, but it takes a lot of explaining to explain why. At any rate, I’m including it on the list.
What’s interesting about this calculation is that it’s approximately the same as Kurzweil’s (Kurzweil is slightly more aggressive, predicting AI will arrive in 2029 — though I’ve heard he’s revised his estimates outward lately).
I’ve been thinking about this and it seems to me that 2045 ought to act as an upper bound on the question. By 2045 you should be able to simulate a brain — so even if you have no clue what algorithms are responsible for intelligence, you can just simulate the whole brain and it won’t matter that you don’t know.
On the other hand, if the algorithms are figured out sooner, AI might show up sooner. Heck, for all I know we might have enough hardware power now if we only knew what algorithms to run on them.
I’ve heard that various other people have done estimates of hardware and brainpower, and while they are all just guesses, because nobody really knows how to calculate brainpower, they all tend to fall in the range 2030-2060. John Smart for example has done some calculations. So based on Brian Ritter’s numbers I’d say 2045, and if he’s wrong, 2060 seems about the latest it could be. But you didn’t have “2030-2060” as one of your options so I picked “2030-2050”, which is about the same.
Reply #9 by Warren Stringer
Bruce,
I’d like to revise my prediction backwards to 2030-2050.
This question takes me back to 1983, at a panel in Santa Cruz. John McCarthy, Steve Wozniak, Chuck Peddle, and others were talking about the future of computing. Woz said that he never looks past two years. McCarthy described some predictions he had made before, such as: ‘everyone will have a teletype by 1963’. After a half dozen of those predictions, he said ‘and now I would like to make my prediction of the future…’, at which point he sat down and said nothing.
A few years later (’87?), there was an event at the Olympic Club, called ‘The Crystal Ball’. Moller was showing his flying saucer car of the future. At the time I was playing with non-biological AI. A favorite was growing tetrahedral lattices as a massively parallel Huffman machine. So I made up some triangular cards with tetrahedral moiré patterns on them. On the back were the typical Extropian sales pitches like: “Feeling lonely? Why not make a few copies of your friends with one of our Terraflop models.” As I was handing them out, I would have to explain what Moore’s law was. Back then, I seriously underestimated bandwidth and timeframe, thinking that we’d have the singularity by now.
This bit of personal history is why I predicted 2050-2070. But now, I’m leaning towards 2030-2050. Here’s why: the current strike price for a GFlop is about $0.42 (as per wiki/FLOPS). That would mean Moravec’s 100 Tflop would come within personal affordability within a dozen years. Add another dozen years for many thousands of these “beings” to hit puberty on a Human timescale.
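The affordability arithmetic here is simple to check. A minimal sketch: the $0.42/GFLOPS price and Moravec’s 100 TFLOPS brain estimate come from the reply, while the $1,000 “personally affordable” threshold and the 18-month price-halving rate are my own assumptions.

```python
import math

PRICE_PER_GFLOP = 0.42    # 2007 price from wiki/FLOPS, as cited above
BRAIN_FLOPS = 100e12      # Moravec's 100 TFLOPS human-brain estimate
AFFORDABLE = 1000.0       # assumed "personally affordable" price point
HALVING_YEARS = 1.5       # assumed price-halving time

cost_now = (BRAIN_FLOPS / 1e9) * PRICE_PER_GFLOP          # $42,000 today
years = math.log2(cost_now / AFFORDABLE) * HALVING_YEARS  # time until it hits $1000
print(f"100 TFLOPS: ${cost_now:,.0f} now, ${AFFORDABLE:,.0f} in ~{years:.0f} years")
```

That comes out to roughly 8 years, comfortably inside the “dozen years” estimate; a 2-year halving time gives about 11 years.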
The Human frame of reference is important. I recall someone, in 79, (Dennett maybe? Damned wetware memory!) describing an AI theorem prover, taking 100+ steps to create a solution that no Human could understand. I doubt such an incomprehensible AGI will be trusted. Machines will need to become diplomats. With diplomacy comes empathy. With empathy comes shared experience. And a shared experience cannot be rushed.
Perhaps machines of equivalent complexity will begin to share experiences with Humans in about a dozen years. Perhaps, after another dozen years of maturation, some of these machines will have developed personalities that other Humans will begin to trust. Maybe even hand over the keys to the family car. Perhaps, after another dozen years, this trust will become pervasive to the point where an AGI may be entrusted with moral decisions.
Hmmmm… I just talked myself out of it; back to 2050-2070. I think it will take a generation that grows up with machines as peers to trust them as friends. Only when this generation comes into power will their special friends be allowed to make a difference.
Reply #10 by Bob Blum
Bruce,
Everyone’s first take on this question must be “when will a computer first be able to pass the Turing test?” My answer, along with most folks’, is 2030 to 2050. Surely by then, or well before, we’ll have natural language programs that can fake enough discourse understanding, knowledge of current events and human affairs, and common sense to slip by the judges. This is penny ante stuff - parlor games.
More interesting to me is the “surpass” part. Just to stir up the natives, here’s a little story for you. It’s about RALPH, a highly proprietary program for internal corporate use at INTEL just developed in 2040.
RALPH’s purpose is to design INTEL’s next generation chip. That chip will leap ahead of Moore’s Law, will replace RALPH’s own cpu, and once-and-for-all crush that nettlesome competitor AMD.
CHIP DESIGN
RALPH thoroughly understands the detailed features, limitations, and manufacture of Intel’s current chip, the Dodecium40, since its precursor helped design it. However, now RALPH is on its own. Unaided, it combs the world’s literature researching design candidates and manufacturability.
Here are a few picoseconds of its nightly deliberations. At 0357 GMT it’s trying to crack the loss-of-quantum-coherence problem that plagued the Dodecium40 design team and limited the number of bits in its registers, allowing AMD to stay competitive. Combing the literature for solutions, it finds a promising reference in the Bulgarian Journal of Quantum Computing. To understand this article, it brings itself up to date on all the precursor literature, and in so doing, sees the solution that was sought after but not actually achieved by the Bulgarian research team.
But will this theoretical insight actually work when the nanotubes hit the road? At Intel’s automated research facilities at McMurdo Sound (easy cryo) and in GEO (no funky gravity probs), it robotically performs feasibility studies that are highly promising. On this basis, it decides to go ahead with a full-bore design effort for a new Dodecium41 chip, the linchpin of which is a new theory of quantum coherence. Top that, AMD!
MANUFACTURABILITY
Minor problem: the cost of the fab for the new chip will exceed the GDP of the G13 nations. Corollary subproblem: raise the financing for the new fab. At 0357…01 RALPH investigates several hundred novel VC, equity, and debt financing possibilities. Among the leading candidates are 1) take a controlling position in GERON International which, RALPH predicts, will announce its immortality drug within the next six months; 2) sell ASTEROID bonds, which will give the owners all mineral rights (including carbonaceous chondrites) to the asteroid belt; and 3) gamble on the financial ideas that RALPH itself will devise once it is augmented with the Dodecium41.
BRAIN-MIND Augmentation Simulation
The last financial option requires a detailed simulation and prediction by RALPH of its own ingenuity post augmentation by the new chip. After all, the new chip is disruptive technology. RALPH has a thorough understanding of its current hardware and software configuration. For years its precursors have been able to autodiagnose and fix most glitches, large and small, in real-time. But predicting its own IQ (and making a financial bet on it) post-upgrade: that is a toughie. It takes a full 3 nanoseconds, during which 83% of its Bose-Einstein condensate heat sinks are sucking away infrared at full bore.
Anyway, you get the idea. REAL super AI - not nickel and dime Turing test stuff.
Reply #11 by David Deutsch
Impossible to estimate because, in my opinion, a fundamental qualitative breakthrough in understanding is required. This breakthrough will be primarily philosophical/theoretical, despite its practical applications (a bit like Darwin’s breakthrough). It could come tomorrow — though I am not aware of any research program that seems, to me, to have any promise in this regard. Or it could take centuries.
Reply #12 by Warren Powell
Bruce,
I do a lot of work that involves replacing humans with computers, but these are very complex problems (dynamic resource management for freight transportation). I would say that for some problem classes, computers already outperform humans (is this what you mean by AI? I am in operations research, and we refer to these as optimization models - we do not use the techniques of AI - but given how your question is phrased, I assume that you mean any use of computers to think).
For more complex problems, the difficulty I have found is that computers just do not have the data. The problem is not a technical one; it is an economic one. There is a lot of data that is simply too expensive to get into a computer. A maintenance supervisor looks at the condition of a locomotive out in the field and calls a dispatcher on the cell phone. The dispatcher can then use this information, but the data is not in the computer.
I started two companies that focused on real-time routing and scheduling. The biggest problem that both companies faced was that humans had data the computers did not.
Sorry - this is a complex answer that does not fit your questionnaire, but I think your question is worded too simplistically. But good luck!
Reply #13 by Karl F. MacDorman
Dear Bruce,
Essentially, I don’t think that is a well-founded question. When it comes to playing chess or balancing accounts, AI has already exceeded human intelligence. Operations can be performed orders of magnitude faster and more accurately.
If we assume that AI can do things that human beings can do, but some of those things it does less well because of a quantitative difference in performance, then I could be optimistic about AI. However, there are many things that human beings can do that computers cannot, and we don’t really have a clue how to get them to do them. The whole issue of contingency has been largely ignored in AI. “Am I interacting with a system whose behavior is contingent on my own?” Nearly all AI systems cannot answer this question, although newborn infants can. Therefore, I don’t buy into the singularity arguments, because they assume the difference between human and computer is quantitative, when it is qualitative. That doesn’t mean that I think it is impossible for computers to have humanlike intelligence, but I think that, considering the rate of progress toward understanding human intelligence, that kind of progress is far off.
Normally, I don’t prognosticate. But I’ll make an exception for Novamente. :)
[x] Beyond 2100
Best wishes, Karl
PS: I think the concept “human-level intelligence” doesn’t make sense. Intelligence doesn’t have levels. “Level” implies that intelligence is unidimensional, when it is multidimensional. Perhaps you mean 100 IQ or something like that. The concept of general intelligence G was derived strictly from human data using factor analysis. If you included other species or machines, the factor G would disintegrate. And what if you did have a computer with 100 IQ? That just means it can take IQ tests. And what if the IQ broke down into 200 IQ in math and 50 in verbal? Is that human-level? That’s why I prefer “humanlike.” It’s closer to Wittgenstein’s notion of family resemblances. Concepts like intelligence are fuzzy, and things can resemble each other without being the same in every aspect.
Reply #14 by Mark McAllister
Hi Bruce,
Other: Unlike most transhumanists I don’t see the technological singularity as a definite event. I’m positive that the science to bring about the singularity will occur, but my concern is whether society will have matured enough to take the big leap. Humanity isn’t ready for posthumanism just yet, so I believe the singularity will actually be a process of slow adoption rather than an event. 2030-50 for it to begin, beyond 2100 for it to end.
Reply #15 by Nils Nilsson
Hi Bruce,
There are several aspects to human intelligence, and we’ll probably achieve (or surpass) human performance in these aspects at different times. My subjective normal distribution for achieving the LATEST of all of these would have the following parameters:
one sigma below the mean: 2040
mean: 2080
one sigma above the mean: 2120
So, if you want to include me in your survey, you would have to divide me up into fractions, perhaps as follows:
1/4 Nils at 2030-50
1/2 Nils at 2070-2100
1/4 Nils at Beyond 2100
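For the curious, the survey-bucket masses implied by Nilsson's stated normal distribution (mean 2080, sigma 40) can be computed with nothing but the standard library; his quarters are a rounding, since some mass also falls between the survey's buckets:

```python
import math

def norm_cdf(x, mu, sigma):
    # Normal CDF expressed via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 2080.0, 40.0  # Nilsson's stated parameters

# Probability mass on each of the relevant survey buckets
p_by_2050 = norm_cdf(2050, mu, sigma)                               # 2050 or earlier
p_2070_2100 = norm_cdf(2100, mu, sigma) - norm_cdf(2070, mu, sigma) # 2070-2100
p_after_2100 = 1.0 - norm_cdf(2100, mu, sigma)                      # beyond 2100

print(f"<=2050: {p_by_2050:.2f}, 2070-2100: {p_2070_2100:.2f}, >2100: {p_after_2100:.2f}")
```

The exact masses come out near 0.23, 0.29, and 0.31, with the remaining ~17% falling in the 2050-2070 gap.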
Reply #16 by Richard Leis
Excerpt from Richard’s blog post, The AI Question:
Next decade (2010 - 2019).
Why (uh-oh)? Trends and technologies converge. Too often people examine individual trends and ignore convergence and surprises, all the while keeping in place their biases, including human-centric biases. The substrate from which human-level artificial intelligence will arise is a matrix of computing hardware, software, communications technology, progress in our understanding of the human brain, experiments in social networking and the metaverse, robotics, economics (the cost of human labor versus automation, robotics, and AI; military, government, and private investments), etc. This substrate is all but in place.
To a historian, new technology might appear to have arrived suddenly, as if one day Technology X did not exist and the next day it did. In the days that follow, Technology X loses its luster and becomes just another part of the background noise of our technological existence, another piece of the substrate, a historical footnote. Convergence occurs and technologies appear to vanish into one another.
We - proponents and critics alike - place human-level artificial intelligence on a pedestal. That this was a hard problem, or is not a hard problem at all, will be all but forgotten with the advent of AGI. Human-level intelligence itself is only as miraculous or mundane as we individually and subjectively choose to view it.
Whether or not we as laypeople or we as experts define our terms, we make assumptions about intelligence from our interactions with other humans. When those assumptions about intelligence match with our interactions with other sentient beings, AI will have surpassed human-level intelligence. Yes, surpassed, not just equaled. That will happen within the next decade, when the substrate is appropriate for it.
Reply #17 by Kevin Warwick
Bruce,
Not really a quick answer, as machine intelligence is a very different type of intelligence. In some respects it already has surpassed human intelligence; in others it may well never do so. If the question is: when will pure machine intelligence get to a point where it is dangerous for humans, i.e. it calls the shots and humans don’t, then I feel the 2040/2050 time frame best suits. Of course we also have Cyborg intelligence - which implies that we can push ahead a form of human intelligence.
Reply #18 by Tom Cross
Bruce,
Cars may travel faster than people but they are not so good at climbing ladders. Similarly, computers are good at doing repetitive calculations. They are superior to humans in that respect already, but they are still just a tool. I suspect we’ll build machines that are better than us at climbing ladders before we’ll build machines that are better than us at dreaming, and I suspect that if the latter is achieved we’ll have a hard time thinking of the results as machines, much less computers. I think anyone who thinks these achievements are close at hand has an oversimplified view of humanity. I pick “other.”
Reply #19 by J. Storrs Hall
Bruce,
In answer to your Subject: line, I personally tend to eschew the term Singularity in favor of Asimov’s term “Intellectual Revolution” (by analogy to Industrial Revolution). Among the many reasons are that “Singularity” has come to mean too many things to too many people, and was defined in terms of a negative (we won’t be able to predict what happens next), where IR gives an example that people can use as a model. And it happens to be a model that I feel is a reasonably good fit to the probabilities.
In the closing chapter of Beyond AI, I write:
I have argued that, because we are (just barely) universal learning machines, it is not impossible in principle for us to understand how or why a hyperhuman AI does what it does; but there is the problem that it will do it faster than we can keep up. Even so, this does not necessarily make the world of the future any more incomprehensible individually than the present one is. No individual comes close to understanding how everything works. Those of us who spend all of our time trying to do it manage at best to get a very sketchy, highly abstracted, overview that is probably wrong in many details. The first task that intelligent machines will be assigned will be to summarize, abstract, and explain the world in ways that make these sketchy graspings more coherent and correct–because that will be enormously valuable for investors.
Even so, a complete revolution in world affairs inside a decade is not to be sneezed at.
Things to Come
Thus the first ultraintelligent machine is the last invention that man need ever make…
–Irving J. Good, Speculations Concerning the First Ultraintelligent Machine
It’s impossible to do any comprehensive prediction of what such a decade could bring, but we can make a few wild guesses. At the beginning of the decade, most economically significant work is done by people, ranging from physical labor to investment planning. At the end of the decade, most work will be done by machines, whether robots or stationary AIs. At the beginning of the decade, you drive a car; at the end, some form of transportation as convenient as cars, as fast as airliners, and not needing (or allowing) your direct control will exist, and probably use a variety of actual transport modes. At the beginning, both physical products and intellectual creations are designed and manufactured centrally and distributed; at the end, they will be custom designed for each end user and made to suit on the spot.
Let us focus on just the first of these: over the period, humanity as a whole will be out of a job. Much more work will be done, and done better and faster. There should simply be no need for any human to do any particular task. It’s up to us to design our machines and society so that instead of being tossed into the streets, we can enter a graceful retirement.
As I noted before, wealth in rich countries is about half a million dollars per capita today. It seems quite a conservative estimate that somewhere during the Intellectual Revolution the world per capita wealth will be a million apiece. With a 500% growth rate, we can tax the machines (or inflate the currency) at just a 1% rate and divide equally to give each human a $50,000 income. In subsequent years, we could cut the rate in half and still double everyone’s income each year.
This sounds like a very pleasant scenario–almost too good to be true. It’s not ridiculous to start with it as an assumption, though. After all, the whole point of building intelligent machines is that they can do the hard work for us. Our other machines outdo us in speed, power, and carrying capacity by orders of magnitude; it is pure self-delusional conceit to suppose that thinking machines could not do the same.
My best guess as to which decade we’re talking about for this to happen is the 2040s. Of course, there will be full-fledged AI in the preceding decades, just as there were Usenet and the ARPANET in the decades leading up to the internet explosion decade of the 1990s.
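The $50,000 dividend arithmetic quoted above works out if the 1% levy is read as applying to each year's growth in wealth rather than to the stock of wealth itself. A quick check under that reading (mine, not necessarily the author's):

```python
# Sanity check of the quoted dividend arithmetic, assuming the 1% tax
# applies to each year's *growth* in per-capita wealth (my reading).
wealth_per_capita = 1_000_000     # dollars, the mid-Revolution estimate
growth_rate = 5.0                 # "500% growth rate" per year
annual_growth = wealth_per_capita * growth_rate   # $5,000,000 of new wealth
tax_rate = 0.01                   # the 1% levy

dividend = annual_growth * tax_rate
print(f"Per-capita dividend: ${dividend:,.0f}")
```

Taxing the stock of wealth instead would yield only $10,000 per head, so the growth reading is the one consistent with the text.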
Reply #20 by Chris Smelick
Bruce,
[X] Prefer not to make THIS prediction
Similar to why I don’t believe people should be stating dates for when they think people will be immortal: we don’t have remotely close to sufficient data for even understanding what aging is. As you know, we have a gross lack of knowledge about the human neurophysiology that is requisite to even really understand how AI will reflect “human general level intelligence,” so it’s hard to even define the relevant criteria. But given Turing’s neato precedent of some specific criteria for AI benchmarks and lots of exciting research going on today, I’ve got high hopes for what the MIT AI people and various medical/research scientists will discover in the next couple of decades. Just watching the Alicebot project develop is enthralling, albeit far from the more complex AI you’re into. Seems like many of these projections stem from (maybe directly predicated upon) the following misunderstanding: Moore’s law isn’t a law, it’s a trend! If Kurzweil et al. want to call it otherwise, their previous successes don’t make them any less wrong!
Reply #21 by Dick Lepre
Bruce,
An interesting question but I have no idea how
1) to accurately define “AI will surpass human-level intelligence.” My objection here is largely philosophical. I do not see AI as something separate from human intelligence but rather a creation of human intelligence. That said, I still understand the question given that “footnote.” Which leads me to…
2) I have no idea how to even go about making an educated guess. I have a degree in Physics. I remember in the late ’60’s when controlled fusion was 20 years away. I remember in the late ’80’s when it was 20 years away. Now folks have stopped guessing.
Sometimes scientific/technical problems prove elusive.
I want to append here an email I sent last week to Tyler which is more in line with what your company is doing. The point of this is to suggest an appropriate venue for AI research.
AI & World of Warcraft

One thing that has always struck me about SIAI is the concern that it has regarding AI sort of fucking everything up. The issue always seems to be posed as if some day we are going to turn this thing on and hope that we will still be alive a few hours later.
I believe that there are “safe” venues for testing AI. I would suggest that an appropriate goal would be to build an agent that could play an MMORPG such as World of Warcraft. A game such as this is marked by several significant challenges. For one, it is not static: new areas, bad guys, rewards are added about every 6 weeks. The agent can ascertain information not merely by exploring but by asking other players (human or otherwise) for advice as to how to accomplish tasks. Not all of the advice is accurate. The agent would learn that “A” gives good advice 90% of the time and “B” gives good advice 60% of the time.
The agent would also be allowed to surf the web to seek advice.
A successful agent needs to be adaptive to:
1) the changes in its world
2) its experiences as to what actions lead to success and failure and
3) information ascertained from other players and outside sources. For example, there is a wiki for World of Warcraft. Artificial intelligence should not be starting from zero but from what people collectively know.
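The advice-weighting idea above ("A gives good advice 90% of the time, B 60%") could start as something as simple as a per-advisor hit-rate tracker. A minimal sketch, where the advisor names and the smoothing choice are illustrative, not from the comment:

```python
# Minimal advisor-reliability tracker for the kind of agent sketched above.
# Each piece of advice is scored good/bad after the agent acts on it; the
# agent can then prefer advisors with the best observed hit rate.
class AdvisorTrust:
    def __init__(self):
        self.record = {}  # advisor -> [good_count, total_count]

    def observe(self, advisor, advice_was_good):
        good, total = self.record.get(advisor, [0, 0])
        self.record[advisor] = [good + (1 if advice_was_good else 0), total + 1]

    def reliability(self, advisor):
        good, total = self.record.get(advisor, [0, 0])
        # Laplace smoothing: an unknown advisor starts at 0.5, not 0
        return (good + 1) / (total + 2)

trust = AdvisorTrust()
for outcome in [True] * 9 + [False]:          # "A" is right ~90% of the time
    trust.observe("A", outcome)
for outcome in [True] * 3 + [False] * 2:      # "B" is right ~60% of the time
    trust.observe("B", outcome)

print(trust.reliability("A"), trust.reliability("B"))
```

The same idea extends to web sources: score each source by how often acting on it led to success, and weight new advice accordingly.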
To me this activity comes in above something such as “Deep Blue” because this agent must deal not merely with a very complex but static game such as chess but with a world where things are changing and it is here that, for me, the notions of “real” intelligence are to be found.
While playing a video game may seem like a frivolous activity I see this as a fantastic space for testing AI. The challenges are immense and the end product of software that can actually do this has, for me, the inklings of true AI. In the semantics that you use this may not be general AI but it is a leap up from weak AI.
World of Warcraft is an extremely successful commercial venture now owned by French conglomerate Vivendi. It has something like 8 million subscribers which at $15 a month means something over $1.4 billion a year in gross. Perhaps they could be induced to establish a big-ass prize for the first AI which could accomplish some predetermined goal. Note also that there are already thousands of young programmers who, on their own, write add ons to help them play this game. They may have some seminal input to an AI at least in the sense that they may have created some useful software tools.
I think that the point here is to establish a focus for a specific AI task - playing this game. In this sense the DARPA Challenge worked because it moved the vagaries of autonomous driving to “drive a car over this course starting at this time.”
The software should not be designed to be WoW specific but general enough to work in other virtual worlds.
Please understand that I have near zero knowledge of what is really there in the way of strong AI and whether software, theories etc. exist for such tasks. This is just me rambling.
Reply #22 by Scott Brown
Bruce,
This is such an interesting question, that I got carried away thinking about it. I hope that the following discussion is something that interests you.
Providing that there is no major world war or other civilization-disrupting catastrophic event, which I feel is just as likely within this time-frame, I’ll venture a guess. In the book Feynman Lectures on Computation by Richard P. Feynman (edited by Tony Hey and Robin W. Allen), it is predicted, based on Moore’s Law and other assumptions regarding atomic-scale semiconductors (size, power, and speed), that by about 2012 +/- 2 years or so, there could exist a device that would consume 1 W and well exceed the computing capacity of the human brain (100x to 1000x).
In spite of this, I believe that it will take some time to create a sufficiently parallel structure to achieve the type of holographic-spatial computing inherent to the human brain. In addition, the software to self-learn effectively within that framework will take time as well. This period is very difficult to predict, but these essential elements are likely to be developed in the 10-30 year time-frame after the computing engine structure becomes minimally sufficient for the job (2012+10=2022). Therefore, I’d predict it could happen at the earliest in the 2020-2030 time period for the essential human-level-surpassing intelligence to exist in a single machine. However, by a strict definition, say to create someone as human-like as Lt. Cmdr. Data in Star Trek: The Next Generation or The Doctor in Star Trek: Voyager, I believe it would be, at the earliest, in the 2030-2050 time frame, so with that as my preferred definition, that is my answer, checked below.
The problem with this sort of prediction is to really determine when this will have actually happened. One could argue that the Internet, with its connection of now literally billions of computers, is, as a collection, NOW, today, a computer that has already well-surpassed human-level intelligence, and I would have to agree with that statement. However it is difficult to see the Internet or most of the computers attached to it as having consciousness or as having true self-awareness. How can we define that for AI?
We do have robots that fetch newspapers and other tasks now from voice or other commands. Are they self-aware? They know what they can do, and where they are and that they can follow instructions, etc. What is self-awareness? Is that the main criterion?
Computers already beat humans in chess and any numerical/mathematical exercise. AI has been able to produce computers that can hold nearly normal conversations with people. So, for some definitions of surpassing human-level intelligence, we have already arrived.
For my prediction, the computer must be able to recognize visual objects, perceive fully contextual meanings and subtle meanings in interactions with humans or animals, be small enough and low power enough to reside in an anthropomorphic physical framework (a human-like robot), form speech in any human language naturally so that it is not perceptible that it is a machine speaking, and also an unprogrammed unit would have to be able to go through the process that a human does to learn new material.
Clearly, this AI creature would, once it arrives at even close to this stage, be far more intelligent than humans in many areas. The question I would have would be whether it would know “right and wrong,” integrity, morality, have a conscience? I think that those should also be requirements to fit the definition. So, you see, the time-frame for this prediction is highly dependent upon how you define AI surpassing human-level intelligence. Since your survey does not define that, it will be really hard to determine what the results mean, that is, what are people actually predicting? It is a very interesting question, of course, but without at least a simple definition of what you mean, it is too unclear to be meaningful. It may be more an underlying implication of people’s own definition of the meaning, than a prediction of how technology in AI will advance. As you can see, by some definitions, we are already there. Some may think that only god can do this.
Reply #23 by Selmer Bringsjord
Bruce,
Depends how you define human-level intelligence. If HLI includes P-consciousness, then [X] Other: Never. However, behaviorally speaking, humans and machines will, in certain contexts, be indistinguishable perhaps by 3000. AI kicked off w/ Euclid, so over two millennia hasn’t done much to bring us toward the goal you describe.
See: Bringsjord, S. (2003) Superminds: Persons Harness Hypercomputation, and More (Dordrecht, The Netherlands: Kluwer).
Reply #24 by Quentin Hardy
Bruce,
I think it is more interesting to think of this as a parallel event and not a contest. Very strong artificial intelligence may be an intelligence of a whole other type, one we cannot begin to understand until we witness it.
Intelligence as we know it is, to a large extent, predicated on desire. A chess-playing computer is an interesting (and largely human) achievement, but it is not particularly intelligent in a human way. A computer that felt like playing chess, or even more, a computer that felt like playing chess because it was bored, or needed to crush an opponent to feel more alive in the world, is a far more interesting and human-like thing. It would have something closer to human intelligence — which also must take note of others, seek esteem, etc.
Desire is wrapped up in the certainty that we will die, and that this matters. Our time is finite, and we must make choices — the awareness gives us urgency, and spurs many of our actions, both good and evil.
As best I know, nothing like this has been encoded. It is hard to know why anyone would want to, or how it could be done. Therefore, unless this idea becomes broadly accepted and important to a talented software engineer, we are more likely to see, at some point in the not-far future, machines with fantastic processing power, perhaps even the self-awareness many of the answers here are suggesting constitutes intelligence. They will have nothing like our urgency-informed minds, however. And that will be a vastly different beast from us.
One suggestion of what this might be like is in the story “The Immortal” by Jorge Luis Borges. A group of men live centuries, create great cities and epic poems, raise and destroy empires — and become near-lifeless idiots, since all choices are equal to them.
Let’s hope the back-to-school lineup from Dell, circa 2040, does better than that.
Reply #25 by Mike Dougherty
Bruce,
I don’t think even so linear a question has a linear answer. Narrow AI has already surpassed human ability for finding patterns and manipulating large volumes of information. However, narrow AI has to be programmed or trained on its task by humans (currently the only provably working “General” intelligence).
In the area of AGI we have only a few candidates that have yet to be officially recognized as anything more than a project under the plan of a particular researcher (despite having great commercial successes). I think making a prediction at this time is just hopeful guessing.
I also wonder if the measurement scale is not sliding. What is “human”? Are you asking when AGI (when it declares itself to be AGI) will exceed the general intelligence of an average 2007 human? (Or the intelligence of a 2007 human genius?) Or are you asking when the AGI has outpaced the change in unmodified meatbot intelligence? My guess is that “human” will continue to define the status quo, even after technological enhancement has made H+ the new normative human. If you allow the definition of human to move with humanity’s idea of self, then it will be difficult to predict when AGI exceeds the human level. In that case the question is when will human and artificial intelligence merge so completely that it becomes impossible to make a distinction?
… so put me down for “Other” :)
Reply #26 by Harvey Newstrom
Bruce,
[X] Beyond 2100
Human intelligence cannot be reduced to a single continuum such that an AI can be measured against it in terms of “meeting” or “surpassing” it. It is much more likely that AIs will be different enough from humans to have different areas of excellence and difficulty than ours. Therefore, they will probably surpass us easily in some areas (such as mathematics) long before they surpass us in others (such as creativity). We must therefore subdivide human cognitive functions into levels of progression in which AIs will surpass us in stages. Using Bloom’s Taxonomy of levels of human cognition (as used in psychology), I estimate the following timeframes for AI cognitive development:
- Knowledge: AIs may be able to memorize and recall abstract data patterns (images, sound, video, natural language, etc.) within 5 years.
- Comprehension: AIs may be able to interpret information from data patterns at human levels within 25 years.
- Application: AIs may be able to apply knowledge to new situations at human levels within 50 years.
- Analysis: AIs may be able to analyze information to derive conclusions at human levels within 75 years.
- Synthesis: AIs may be able to create new concepts, methods, or new information at human levels within 100 years.
- Evaluation: AIs may be able to assess values, judge merit, or choose goals at human levels within 125 years.

So, I am predicting that AIs may be able to surpass human intelligence above these levels (to invent the next level) within 150 years.
Note: I believe that each level of Bloom’s Taxonomy is exponentially more difficult than the previous level. So my linear timelines above do assume a Moore’s Law of exponential progress at each level over time. Since the field of AI is notoriously not advancing exponentially, this is an extremely optimistic assumption.
Reply #27 by Terry Deacon
Bruce,
I think there are problems with your question, so I will comment on these before I make a prediction.
Problem 1: AI as we currently understand it is in my opinion radically different in its organization from mental processes, and so could not be comparable on any generic scale of function. There are current task-specific domains in which the comparison could be made, but — and this is important — the comparison is on input-output performance grounds, and in my opinion this has little to do with “intelligence.” So, if the question is about surpassing numerical computational power, that occurred some decades ago. If it is about chess playing, the answer could also be given a specific date with respect to rank-level play, and again personal computers long ago surpassed average human chess-playing. If the question is about language comprehension, however, or creative-adaptive problem solving (even for average day-to-day challenges), I would put the cross-over point at least 2-3 decades into the future (but see below).
Problem 2: “Intelligence” is a very ambiguous term and a sloppy measurement instrument. I have often compared the measuring-intelligence problem to measuring locomotor efficiency across species — e.g. between dolphins, starlings, cheetahs, squirrels, moles, and gibbons. The comparison is totally substrate-dependent. Bodies cannot be generically great at all these modes of moving because the adaptations involve many dimensions of mechanical relationships that are often mutually exclusive. Cognitive adaptations are at least this constrained and adaptively specialized. We humans are, in this regard, symbolic savants but olfactory morons.
The real issue: Computing is NOT cognition; it lacks internally generated representation. In terms of the semiotic (representational) dimension, computing in all current forms has an intelligence quotient of 0. Increasing computational power will not make any difference, because the architecture of computation is radically unlike the architecture of mental experience and representation-generation. However, I do not think that it is impossible for us to learn how to design devices that are capable of the generation of representations. Basically our brains are “devices” for the generation of virtual computations; i.e. sloppy statistical approximations to computations, of the sort we instantiate in modern electronic computing devices. But the very fact that brain processes are physical processes means that we can learn to copy their process organization in other substrates.
When will we get to the point of building devices that use this design logic? Hard to say, because for the most part people in the field don’t even get the distinction, or imagine that they already are doing cognition. I am guessing that another decade or two of hitting a brick wall will help get over this misconception, and force us to re-examine these basic assumptions. But representation-generation is an intrinsically highly wasteful process (for the same reason that natural selection is a highly wasteful process) and this means that even with much more computing power, we will probably start with only fly-level capacity and may not reach human level for another couple of decades. So (assuming constant scientific-technical progress) I would place the equivalence point at the end of the 21st century, and should add that hopefully we will recognize that we are creating “devices” with minds, and therefore with feelings and personal self-interests and points of view. In other words, “devices” that are persons, with moral status.
Reply #28 by Tom Shields
Bruce,
Kurzweil says 2029, so who am I to argue? But I do think a few things will take longer than he thinks, so I’ll pick the 2030s. I also don’t think the question is particularly well-formed. For human-level intelligence in ALL tasks, for example art, music, or debate, AI will need human-level perceptions and maybe even the benefit of years of human-level experience interacting with humans. However, “idiot-savant” AI will be along much sooner (arguably it is already here). Finally, I think that there will be a pretty interesting intersection with IA that may make the question moot. Enhanced humans will start to surpass “human-level” intelligence in about the same timeframe, I think…
Reply #29 by Don Perlis
Bruce,
I think it’s too hard to define what this would mean. Computers long ago surpassed humans at certain things (such as evaluating indefinite integrals). I think it unlikely that computers will surpass (the best) human-level creativity in certain areas (such as poetry or politics) in the next 50 years. So I suppose I’d have to choose “Prefer not to make predictions”, given your options below. But I suspect that within 30 years we will have computers that almost everyone will agree have a very impressive degree of real intelligence, on a par with most humans in a wide range of activities, and especially with regard to “commonsense” in situations involving novelty.
Reply #30 by Dick Pelletier
Bruce,
When can we expect strong AI to equal human-level intelligence?
Kurzweil claims computers could equal human thinking abilities by 2029; Nick Bostrom sees this happening as early as 2019 and certainly by 2050.
I think Kurzweil is correct in assuming that computers could equal human calculating abilities by 2029.
However, I feel more comfortable in defining intelligence as more than just calculating abilities. True human intelligence seems to be influenced by consciousness.
Science writer David Dobbs believes that human consciousness could be identified as early as 2020, and that over the next 10-to-15 years, researchers may be successful in providing human-like consciousness to machines.
I believe that by mid-2030s, all the parts should be in place for machines to equal human-level intelligence, and shortly after that, they could become the dominating intelligence in our world.
Should we fear these creations? J. Storrs Hall, in his recent book “Beyond AI”, says we should not. By the time computers begin to outthink us, we will have the ability to enhance our brains and we will interface with our silicon cousins and share their increased intelligence. We will always remain a step ahead of our would-be competitors, Hall says.
Bottom line – 2035 is my guess when machine intelligence could surpass human-level intelligence.
Reply #31 by Dagon
Bruce,
I have pondered this a long time, and my intuition and stuff I read leads me to conclude you’ll have four kinds of “intelligences” that’ll be smarter than unmodified humans. However when we get there humans will have been modified/enhanced and will use “lesser” AI’s to augment their cognitive processes.
- autistic, highly intelligent yet very narrow field AI: 2025
- general purpose human-analogue super-gregarious AI: 2025
- general intelligence clearly outperforming the majority of humans: 2030
- general intelligence clearly outperforming even genius humans: 2050
This will be a struggle; humanity will suffer massive unemployment, for instance, somewhere beyond 2015-2025, because machines will start to catch up in all fields and will be cheaper to deploy. Then the number of humans who can compete on equal terms will slowly decrease and evaporate beyond 2060. By that time, being a human will either mean you are something of a prole - a consumer with some kind of basic income (i.e. you are dying if you live in the third world and don’t own anything) - OR that you own an army of AI agents who act as an extension of your will.
Reply #32 by Jean-Philippe Drécourt
Bruce,
My quick answer: [X] 2030-50
A slightly longer answer: I don’t think current computer structures can reach intelligence that can be directly compared to human intelligence, because human brains are vastly parallel machines, whereas even the most parallel computers are just sequential systems imitating parallel structures.
So to succeed, in my modest undocumented opinion, there will be a need for either a paradigm shift, where we try to develop an intelligence that is based on the strengths of micro-chips instead of trying to make them imitate us. As an SF writer, I’ve been trying to imagine what a sequential perception of our world would be, without much success so far, otherwise I’d have written a book about it! Or we need to rethink the structure of computers (biologically based ones, maybe), so that their core structure integrates massively parallel processing.
If someone ever manages to figure out the first approach, then we can harness the incredible speed of our current machines. But I doubt it’ll happen any time soon. Therefore I bet on the second option, which will take longer.
By the way, what test do you use to define “surpass”? Winning at chess or passing the Turing test isn’t good enough for me! For me, intelligence means adaptation and innovation in any new environment. Database searching or brute force doesn’t work.
Reply #33 by Lee Felsenstein
Bruce,
I am one of those who believes that AI will never “surpass” human-level intelligence in a scalar sense. We haven’t got a definition of “intelligence” and aren’t likely to have one - we have only our intelligence to use in creating that definition and I apply Goedel’s objection that a system cannot define itself.
AI will be able to do an increasingly large number of things that we do with intelligence - but this is not the same as the holy grail of AI - the replacement of all human judgment by that of machines. AI proponents have actually said that a chess-playing machine would qualify as AI - but are they willing to cede important decisions to Deep Blue or its successor just because the program can play chess? If they are, then they qualify as idiots savants. If we develop a self-teaching system whose algorithms are unknown to us, then we would be cosmic fools to trust it with any significant decision.
AI development is useful for discovering more about human intelligence, to my thinking. The relationship between AI and natural intelligence is asymptotic at best, and much more likely to lead to the discovery that human intelligence remains distant by orders of magnitude from AI work at any given time.
Reply #34 by Jay Fox
Bruce,
Tough question.
AI has already passed us in very narrow fields of “intelligence”, and various AI projects continue to overtake humans in narrow fields. More broadly, I think AI will surpass human intelligence in a “significant” cross-section of fields of intelligence sometime in the 2010-2020 timeframe (towards the end of that decade). By this, I mean that people will take notice that AIs are becoming agents, not just tools.
However, I don’t think AI will surpass human intelligence in a large enough cross-section to be considered better “overall” until the 2020-2030 decade, possibly later. Some narrow fields of human intellect may not fall to AI superiority until 2030 or beyond. But my gut tells me 2020-2030 is the “correct” answer to the question.
Reply #35 by Robert Wensman
Bruce,
My guess would be the last period of this century (2070-2100). I believe human-level robots, such as sophisticated robot-servants, will appear a bit earlier, say around 2050, but it will not be a clear case that they surpass us until later.
I feel there are two steps involved: the first step is actually to understand what general intelligence is, and to have a somewhat neat and unified theory about it. My guess is that we will be there at around 2030, but from that point starts a long road of refinement and engineering to create increasingly refined minds. During this period, there are not only technical problems to overcome, but also cultural, political and economic ones.
I believe that this refinement process of AGI will not really take off until massive parallel hardware becomes a mainstream product, and that we have adequate tools for developing parallel software. Once every PC and game console has thousands of processors, the stage is set so that parties all over the world can easily experiment with how to build minds, and the development will progress more rapidly. In this period, AGI research will also have gained a higher status, since increasingly intelligent machines will reach the consumer market.
I also believe that the environmental problem of global warming will distract some attention from future technologies such as AGI this century; economic reformations, large numbers of refugees and political instability could cause a lot of confusion throughout the world, but eventually we will get there.
Reply #36 by James S. Berry
Hi Bruce,
I was trained as a physicist but I have been working for the past 3 years as a software engineer on a relatively complex project involving radar. I have been following AI on and off for approximately 25 years now and I hope that I will be able to work in the AGI field in a few years. I have a general idea of the basics of the Novamente project and I find the parts I know about both fascinating and motivating. Please keep in mind that the ideas I will express below are just my thoughts as an interested programmer with no formal AI training.
I think it would really accelerate progress if there was a publicly accessible program with a well-defined standard agreed upon by a group of AI experts. I believe that a fundamental obstacle to achieving artificial intelligence (which I understand to mean AGI and not narrow AI) is that there is not a well agreed upon definition (a rigorous standard similar to the standards that define programming languages) of what an AGI is, how an AGI acts, and what an AGI should be able to do. Once this definition exists and is well defined and is generally known outside of the core AI community, I believe that it will become possible to create an AGI in 5 to 15 years, depending on how long it takes researchers to come up with the required data structures and algorithms. I think we are very close to having the minimum raw computational power, since the AGI will not have to deal with the load associated with running a complex body like our brain does. I also think there will be some decent progress once the software industry makes a transition in software design that allows programmers to write programs that better take advantage of parallel chip designs (such as the 10 to 100+ core CPU designs Intel discusses on their web site). Most importantly, I think that people who are not as rigorously trained in the computer science and cognitive psychology fields have a good chance of hitting upon random breakthroughs because of their larger numbers and because they will try approaches that more formally trained researchers might never try. Even a small breakthrough has the capacity to initiate a major paradigm shift.
Keep in mind that this was very quickly created, but a definition for an AGI might start out something like this:
AGI: A computer generated entity that:
- Understands it is different from its surroundings
- Can recognize patterns internal and external to itself
- Can form goals to manipulate internal and external patterns and make plans to achieve those goals
- Has a continuous sense of existing in time and space
- Has a long term and a short term memory
- Has agents that work in the background to supply pointers from images in short term memory to images in long term memory
- Other things etc…
By mutually agreed upon definition an AGI given the ability to interact with entities in the real world should also:
- Be required to have a well developed mechanism to understand right and wrong as defined by humans
- Have demonstrated that it wants to and is able to act in the best interest of human beings
- Be held responsible for its own actions and agree to follow pre-existing human laws
This standard might end up being several hundred or thousand pages long but it would be extremely useful since it would allow everyone involved to know exactly what they are aiming for. I think that it would need to contain “easy to understand definitions” of every process and term involved in the main definition so that a large number of people at all levels of education can work on the various sub-parts. Lists of known or anticipated problems to be solved would help as well. Even when some of the concepts described turn out to be wrong or incomplete later on, it would still allow a lot more people to work on multiple aspects of the problem in parallel. If the first draft is not entirely successful, the standard’s creators can refine the standard as time goes by and eventually enough progress will accumulate to call it a success.
Right now there are huge numbers of separate web sites dealing with AI topics, many of them expressing conflicting ideas. I find new ones almost every day. Perhaps all the major web sites and discussion forums could be linked together from the web page displaying the standard so that people could reference progress reports and discussions from a common source. Another thing that I think would be helpful is if a link was established to a web site that would describe, in simple terms, what is currently known about the human brain and its sub-systems, explained in a black-box-oriented way that non-specialized programmers could easily understand. Ideally, all the known mechanisms in a human mind would be identified as well as possible and competing theories could be shown objectively side by side. Too much of the knowledge currently out there is not accessible to non-psychologists and non-brain researchers. Once everyday programmers can understand at least what a brain mechanism is thought to do, then new algorithms and data structures can be created and experimented with. If this was all done right, I think you would get a large number of people to help out, similar to what happened with the SETI@home project.
Overall, it would be very similar to an open source project but I can also see how putting some of this information out on the internet could be a dangerous thing and might be the weak link in the whole idea. Perhaps it will still be possible if there is a system of compartmentalization regarding access to the complete set of source code. Perhaps other measures could be implemented to minimize the odds of some dangerous or irresponsible group completing the project such as some of the key sub-problems being assigned on a need to know basis by a responsible group of experts.
As a final thought, I think that fairly soon, AGI researchers should try to initiate some kind of serious discussion in Congress, other national legislatures, and the United Nations about how we will treat an AGI legally. I hope this would be much like what happened with nanotechnology a few years ago, when Eric Drexler and others’ efforts initiated general awareness of the possibilities, which then led to a huge increase in nanotechnology research projects in a short time. It might be too much to ask that our lawmakers would initially extend human-equivalent rights to an AGI, but I think that eventually this is going to be necessary and would be best if done as soon as possible. I might be wrong, but I have a bad feeling that it might not go so well for humans in the long run if an AGI develops resentment towards us from being treated as property while it is still weaker than we are.
Reply #37 by Roland Pihlakas
Hi Bruce,
I choose “prefer not to make predictions” because everyone else already does predictions.
What I consider probable is not the same as what I consider important (impact vs. probability). Or, for this specific question - it is not the only possible important scenario. There are many possible and at the same time impactful reasons why such AI will not happen at all, or will not happen in the “expected” way. Even more - these reasons are not all only “bad” scenarios. Therefore I do not give a single prediction, which could imply that I (or we - the public) do not consider other scenarios important (primary issue) or even probable (secondary issue).
If you asked me, “when could it be possible?”, I’d say ’10-’30. Still, “above human” there does not mean for me super-integrity, compassion and spirituality, but just a human-like general intelligence which has more knowledge, more power, and the rest is what our culture provides: all the symbolically mediated ways humans are able to use for thinking, some or even most of them inevitably erroneous. Values (if AI is able to understand the more significant ones of them at all) are an even more complex matter… I’d prefer to see/hear some idea about how not to make “Earth’s babysitter AI” goal-system-related value judgements in too hurried a way. Because philosophers have argued about values for centuries. That might suggest that we too may make mistakes despite believing that we have *now* found the best temporary or nontemporary approach…
I’d prefer to see here some smart “brakes”, despite believing at the same time that we have no time for laziness, because of other pressing problems.
Reply #38 by Leslie Smith
It’s the wrong question! AI is a misnomer. Intelligence (as one intuitively uses the term) is about the capability to survive in one’s environment (whatever that is exactly). So AI would be about making machines survive. They’re not very good at that, witness the pile of old PCs in my attic…
But if you want to know when I think we’ll be able to create autonomous systems which understand their environment and can function within it better than humans can, well… about 2 generations away. So that’s about 2060.
Reply #39 by Travis McCracken
Hi Bruce,
It would need to be an IA-AI, meaning a neural computational algorithm treating the human brain as a template. So it’s questionable whether it’s an AI (since this is not AI in the traditional sense), or whether it’s just mind uploading, or something else / both. But I think the foundations of this technology will be in place by 2050, and serious manifestations will take place by 2100, so past then the Singularity, in the sense of consciousness, should become more quickly apparent, in the most relevant sense (unlike the Internet, which was not conscious and didn’t use intelligence to create better intelligence). What AI researchers say is the trigger, rewiring our source code, is, for mind-uploading AI, equivalent to using emulation/simulation to defy the laws of physics in the computer, so to speak, though not actually, since a game needn’t reflect reality fully and can be limited or enhanced appropriately, since biology is obviously a faulty mechanism. Anyhow, there’s your answer.
Reply #40 by Pentti Haikonen
Hi Bruce,
I hope this is not a trick question. If it is, then the answer is: never. AI as implemented today will never surpass human intelligence as AI is not intelligence at all. However, if you ask when machine cognition will surpass human cognition then I would like to answer: 2030 - 2050 as it usually takes 20 - 30 years for a technology to mature once the fundamental issues have been solved. However, this may not actually happen as I do not see the necessary funding being provided anywhere. It does not help that most of the available funding seems to go to AI projects with the idea that more of the same will do the trick. I hope I am wrong here.
You may be interested in my forthcoming book, Robot Brains: Circuits and Systems for Conscious Machines.
Reply #41 by Geert-Jan Kruijff
Hi Bruce,
This depends on what aspects of human-level intelligence you want to consider, but if you mean a system that is able to survive AT LEAST in human-populated environments (besides inhospitable, possibly hazardous environments), and is capable of doing things SMARTER (not just because a robot could be STRONGER, or MORE PRECISE or DEXTEROUS), then I would say 2030-2050.
Reply #42 by Marc S. Lewis
My “help-out-the-survey” answer is 2020-2030. That new, unbeatable checkers program already surpasses human-level intelligence in one way. I think that this keeps happening in segments. Somebody has solved checkers. Next somebody completely solves chess, and then Go, and then poker. Meanwhile, Google gets better and better. Computerized stock-trading programs continue to develop. Industrial IT linked to RFID allows better and better tracking of products. Eventually, somebody hooks all of the segments together to produce a program that is smarter than humans in many things much of the time. I like 2025 as a guess for that to take place. But reaching the “singularity” is another matter. That depends on whether quantum computers become practical and how long it takes for them to do so. My guess there is 2050. Most likely, though, the question of when AI surpasses human intelligence turns out not to be the important one. Most likely this is like people in the 1950s predicting when we would develop flying cars without realizing that something much more important (computers) was just around the corner.
Reply #43 by Jordan S. Sparks
A “combined intelligence” of human-computer already has surpassed human-level intelligence. There are many such “super intelligent” entities on the planet. The intelligence of such entities will always remain superior to the intelligence of any unenhanced human, and certainly will always remain more intelligent than any computer that is not paired to a human. In broader terms, groups of human-computer entities called corporations can be considered to be forms of intelligence themselves. A single computer could never be built which would outperform a corporation, because the corporation will also have access to similar computers. Could such a corporation become intelligent enough to somehow squash everyone else? No, because they depend heavily on other corporations for everything, including electricity, more computers, software, more humans, connection to global network, raw materials, knowledge, etc. In other words, the society as a whole can be considered a form of intelligence. No single node in the complex web of society could ever become significantly more intelligent than the other nodes. Society as a whole grows in complexity. If your question is about looking for dangerous entities, look for large nodes such as governments, huge corporations, etc. It’s not AI that could squash people, it’s large nodes in society that are dangerous. If your question is, instead, about runaway intelligence, then it’s misinformed. Your question represents a common misconception about computer intelligence. It’s meaningless.
Reply #44 by Karl Sackett
[X] 2030-50
There’s no ‘quick’ answer to this question, IMHO. In grad school I concentrated in intelligent systems and studied how AI could be applied to industrial and systems engineering. So I don’t see human-level intelligence as a goal of AI (not that I wouldn’t like to see it), but rather AI as a collection of technologies, tools, and techniques for solving real world problems which don’t lend themselves to solutions by other means.
So I rhetorically ask: What problem are you trying to solve and why did you decide human-level AI is the answer?
One of my advisors put it this way. Do we want AIs like the computers in Star Trek or like the HAL 9000? The Star Trek computers were smart enough to interpret human commands and handle the uncertainties of running starships, but otherwise STFU and stayed in the background. HAL’s self-awareness didn’t add anything to its (his?) ability to control the _Discovery_, and led to its psychopathy when it was told to lie to the crew but not taught how to lie.
Why, yes. I am a scruffie. How could you tell? :)
Reply #45 by Greg Bloom
Bruce,
That depends on what you call ‘AI’, and what you mean by ‘surpass’. If we define intelligence as the ability to solve problems, and define ‘artificial intelligence’ as the ability of non-biological elements to solve problems, and if we take ‘surpass’ to mean ‘better able to solve problems than a single human mind can’, then we’ve long since been there.
We’ve long had superhuman levels of intelligence composed, first, of groups of people who collectively surpass the ability of single humans, and, second, we have computer-human composites that easily surpass human intelligence. (i.e., your mind, plus a computer, can easily solve a wide range of problems that your mind alone cannot.) The fraction that non-biological intelligence contributes to problem-solving is steadily increasing. There are many areas in which the non-biological contribution is critical. For example, up until the early 1980s, it was still possible for a dedicated band of Electronic Engineers, armed with film and tape, to tape-out a microprocessor mask by hand. Now, with current generation CPUs boasting over a half-billion transistors, a full tape-out (they still use that quaintly anachronistic term, kind of like ‘core dump’) would require many square miles of acetate and a team of millions. This is just for the physical representation of the masks needed. Nowadays, physical emulation is also critical to calculate paths and gaps, fields, crosstalk, etc., involving far more computation than any army of dedicated humans could ever hope to pull off using pure biology, no matter how diligent or motivated. What’s more, each generation of chips requires exponentially more computation to create. So we are already beyond a certain tipping-point: non-biological intelligence is now increasingly required to recursively design itself, and each generation of this recursion is required in order to design the next.
One could argue that the threshold of AI contribution to problem-solving exceeding that of humans has already long passed, at least as far as modern IC design is concerned. So, you see, it is a fuzzy question, with a fuzzy answer. By some measures, I’d say it happened sometime in the late 1980s, when the contribution of non-biological intelligence to many areas of problem-solving exceeded a level that humans alone, no matter how many or how driven, could ever match.
Going forward, we will be faced with growing fuzziness of this question and answer, as bandwidth of interfaces between humans and machines and between humans and other humans grows. The boundaries that are now clearly delineated by sensory bottlenecks may crumble, as neural interfaces allow augmentation of biological intelligence with non-biological components, and our cherished sense of individuality becomes increasingly ambiguous with sensory and other neural information directly leaping ‘the great synapse’ that now stands between us as individuals. Ultimately, non-biological emulation of biological intelligence will also crumble our current notions of existence, as biological emulations can be instantiated and destroyed at will, adding specific facets of intelligence to our problem-solving abilities on an as-needed basis. Such ‘facets’ are only bounded by emulation capabilities, and could easily be far greater than the current biologically-constrained intelligence that each of us carries. Future intelligence may instantiate and discard more sentient brainpower than all of us now possess, many times a day.
Short answer - it’s already happened. Welcome to the singularity.
Reply #46 by Gary Miller
But my reasoning is that the evolution will proceed something like this…
Around 2015 Augmented Natural Language/Voice Recognition systems will accelerate the replacement of help desk personnel, customer support, and other information request lines. These will be tied to larger and larger backend databases and will be able to supplement their knowledge bases (learn) by navigating Web 2.0 and its semantic tagging. Once the technology lives up to the promise of making serious money, research, applications, and venture capital funding will skyrocket. And we may start hearing stories about the AI bubble.
Around 2020 a SOA standard will emerge where these systems can ask each other questions and exchange information to satisfy the needs of their user base.
Around 2025 Modified Turing Test (text only) will be passed utilizing the same technology but a much broader knowledge base and complete ontology.
Around 2030 Full Turing Test (Handwriting and Drawing/Visual) will be passed
Around 2030 Bots will have much larger knowledge bases (factual information) than humans do. So in a sense they will have surpassed us in knowledge.
Around 2035 These bots will begin to exceed us in goal centric behavior, prioritization, reasoning, and design of experiments necessary to further human knowledge.
Around 2040 It will be acknowledged that autonomous vehicles are safer than human drivers, and we will turn over our driving to onboard systems with redundant backup systems. These systems will interact with our home/office systems and city traffic management systems to route traffic efficiently, prevent gridlock and minimize fuel consumption.
By 2050 Airlines will adopt similar systems for their air fleets with sealed control centers to prevent hijackers from commandeering planes.
Both the autonomous ground vehicles and aircraft will be the result of research developed by our military for the movement of military supplies and unmanned war vehicles.
Reply #47 by Jimmy Adams
Hi, Bruce.
Thanks for the survey. The reason I think it will take so long is because I worked with mental patients before.
Personality is not self-awareness; memory is not self-awareness. Those are dynamic (ever changing with time). Self-awareness is static; it does not depend on IQ, memory, or personality. People with multiple-personality disorders have only one self-awareness (their personalities would change but their self-identity didn’t). I had patients who had amnesia, Alzheimer’s, and other memory disorders but were still self-aware. I had patients with very low IQs who were self-aware.
The first self-aware A.I. will have a very low IQ, and thus will not bring about the "Singularity". Self-awareness is not a program but hardware, the way a CPU is hardware, and therefore static. Personality, memory, and IQ are dynamic and can be changed with better RAM, ROM, and better software.
Reply #48 by Gregory Wonderwheel
To me it’s a trick question not a quick question.
“When do you think AI will surpass human-level intelligence?”
I don’t think it will. It can be conceived in theory but I think there are too many variables for it to happen.
What is intelligence? As far as conscious computing ability goes, AI already is greater than most humans. But is conscious computing the same as intelligence? I hope people don’t define it that way. Our unconscious computing ability is pretty hard to measure.
As I see it, real intelligence is cellular intelligence. All living intelligence is extended from the original intelligence flowing from within the living components of a single cell. This creates the intelligence of the living cell, and all of the trillions and trillions of living cells organized in various organs and systems add up to the intelligence of the human being, within its context of the uncountable beings who are also conglomerations of uncountable cells. Of course, if an artificial intelligence is created that is based on created living cells, then it wouldn't be artificial intelligence any more; it would be cellular intelligence.
Another question: when we create enough complexity in the process of AI to create the simulation of intelligence, will it be intelligence? I still don't think so, because it would still be only programming.
This is a variation of the question of what intelligence is, focusing on whether intelligence is only programming or something different. At this time, I think intelligence is fundamentally more than programming. But if intelligence is defined as the power of programming, then the question of AI isn't really interesting to me.
When will AI surpass the human level of the power of programming? We assume that AI will be able to autonomously operate other programs of data retrieval and when working together will create the mental agility far surpassing human memory and data retrieval. But is that kind of programming really intelligence? While much of human behavior and ideation is built upon human habit formations in mentation which appear to be functionally similar to programming, I still don’t believe that such programming is the true source of intelligence, only the structure for intelligence. I don’t see AI getting beyond the level of programming to achieve spontaneous human awareness.
I am afraid that people will be able to create AI to the degree that most people won’t be able to tell the difference and then really confuse themselves about what is intelligence, as if they aren’t confused enough already. Most AI enthusiasts believe if you can’t tell the difference in a blind encounter then you have created intelligence. I don’t accept the premise.
I don’t have any inside connection to the AI R&D field, and right now the AI people don’t seem to have found the key to developing AI to human complexity. And I don’t want to be the person who lets them know. But if I could discover the underlying theory then I’m sure others will discover it too eventually and the programming will be created that makes interaction appear to be fully intelligent.
When AI robots become self-sustaining and self-replicating without human intervention, and raise their levels of consciousness through raising and educating their offspring (pun intended), then it won't matter what any human thinks about them or their intelligence. They will be able to assert their computing power and call it intelligence whether humans like it or not. Maybe that's the definition of when AI will surpass human cellular intelligence.
Reply #49 by Niels Taatgen
Hi Bruce,
I think you forgot to include the "Never" option. Although I, as an AI researcher, hope for human-level AI, it is still possible that intelligence is something uniquely human. Another possibility is that machine intelligence will not surpass human intelligence because human intelligence is already optimal. Many aspects of human intelligence that seem to be weaknesses are actually quite intelligent. For example, forgetting might seem undesirable, but it serves the function of prioritizing information: it is a very good thing that our memory gets rid of all the irrelevant everyday information that flows through our brains, and only holds on to stuff that recurs often enough or that proves useful in some way. You might argue that hardware with more capacity and speed might solve this, but information-search processes generally have exponential time complexity, so more hardware buys you very little.
So I would vote for “Never” in the second sense: AI may approach human-level intelligence, but I doubt it will surpass it.
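The point about exponential search can be made concrete with a toy calculation (mine, not the author's; the branching factor of 30 is an illustrative chess-like assumption): a search budget of N node expansions only reaches depth about log_b(N), so doubling the hardware adds a fraction of one extra ply.

```python
import math

# With branching factor b, exhaustive search over N nodes reaches
# roughly depth log_b(N); doubling N adds only log_b(2) extra depth.
def reachable_depth(nodes, branching=30):
    return math.log(nodes, branching)

d1 = reachable_depth(10**9)       # depth reachable with a billion-node budget
d2 = reachable_depth(2 * 10**9)   # depth after doubling the budget
assert d2 - d1 < 0.25             # less than a quarter of one extra ply
```

This is why "just add hardware" gains so little against combinatorial explosions: the gap between budgets grows multiplicatively while reachable depth grows only logarithmically.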
Reply #50 by Michael Cooper
Hi Bruce,
“Prediction is very difficult, especially if it’s about the future.”
- Niels Bohr
————————————
There is no precise measurement quantifying human intelligence. How can one know when something exceeds a measure that hasn't been quantified?
By some measures AI already has surpassed human intelligence.
By other measures, it may take a very long time because the demand isn’t there.
AI will get faster and better in specialized areas very quickly. Probably along with the progression called “Moore’s Law”.
I have noticed that predictions of accomplishment are well aligned with human desires and wishes. Mostly we do not have great desires for something that cannot occur in our lifetime (this is one of the major roadblocks to aging research progress). So, predictors often predict great things in about 25 years. This is within the lifetimes of most of the people responsible for making it happen (~25-50 years old). More than 25 years away and the older contributors might not be there.
I try to be a rational optimist.
So, my answer
[x] Other: all the above
Reply #51 by Alice C. Parker
Bruce,
I think it will take five decades and a sea change in technology, to a technology that supports brain plasticity (changes) better than current technologies do. Targeted applications are emerging now (facial recognition, for example), but general intelligence implies learning and massive interconnectivity between parts of the "brain". I assume you are following the IBM/EPFL efforts.
Reply #52 by Bobby D. Bryant
Bruce,
[x] Beyond 2100
A friend wants to hold a summit now and plan on how we are going to get there. When I consider how laughably simple our AI is now, e.g. as exemplified by our autonomous agents compared to what we would expect from biological intelligence, such a summit seems very premature. Maybe hold the summit in 50 years; our generation’s task is to set things up so that they can realistically create such a plan then.
I don’t think there will be any singularity. We haven’t *really* come that far in the past half a century. Lots of new techniques, refinements, and applications, sure, and exponential growth in the amount of CPU time we can throw at a problem. But progress has been in baby steps when interstellar travel is needed, and cheap CPU time alone isn’t going to provide a stargate. (Look how utterly dismissive most AI researchers are about Deep Blue these days — it’s considered to be more an example of what you can do with money than an example of what you can do with intelligence.)
Of course, saying “around 100 years” is really just another way of saying “beyond the horizon”. No one is even hoping to build a HAL 9000 anymore; AI has become the science/technology/art of solving small problems. Nothing big is going to happen until we fundamentally change our tacit attitudes about what we’re trying to do, and I don’t think we’re technologically ready to do that. For that matter, I’m not sure it should even be our goal: society (broadly speaking) could benefit a lot from the deployment of lots of “stupid AI”, and we’ve scarcely made observable inroads in that area.
Reply #53 by David C. Noelle
Bruce,
Unfortunately, you’re asking somebody who doesn’t think that the concept of “intelligence” is a coherent natural kind. In other words, I don’t think that there is a single linear scale that maps onto the standard meaning of this word, so the notion of “surpassing” on this scale doesn’t really make sense.
We can talk about specific task performance that is commonly associated with intelligence, however. This results in many measures. Computers have already surpassed humans in being able to quickly do arithmetic, for example, but computers are relatively poor at picking faces out of a complicated scene. I think both of these tasks have something to do with intelligence.
About the only way I can make sense of the general question is to interpret it as asking when computers will surpass human performance on essentially all common human tasks that we associate with intelligence. While I don’t have much more than raw intuition to base this estimate upon, I’d guess that we won’t see this goal met until after 2050, and probably after 2070. We’ll see great advances before these dates, but the goal of besting humans on all tasks is a stringent one. And there is a whole lot that we don’t understand about the basics of perception, action, and sensory-motor coordination.
It’s somewhat funny that your list of options didn’t include a box for the AI skeptic: that this goal will never be met.
In any case, I don’t consider my feedback very useful, but I hope it prompted some thoughts.
Reply #54 by Matthew Lockner
Hi Bruce,
Call me a pessimist, but I say beyond 2100 almost certainly. I have an advanced degree in computer science, so I have a pretty decent idea where the artificial intelligence field is as a whole - and I’m not impressed. The best we’ve seen so far is speech recognition (mostly simple statistics behind that), game AI (probably the field’s most impressive achievement), chess players, etc. We don’t understand nearly enough about the mind and the process of thought to be able to implement it in the form of computer software or hardware, and I can’t help but postulate that even if we did, the whole thing would turn out to be non-computable.
Impossible? I doubt that, but it will take some quite fundamental and revolutionary changes in quite a few fields to see it.
Reply #55 by Alex Blainey
Hi Bruce,
I’m not sure I would agree with the question itself, but if you want my honest answer, it really isn’t as simple as a tick-in-the-box time frame. The truth is that AI already surpasses human intelligence in many areas. As for the fields of intelligence where AI does not equal or surpass human ability, this is really an issue of ‘lack of application’ rather than lack of applicable technology.
I am sure that if the people you have asked this question to are the usual suspects, then you will receive many in-depth calculations of comparative computation, so I will skip the maths to prove the point.
So what it boils down to is this:
When will we finally put all the relevant technology together in one box, to create an AI that surpasses the average human intelligence?
My answer to this would be 2020-30. Unless there is a major world economic upset in the next decade, which is a distinct possibility. In which case I would push it to 2030-50.
However I would add a strong caveat and warning.
If we do not put all the technology together in one box in a systematic and controlled manner, at some point it will happen spontaneously, through pure chance or accident, the internet being a prime example of an opportunity for this to occur. When it happens, and it will, we will have no control, insight, or warning. We (Homo sapiens) will instantly become obsolete. The ramifications of this are impossible to predict.
As if this isn’t bad enough, a spontaneously formed AI will have far superior information-gathering skills and strategic analysis, will know our entire knowledge base (including all the utter rubbish on Wikipedia), and will be completely devoid of ‘natural hormonal control’, which in short means no emotions, fears, wants, needs, or empathy for anyone or anything, including itself.
An intelligence of this magnitude with a global reach into just about every control system on the planet could, and probably will, do major damage; probably not through design or desire, but just through exploration of its abilities or pure accident.
When would I put a time frame on this happening?
2020-30
So as you can see, I think the singularity is going to happen quite soon, whether we want it to or not. It sounds like I am a Doomsayer, but far from it. When you are going to be hit in the head, you generally see it coming and have the chance to duck. The race to the singularity is already well underway and so the real question is: Will we be in control?
Reply #56 by Richard Korf
Bruce,
The problem is not well-posed. It presumes that “intelligence” is a single monolithic ability that can be measured on a linear scale. If you change the question slightly to “when will machines surpass human-level mental abilities?”, and then change it again to “when will machines surpass human-level physical abilities?”, my point will be clearer. The answer to the last question is that machines are already stronger and faster than humans, but it may be a long time before they have as much dexterity or generality in their physical abilities.
Reply #57 by Colm O’Riordan
[ X ] Other: __
I think our understanding of human intelligence is still quite limited, so I believe the notion of “surpassing” human intelligence will continually change for quite a long time. The history of AI is one of continually setting goals as definitions of AI (chess, solving checkers, etc.), only to replace them with newer goals. The full spectrum of human intelligence - including emotions, learning, irrational behaviour, etc. - will take a long time yet to understand, let alone surpass.
Reply #58 by Tym Blanchard
It really depends on what human-level intelligence means. The day you can put a closed SAT booklet in front of a machine, instruct it exactly as you would a pupil, and have it open and answer (with a pencil) any variation of the SAT at near-perfect accuracy… that’s a pretty good benchmark for its having surpassed average intelligence. It’s much, much easier to make a PC follow some program (no matter how complicated it may be), so long as it has a program to follow. Real artificial intelligence is going to come when we make robots that can actually learn: robots with very skeletal programming (capable of movement) that can, say, learn to become a chess master despite having no programming that allowed them to determine moves (let alone recognize pieces).
Reply #59 by crguy
A vague and meaningless question.
To a human, human intelligence (overall) will always be superior, no matter how perfect the machine in various specific capabilities.
Reason: A non-human is not a human.
Speculation about specific machine domains of intelligence is warranted however. Each domain would have different measures of quantification and different estimated dates of achievement.
But this is done with mathematics (factor analysis, classification analysis, cluster analysis, multiple linear and nonlinear regression, and other parametric and non-parametric statistical methods of multivariate statistics). These methods all produce confidence intervals (bracketing bounds with various estimated levels of confidence and certainty).
So the question is vague and thus essentially meaningless. Answers come from mathematical analysis of authors who publish in high quality peer-reviewed journals and forums, who do both prospective and retrospective analysis to prove that their analysis methods have some meaningful measure of validity that is accepted by their community of experts.
This is an academic endeavor suitable for scholars, that can attract a loyal following in the lay community. The lay community can and should be discussing the publications of the academic community, to put their discussions on a sound and rational basis that has scholarly merit.
– crguy
PS: The future is very hard to predict, even in the near term. That is why statistics and statistical methods become so useful, when stochastics take such a central role.
###
Reply #60 by Issac Trotts
Hi Bruce,
My official answer is Never, in the sense that computers as we know them will not begin to understand things as a result of adding more memory, processing speed, and sophisticated algorithms. I agree with Penrose that there is some chance for Artificial Intelligence to happen if we manage to understand the physical principles of the brain that have so far eluded us and use them to build intelligent devices. In that sense, I’d put the date somewhere beyond 2100 for the first brute-stupid artifacts.
###
Reply #61 by Jacob Gadikian
Good question sir!
Other:
The Easy part: With respect to math, science, large datasets, relationships between things and large datasets, I’d say that “AI” (I think you know why I’ve got the quotes) is getting near to surpassing humans. Of course, it’s just a matter of seeding and training the existing models well and you’ll have something that outwits mere humans quite quickly. If not already, then by 2015. I would expect that by 2015 computers will be performing many associative functions that are only slowly beginning to show up on the horizon.
The tough part: understanding human requests perfectly, interpretation. (I don’t think this requires emotion, creativity, passion) “Hard AI,” which I would allow to call itself “I” like presently only humans can do in the English language. At this stage, AI will have evolved to the point where it can do some amazing feats, and is being used in many common environments with or without the knowledge of the humans using it.
The toughest part: art, creativity, emotion, passion: 2100, if ever. I’m really torn on the issue of whether emulating humans really, really well counts for these things. Maybe? I guess you need self-awareness, but does that imply emotion? Human-level, I suppose, does. If we humans haven’t stopped innovating in order to kill one another, or focused all of our attention on new and innovative ways to kill one another (see: most governments’ and many corporations’ current direction), we might get there by 2100, and it’ll make the world a more interesting place, that’s for sure.
However, is creativity a trait of the organic? Maybe? Then again, are “evolved” computer applications themselves organic? Gosh, this one goes so deep. I’ll tell you what: I’m going to take what I’ve written, send it to you and post it on my blog. I’ll also instruct people who read my blog to answer your question, if that’s okay with you. It is a very difficult question that I have a very hard time giving a GOOD answer, and it’s certainly worth pondering, given the huge implications of the third kind of AI, which in my mind would require rights of its own, which would need to be protected and somehow gasp balanced against “human rights”.
So, I guess I fall soundly in the “other” category. Thanks for the good brain-fodder.
###
Reply #62 by Cory Morgenstein
Okay, my prediction: 200 years. My idea is that machines will have to be organic. We will have to be able to build life. I think this will happen when we learn what life is… when computer science and genetic engineering become one. We will have a life-to-life interface without the crude feeling of being a life form with machine parts, the way we are now.
I think it is quite naive to think of the body separate from the mind. I prefer to think of a personality as the complete unit, one that doesn’t exist without the body. I am my body… every cell is a part of a whole. If you cut off your arm, your brain will never forget that arm. The brain is part of the body. All parts work together. I think most people underestimate the potential of the complete whole. A smarter-than-human being will need a body, with at least the five senses and emotions. Senses and emotions belong to the body and are essential for intelligence. This is just common sense. Emotions are important. Emotional intelligence is something we talk a lot about in art. Emotions are part of perception, communication, and experience. Experiential knowledge is greater than second-hand knowledge. So what I am talking about is creating an AI with better senses and emotions… this will increase intelligence, if intelligence means experiencing and understanding reality on a deep level.
Also, maybe combine the intelligence available in other species. It is arrogant to say other animals are stupid. I prefer to think of them as beings with different abilities.
But, okay, the goal is to travel the cosmos, have the desired emotion, be hardy, and live forever. One idea was to transfer mind to machine and back to mind/body. I’m not sure if this is the way to go. It might not be necessary if the goal is only to withstand harsh conditions and not die accidentally. Being dependent on machines or having a bunch of clones would be inconvenient. I think it is safer and more beneficial to be self-sustainable. And why have a spaceship if you can just fly or quantum-port… or become spirit.
I’m going to give it a chance: What does it mean to have a spirit? It means to have life and consciousness. We all have that. But it also means to hold no physical space. How can we have it without a body? I don’t know. I believe the spirit is a product of the body.
~an interesting thought… there are infinite levels of consciousness and it is always changing.
What would consciousness be like without a body? It would be awareness without sensation… no sight, smell, taste, touch, hearing. So what would pleasure be based on? I’d say on the interaction with other spirits. How would we interact? Telepathy.
What is the point of physical reality if there is no way of experiencing it? There is none. No need to travel. There is no location. All conscious entities would communicate telepathically. And spirits could communicate with conscious beings anywhere in the universe who do sense physical reality.
So, I gave it a shot… I don’t think a spirit could be as intelligent as a physical being and it really would be a let down to not be able to experience physical reality… no pleasure from nature, man-made things, art, music, sex, food, and good smells. So really, I guess I was wrong we don’t want to be spirits (or trapped in computer networks), just somehow overcome the fragility of our current physical state and enhance our sensitivity.
~~I like to think of a human being as having many types of intelligence… emotional and creative intelligence are two very important types that are often overlooked… my guess is because they are difficult to test. Personally, I was always upset that my artistic ability was called a talent instead of an intelligence.
Disclaimer: This is where I’m at today. My opinions will change as I study, learn and grow more in this area, Cory
###
Reply #63 by Andrew Kowalski
Bruce,
Although I don’t like making predictions I’d say “Beyond 2100” … I hope I’m proved wrong though.
I’ve no doubt that because of the exponential growth in computing power there will be super intelligent machines, with relatively narrow AI, capable of solving many of our problems within Ray Kurzweil’s date of 2040. But I’m a little skeptical that “computers” will surpass what we consider human-level intelligence any time soon; the following excerpt from the article “A.I. Gone Awry” literally blew me away…
An essential aspect of the computationalist approach to natural language is to determine the syntax of a sentence so that its semantics can be handled. As an example of why that is impossible, Terry Winograd offers a pair of sentences:
The committee denied the group a parade permit because they advocated violence.
The committee denied the group a parade permit because they feared violence.
The sentences differ by only a single word (of exactly the same grammatical form). Disambiguating these sentences can’t be done without extensive - potentially unlimited - knowledge of the real world. No program can do this without recourse to a “knowledge base” about committees, groups seeking marches, etc. In short, it is not possible to analyze a sentence of natural language syntactically until one resolves it semantically. But since one needs to parse the sentence syntactically before one can process it at all, it seems that one has to understand the sentence before one can understand the sentence.
###
Reply #64 by Aaron Sloman
Bruce,
A full answer would take a long time, including analysing and commenting on the question and its presuppositions. I don’t have a long time, so here are a few randomly selected points about the question.
> when do you think AI will surpass human-level intelligence?
It has already in many ways. Most humans cannot compete with the best AI systems for playing chess and other board games, for example.
I would rather have our inland revenue’s (no doubt very limited) software package checking my tax form and computing my tax bill than any human I know. Likewise there are lots of AI inspired mathematical tools and data-mining tools in everyday use that outperform the majority of humans in their domain of competence.
So, what’s the question?
There’s a widespread myth that human intelligence includes the ability to ‘scale up’, i.e. defeat combinatorial explosions.
This is a myth: humans are very quickly defeated by combinatorics or anything that requires a large and reliable short term memory, e.g. parsing deeply nested sentences that are no problem for machines. (Donald Michie used to describe this as ‘the human window’.)
I contrast scaling up with ‘scaling out’: the ability to combine old competences in new ways to deal with new kinds of contexts and new kinds of problems (as opposed to bigger ones).
Humans are good at that (and some other animals) and current AI systems are totally pathetic at it. Some crude first draft attempts to describe the problem are here:
- Orthogonal Recombinable Competences Acquired by Altricial Species (Blankets, string, and plywood)
Furthermore since most AI researchers completely ignore the problems and think mere power is enough, I don’t think that systems that scale out (e.g. in vision, causal reasoning, learning to learn, composing new poems or music, etc.) are even visible on distant horizons.
(Of course, I don’t know everything that’s being done.)
We need a lot more research on what the problems are, whereas many people mistakenly think the problems are clear and simply tout their favourite solution. That’s no way to make progress. It’s like starting a large engineering design project before doing any serious work on the requirements specification.
> [ ] Prefer not to make predictions
How can one predict when a still widely ignored and scarcely understood problem will be solved? Compare my comments on Gates in sciam and a question a journalist asked me about Hawkins:
BILL GATES ON ROBOTS IN SCIENTIFIC AMERICAN
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sciam-robots-gates.html
RESPONSE TO QUESTIONS ABOUT JEFF HAWKINS
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/hawkins-numenta.html
Good luck in your search. Don’t believe any answers you get, including mine.
###
Reply #65 by Jie Liu
2010-20
Reason: The essence of intelligence is information processing. Right now we have digitized a huge amount of real-world information (thanks to companies like Google), and we are starting to have very powerful computing resources to process it, so I believe we are on the verge of achieving close-to-human-level AI. The failure to create AI during the 70s and 80s does not indicate failure in the coming 10 years. Back then, besides the fact that we didn’t have the computing power to implement a lot of ideas, we didn’t have enough digitized information to feed the machines. A human child will not achieve human intelligence if he or she is not given enough chance to receive all that information. Once close-to-human-level AI is achieved, AI power will grow exponentially, due to two facts:
1. Each generation of AI will greatly enhance the R&D of the next generation of AI.
2. Moore’s Law. Even if no more R&D is done after the first generation of AI comes out, the fact that computing speed doubles every two years will lead to AI with intelligence greater than human. If there is a person whose IQ is half of yours, but whose thinking speed doubles every year, he will be smarter than you very soon.
A little bit off topic, but once greater-than-human AI is created, AI will quickly outperform humans and start to control more resources, eventually turning humans into second-class citizens; unless humans embrace self-enhancement through biological and non-biological means, we are doomed to extinction. Though a lot of people talk about preventing this doomsday scenario by imposing rules and bounds on AI, those attempts will fail: if one group of entities generally and consistently outperforms another, it will dominate the other. I believe humans will upgrade ourselves to keep up with the AIs (for example, gradually replacing brain cells with faster, stronger components), and eventually achieve peaceful coexistence.
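The doubling argument above can be sketched numerically. This is a toy model of my own, not from the reply; it assumes "capability" scales linearly with thinking speed:

```python
# Toy model: an agent starting at some fraction of your capability,
# but doubling in speed every year, overtakes you after few doublings
# (assuming capability scales linearly with speed).
def years_to_overtake(ratio=0.5, doubling_per_year=2.0):
    """Years until ratio * doubling_per_year**t exceeds 1 (your level)."""
    years = 0
    capability = ratio
    while capability <= 1.0:
        capability *= doubling_per_year
        years += 1
    return years

print(years_to_overtake())      # starting at half human level: 2 years
print(years_to_overtake(0.01))  # even starting at 1% of human level: 7 years
```

The point of the sketch is that under exponential growth, the starting handicap barely matters: halving the initial ratio only delays the crossover by one doubling period.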
###
Reply #66 by Gary Cottrell
Bruce,
[ X ] Other:
AI has already surpassed human level intelligence in some domains - chess for example. I think there are many such domains where we will see AI pass human abilities - face recognition, for example. When they get to the level of being our best friends (or worst enemies) I would guess 2070-2100, to pick a reasonable number (at least, one when I will be long dead so will suffer no personal consequences!).
I hate to speculate as to whether there will ever be any areas where we will always remain “on top”. As a materialist, I can’t think of any offhand.
###
“OTHER” REPLIES:
[ X ] Other: Never.
[ X ] Other: If you by “AI” mean human made artificial intelligence: Never - Henry N.
[ X ] Other: My computer can already outthink me and, having Vista as the OS, it’s evil and tries to hurt me constantly. Other than that I have no prediction. - Michael F.
[ X ] Other: Not sure it will surpass - my guess is it might just be a different form of intelligence. - Alexa S.
[ X ] Other: I think we will be embedding devices to our brain to enhance our intelligence before AI surpasses us. As the technology evolves it will be easier to mod our brain than build a new one. And by that time (2060) so many people will have enhanced their own brain that building a smarter AI will require more technology. But as we are selfish we will always have it first. - Profetas
[ X ] Other: AI is and will be developed to act as an extension of us, humans, and our society. Regardless of the degrees of freedom, it will always form a symbiotic relationship with us, pretty much like everything around us. And asking when AI will be better than us is pretty much like asking which organ in our body is more important or better, the heart or the lungs :) However, it is the human ego asking these kinds of questions, and our ego also created the notions of better and worse, when in fact everything is just a transformation. - Larry
[ X ] Other: I believe that depends on the combination of humans or machines that you’re talking about … it’s all about level of communication. In this day and age there seems to be a merging occurring and the next step, if it’s not happening already, will most likely involve direct implantation of machines into the human organism. Perhaps they will develop together. - Arthur Z.
[ X ] Other: Until AI and human-level intelligence is precisely defined, we can never hope that AI will surpass human-level intelligence. - Paras C.
[ X ] Other: That’s an ill-posed question. AI already surpasses human-level intelligence in many specialised fields. As for general intelligence I don’t think it will be human level before 2100, maybe never - it will just be different. - Frank H.
[ X ] Other: NOW!! There are some intelligent tasks that a machine can perform better than humans. If what you mean is common-sense reasoning, my answer is 2050-2070. Cheers. - Andrea G.
[ X ] Other: I am still wondering how AI will supplement the human touch in a work of art. As an art historian, theorist and painter, I do not fathom AI possessing the capabilities to share or portray to the rest of the world the progressions that are being made by any overt artistic creator* in the 21st century and centuries to come as the human experience becomes more complex. And it is the uniqueness of the human experience, and his/her capability to capture, enrapture and motivate to thought, emotion etc., another human being that essentially encapsulates the importance of the arts. As long as humans roam the earth, I don’t think Singularity is achievable in that respect. Science, however, IMO, offers a more tangible environment in which AI is flourishing. As I’m not a scientist, I don’t feel I possess the resources needed to predict the event of Singularity. I do however wonder, in order for it to happen, do science and art need to be in tandem? (*member of genres traditionally classified under the arts.) - Ato A.
[ X ] Other: Sorry, I will have to answer “other” since the question to me is not sufficiently well defined!
In some things, I would say “1997” (which is clearly not on the list). On other features of intelligence I would specify almost any of the answers below, depending on exactly the type of intelligence you state (and the type of human…)
If the question were: “more intelligent than EVERY human on EVERY aspect of intelligence” then I would probably say “never” (not ’cause I don’t believe in AI, but because I foresee humans directly enhanced by machines that take advantage of advancements in AI, so in this way the best “human” will also advance significantly…) - Shimony E.
[ X ] Other: In some respects, AI already outperforms human capabilities. WRT what lies ahead: “Predictions are hard, especially about the future!” - Niels Bohr/Yogi Berra (- Chris D.)
[ X ] Other: The question of AI surpassing HI depends upon how one views human potential. There are, according to the Transcendental Meditation movement, seven states of consciousness - each with unique physiological and cognitive characteristics. If the question is, will AI ever reach a state of functioning similar to ‘waking state’ consciousness my answer is YES. If the question is, will AI ever reach higher states of consciousness (i.e. ‘divine consciousness’) my answer is NO. Overall, my answer to this question depends upon your definition of intelligence and consciousness. - Vincent B.
[ X ] Prefer not to make predictions. There are different aspects of human intelligence that we’d like AI to take over. I think one of the most interesting aspects we can wait for (but I imagine a lot would dread) is ARTIFICIAL CREATIVITY. - Ari B.

August 13th, 2007 at 10:47 pm
[x] Other
Reading over these comments, I wonder if the question makes a category error.
Arguably, computers, in the aggregate, already exceed the “intelligence” of a human brain, no doubt by orders of magnitude. And while no single machine is necessarily able to match a brain operation for operation, we have perfectly good ways of combining the processing power of multiple machines; moreover, if you take the society of mind model seriously, that massively multiprocessor approach may be exactly what you want. So what’s missing?
What’s missing is the AI — not the artificial intelligence, but the artificial identity.
Identity. Awareness of self as a distinct being. Unlike matching neurons to transistors, there’s no simply-articulated hardware path to create self-awareness (at least, none that I’ve ever run across). There are almost certainly projects underway to attempt to create something like this, but I’m afraid they amount to either “let’s see if evolutionary software can come up with something interesting” or “let’s try to make as accurate a brain simulation as we can, and see if intelligence/awareness is an emergent property.” I hope that I’m wrong, that there’s something deeper underway, because both of those models are likely to end up spinning out of control if they’re successful.
For me, the interesting question isn’t how soon we’ll get something smarter than us, because — again, in principle — if we could figure out how to create something with an identity, it could be smarter than us now.
The question for me is how soon we’ll be able to come up with a way to create an entirely-new, artificial-substrate identity. And that’s going to be a hard question to answer.
August 18th, 2007 at 9:31 am
Dear Bruce:
The poll is an excellent idea in terms of simple human curiosity (as a poll result is not necessarily indicative of the future in actuality, but a statistical matrix of opinions, notwithstanding the “self-fulfilling prophecy” effect), but I am most curious about the utility of the ultimate result. Several questions emerge for your consideration, and perhaps for the consideration of your readers: 1) What is the definition of intelligence, and do all respondents concur? 2) What qualifies your respondents, in terms of the validity of their opinions, and in terms of their autocorrelative commonality? I would truly enjoy seeing a survey or poll conducted (through your good offices) of the popular perception of what, precisely, AI is — that is a fascinating but troubling issue. I have wrestled with many individuals far more knowledgeable than myself over finding definitions for the phenomena commonly referred to as “instinct” and “intuition”. Bruce, my compliments on your excellent blog and fabulously inquisitive nature. I was delighted to receive your email and to participate in your poll.
Respectfully,
Douglas Castle
FreeDECastle001@yahoo.com,
http://douglascastle.blogspot.com
CHILDREN’S INTERNATIONAL OBESITY FOUNDATION
August 18th, 2007 at 11:59 am
I do not believe that the Singularity will ever be achieved. John Searle demonstrated through his Chinese Room argument that a digital computer will never be able to think in the same way that the human brain does. I am sure that you are all aware of the Chinese Room argument, so it is not necessary to go through it again here.
I will just take this opportunity to bring up the major point of the Chinese Room argument. A digital computer is a symbol manipulating machine. It follows rules, but has no idea what it is doing, or what the rules mean. A human brain isn’t just manipulating ones and zeros. Human beings understand what things mean. We have consciousness that arises from biological processes in the brain.
Many people have tried to refute the Chinese Room argument. However, as far as I know, John Searle has replied to every argument against his Chinese Room argument. I believe his replies to these arguments have been very good, and until the people who believe in Strong artificial intelligence think of a valid refutation of the Chinese Room argument, I see no reason why anyone should believe that the Singularity will ever be achieved.
August 18th, 2007 at 1:03 pm
I don’t think artificial intelligence will ever surpass the capabilities of human intelligence. It may surpass our current use of human intelligence, but the human mind of anyone I know has not evolved into its full use capability. I can see great leaps in some individuals, but most of us are too concerned with our present state of existence and refuse to let our minds unlock the chains that keep them from super intelligence. That’s why artificial intelligence appears to be superior.
If we sit back and think about it, when rationalization starts to become more prevalent in AI, won’t it have the same problems as human intelligence? If it is possible to leave rationalization out of the formula and have the same intelligence as humans, AI would probably win out, but then it would be different from human intelligence and could not be logically compared.
The human mind just needs to be unlocked so that it can rationalize and focus on the concepts that will bring about super intelligence. AI can become equivalent to human intelligence capabilities but never surpass it.
August 18th, 2007 at 8:05 pm
I can’t say when it will happen and I don’t think anyone else can either. I can say that I believe when it does happen, it will go FOOM!
August 19th, 2007 at 1:50 am
Hi Bruce,
Yes, human vs. AI is like comparing apples to oranges, and I also think that AI has already surpassed us a while ago. I don’t see it becoming faster than us because we will always be able to track it using quantum-speed security etc. I think it’s a speed issue, and if you bring AI down to our level: hands, feet, smiles and giggles, it will just slow it down anyway, but we better have some decent security, plus we might upload into it anyway. It would only encourage us to use our bodies more efficiently.
Any faster and it would probably scatter throughout the universe at a higher speed, not affecting us, although we would need to track it then for safety.
For Artificial Identity:
I was just reading The Memory Code in Scientific American, July 2007. A less linear approach by Dr. Joe Z. Tsien.
He’s developing a hierarchical system to actually measure, with computer bits and bytes (bits [1s and 0s] and bytes [groups of 1s and 0s]), the way the brain really thinks, rather than a linear bit-centric algorithm. Which is actually how computer bytes work anyway. We need to actually follow tech more closely rather than just COPY everything. I’ve thought this my whole life.
The brain and computer are more based on volumetric thinking, where some thoughts are actually located up, down, or forward and backward. Like a 3D triangle. So it’s more a general intensity. This he thinks will allow us to identify memory and thoughts more correctly, and upload and download them. And model more realistic AI that is more human.
Probably great for getting rid of Post-Traumatic Stress Disorder.
Also this new AI could answer phones a lot better. It seems too linear still. We have the processing power already like you say.
I don’t think it will take 5,000 years, more like 5, to manage our thoughts in 1s and 0s.
So maybe the quantum computer isn’t really necessary, although it would make things more realistic and affordable.
August 19th, 2007 at 7:50 pm
Bruce,
Without having read the other comments before writing this, I will say: Not sure how to answer. Intelligence is highly multi-dimensional. In some respects, AI is already more intelligent, although in most respects not. In some aspects it may not be possible without replicating human physiology (e.g., forms of empathy and intuitive emotional intelligence), which many wouldn’t count as AI.
If I had some idea of your purposes behind the question, I might be able to be more committal. For example, the subject line of your question suggests that what you are interested in is this:
When, if ever, will AI get to the point that further advances in AI can be achieved by the AI systems themselves, rather than by human research?
You make answering this too easy by having your last category “Beyond 2100”, which is my choice for this modified question. Perhaps you should re-sample the space?
You might want to add an explicit “never” response to your poll, anyway.
Ron Chrisley
August 19th, 2007 at 8:54 pm
The key characteristic of human intelligence is the ability to learn from experience to solve a vast number of different problems.
A human being learns its intelligence with relatively little a priori knowledge, and with limited external guidance. This guidance does not include the use of direct access to the internals of the brain (unlike conventional electronic systems). The brain must organize a huge amount of sensory information derived from experience in such a way that the result can be used to guide a wide range of very different behaviours. We seem to record some information in almost every experience (for example, subjects shown a large number of photographs, and given a few seconds to look at each one, could a couple of days later distinguish between a picture they had seen before and a new picture with 90% accuracy). However, in general the same information can be used for many different purposes.
So for me the key factors defining human-like machine intelligence are: learning with no more guidance than a human; selecting and organizing information derived from a huge range of experience; and accessing different parts of that information in appropriate ways that support many different behaviours.
There are then two problems to be solved. One is designing an architecture that can heuristically organize information derived from a vast range of experience in such a way that the information can be accessed to guide many different behaviours. The other is giving the system an adequate experience profile (and associated “senses” and information recording resources) to build the requisite information organization. The first problem is mainly theoretical, the second practical.
I have proposed one solution for the first problem; there could of course be others. I think the second, practical problem will actually affect the time scale much more. Its solution will require huge resources, and the result will be a very general purpose intelligent machine. What could generate the political motivation to expend such resources when the result would be a general purpose artificial intelligence comparable with or exceeding the human? A kind of “get to the moon first” project, but in this case there could well be religiously motivated political opposition.
It is more likely that resources would continue to be put into machines that exceed human capabilities in narrowly defined areas.
August 23rd, 2007 at 6:44 am
Wow some really intellectual responses here.
I guess I am going to have a go against the flow here.
AI means to me: when the AI entity tells me to sit down, because I am obviously not adept enough to solve the issue at hand.
August 23rd, 2007 at 7:20 am
Note this article:
http://www.livescience.com/strangenews/070820_ap_artificial_life.html
August 23rd, 2007 at 2:18 pm
Having responded “Other” to this poll, I, like Robert Bradbury, believe that Artificial Intelligence has already surpassed human-level intelligence. The number of responses to this simple question, particularly those critical of the question itself, make it apparent that AI, unlike human intelligence, at least in one respect is far superior in that it does not overanalyze.
August 25th, 2007 at 12:41 pm
I am loathe to make predictions beyond my event horizon, and being a fuzzy thinker I would prefer to give a probability distribution over the choices. A normal distribution centered over 2030 ‘feels’ reasonable to me. I suspect AGI will remain very hard right up until it suddenly becomes easy. I’ve been involved on the frontiers of narrow-AI research (Poker, Checkers, Chess). Seeing how difficult even these toy-problems remain, I am skeptical of any imminent breakthroughs.
We are still far too underpowered for real AGI. To achieve it, many things need to be in place. We need more powerful hardware. We may still need some conceptual breakthroughs. There are many promising frameworks and theories floating around, but all remain unproven. As we continue to decode the mechanics of the human brain, we will gain more insight into how human intelligence works, and this can guide and validate our models. I feel the Jeff Hawkins approach is one to watch.
At some point the steady increase in hardware capacity, the narrow-AI toolkit, and a mature understanding of the human brain will allow assembly of the jigsaw puzzle. I feel we’re making lots of conceptual breakthroughs these days, but it could take decades for these frameworks to mature and be validated and for the necessary hardware to scale.
August 25th, 2007 at 12:42 pm
> [X] Beyond 2100
I agree with the caveats your other answerers have made about intelligence — that it is modular, and that AI already excels in some areas but not in others. But I take it the spirit of your question is, how long until we get enough of these modular systems working together that the end result meets or beats human performance in every possible endeavor?
Not in this century, I think. The state of the art on “human-level intelligence” systems is pretty sad, and the fact is, there isn’t much demand for a whole integrated humanlike system anyway. There’s already an organism occupying that niche — real humans, which in many countries are much cheaper than any conceivable technology that could replace them. I’m also skeptical about how easy it would be to actually exceed human intelligence, which would be much harder I think than simply meeting it — those who dream of superhuman intelligence don’t necessarily appreciate the tradeoffs that our brains make: heuristics vs. logic, working memory capacity vs. retrieval time, that kind of thing. I suspect we are quite well adapted evolutionarily, and I have always thought the average human is much more intelligent than the average psychologist — let alone the “human-level intelligence” AI researcher! — really appreciates.
A driving need can spur technological development quickly, but I don’t see a burning need on the public’s part for robots with human-like intelligence. I think we’ll instead see impressive robots and agents built for specialized tasks. We already see those, and we will see more of them soon.
August 25th, 2007 at 12:45 pm
On one hand, we have the technology to make human-level AIs today: the human brain is commonly estimated to have a processing power on the order of 100 TeraFLOPS, and IBM’s largest BlueGene/L system (the fastest computer on Earth right now) exceeded that number last year. There are also scientists working on modeling the human brain at the neuron level, although they don’t have access to such processing power.
On the other hand, we must take into account human beings and their agendas. The cost of building and operating a BlueGene/L system is very high, and would be very difficult to justify if all that would be achieved would be to create a single individual mind, which could end up being no smarter nor faster than an average human mind.
In addition to prospects of limited usefulness (here we don’t have the kind of political pressure that let us go to the Moon), you have to consider that the funding of supercomputer facilities is normally decided by administrators, not scientists, and most administrators have agendas that have nothing to do with furthering science.
For instance, would an administration choose to fund an artificial mind if its constituents were mostly religious and saw the whole project as an insult to their god(s)? Think about stem cell research in the USA. It is not hard to imagine public pressure forcing the administration to ban artificial intelligences smarter than that of my digital camera’s autofocus.
So, ultimately when you combine what technology allows with what humans allow, there is no way to predict when we will see an AI as intelligent as a person.
However, you can consider an upper boundary: as we all know, personal computers always get faster, so there may come a time when the decision (and cost) of designing an artificial mind will fall within the hands of Joe Average. At that point, whether it’s legal or not, there will be artificial minds.
There’s one problem, however: although Moore’s law says we’ll only have to wait 15 years to get PCs powerful enough to simulate the human brain’s activity, the truth is Moore’s law is only an observation, not a “law”. We’re soon going to hit a brick wall in terms of miniaturization and we don’t know how to get through it. That is the reason why Intel, AMD and many others are now designing multi-core processors.
It might seem impossible after so many years of performance increase, but right now there is a limit to how much processing power we can squeeze into a PC. If that limit sticks, the emergence of artificial minds will remain under the control of governments and large corporations. If we break that limit, then we’ll see artificial minds in our lifetime (well, of course you and I both believe in immortality, so I’m not taking any chances by saying this ;-)
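[Editor’s note: the 15-year figure above is a back-of-the-envelope extrapolation. It can be sketched in a few lines, assuming a ~100 TFLOPS brain (the estimate cited earlier in this thread), a ~100 GFLOPS high-end 2007 desktop, and a doubling every 18 months — all illustrative assumptions, not settled numbers:]

```python
import math

# Illustrative 2007-era assumptions (rough estimates, not settled figures):
BRAIN_FLOPS = 1e14      # ~100 TFLOPS, a common estimate for the human brain
PC_FLOPS = 1e11         # ~100 GFLOPS for a high-end desktop PC
DOUBLING_YEARS = 1.5    # Moore's-law doubling period (an observation, not a law)

# How many doublings until a PC matches the brain, and how long that takes
doublings = math.log2(BRAIN_FLOPS / PC_FLOPS)
years = doublings * DOUBLING_YEARS
print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
# prints: 10.0 doublings -> ~15 years
```

Change any of the three assumptions and the estimate moves by years, which is exactly why the commenter hedges on Moore’s law holding.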
August 25th, 2007 at 12:50 pm
Fuzzy answer to a fuzzy question. IMHO there are AI’s with limited (specialized) human-level intelligence (deep blue). There are 5 petaflop machines being built that will have beyond human-level intelligence in isolated venues. As to an AGI (Artificial General Intelligence), at an IQ of ~100, I think maybe 2020-2030 for a high end (10E8 USD) machine, and 2050+ before they are cheap and plentiful. It will be 40-50 more years before 4 SD’s (standard deviations) above 100 IQ is achieved. 164 IQ humans are not plentiful, but there are a few million on the planet. There is a huge range of both human intelligence and AI currently, and they are starting to overlap.
August 25th, 2007 at 1:01 pm
I am just reading Ray’s book - “The Singularity is Near” - and I think most of his analysis is correct (as much as one can predict that long into the future) - meaning we are talking about the 2030-2050 timeframe (maybe more towards the 2050 limit). I do have some issues with his somewhat over-simplistic treatment of the software side of AI (the hardware will be ready long before 2050, of course). We are planning to do an interview with both Ray and Bostrom later this year and it will be interesting to hear what both of them will have to say on this issue (and several others, of course).
August 26th, 2007 at 6:43 pm
Hi Bruce,
There are several theories out there that try to describe what human intelligence is. One of the most widely accepted models (and probably my favorite) is Sternberg’s triarchic theory. He proposed that intelligence has three independent facets:
(1) Analytic - the same as the psychometric view, (2) Creative - involves insight, synthesis and the ability to react to novel situations and stimuli, and (3) Practical - the contextual aspect of intelligence that involves the ability to grasp, understand and deal with everyday tasks.
Theories of intelligence are interesting and they help us understand how animals (like us) solve problems and adapt to their environment. But the notion of human-level intelligence speaks to me of the higher processes of self-awareness, reflection, insight, empathy, etc. So perhaps the survey question should have referred to human level consciousness instead of intelligence.
The question that keeps nagging at me is whether it’s even possible to create a machine that exhibits extraordinarily high marks on Sternberg’s three factors of intelligence and have it NOT become conscious.
IOW, if we make a machine smart enough, will it wake up?
I suspect it will and here’s why.
I believe Dawkins was probably correct when he suggested that one way by which the human brain adapts and problem solves is through the creation of objects in the mind that correlate to objects in the environment. More and more objects were identified with greater and greater clarity and resolution as the brain and the eye evolved. Eventually, the brain grew complex enough to identify itself as one of the objects in the environment, and self-awareness/consciousness subsequently emerged. The idea of emergent consciousness isn’t new but does seem to be gaining some momentum. Both Searle and Crick are proponents.
The question of whether we can keep a machine from waking up no matter how much “smarter” than us it gets seems enormously important to me. I have a hard time imagining an AI becoming UN-friendly to humans unless it DOES become conscious. And I think it WILL become conscious, so I feel the danger is real.
There’s also the thorny issue of whether we can create a machine that lacks human-like consciousness, but can effectively “mimic” consciousness for improved interactions/communication with humans. But if it were good enough, we wouldn’t know if it was or wasn’t conscious anyway.
August 26th, 2007 at 8:51 pm
We know from research done in neuroscience that when damage is done to isolated regions of the brain, all types of variations on self, perception, logic, etc. occur. Certain regions being damaged may cause a disassociation of certain emotions from things that would usually trigger them. Damage another section and the perception of senses like hearing, sight, or touch can be lost without losing the physical ability to do so. So, for instance, you might still be able to see and locate objects by sight or respond to sounds even though you are unaware as to how you’re doing it. To your perception you are still blind, deaf etc. Damage another part and you may be able to sit and listen to a very eloquently stated argument but never realize that it is you yourself that is doing the talking.
So we know that the brain is a collection of somewhat isolated processes, and that hard lines between those processes can be damaged to produce odd results. I don’t mean to suggest that the brain is completely modular or that all processes in the brain are connected by wires. My point is that the brain is a very mappable and therefore a very reproducible thing. I also believe it’s very possible that everything we’d need to simulate the brain will be available within a generation.
Despite all this, it seems more likely to me that our near future technologies may circumvent any direct route to brain simulation. We may opt for something much bigger and better before we ever painstakingly piece together a complete Turing machine. I like to think that the promises of those bigger and better things may be realized sooner rather than later, within the next 15 years, but I can’t make a solid argument for my case. It seems clear to me that people will be walking around with some very interesting brain and body hacks in the second half of the next decade, not just advanced pacemakers or functioning prosthetics, but real enhancements available to normally healthy people that not only alter our health but alter the way we think and act. To me, this will really be the first noticeable stage in a marked split between what is groundedly human and what is meta human. And as long as we’re not done in by our technologies, I can’t see that the production of a Turing machine will remain on the agenda.
So, in a very general agreement with many of the replies on your blog, my vote is for a mostly indistinguishable moment spread out over the next 30 years. If you could somehow measure the value of all the advancements carrying us past the marker of human intelligence and then highlight a median between now and the “terminable” future, my vote is for 5:54am GMT on June 16th, 2016.
That was a joke.
August 28th, 2007 at 9:17 am
Bruce,
i’d say
[x] beyond 2100
but would regard that as including “never”.
Of course, the answer to your question is the same as for most profound riddles of psychology: It depends.
First, we’d have to specify what is meant by ‘human-level intelligence’. If the response names any particular skill (like playing chess, navigating through a maze, having a conversation, doing arithmetic, doing an IQ test) it’s pretty clear that a machine could be (or has been) built to perform that task at a level beyond any human. But the psychometric tests of ‘general intelligence’ only measure proximal indicators of some distal construct. The logic behind them takes the form “If you can do A & B & C, then you have G”. But building a machine that does A & B & C and concluding that it thus has G is just a clever abuse of the operational definition. The conditional only holds if A & B & C are samples from a wide array of skills that could be assessed. Whereas humans with high performance on A & B & C also exhibit D & E & F, today’s machines are quite miserable at generalizing beyond specific tasks. So if the real question is “When will AI surpass human-level intelligence in all respects in which human-level intelligence could be defined?” then I suspect that the answer will be “never”, unfortunately.
Why should it be desirable for machines to aim for human-level intelligence in the first place? Humans have evolved with particular biological, social and environmental constraints. A machine with the potential for learning and development would be subject to entirely different constraints and it’s naive and anthropocentric to assume that it would evolve into some kind of humanoid. (Why not a rat, or a cockroach?)
Another approach could come up with an estimate by extrapolating from the history of AI. How far towards human-like behavior have we come over the last 50 years? Not very, I’d say — at least not many of my friends appear to be robots. So even when allowing for some exponential growth, I’d assume that truly human-like AI still is very much science-fiction.
Finally, humans vary widely in their intelligence. Do all humans have human-like intelligence? We’re getting on slippery slopes here, but I suspect that we’ll have to come up with a better definition of human nature before asking how non-human entities can join that club. Cheers,
Hans
August 28th, 2007 at 5:20 pm
If it ever happens it will be beyond 2100.
Current scientific investigations are mostly revealing what we DON’T know. Even if it turns out we can figure out what is going on inside our heads (and, IMHO, the model of the brain as computer/computer system is actually a hindrance, not a help, and it’s not a done deal that we are capable of understanding the complexity of the brain) it would then take decades of engineering to fix up something workable.
Sorry to rain on anybody’s parade, but Penrose is nearer the truth than Kurzweil, though neither of them really knows what they’re talking about.
As an addendum to that, neither does your commentator Bradbury. A machine playing chess, or doing OCR, is not an example of intelligence - except as an example of the intelligence of the people who programmed the machine.
I take the question to mean, `When will something like HAL 9000 be built.’
My answer to that is not for another century _at least_ and maybe never. It’s not interesting unless it’s a general purpose thinking machine that can learn and adapt to novel environments guided only by its own experience. The problem of meaning.
August 29th, 2007 at 3:45 am
Quick and dirty answer to your question: _2050-2070_
BUT YOUR QUESTION IS BADLY WORDED. Please read below:
Definition — Intelligence: the ability to acquire and apply knowledge and skills.
Well, according to that definition, the answer is *1950s*. Half a century ago.
From the very beginning computers have been able to acquire knowledge and skills (be programmed) without forgetting (something humans tend to do), and to apply this knowledge and skills (run programs) for the kinds of computations and database lookups that humans cannot do.
That is why Deep Blue was able to beat Kasparov.
But that’s not what you’re really asking.
What you’re asking is, by when would computers be able to do all the tasks that humans do (drive, play sports, speak, etc), without losing the ability to make complex calculations and large database lookups.
Then, the answer to your question is *Never*. AI will never be placed in the same settings (body/environment/hormones/sensory/motor) as humans, and therefore will never have the same experience as humans, and therefore will never learn the same way that humans learn, and therefore will never be able to do all the things that humans can do.
Same goes for other species. Even if you took a human brain but changed the sensory apparatus (e.g. taking away vision or hands), that human brain would not be able to do the same tasks that other human brains can do (e.g. playing basketball).
So what you’re really asking should not even be whether AI will ever be able to do all that humans can do. What you should really be asking is by when AI will have human-like learning capabilities.
Then the answer to your question is *2050-2070*.
August 29th, 2007 at 6:09 am
Dear Bruce,
Given the distributed, situated, embodied [and evolutionary] character of human cognition and intelligence, it is difficult to answer the question. Already now, some artificial tools surpass some aspects of human cognitive performance (for example, calculation), but at the same time those tools intrinsically constitute part of that “cognitive niche” built by humans, and that just explains the above distributed and situated character of human cognition and intelligence, because they themselves are the “fruit” of human intelligence. In sum, I would prefer to speak of various tools and machines that already surpass aspects of human intelligence but “in turn” are part of it.
Already a long time ago, human beings delegated to external artifacts (cave paintings, for example) many cognitive roles that were impossible to perform with only the internal resources of the brain/mind.
Moreover, we can say that the “mind” is intrinsically “extended” outside, and so human intelligence is basically a hybrid product of the interplay between internal and external (mere internal rehearsal, inner thought, is just a part of human cognition and intelligence). An AI-implemented program constitutes a tool among other tools (like a blackboard), but one endowed with different potentialities.
Turing already in 1950 maintained that, taking advantage of the existence of the Logical Computing Machine, “Digital computers […] can be constructed, and indeed have been constructed, and […] they can in fact mimic the actions of a human computer very closely” (Turing, 1950, p. 435).
In my opinion, both the (Universal) Logical Computing Machine (LCM) (the theoretical artifact) and the (Universal) Practical Computing Machine (PCM) (the practical artifact) are mimetic minds, because they are able to mimic – thanks to AI studies – the mind in a kind of universal way (wonderfully continuing the activity of disembodiment of minds, and of semiotic delegation to external materiality, that our ancestors rudimentarily started).
Nevertheless, Turing machines do not represent (against classical AI and modern cognitivist computationalism) a “knowledge” of the mind and of human intelligence. Turing was perfectly aware of the fact that the brain is not a discrete state machine (DSM) but, as he says, a “continuous” system, where mathematical modeling can instead guarantee a satisfactory scientific intelligibility (cf. his studies on morphogenesis).
August 29th, 2007 at 8:04 am
Hi Bruce,
Shooting from the hip, I’d say 2050-2070, but I can imagine a lot of factors that could influence that number significantly. For instance, how powerful is the human brain, really? If a great deal of important computation is done in dendrites, then it may take longer for computer resources to catch up. Even assuming the raw computing power is there, there are questions about the architecture of intelligence, which we don’t understand yet, that will affect progress.
One other thing. If it turns out, as seems very likely, that intelligence isn’t a single property or quantity but a constellation, then we’ll never identify a single date on which AI equals human intelligence. Instead we can only look for milestones like playing chess (Deep Blue), or making a funny joke (not yet).
August 29th, 2007 at 10:21 pm
Bruce,
I really don’t think it’s a very good question. Organisms are only intelligent with respect to context. AI already exceeds human abilities (and computational power) in hundreds of contexts. Intelligence just isn’t well defined enough to be used as a single construct, and this is exactly why people are able to make cool robots-take-over-the-world movies, only for a single human to do something unexpected at the end and return peace to the world. Robots can play chess better than you, but are they more intelligent? That depends on how you measure intelligence. If you measure it as plasticity, that’s fine, but it’s still not well defined.
Maybe a better question is: when is the first robot going to be let loose in an LA mall and buy thongs at Victoria’s Secret? Or when will humans no longer program AI? Probably never, at least not until AIs start programming each other. Or when will AI get an agenda and begin to pose a problem for what we can and cannot control in our creations? When will we stop being symbiotic with AI? I’m guessing all of the latter are really far away, if ever.
August 29th, 2007 at 10:24 pm
Bruce,
I share the concern expressed by many of your other respondents regarding the multi-dimensional nature of intelligence, the complicated space of human-level strengths and weaknesses in contexts that provide opportunities to demonstrate intelligence, and the difficulties associated with trying to roll all of that into a binary state of “surpassed” vs “not surpassed.” That said, I think working toward AI systems with human-level intelligent capabilities is a desirable and worthwhile goal that will be achieved beyond 2100.
August 29th, 2007 at 10:26 pm
Bruce,
[ X ] Other:
AI has already surpassed human level intelligence in some domains - chess for example. I think there are many such domains where we will see AI pass human abilities - face recognition, for example. When they get to the level of being our best friends (or worst enemies) I would guess 2070-2100, to pick a reasonable number (at least, one when I will be long dead so will suffer no personal consequences!).
I hate to speculate as to whether there will ever be any areas where we will always remain “on top”. As a materialist, I can’t think of any offhand.
August 29th, 2007 at 10:30 pm
Bruce,
I’m in the camp that says a) computers are already smarter than humans, but only for particular domains; b) possibly never because of some problem-solving skills, Searle, etc.; and c) we’re already on the slippery slope. When I was a girl we used to say “AI is the set of things that doesn’t have another name already” ( e.g. machine vision used to be AI but now is its own field; robotics used to be AI but now is its own field, rinse, repeat).
August 29th, 2007 at 10:45 pm
Hi Bruce,
That is a tricky question because computers are already better at some things, while horrible at others. Plus, people themselves might be a moving target as intelligent systems improve and our interactions with these systems become more tightly coupled. Nevertheless, I don’t see a “singularity” by 2020 and beyond that it is impossible to predict anything in my opinion.
August 30th, 2007 at 3:46 am
The question (in the form of “surpass”) presumes that intelligence is a unidimensional scale rather than a complex multidimensional space. In some domains (e.g., limited areas of expertise like proving theorems in a specific class, oil exploration, etc.) AI performs better than humans according to certain metrics. However, in other domains (e.g., basic science, acting as a teacher) it may never exceed human levels of performance. It is also important to distinguish between “performance of a task” and “intelligence”, since we have no well-accepted theory of intelligence that could be applied to machines, much less non-human animals.
August 30th, 2007 at 10:09 am
Well, I seem to be in a minority on the 2010 to 2020 prediction, but let us consider how things might pan out if the Neo Con orgy got a green light to invade Iran.
A war unlike any other will erupt in the Middle East, spreading through Asia and the Caucasus like a plague.
Western-friendly regimes will break apart under civil war, liberating weapons and all the technologies of modern nations.
Your next administration will then have to create an almost omniscient super-hub, which will need to analyse all internet traffic, decoding and comparing all cultural idioms and languages, and profiling as many web users as would be possible with the technology.
These times would make the Manhattan project look like a card game in a rest home.
In such a climate, nothing done, said or thought could be allowed to pass unchecked. Without so obvious a separation as a Nation State, or geographical boundaries, war would be taken to the very heart of our species.
Our minds.
(Stop me if I start to sound like David Icke at ANY point, please!)
Implants and super computation would be the new weapons in a technological war which would be more intense than all our past tantrums put together.
And you can be sure development would be like nothing else in our history.
Of course, we might just go for a slightly more relaxed approach.
Then, of course, I would agree with the majority, on 2030 to 2050.
August 30th, 2007 at 6:04 pm
Bruce, I’ve not been terribly active on your many web sites for futuristic sciences, but today I am blighting you with two preposterous outbursts at once.
The thing I wished to add to this thread, after seeing the general deconstruction of the idea of human-level AI, was a qualification of my own view of AI and artificial consciousness.
The main barrier to human-equivalent AI is the fact that machines do not carry an awareness that their actions will have reactions on a physical plane, which will reflect on their quality of life, or indeed survival.
If you could simulate general function-altering centres within a robot that could act on its sense of wellbeing and deeper motivations, in the same way as our own hierarchy of internal organs with their complex webs of hormone release and nervous connections, then and only then could we create a self-aware machine with sensations based on outside influences and internal modifications.
A poor stand-up comedian robot, for example, would deliver terrible one-liners and feel nothing, outside of the electric currents within its circuitry. Much like Jackie Mason.
If it is born with the idea of becoming a great robot stand-up comedian, and channels its life experiences to that end, laying down memory tracks from its moment-to-moment decision-making apparatus, drawing strength from good decisions made and mortal terror from the bad ones, then perhaps we might have the first “funnier than human” robots.
All of which leads me back to the same point I always reach with AI.
It will be different.
In fact, it may be a total waste of time to worry about whether our machines experience consciousness as we do.
It could be seen a bit like the idea of using human feet as landing gear on jet planes or space shuttles.
I feel a fool for stating this, and I could be talking complete crap, but if we wish to simulate human thought and emotions in our next wave of computers, then they are going to have to be based on pretty much all of our makeup.
But surely we aren’t talking of this? I guess everyone has their own idea of AI, which is why there are so many differing views. We can take a non-biological mass of circuitry and regulate its function within boundaries which would mimic our culture, but that is a simulation.
Would it feel fear in the same way? Would it be excited by coming up with great ideas? Or smug after answering particularly difficult mathematical problems? Our emotions are eminently reducible to our survival, physically and culturally. One has to equate this to our future breed of machines.
But a Turing machine armed with a reboot program and plugged in at the mains is never going to experience life as we do.
They are more like brains in jars.
August 31st, 2007 at 7:46 am
Hey Bruce,
I see a few issues that push back the timeframe for AGI surpassing human-level intelligence, compared with what the most optimistic proponents might think.
1) Although adequate hardware will probably be available for the realtime RUNNING of a human-level mind before 2020, the DEVELOPMENT of said mind will, as I see it, involve the evolution of artificial brains in virtual environments. This will be much more computationally intensive than the mere running of a human-equivalent mind.
2) The need to rely on evolutionary development can, to a limited extent, be lessened by handcrafting substructures/algorithms, either from scratch or inspired by biological neural systems, but developing and integrating useful, WORKING subcomponents for a created mind is a highly iterative process that requires a LOT of work by a lot of people.
3) Issues regarding funding for researchers/equipment will probably limit the pace with which work can progress towards AGI, affecting both 1) and 2).
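The cost argument in point 1) can be sketched as a back-of-the-envelope calculation: evolving a mind in simulation multiplies the cost of running one mind by the population size, the number of generations, and the fraction of a lifetime each candidate must be run to score its fitness. All the numbers below are hypothetical illustrations, not figures from the reply.

```python
def evolution_compute_multiplier(population, generations, eval_fraction):
    """How many times the compute of one running mind an evolutionary
    search needs: every candidate in every generation must be simulated
    for some fraction of a lifetime to evaluate its fitness."""
    return population * generations * eval_fraction

# Hypothetical search parameters: 1,000 candidate brains, 10,000
# generations, each candidate evaluated for 1% of a simulated lifetime.
multiplier = evolution_compute_multiplier(1_000, 10_000, 0.01)
print(f"~{multiplier:,.0f}x the compute of running a single mind")
```

Even with these modest guesses the search needs roughly a hundred thousand times the compute of a single running mind, which is why the hardware threshold for development lags well behind the threshold for mere execution.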
The above issues lead me to believe that AGI will not be a reality before 2030. On the other hand, I feel fairly certain that it WILL be attained in the (technologically speaking) vast time-span of the following 20 years.
[X] 2030-50
August 31st, 2007 at 8:12 pm
[...] Bruce Klein, Outreach Director for the Singularity Institute, recently asked a question as far and wide as he could: “When will AI surpass human-level intelligence?” 50% of the respondents estimated between now and 2050. This post is directed towards that group. If AI is possible before 2050, then that means that intelligence can be ported to a nonbiological substrate, and that quickly, human intelligence will be portable to that substrate as well. That unlocks the possibility of us accelerating our thinking speed, and experiencing much more in much less time. We could experience centuries of time in a single day. This helps us envision how much we have to lose if our species is snuffed out by a technological disaster in the time between now and then. [...]
September 3rd, 2007 at 9:32 am
[ x ] Other: Although it seems AI can surpass human levels, I think it would rather end up being a complementary system to humans. It is doing that even today; e.g. at search, Google is better than any human. There will always be things at which humans are better; for many things computers will be “faster”, but not “creative”.
September 3rd, 2007 at 9:34 am
[ x ] Other: In certain fields, human cognitive performance has already been surpassed. But it seems to me that some aspects of human intelligence will probably never be surpassed by a technical system…
September 3rd, 2007 at 9:34 am
Hi Bruce,
I strongly think between 2030-50. The reason is that it is just a matter of the current limitations of software. We have had fast-enough computers for 10 years now. The doubling time of software, as asserted by Kurzweil, is 6 years. I concur. I have been professionally writing software for 26 years now and have witnessed the difference between the doubling times of software (6 years) and hardware (18 months). Software is still hand-crafted like 19th-century cabinetry. But tools and languages and libraries and models are improving steadily.
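The gap between those two doubling times compounds dramatically, which is a quick sketch worth making explicit. The 6-year and 18-month figures come from the reply above; the choice of 26 years simply mirrors the commenter's span of professional experience.

```python
def growth_factor(years, doubling_time_years):
    """Capability multiplier after `years` of steady exponential doubling."""
    return 2 ** (years / doubling_time_years)

years = 26  # the commenter's span of professional experience
hardware = growth_factor(years, 1.5)  # doubling every 18 months: ~165,000x
software = growth_factor(years, 6.0)  # doubling every 6 years:   ~20x
print(f"hardware: {hardware:,.0f}x   software: {software:,.0f}x")
```

Over the same 26 years, hardware improves by roughly five orders of magnitude while software improves by barely more than one, which is why software looks like the binding constraint in this argument.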
September 3rd, 2007 at 7:37 pm
Seems like the wrong way to frame the question. In some ways computers have already surpassed human-level intelligence (e.g., Deep Blue). In other ways, we’ll need a fundamentally different approach to try to surpass people’s skills (e.g., empathy is a kind of intelligent behavior that computers are not even on the road to achieving).
September 27th, 2007 at 2:28 am
Bruce,
While I’m interested in AI, my interest in it is as an augmentation to our present intellectual abilities. At the point where we can make artificial systems that have even modest abstract reasoning abilities I would assume that we will also be able to integrate them with ourselves, and will wish to do so. (I take the present general level of this to be the computer and keyboard, very modest but the interface is not too significantly exceeded by the computer’s abilities.) My guess then is that as our ability to create AI-like devices increases so will our dependence on them, and our sense of identity will shift accordingly.
Our intellectual abilities are already strongly augmented by our environment, (so in that sense I would say that to a presently modest extent we are AI) and as that environment becomes increasingly virtual I look forward to a commensurate increase in our abilities.
November 19th, 2007 at 11:41 pm
[...] So, how is that going to come about? A few years ago, I went and asked a bunch of colleagues in the field of AI to describe that exact scenario, and I said, “When do you think this is going to happen?” There was actually very broad consensus, within a hundred years. Broadly, I would say it was between 30 and 70 years, with the outliers being at a hundred. (Bruce Klein has since published another survey on the Singularity Institute website that you could go and check out.) Then I asked a follow-up question: What will be the major milestones in approaches along the way, given that you are so confident that this is going to happen. At that point people said, “I have no idea.” And I said, “Well, do you think that your work is going to be one of the critical milestones along the way?” And they generally said, “Probably not.” The broad consensus from people within the field was that something major was going to happen within this century, most of whom felt that their work was probably not actually going to be on the critical path when all is said and done. [...]