Books Under the Bridge

Friday, January 25, 2008

Spitting in the Eye of the Technological Singularity

The Technological Singularity is a big idea in the world of Sci-Fi. In case you don't know what the Technological Singularity is, I've identified two major definitions with Wikipedia's help:

1. The Singularity Institute for Artificial Intelligence defines it as "the technological creation of smarter-than-human intelligence."

2. Futurist Ray Kurzweil says, "The Singularity is technological change so rapid and so profound that it represents a rupture in the fabric of human history. Some would say that we cannot comprehend the Singularity, at least with our current level of understanding, and that it is impossible, therefore, to look past its 'event horizon' and make sense of what lies beyond."

The idea is that once a "smarter-than-human" AI is created, it will be able to build an AI smarter than itself, and so on, with technological change accelerating from there. It conjures images of futures filled with world-ruling robots, technologically augmented humans, and thinking computers that hold the answers to questions humans have not yet thought to ask. On its surface, it's a neat idea, and its name has a fun, science-y sound. However, besides being fun to talk about, I'm not convinced the idea has much to offer. It has a number of problems that make it implausible. Instead of the definitions listed above, I contend that the Technological Singularity is three things:

1. A Buzzword that Leverages the Mystique of Artificial Intelligence
2. Magic Masquerading as Science
3. A Crutch for Sci-Fi Writers

Now, I'll elaborate.

1. A Buzzword that Leverages the Mystique of Artificial Intelligence

Kurzweil's definition is pretty over-the-top. I don't even know what a "rupture in the fabric of human history" would be. Perhaps it's just another way to say the Singularity would be "earth-shattering," or would "change everything," etc. Anyway, I don't buy it. Technological advancement won't speed up to the levels he and others have theorized. They claim that the exponential curve of technological progress the human race has ridden throughout its existence will continue its climb for a long time to come. This just won't happen.

Here's why: true exponential curves do not last long in real life. There are hard limits in this universe, such as the speed of light or the size of an atom. Finite resources and the laws of physics slow everything down eventually. Sometimes a paradigm shift or a genius insight can move technology past a roadblock, but some roadblocks you just cannot pass ... unless you believe in magic.

Intelligence is less well understood than physics, making it ripe ground for speculation and forecasting the future (or "future casting," if you're a weatherman). However, self-driven intelligence has been an elusive animal to date. It is easy for humans to make computers do what we tell them, or even make simple decisions based on a mathematical weighting function. However, in 30+ years of AI research, we have little to show other than fancy search techniques, decision trees, and algorithms that can recognize patterns. This does not mean there will never be a breakthrough into a self-motivated AI that can make complex decisions.
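To make the "mathematical weighting function" point concrete, here's a toy sketch (the options, features, and weights are invented for illustration; no real system is this simple): the computer's "decision" is nothing more than scoring each option and picking the highest score.

```python
# A toy "decision" of the kind described above: score each option with a
# fixed weighting function and pick the best. All names and numbers here
# are made up for illustration.

def decide(options, weights):
    """Return the option whose weighted feature score is highest."""
    def score(features):
        return sum(weights[k] * v for k, v in features.items())
    return max(options, key=lambda name: score(options[name]))

options = {
    "umbrella": {"rain": 0.9, "wind": 0.2},
    "raincoat": {"rain": 0.7, "wind": 0.8},
}
weights = {"rain": 1.0, "wind": 0.5}

print(decide(options, weights))  # → raincoat
```

Notice there is no mystery here: a person with a pencil could tally the same weighted sums and reach the same "decision."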

However, I do not think the Singularity will occur as stated, even if we create such an AI. The idea is that once an artificial mind is created that contains "intelligence" to surpass human intelligence, it can then create a better artificial mind with higher intellectual function. This is daydreaming at its finest. Intelligence is no easy thing to quantify. How do we know when we have a machine that is more intelligent than its creator?

Answer: It can perform mental tasks that we cannot.

Now, I'm not talking about all the great things computers can do now, such as performing math faster, winning at chess, and drawing fancy pictures on our screens. A "smarter" computer is one that can do something that we, as humans, cannot do, something that is impossible with just a human brain and a process to follow.

As someone who knows how to program a computer, I have a deep sense that this is impossible. It seems especially absurd when you look inside a computer program and discover that if you performed the same steps as the computer, you would come out with the same answer, just taking more time. If you used a tool to automate some of the menial portions, you could do the same thing, perhaps almost as fast as the computer. You might need some training, but you could do it, too.
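As a concrete example of the "same steps, more time" argument, consider binary search, a standard algorithm chosen here just for illustration. A human with a pencil and the list written out can execute exactly these steps by hand and arrive at exactly the machine's answer:

```python
# Binary search: a procedure a human can execute by hand, step by step,
# arriving at the same answer as the computer -- just more slowly.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # a human would do this division on paper
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1          # discard the lower half
        else:
            hi = mid - 1          # discard the upper half
    return -1

primes = [2, 3, 5, 7, 11, 13, 17, 19]
print(binary_search(primes, 13))  # → 5
```

Nothing in the procedure requires a computer; the machine's only advantage is how quickly it grinds through the loop.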

What this all boils down to is tools and training. At the core of an AI is a machine that makes decisions. At the core of a person is a brain that makes decisions. With the right tools, one can be as good at a particular task as the other, or almost as good. Therefore, resting the premise of the Singularity on AI is a mistake. An AI is like a nice calculator, or someone you pay to do your homework. Doing your homework without them will take longer, but you'll still finish with plenty of time to go play.

So, when we boil down the definitions and take a clear look at them, we see that there's nothing inherently special about AI to make all this happen, and the idea of creating an AI "smarter" than what the human mind can produce is bunk. Therefore, there's no such thing as this "Singularity," this event of creating that AI.

As for the ideas of robotic armies and AI overlords, that's nothing new. We've had Attila the Hun, Hitler, and all sorts of other nasty sociopaths and their movements to deal with in the past. Dealing with these new jerks might be a little harder, but let's hope we can keep some of those technological advantages for ourselves if we are indeed stupid enough to engineer our own worst enemies.

2. Magic Masquerading as Science

As I wrote above, I believe it may be possible for a human to create an AI that can perform the same functions as a human. However, to believe that the creation of such an AI will speed our progress ever forward at increasingly breakneck rates is fantasy. For one, if a smarter AI cannot be created, the promised ramp of continually smarter AIs never materializes. Also, even a fancy AI that is as smart as a human is limited by resources. An AI needs to run on a processor with other peripheral hardware, including memory, network components, robotics, etc. All of these parts need maintenance and occasional replacement. Lastly, they need energy to function. Even if we shrink everything down to the quantum level, all of this still holds true.

Compare this to a human. Humans have a lot of the necessary hardware built in, and we self-maintain pretty well, some running for 100 years or more. We also procreate. However, resources are still necessary for us, too. There's also an argument here for biological computers, but if you're going to do that, why not just grow human brains? It comes to the same thing anyway.

It also takes time to build a machine to host the AI, and to train the AI, sort of like it takes time to raise a functioning human. Granted, the time to build a computer is much shorter than the time necessary to grow a baby, but resources become harder to procure over time, even if you start going to space to find them and trying to harness the sun for as much energy as you can get. Space travel is expensive and time-consuming, and harnessing solar energy has a long way to go. Even if you allow for big advances in these technologies, the theory behind the Singularity starts to get clunky ("Oh, we need AI, and cheap solar, and better manufacturing, and fast space travel, etc.").

So, if we take all of this into account, we see that we'll reach our hard resource limitations faster, which will kill that exponential speed-up. Getting past that is going to take some magical thinking.

3. A Crutch for Sci-Fi Writers

The Singularity has caught on in the Sci-Fi community, and we are seeing more stories set after the Singularity. I like that authors who do this ignore the whole "you can't predict what's going to happen after the Singularity!" idea. It's a stupid thing to posit because it's unverifiable and self-satisfying: it's hard to predict a lot of things, such as when we're going to get flying cars. However, just because I like their guts does not mean I think it's good form to posit some sort of Technological Singularity in fiction. It moves Sci-Fi toward Fantasy and makes the story less based in science. It's no worse than saying, "Then aliens gave us a bunch of great technology, and now we do all sorts of fantastic new things with science," but just like getting all your great tech from aliens, it feels like a cop-out, and when it gets overused, it starts to feel like a fad.

Let's Hear Your Thoughts

If you vehemently disagree, let me know. I have a feeling my view is not a popular one, especially since the idea of the Technological Singularity is an interesting one. However, I think I'm on the right track in guessing that the futurologists have thought this one out just as well as they thought out flying cars back in the fifties.


Mister Troll said...

Yeah, I agree - Kurzweil's singularity is bogus. To be fair, I could get behind it if it were presented as a possibility. (Let's envision a future world in which technology takes off at a certain point, beyond which we humans cannot comprehend it.) Kurzweil - I think - argues that his Singularity is inevitable (and close-at-hand). Sounds to me like the Rapture wrapped up in science packaging. "Oooh, look at me, I use science words, I must be right!"

Gah, he makes me angry.

On another note, Billy Goat, are you suggesting that smarter-than-human artificial intelligence is impossible? I got that feeling from your article, but maybe I misread it. If so, I think you should explain that more thoroughly. (We once had an argument about this, but let's pretend that never happened.)

Back to the singularity nonsense, I thought Vernor Vinge's "zones of thought" to be a neat way to keep the singularity idea plausible.

- Mister Troll

Mister Troll said...

Cringely wrote about the singularity some time ago.

Billy Goat said...

Trolly - Haha! I didn't realize Kurzweil riled you up so much! Yeah, he's a bit much.

As for whether I think smarter-than-human AI is impossible.... Thinking about the "Singularity" and the whole idea of "smarter and smarter AIs" really got me thinking about what "smarter" means, and that question boils down to "What is intelligence?" So, I started mentally categorizing things that computers can do better than us. The list includes mathematical calculation, searching, data storage and retrieval, performing repetitive tasks, and probably a few other things. Right now, computers aren't very good at observing the world (sensors aren't great, and our algorithms, including AI techniques such as neural nets, aren't that great yet), making intuitive leaps, or creating mathematical proofs (there has been some movement in this area, but I haven't been impressed). Let's assume that even these areas see improvements.

Now, none of these actions are things that computers can do that people can't. And theoretically, if there's an algorithm to do something, a human can understand it just as well as a computer. The only difference might be speed. Is speed or stored knowledge a measure of intelligence? If so, can a human use a tool to enhance her abilities in this sense (e.g., a Google search) and consider herself more intelligent? I'd answer that with a big "No." So, if she can use tools to do the same things an AI can do with its software utilities (i.e., tools), then the AI is not obviously more intelligent than the human.

Taking this into consideration, we can say, "Okay, there's a skill set, which you can consider your 'tools,' and there's a core decision-making process." Maybe most of "intelligence" is just a collection of skill sets? Then, when you strip an AI and a human of those collections, they are each equivalent: a core decider (ha ha) with a set of tools. That really seems to devalue intelligence, but the idea resonates with me.

So, what about our most valued thinkers, inventors, and artists, people like Einstein, Tesla, Mozart, etc.? What about the "intuitive leaps" I mentioned earlier? What about people with 190 IQ scores? My answer to these questions is that these people worked hard, focused on a particular area, and perhaps had good "search" functions (to use an AI term). They had a few breakthroughs in their lives, but they stood on the shoulders of the people who came before them, and they did not necessarily do anything that could not be reproduced by someone with 5 IQ points less than them. Also, maybe they got lucky (which correlates well with hard work).

Let me know if I need to further clarify. :) I'm looking forward to hearing your thoughts.

Mister Troll said...

Hmmm - I think I disagree on at least a few points.

"....if she can use tools to do the same thing an AI can do with its software utilities (i.e tools), then the AI is not obviously more intelligent than the human." Err... it's not obvious to me, anyways! :-) I don't see what distinguishes this from other similar statements. If I can ride a bicycle and match you in a sprint, then obviously you aren't a faster runner than I? And why are the AI's "tools" distinguished from anatomic structures in the brain (not tools?)?

I see that you are not equating speed with intelligence, but I don't find your argument compelling. I would accept it as a postulate (although I don't think it jibes with the usual ideas of what intelligence is).

This may be slightly tangential:

There are many areas in which hard work can get you very far. For example, I've heard that most chess masters agree that just about anyone -- with dedication and very hard study -- can earn a Master title at chess. While I'm not a Master (not even a titled player!), I'm good enough that I would tend to agree. Whether anyone can become a Grandmaster, that is unclear (I don't know if the Polgar experiment is relevant). But just because I could become a Master, doesn't mean that I am.

Billy Goat said...

I guess my main point is that there's nothing inherently special about AI over any other self-directed actor. I'm saying that the basis for the Technological Singularity, AI, is not the be-all-end-all that its proponents make it out to be.

To use your sprinter-bicycler example, let's say a robot was the fastest sprinter in the world, faster than any human. If an average human can keep up with him on a bicycle, they both reach their destinations at the same time. Does it really matter that the robotic sprinter could run faster? Not when it comes to the results, and not when it applies to a Running Singularity theory that puts magical significance into mechanical legs.

Also, I want to clarify something and address your question:

"And why are the AI's "tools" distinguished from anatomic structures in the brain (not tools?)?"

In my original discussion about "tools" I lumped anatomic structures in as tools (or meant to, if it wasn't clear). I was looking at the AI's computerized brain and the human brain as analogs. If you can simplify each to an actor component (the autonomous part that makes decisions), and a set of skill components, then they are functionally equivalent.

As for competitive sports, including chess, those aren't germane to the discussion. If I had Deep Blue's database and search algorithms at the touch of a key, I could beat Garry Kasparov, too. But that would be cheating (and this is why they're not germane: they focus on innate qualities and disallow external assistance of any sort). However, like I said above, I don't care how I got there. I just care that an AI wasn't necessary to the equation, thus knocking out one leg from beneath the Technological Singularity hogwash.