The Singularity is the technological creation of smarter-than-human
intelligence. Several technologies are often mentioned as
heading in this direction. The most commonly discussed is
probably Artificial Intelligence, but there are others:
direct brain-computer interfaces, biological augmentation
of the brain, genetic engineering, and ultra-high-resolution
scans of the brain followed by computer emulation. Some of
these technologies seem likely
to arrive much earlier than the others, but there are nonetheless
several independent technologies all heading in the direction
of the Singularity - several different technologies which,
if they reached a threshold level of sophistication, would
enable the creation of smarter-than-human intelligence.
A future that contains smarter-than-human minds is genuinely
different in a way that goes beyond the usual visions
of a future filled with bigger and better gadgets. Vernor Vinge originally
coined the term "Singularity" in observing
that, just as our model of physics breaks down when it tries
to model the singularity at the center of a black hole,
our model of the world breaks down when it tries to model
a future that contains entities smarter than human.
Human intelligence is the foundation of human technology;
all technology is ultimately the product of
intelligence. If technology can turn around and enhance intelligence,
this closes the loop, creating a positive feedback
effect. Smarter minds will be more effective at building still
smarter minds. This loop appears most clearly in the
example of an Artificial Intelligence improving its own source code, but it would
also arise, albeit initially on a slower timescale, from
humans with direct brain-computer interfaces creating the
next generation of brain-computer interfaces, or biologically
augmented humans working on an Artificial Intelligence project.
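The shape of this loop can be sketched in a few lines of code. This is a toy model only: the assumption that each generation's design ability scales linearly with its intelligence, and the particular gain per generation, are illustrative inventions, not claims from the argument above.

    # Toy model of the feedback loop: smarter minds build still smarter minds.
    # Illustrative assumption: design ability scales linearly with current
    # intelligence, with a fixed gain per generation.
    def self_improvement(intelligence=1.0, gain=0.1, generations=10):
        history = [intelligence]
        for _ in range(generations):
            intelligence += gain * intelligence  # each mind improves on itself
            history.append(intelligence)
        return history

    print(self_improvement())  # compounding growth: 1.0, 1.1, 1.21, 1.331, ...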
Some of the stronger Singularity technologies, such as Artificial
Intelligence and brain-computer interfaces, offer the possibility
of faster intelligence as well as smarter intelligence.
Ultimately, speeding up intelligence is probably comparatively
unimportant next to creating better intelligence;
nonetheless the potential differences in speed are worth
mentioning because they are so huge. Human neurons operate
by sending electrochemical signals that propagate at a top
speed of 150 meters per second along the fastest neurons.
By comparison, the speed of light is 300,000,000 meters
per second, two million times greater. Similarly, most human
neurons can spike a maximum of 200 times per second; even
this may overstate the information-processing capability
of neurons, since most modern theories of neural information-processing
call for information to be carried by the frequency of the
spike train rather than individual signals. By comparison,
speeds in modern computer chips are currently around
2 GHz - a ten millionfold difference - and still increasing
exponentially. At the very least it should be physically
possible to achieve a million-to-one speedup in thinking,
at which rate a subjective year would pass in 31 physical
seconds. At this rate the entire subjective timespan from
Socrates in ancient Greece to modern-day humanity would
pass in under twenty-two hours.
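The arithmetic behind these comparisons is easy to check. A minimal Python sketch using the rounded figures above (the 2,400-year distance back to Socrates is an assumption of the sketch, not a figure from the text):

    # Checking the speed comparisons.
    LIGHT_SPEED = 300_000_000    # meters/second (rounded)
    AXON_SPEED = 150             # meters/second, fastest myelinated neurons
    SPIKE_RATE = 200             # spikes/second, maximum firing rate
    CLOCK_RATE = 2_000_000_000   # cycles/second, a 2 GHz chip

    print(LIGHT_SPEED / AXON_SPEED)  # 2,000,000: two million times greater
    print(CLOCK_RATE / SPIKE_RATE)   # 10,000,000: a ten millionfold difference

    SPEEDUP = 1_000_000              # million-to-one speedup in thinking
    year_in_seconds = 365.25 * 24 * 3600
    print(year_in_seconds / SPEEDUP) # ~31.6: a subjective year in ~31 seconds

    years_since_socrates = 2_400     # rough figure (assumption)
    hours = years_since_socrates * year_in_seconds / SPEEDUP / 3600
    print(hours)                     # ~21: under twenty-two hours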
Humans also face an upper limit on the size of their
brains. The current estimate is that the typical human brain
contains something like a hundred billion neurons and a
hundred trillion synapses. That's an enormous amount
of sheer brute computational force by comparison with today's
computers - although if we had to write programs that ran
on 200 Hz CPUs we'd also need massive parallelism to do anything
in real time. However, in the computing industry, benchmarks
increase exponentially, typically with a doubling time of
one to two years. The classic formulation of Moore's Law says
that the number of transistors in a given area of silicon doubles
every eighteen months; today there is Moore's Law for chip
speeds, Moore's Law for computer memory, Moore's Law for
disk storage per dollar, Moore's Law for Internet connectivity,
and a dozen other variants.
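A back-of-the-envelope version of the brute-force comparison, with the caveat that counting one synaptic signal as one operation is a loose assumption of the sketch rather than anything established above:

    # Rough brute force of the brain, and growth under Moore's Law.
    SYNAPSES = 100e12    # ~a hundred trillion synapses
    SPIKE_RATE = 200     # spikes/second
    # Loose assumption: one synaptic event ~ one operation.
    print(SYNAPSES * SPIKE_RATE)  # ~2e16 synaptic events per second

    def moore_growth(years, doubling_time=1.5):
        # Capacity multiplier after the given years, doubling every 18 months.
        return 2 ** (years / doubling_time)

    print(moore_growth(10))  # ~101x over a decade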
By contrast, the entire five-million-year evolution of modern
humans from earlier primates involved a threefold increase in brain
capacity and a sixfold increase in prefrontal cortex. We
currently cannot increase our brainpower beyond this; in
fact, we gradually lose neurons as we age. (You may have
heard that humans only use 10% of their brains. Unfortunately,
this is a complete urban legend; not just unsupported, but
flatly contradicted by neuroscience.) One possible use of
broadband brain-computer interfaces would be to synchronize
neurons across human brains and see if the brains can learn
to talk to each other - computer-mediated telepathy, which
would try to bypass the problem of cracking the brain's
codes by seeing if they can be decoded by another brain.
If a sixfold increase in prefrontal brainpower was sufficient
to support the transition from primates to humans, what
could be accomplished with a clustered mind of sixty-four
humans? Or a thousand? (And before you shout "Borg!", consider
that the Borg are a pure fabrication of Hollywood scriptwriters.
We have no reason to believe that telepaths are necessarily
bad people. A telepathic society could easily be a nicer
place to live than this one.) Or if the thought of clustered
humans gives you the willies, consider the whole discussion
as being about Artificial Intelligence. Some discussions
of the Singularity suppose that the critical moment in history
is not when human-equivalent AI first comes into existence
but a few years later when the continued grinding of Moore's
Law produces AI minds twice or four times as fast as human.
This ignores the possibility that the first invention of
Artificial Intelligence will be followed by the purchase, rental, or less formal
absorption of a substantial proportion of all the computing
power on the then-current Internet - perhaps hundreds or
thousands of times as much computing power as went into
the original Artificial Intelligence.
But the real heart of the Singularity is the idea of better
intelligence or smarter minds. Humans are not
just bigger chimps; we are better chimps. This
is the hardest part of the Singularity to discuss - it's
easy to look at a neuron and a transistor and say that one
is slow and one is fast, but the mind is harder to
understand. Discussion of the Singularity tends to focus
on faster brains or bigger brains because brains are relatively
easy to argue about compared to minds; easier to visualize
and easier to describe. This doesn't mean the subject
is impossible to discuss; Section III of "Levels of Organization
in General Intelligence", on SIAI's website, does take a stab at discussing some
specific design improvements on human intelligence. But
that involves a specific theory of intelligence, which we
don't have room to go into here.
However, that smarter minds are harder to discuss than
faster brains or bigger brains does not show that smarter
minds are harder to build - deeper to ponder, certainly,
but not necessarily more intractable as a problem. It may
even be that genuine increases in smartness could be achieved
just by adding more computing power to the existing human
brain - although this is not currently known. What is known
is that going from primates to humans did not require exponential
increases in brain size or thousandfold improvements in
processing speeds. Relative to chimps, humans have threefold
larger brains, sixfold larger prefrontal areas, and 98.4%
similar DNA; given that the human genome has 3 billion
base pairs, this implies that at most twelve million bytes
of extra "software" transforms chimps into humans. And there
is no suggestion in our evolutionary history that evolution
found it more and more difficult to construct smarter and
smarter brains; if anything, hominid evolution has appeared
to speed up over time, with shorter intervals between larger
developments.
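The genome arithmetic above can be made explicit; a quick check, assuming two bits per base pair (four possible bases):

    # From 98.4% DNA similarity to "twelve million bytes of extra software".
    GENOME_BASE_PAIRS = 3_000_000_000
    SIMILARITY = 0.984           # shared between chimps and humans
    BITS_PER_BASE_PAIR = 2       # four possible bases = 2 bits each

    differing_bp = GENOME_BASE_PAIRS * (1 - SIMILARITY)
    extra_bytes = differing_bp * BITS_PER_BASE_PAIR / 8
    print(extra_bytes)           # ~12,000,000 bytes: twelve million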
But leave aside for the moment the question of how to build
smarter minds, and ask what "smarter-than-human" really
means. And as the basic definition of the Singularity points
out, this is exactly the point at which our ability to extrapolate
breaks down. We don't know because we're not that smart.
We're trying to guess what it is to be a better-than-human
guesser. Could a gathering of apes have predicted the rise
of human intelligence, or understood it if it were explained?
For that matter, could the 15th century have predicted the
20th century, let alone the 21st? Nothing has changed in
the human brain since the 15th century; if the people of
the 15th century could not predict five centuries ahead
across constant minds, what makes us think we can outguess
genuinely smarter-than-human intelligence?
Because we have a past history of people making failed predictions
one century ahead, we've learned, culturally, to distrust
such predictions - we know that ordinary human progress,
given a century in which to work, creates a gap which human
predictions cannot cross. We haven't learned this
lesson with respect to genuine improvements in intelligence
because the last genuine improvement to intelligence was
a hundred thousand years ago. But the rise of modern
humanity created a gap enormously larger than the gap between
the 15th and 20th century. That improvement in intelligence
created the entire milieu of human progress, including
all the progress between the 15th and 20th century. It
is a gap so large that on the other side we find, not failed
predictions, but no predictions at all.
Smarter-than-human intelligence, faster-than-human intelligence,
and self-improving intelligence are all interrelated. If
you're smarter, that makes it easier to figure out how to
build fast brains or improve your own mind. In turn,
being able to reshape your own mind isn't just a way of
starting up a slope of recursive self-improvement; having
full access to your own source code is, in itself, a kind
of smartness that humans don't have. Self-improvement
is far harder than optimizing code; nonetheless, a mind
with the ability to rewrite its own source code can potentially
make itself faster as well. And faster brains also
relate to smarter minds; speeding up a whole mind doesn't
make it smarter, but adding more processing power to the
cognitive processes underlying intelligence is a
different matter.
But despite the interrelation, the key moment is the rise
of smarter-than-human intelligence, rather than recursively
self-improving or faster-than-human intelligence, because
it's this that makes the future genuinely unlike the
past. That doesn't take minds a million times faster than
human, or improvement after improvement piled up along a
steep curve of recursive self-enhancement. One
mind significantly beyond the humanly possible level
would represent a full-fledged Singularity. That we
are not likely to be dealing with "only one" improvement
does not make the impact of one improvement any less.
Combine faster intelligence, smarter intelligence, and recursively
self-improving intelligence, and the result is an event
so huge that there are no metaphors left. There's
nothing remaining to compare it to.
The Singularity is beyond huge, but it can begin with something
small. If one smarter-than-human intelligence exists, that
mind will find it easier to create still smarter minds.
In this respect the dynamic of the Singularity resembles
other cases where small causes can have large effects; toppling
the first domino in a chain, starting an avalanche with
a pebble, perturbing an upright object balanced on its tip.
(Human technological civilization occupies a metastable
state in which the Singularity is an attractor; once the
system starts to flip over to the new state, the flip accelerates.)
All it takes is one technology - Artificial Intelligence,
brain-computer interfaces, or perhaps something unforeseen
- that advances to the point of creating smarter-than-human
minds. That one technological advance is the equivalent
of the first self-replicating chemical that gave rise to
life on Earth.