The Game Friday, Aug 3 2007 

Existential Risks: Serious Business. Thursday, Aug 2 2007 

Update: it looks like this whole thing may be nonsense, i.e., Pianka never actually said what is asserted below. I’ll look into it in a bit more detail tomorrow and modify this post accordingly.

Today, to my dismay, I rediscovered this, a Dr. Pianka who publicly proposed the use of the airborne Ebola virus to kill off 90% of the human race. This man, a member of the Texas Academy of Science and chairman of its Environmental Science Section, received a standing ovation from students and Academy members when he gave a talk on the matter at the University of Texas at Arlington. As Eliezer Yudkowsky points out at Overcoming Bias, it’s silly to be “surprised” by something like this: given the professor’s knowledge set and lax sense of ethics, exterminating 90% of the human population with a flesh-eating virus must seem like a great solution to the planet’s problems, or he wouldn’t be talking about it in the first place.

The problem is the rest of us. We don’t want to have our internal organs turned to mush by a fatal virus. This issue reminds me of an acronym that Phillippe Van Nedervelde used in his talk on existential risks and the Lifeboat Foundation at Transvision 2007 — SIMD — Single Individual, Massively Destructive. He also pointed to the Unabomber, and showed a picture of him when he was a math teacher, looking just like a typical professor. There is a risk from radical, out-of-control nutcases like Al Qaeda, yes, but these people tend to have problems infiltrating truly relevant organizations or acquiring the complex knowledge necessary to do real damage.

In the case of AI and synthetic biology, the biggest risks will come from smart people who have a grudge against society, and even those with noble motives but insufficient caution or sense of professional ethics. After all, if it were possible for humanity to destroy itself, it would have done so a long time ago… right? Wrong. Selection effects ensure that we will always find ourselves in a civilization that hasn’t previously destroyed itself.

In the comments section of a blog I was reading yesterday, someone had this to say:

Much of the problem faced by those trying to tell us about existential risks is the fact that we’ve been bitten too hard and too long by wolf-criers for the past six years. As a result, ANYONE who talks about dangers is likely to get the cold shoulder, regardless of whether 1) they are sincere as opposed to jockeying for power, or 2) whether the risk they’re talking about is actually real or not.

This does seem true, and admonitions about global warming may be partially to blame, as well as terrorist fearmongering (some of which may also, in fact, be well-founded). Anthropogenic global warming is a reality, yes, but I don’t think it’s an existential risk, especially not in the next few decades. Bombardment with warnings on anthropogenic climate change, as well as terrorist attacks, is desensitizing the populace to warnings of existential risk. I’m not saying such warnings are a bad thing, just pointing out the fact that they’re desensitizing us. The fact that the most severe risks have to do with technologies just barely beginning to roll off the assembly lines — advanced AI and robotics, and synthetic biology — doesn’t help matters either.

But, as always, you, the reader, can refuse to be a part of the problem. You can take existential risk seriously, and refuse to write off those who discuss these dangers, like Martin Rees and Stephen Hawking, as “Doomsayers”. For most of the past 10,000 years, catastrophic technological risk has been impossible. Even global thermonuclear war would be more likely to kill off 10% or 20% of the population rather than 99% or 100%. And if you care about the long-term future of humankind as a whole, the difference between killing a billion and killing everyone matters a hell of a lot.

(Other good posts on this domain: Concept Funneling, Rapture of the Nerds, Not.)

First Reference to RSI in Fiction? Tuesday, Jul 31 2007 

What follows is possibly the first reference to AI/robotic recursive self-improvement in fiction, from all the way back in 1935. Quote from Technovelgy:

In this story of a future Earth, humanity had all of its needs met by a device - an intelligent machine.

“You have forgotten your history, and you have forgotten the history of the Machine, humans…”

“On the planet Dwranl, of the star you know as Sirius, a great race lived, and they were not too unlike you humans. …they attained their goal of the machine that could think. And because it could think, they made several and put them to work, largely on scientific problems, and one of the obvious problems was how to make a better machine which could think.

The machines had logic, and they could think constantly, and because of their construction never forgot anything they thought it well to remember. So the machine which had been set the task of making a better machine advanced slowly, and as it improved itself, it advanced more and more rapidly. The Machine which came to Earth is that machine.”

From The Machine, by John W. Campbell.
Published by Astounding Science Fiction in 1935.

Looks like the Singularity idea is not so new after all.
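The accelerating pattern Campbell describes (slow at first, then faster and faster as the machine improves itself) is just compounding growth. Here is a toy sketch of that dynamic; the rate and step count are made-up numbers, purely for illustration:

```python
# Toy model of recursive self-improvement: the rate of improvement
# is proportional to current capability, so progress compounds.
def improve(capability, steps, rate=0.1):
    history = [capability]
    for _ in range(steps):
        capability += rate * capability  # a better machine builds a better machine
        history.append(capability)
    return history

growth = improve(1.0, 50)
# Early steps gain little; late steps gain far more, as in Campbell's story.
print(growth[1] - growth[0])    # gain on the first step
print(growth[-1] - growth[-2])  # gain on the last step
```

The same compounding loop, verbal in 1935, is the core of the modern “hard takeoff” argument.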

AI and Effective Sagacity, by Mitchell Howe Tuesday, Jul 31 2007 

In the field of AI, the supergoal is to create an information processing
system that does something truly significant. (Whether this something is
good, bad, of financial worth to a few, of world-ending importance to many,
etc., depends upon who is doing the programming and how successful they are
at it.) The seemingly essential subgoal that defines AI research is to
create a system that can both learn and improve itself in a self-reinforcing
manner to eventually meet the end objective of significant action. Some
minimal yet critical combination of software elegance and hardware
capability is required to get to this point.

Discussion often lingers on the questions of how near to the capacity of the
human brain such a system would need to be in order to meet this goal, or
even what fraction of the human brain might be required. I believe such
questions are largely meaningless because they lose sight of the only
supergoal - that such a system sustainably learn and improve, leading to
eventual significant action.

Consider this in light of the debate about whether a person with a 50 IQ can
ever hope to achieve the results of someone with a 100 IQ. Within the wide
range of IQ scores held by capable adults, there are many with high IQs who
have failed to contribute anything insightful or even useful, just as there
are many with lower IQs who have come up with world-changing ideas and
become leaders in business. (While far from scientific, an issue of TIME
from early this year had fun with this idea.)
The ability to solve simple problems and make logical conclusions from given
data, as measured by IQ scores, does not directly correlate to the AI
supergoal of doing something truly significant. Somebody may know how to
design a better mousetrap yet never do anything with this knowledge. We
would hope that an AI not likewise ‘fizzle’ (unless its better mousetrap
design was a grey goo that would wipe out all mammalian life).

I believe that a large part of the surprisingly common discord between IQ
scores and societal significance can be explained by my simple theory of
‘Effective Sagacity’. It begins with the idea that there are various levels
of thought experienced in the human mind, and that only the time spent at
the highest level contributes to genuinely productive intelligence. I
prefer to identify just two levels of thought with the disclaimer that there
is no hard line between them. I like to call them Fidget and Sage.

Fidget is the level of thought that involves making numerous small, trivial
decisions and enacting any routine physical actions these decisions require.
Many activities, once learned, become Fidgetized. Card shuffling and
dealing. Assembly line tasks. Simple arithmetic. Brushing your teeth.
You know that they are Fidgetized because you can think about something else
entirely while doing them. But you don’t always think about something else,
because Fidget is often capable of bringing the Sage mind along behind it in
lock-step. (I’ll talk more about the interplay between these two in a second.)
Fidget cannot intentionally change your life, but it is very useful and
powerful nonetheless.

Sage is the level of thought that involves conscious consideration and
complex decision-making. It is the level you are at when you not only hear
what your professor is saying, but also think about it, relate it to your
model of the universe, and implement it accordingly - *learning*. Sage is
responsible for pondering the deeper questions of life, sustaining
meaningful conversation, and making conclusions about your identity. It was
hopefully the level you were at if/when you decided on a career, spouse,
etc. Sage is not all-powerful, though. For starters, it has very low
endurance when most actively engaged, like someone who can walk for miles
but can barely run a lap around the track. It is also easily distracted by
inconsequential tasks, like a dog happily entertained for hours by a simple
game of catch. In fact, given the choice between running a lap and
repeatedly grabbing a stick in its mouth, Sage will usually bring you a
drool-covered stick.

Because of the complementary talents of Fidget and Sage, they have a very
friendly relationship. People are often most satisfied when both are
simultaneously occupied at a low-to-middle stress level. Solitaire on the
computer is mostly a thoughtless exercise of mouse clicks under Fidget
control, with occasional input from Sage when an actual strategic decision
needs to be made. Neither mind is working terribly hard but both are
occupied and satisfied - a condition of well-being some researchers have
called “flow”. Fidget is just as happy to spend hours throwing a stick as
Sage is to chase it and bring it back — the seductive addiction of video
games and jigsaw puzzles is explained.

The poor endurance of Sage, and its desire to rest at an optimal
lower-stress activity level, also shed light on many kinds of
procrastination, since the thing you put off doing is often some special
case that requires a higher Sage activity level. “I can’t study anymore for
my final. I must go for a swim and work on my tan.” “I can’t finish
writing about levels of thought right now. I must play Diablo II for a
couple of hours.”

(Five hours later)

There are times though, when one level of thought operates almost
independently from the other. If you have ever been putting staples in
hundreds of documents when you realized that you had run out of staples a dozen
slams of the stapler ago, you know what I am talking about. The fully
Fidgetized task did not require the attention of Sage, who found something
else to do and failed to notice and report the absence of staples. It is
either called “daydreaming” or “spacing out”, depending on whether Sage was
meandering through the park or asleep on the bench when it was discovered.
Driving is an activity that unfortunately lends itself to inappropriate
Fidgetization. While first learning to drive, few can really think about
much else besides driving, but over time the procedures become more routine.
Many, many traffic accidents have occurred because people allowed Sage to
leave driving completely up to Fidget, who does not react promptly when
something unexpected occurs. Perhaps Sage was talking to his stock broker
on the cell phone, or perhaps just carrying on an imaginary conversation
with an ex-lover who would be oh-so jealous about seeing him with so-and-so
behind the truck that just stopped suddenly in front of -WHAM!-. (I mean,
honestly, there are few excusable reasons to rear-end someone.)

Sage can also be deliberately put out to pasture, and this is frequently
done when Fidget is busy and can’t play. Many drivers and workers in
repetitive jobs either consciously or unconsciously silence Sage by
listening to music - an activity that for many gets Sage absently swaying to
the beat. (This is not always the case when listening to music, but a use to
which it is frequently put.)

Even if Fidget is not busy, Sage can be intentionally suppressed. For some,
like angst-ridden teenagers, conversations with Sage may be so disturbing
that loud music is the best way to drown them out. For others, chatting
with Sage may simply be dull and unsatisfying. Alcohol and marijuana are
known Sage-suppressants. TV offers many levels of basic thought occupation
catered mostly to minds ranging from the “moronic” to the “typical
American” - which is why many noticeably intelligent people have just one or
two favorite shows and renounce the rest as a worthless morass of glandular
titillation.

So what do I mean by “Effective Sagacity”? Well, by now it should be
obvious that humans, on average, spend very little time with Sage hard at
work. Sage is usually engaged in trivial games with Fidget, deliberately
distracted while Fidget is busy, or intentionally suppressed because of
boring or uncomfortable mental dialogue. It may even be that Sage, when
allowed to slack off so much, becomes even more out of shape and incapable
of running laps. (I reluctantly make this conclusion knowing that I give
ammunition to those who deride mine and subsequent generations as having no
attention span thanks to today’s ubiquitous entertainment technology.) The
problem is, high-level Sage-thought is the only kind that fosters true
learning, creativity, experimentation, etc. Therefore, even the most
high-IQ human may never produce anything new or useful to society if she is
unable or unwilling to regularly put her lanky-but-lazy Sage through its
paces. The low-IQ underdog may climb to the top of his field because his
awkward-but-fit Sage is continually running marathons. The formula is as
follows:

**The amount previously invested and currently spent in highest-level
thought combine to form one’s “Effective Sagacity.” In the end, this is the
*only* measurement of mental capacity an AI researcher ought to be
interested in.**

Note that I did not say that Effective Sagacity was the proportion of high
Sage thought to other thought, nor did I say that it was the average height
of one’s thoughts. Only highest-level ‘Sage’ thoughts count. Only thoughts
already completed (which by definition have enriched the mind) or currently
undertaken count. This means that a mind too unsophisticated to think any
deep thoughts will automatically be disqualified from having a high
Effective Sagacity. It also means that a high IQ - the mere potential to
think really big thoughts - is meaningless.

When we talk about AI, it must be said that a self-improving seed
intelligence has the potential to have an Effective Sagacity score
completely off the charts compared to humans. This is fine. If, due to
faster-than-neuron circuitry and clever software, the AI thinks through the
equivalent of 1,000 human years of high Sage thought in just two weeks, the
scale is not broken - just embarrassing to humans. It may also be that this
same AI is thinking thoughts of far higher Sage than humans are capable of.
This is more of a stretch for the Effective Sagacity scale, but if such is
demonstrably the case then the machine is already a superintelligence that
is probably doing something very significant. Hope it’s friendly.
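The compression described above (1,000 human years of high Sage thought in two weeks) implies a concrete speedup factor, which is easy to check with back-of-the-envelope arithmetic:

```python
# Speedup implied by doing 1,000 years of high-level thought in two weeks.
human_years = 1_000
days_of_thought = human_years * 365
weeks_available = 2
speedup = days_of_thought / (weeks_available * 7)
print(round(speedup))  # roughly 26,000x a human thinking full-time
```

On the Effective Sagacity scale this is a quantitative shift, not a qualitative one, which is the essay’s point: the scale only strains when the *height* of the thoughts, not their quantity, exceeds the human range.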

An AI researcher, then, should also take heart in the knowledge that most of
the human mind’s activity may not need to be replicated in order to create a
machine that thinks high Sage thoughts. Others have already stated well the
reality of the human mind’s origins and its preoccupation with biological
drives. These same forces undoubtedly worked in some way that I do not
fully understand to create the range of generally low-endurance Sage most of
us rely upon to learn and create. An artificial intelligence would not only
be free of the bio-burdens of survival, but also of the human limitations on
sustained high-level thought. It may not be necessary to come even close to
matching human neural capacity in silicon, not only because so much of the
brain’s body-minded tasks need not be wired for, but because the primary
thought tasks that are programmed will be consistently carried out. If a
software engineer spends just 30 minutes a day actually entering code, she
is probably not spending the other 7.5 hours thinking about that code, but
rather some 2.5 hours thinking about the code, 2 hours thinking about food,
sex, or social status, and 2 hours “spaced out” or otherwise incapacitated
by Sage lazily chasing down or soaking up trivial thoughts of some kind or
other. An AI should be able to tweak strongly in favor of the on-target
thought.
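Using the (admittedly hypothetical) time budget from the paragraph above, the on-target fraction of a human workday is easy to tally, and it shows how much headroom an AI that keeps Sage fully engaged would have:

```python
# Time budget from the essay, in hours of an 8-hour day (illustrative figures).
workday = 8.0
on_target = 0.5 + 2.5   # entering code + actively thinking about the code
off_target = 2.0 + 2.0  # biological drives + "spaced out"
# (the essay's figures leave about an hour unaccounted for; call it off-target)
human_fraction = on_target / workday
print(human_fraction)   # 0.375 - just over a third of the day on-target
```

Even granting these rough numbers, an AI that merely held its on-target fraction near 1.0 would close most of the gap before any raw-speed advantage is counted.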

It is possible that this conclusion is wrong; it could be that there is some
fundamental limitation inherent in the brain’s level of computational capacity
that makes it possible to learn effectively for short periods of time but
impossible to do it for weeks on end - but I doubt it. It could also be
that an AI would have its own crippling correlates to human Fidget
activities - exhaustive memory or data-stream management, perhaps. These
Fidget distractions could easily demand so much attention that little
capacity is left for Sage thought. (This metaphor may very crudely apply to
Ben Goertzel’s early incarnation of Webmind.) More efficient coding and
more powerful hardware seem very likely to overcome this potential
bottleneck soon, however.

All these happy conclusions seem to support the view of a hard, fast AI
takeoff sooner rather than later. I’m all too happy to stand by that, but
the Effective Sagacity view suggests an additional hurdle for a growing
seed AI - the limits of human knowledge obtained thus far. A highly
Sagacious AI would be very adept at learning new material, at internalizing
input to create a more accurate model of the universe, and using this model
to produce insightful output. The problem potentially arises after the
young AI has devoured all available texts and treatises on computer science
along with all examples of program code - and perhaps managed to make only
modest improvement on its own design. Further progress could be very slow
without additional instructional materials. Fortunately, the truly
Sagacious AI could also effectively find its way out of this cul-de-sac of
human thought. It could do so the same way outstanding scientists do today:
by identifying the limits of current understanding and coming up with the
right questions to ask in order to expand those limits. The AI could either
come up with great experiments to advance human knowledge, or, more
efficiently in the software field, create and perform experiments on its
own. Even if the AI is -merely- capable of directing humans in bold new
experiments, it has already done something truly significant. This would
also increase the likelihood that it would continue to be capable of
improvement and further ultimate significance.

The Effective Sagacity view suggests that the goal of AI is simpler than it
is often made out to be. Not only does AI not require replication of the
human brain, it should not prove as susceptible to subtle weaknesses that
sap the capacity of even the most brilliant humans to sustain high-level
thought. It would be naive, however, to suggest that creating an AI is a
simple task. Coding and wiring for a truly significant new intelligence
demands both daring creativity and enviable perseverance. It will require
thinkers of the highest Sagacity.

AcceleratingFuture.com Updated Tuesday, Jul 31 2007 

The title page of this domain has been simplified to make it more interesting.

Visualizing Power in Watts Tuesday, Jul 31 2007 

Below is a list containing various power values and the quantity of water they can boil. Click for the larger version. For the water, the initial temperature is approximately that of the ocean’s surface, 20 °C. The text version may be downloaded here.

Source: Wikipedia - Orders of magnitude (power).
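The relationship behind the chart is simple: the energy needed to raise water from 20 °C to its boiling point fixes how many kilograms a given power can boil each second. A rough sketch (this assumes “boil” means heating to 100 °C rather than fully vaporizing, and ignores all losses):

```python
# Mass of water a given power can bring from 20 degC to 100 degC each second.
SPECIFIC_HEAT = 4186.0   # J/(kg*K), specific heat of liquid water
DELTA_T = 100.0 - 20.0   # K, from ocean-surface temperature to boiling

def kg_boiled_per_second(watts):
    """Kilograms of 20 degC water heated to boiling per second, ignoring losses."""
    return watts / (SPECIFIC_HEAT * DELTA_T)

# A typical 1.5 kW electric kettle:
print(kg_boiled_per_second(1500))  # about 4.5 g of water per second
```

Including the latent heat of vaporization (about 2.26 MJ/kg) would shrink these figures roughly eightfold if “boil” were taken to mean boiling the water away entirely.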

Transvision 2007 Pictures Monday, Jul 30 2007 

Click above to see my images (and a few by others) of the Transvision 2007 conference, held last week in Chicago. Regrettably, I left my camera charger at home so I only caught the first half of the conference. George Dvorsky posted his photos here. There is also a video of Ray Kurzweil’s acceptance speech for the H.G. Wells award, given each year to an outstanding transhumanist. Previous winners include Aubrey de Grey, Ramez Naam, and Charlie Stross.
