Why is the Singularity Institute starting out by focusing
its efforts on Artificial Intelligence? We aren't limited
to AI; if the Singularity movement (and that movement's support
for SIAI) grows enough to support several projects on a firm
basis, we could branch out into neurotechnology, brain-computer
interfaces, or other more expensive research areas. We have,
however, picked AI as the best place to start.
Explaining this choice requires considering the question from an activist perspective, rather than a futurist perspective, as discussed in the Quick Answer on Activism. Thus, we did not pick AI by asking "Is AI likely to give birth to the Singularity?", or "Which of these several plausible technologies is most likely to give birth to the Singularity?", or even "Which of these several technologies has the best combination of speed to the Singularity, probability of reaching the Singularity, and probable integrity of the resulting Singularity?"
What matters is the comparative degree to which well-directed actions can affect the speed, likelihood, and integrity of a Singularity technology, and the degree to which this improvement to that particular technology is likely to affect the predicted speed, likelihood, and integrity of the Singularity.
Usually it isn't necessary to keep track of all of those factors simultaneously, but they are individually important at one point or another in the decision process.
So why do we believe that the improvement a moderately-sized effort can make to the speed, likelihood, and integrity of Artificial Intelligence is likely to bring the overall greatest improvement to the predicted speed, likelihood, and integrity of the Singularity?
Here are some of the factors involved:
- According to our best current analysis, Artificial Intelligence does seem to be in the lead. This means that rather than needing to accelerate another technology past AI in order to accelerate the Singularity itself, any acceleration of Singularity AI accelerates the Singularity. It also means that improvements to the integrity of AI (i.e., Friendly AI) have the most direct effect on the integrity of the Singularity.
- The idea that AI is "in the lead" - with respect to when future progress will cross the Singularity line - may seem strange given the current slowness of AI. See What is Seed AI? for some of the reasons why we think AI may move quickly once it gets started.
- By comparison with, e.g., neurotechnology, Artificial Intelligence projects seem likely to be easier to start up using few resources.
- The fewer resources required to start up the project, the earlier the project can get started, and the more likely the project is to get started.
- The earlier there are visible results - not project completion, but the first visible results - the sooner the research project's results can begin attracting additional funding. Seed AI is also more likely to provide a small spinoff application that could be licensed (nonprofits are allowed to do this, as long as the resulting revenues go back into the nonprofit purpose).
- As far as we can tell, right now we're the only group in the world that's put in any serious effort toward developing a workable theory of Friendly AI. This shouldn't be taken as a criticism of those other groups; we're trying for a Singularity, they're not. But it does mean that our efforts can make a major difference to the integrity of a Singularity-based AI.
- We think it would be a good idea to make sure that, at any given time, the most advanced AI project on the Singularity pathway - the AI project closest to crossing the line into recursive self-improvement or real AI - is one that has put a lot of effort into Friendly AI. It may not be possible to predict in advance where this line becomes crossable.
- Many people wistfully prefer that AI stay in Pandora's box. This isn't one of our reasons; in our experience, this preference usually turns out to be grounded in inapplicable human emotions originally developed to deal with feuding tribes, rather than in a principled consideration of whether a human or an AI is more likely to stay sane through recursive self-enhancement. Still, even from that wistful perspective, it may be worth trying to advance Friendly AI - even if that also advances AI - because the improved outlook if AI comes first is worth whatever dislike attaches to the prospect that AI will come first.
- According to Friendly AI theory as we currently understand it, it's important to accumulate as much Friendly AI content as possible before recursive self-improvement starts taking off. Even if taking the last step to fully recursive self-improvement only becomes technologically possible 20 years in the future (which we don't think is the case, but anyway...), it may still be a big advantage to have spent the past 20 years working on Friendly AI in dumber-than-human AIs.
- According to Friendly AI theory as we currently understand it, there should be essentially no difference between a successful Singularity that starts with a human and a successful Singularity that starts with an AI.
- This separates the issue of how likely we are to succeed in protecting the integrity of the Singularity in each case, from the question of whether a success in one case is as good as a success in the other. For more about the specific issues here, see our section on Friendly AI.
- It's true that we don't see any way to eliminate the risk that the Singularity doesn't happen or that the Singularity "goes wrong" in some way. But we do not consider any compromise of the Singularity's integrity in the case of success to be acceptable. The Singularity only happens once, and in our view, anything less than a complete win - the best possible long-term outcome - is a loss. If we didn't think it was possible to achieve a complete win through AI, we would not regard AI as an acceptable Singularity technology (although this would create some serious strategic issues; see the next items).
- AI may have enough of a technological advantage over other Singularity technologies commonly proposed as alternatives to AI, particularly uploading, that any preferences in the issue are moot: if no realistic amount of effort is likely to accelerate another technology ahead of AI, there is no real choice to make.
- For example, uploading would require a very mature nanotechnology (implying nanocomputers with enormously more computing power than the human brain) and a tremendously advanced understanding of cognitive science. Thus no amount of effort is likely to accelerate uploading ahead of AI, since the required technologies for uploading are light-years beyond what it would take to create AI.
- As stated above, if a success in Friendly AI didn't promise to be at least as good as a Singularity centered on any human or group of humans, we wouldn't do anything to accelerate the arrival of AI relative to other Singularity technologies, even if this meant taking refuge in quixotic or unlikely approaches. But since we do think that a success in Friendly AI is at least as good as a success in any other Singularity technology, the shorter time to AI becomes a very important factor in our Singularity strategy.
- We know about the dreadful state of modern-day Artificial Intelligence too, believe us. Nonetheless, the very disarray of the field may represent an opportunity to bring order out of chaos; the mess in AI is reminiscent of many past scientific messes that did eventually turn out to make sense.
- Do you have a very high initial skepticism about any new outlook on AI? That's probably wise, considering the number of previous failures. But no matter how many would-be pilots crashed and burned, the Wright Brothers did take off from Kitty Hawk eventually - the past failures are a good reason for skepticism, but not an impossibility proof.
- It really isn't easy to sum up our outlook on AI in a couple of paragraphs, especially since our outlook doesn't consist of one grand new idea that is supposed to explain everything. The closest thing we have to an AI concept that fits on a T-shirt is the idea of recursively self-improving seed AI, but that's only one of the ideas involved, and perhaps not the most important one. Probably the only real way to get an appreciation for our viewpoint on AI is to start reading through "Levels of Organization in General Intelligence".
- The current movement toward human enhancement technologies is very broad-based, with no obvious critical point where a small effort can have great leverage, or where an effort can "run the last mile". This factor doesn't contribute to human enhancement being less likely than AI, but it does contribute to needing more resources to make a difference to the technological trajectory of human enhancement.
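As a toy illustration of the comparison running through the factors above - not a method we actually use, and with every number invented purely for illustration - the activist criterion can be sketched as a back-of-the-envelope calculation: for each candidate technology, estimate how much a fixed effort shifts its trajectory, weighted by how strongly that technology's trajectory determines the Singularity as a whole.

```python
# Toy sketch of the comparison described above. The scores and weights
# below are invented placeholders, not estimates anyone has published.

def marginal_impact(base, with_effort, weight_on_singularity):
    """Improvement a fixed effort buys to one technology, scaled by how
    strongly that technology's trajectory affects the overall Singularity."""
    return (with_effort - base) * weight_on_singularity

candidates = {
    # name: (baseline score, score with effort, weight on the Singularity)
    "AI":              (0.30, 0.50, 0.9),  # in the lead, cheap to start
    "neurotechnology": (0.20, 0.25, 0.5),  # expensive, broad-based field
    "uploading":       (0.05, 0.06, 0.4),  # requires mature nanotechnology
}

# Rank candidates by the improvement a fixed effort buys.
ranked = sorted(candidates.items(),
                key=lambda kv: marginal_impact(*kv[1]),
                reverse=True)
best = ranked[0][0]
print(best)  # with these invented numbers, AI ranks first
```

The point of the sketch is only that the ranking is driven by the *marginal* improvement an effort can buy, not by which technology looks most likely in the abstract.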
A more detailed exposition of the above reasons is not available at this time, but it's on our to-do list.