Creating Friendly AI [New in 2001!] (782K)
My longest work ever, Creating Friendly AI describes how to construct a benevolent, altruistic seed AI - such that the AI will remain Friendly no matter what you throw at it, from radiation bitflips to programmer mistakes; such that the AI's complete access to the underlying source code presents no threat to Friendliness; such that the AI can recover from major philosophical errors on the part of the programmers; and such that Friendliness remains stable under rapidly improving intelligence - or "at least as stable" as could be expected from any human or community of humans.
This document necessarily treads all over grounds that are the traditional preserves of philosophy, but the body of the document is almost exclusively a work of cognitive science - whichever philosophical conclusions you choose to draw are your own.
Creating Friendly AI is ©2001 by the Singularity Institute, of which I am a Research Fellow, but I am the only author so far.
See also the Friendly AI section and "What is Friendly AI?" at the Singularity Institute.
General Intelligence and Seed AI (348K)
Probably the most intelligent question I get asked about the Singularity is "Just because we have all this computing power doesn't mean we know how to use it. Can we really program an Artificial Intelligence that's smarter than human?" This page explains how.
GISAI discusses how to build a general intelligence, plus those issues particular to self-modifying or "seed" AIs.
GISAI is computer science and cognitive science, not philosophy, although some of my pages refer to it as support and background for futuristic and even philosophical issues. The current version of GISAI is ©2001 by the Singularity Institute, but as with CFAI above I am currently the sole author. The section on "Cognition" is in progress. The published sections "Paradigms" and "Mind" are complete and self-contained.
See also the Seed AI section, "What is General Intelligence?", and "What is Seed AI?" at the Singularity Institute.
Staring into the Singularity (91K)
If computing power doubles every two years, what happens when computers are doing the research?
This document contains the most concentrated future shock of all my writings.
The Singularitarian Principles 1.0 (14K)
The Principles we profess in our high quest to bring about the creation of greater-than-human intelligence: Singularity, Activism, Ultratechnology, Globalism; Apotheosis, Solidarity, Intelligence, Independence, Nonsuppression.
Commentary may be found in the Extended Edition (58K).
The Plan to Singularity (403K)
A vision of how to reach the Singularity (the creation of greater-than-human intelligence) using open-source Artificial Intelligence development to spark the creation of an AI industry. This page describes the technological timeline leading up to Singularity; discusses strategy including developmental, organizational, and memetic issues; and estimates how much it will cost to get started.
Most of this page has become obsolete, since it was written before the creation of the Singularity Institute and before "General Intelligence and Seed AI". Also, statements found here may not necessarily be representative of my current opinions.
The address given above is for a multi-file version. A single-file version is also available.
Frequently Asked Questions about the Meaning of Life (238K)
This is Ask Jeeves's answer to the question "What is the Meaning of Life?" (I mean, I knew the answer, so why not?)
This page is targeted at a non-transhumanist audience. SL3s and above may not find much to be shocked about. Also, this document is now old enough that statements found here may not necessarily be representative of my current opinions.
Bookshelf (27K)
Books which you are not permitted to not read.
Thanks to Cole Kitchen, Lord Proofreader and Supreme Catcher-of-Errors, for many helpful suggestions. (On one occasion, he emailed me a list of all the mistakes in a newly published version of a page - not even a new page, but an update of an old page - 18 hours after I uploaded it and before I announced it anywhere. I can only assume that he's an AI, or that some type of magic is involved.)
The Singularitarian mailing list (20K)
This is the mailing list to keep people apprised of efforts to reach the Singularity - a gathering place for people interested in helping out. Restricted membership.
The SL4 mailing list (11K)
This is where Singularitarians and other people with extremely high levels of future shock hang out. (List is currently fairly active, with over a hundred members.) Unrestricted membership.
Future Shock Levels (8K)
Some handy terminology. Also useful for figuring out how much future shock you can safely expose your audience to.
PGP public key for Eliezer Yudkowsky (2K)
Send me secure email using public-key cryptography.
Help annoy the NSA.
Thanks to the Singularity Institute for Artificial Intelligence, Inc. for supporting my more recent work.
Thanks to kia.net for donating hosting to the Low Beyond.