Research Grants
2010 Singularity Research Challenge
The Singularity Institute has launched a new challenge campaign offering unusually good philanthropic returns: greater odds of a positive Singularity and lesser odds of human extinction. The sponsors, Edwin Evans, Rolf Nelson, Henrik Jonsson, Jason Joachim, and Robert Lecnik, have generously put up $100,000 of matching funds, so every donation you make until February 28th will be matched dollar for dollar. If the campaign is successful, it will raise a full $200,000 to fund SIAI's 2010 activities.
To launch this campaign, we've put more details of our ongoing and potential work online than ever before, so you can get an overview of our projects and easily fund the proposals you support most. For more, see our grants below.
Our Goals
For almost a decade, the Singularity Institute has been asking questions about the future of human civilization: How can we benefit from increasingly powerful technology without succumbing to its risks, up to and including human extinction? What is the best way to handle artificial general intelligence (AGI): programs as smart as humans, or smarter?
Among SIAI's core aims is to continue studying "Friendly AI": AI that acts benevolently because it holds goals aligned with human values. This involves drawing on and contributing to fields like decision theory, computer science, cognitive and moral psychology, and technology forecasting.
Creating AI, especially the Friendly kind, is a difficult undertaking. We're in it for as long as it takes, but we've been doing more than laying the groundwork for Friendly AI. We've been raising the profile of AI risk and Singularity issues in academia and elsewhere, forming communities around enhancing human rationality, and researching other avenues that promise the most effective reduction of the most severe risks.
How Earmarked Donations Work
If you make a donation to the Singularity Institute, you can choose which grant proposal your donation should help to fill. Any time a grant proposal is fully funded, it goes into our "active projects" file: it becomes a project that we have enough money to fund, and that we are publicly committed to funding. (Some of the projects will go forward even without earmarked donations, with money from the general fund, but many won't; since our work is limited by how much money we have available to support skilled staff and Visiting Fellows, more money allows more total projects to go forward.)
Any remaining money allocated to partially funded grants on February 28th (at the close of the Challenge Campaign) will be returned to the general fund.
Recent Achievements
We have put together a document to inform supporters about our 2009 achievements. The bullet-point version:
- Singularity Summit 2009, which received extensive media coverage and positive reviews.
- The hiring of new employees: President Michael Vassar, Research Fellows Anna Salamon and Steve Rayhawk, Media Director Michael Anissimov, and Chief Compliance Officer Amy Willey.
- Founding of the Visiting Fellows Program, which hosted 14 researchers during the summer and continues to host Visiting Fellows on a rolling basis, including graduate students and degree-holders from Stanford, Yale, Harvard, Cambridge, and Carnegie Mellon.
- Nine presentations and papers given by SIAI researchers across four conferences, including the European Conference on Computing and Philosophy, the Asia-Pacific Conference on Computing and Philosophy, a Santa Fe Institute conference on forecasting, and the Singularity Summit.
- The founding of the Less Wrong web community, to "systematically improve on the art, craft, and science of human rationality" and provide a discussion forum for topics important to our mission. Some of the decision theory ideas generated by participants in this community are being written up for academic publication in 2010.
- Research Fellow Eliezer Yudkowsky finished his posting sequences at Less Wrong. Yudkowsky used the blogging format to write the substantive content of a book on rationality and to communicate to non-experts the kinds of concepts needed to think about intelligence as a natural process. Yudkowsky is now converting his blog sequences into the planned rationality book, which he hopes will help attract and inspire talented new allies in the effort to reduce risk.
- Throughout the summer, Eliezer Yudkowsky engaged in Friendly AI research with Marcello Herreshoff, a Stanford mathematics student who previously spent his gap year as a Research Associate for the Singularity Institute.
- In December, a subset of SIAI researchers and volunteers finished improvements to The Uncertain Future web application and officially announced it as a beta version. The Uncertain Future represents a kind of futurism that had not previously been applied to Artificial Intelligence: futurism with heavy-tailed, high-dimensional probability distributions (a minimal sketch of the idea appears after this list).
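To make "heavy-tailed probability distributions" concrete, here is a minimal, purely illustrative Monte Carlo sketch in Python. The lognormal model and every parameter value below are hypothetical choices for exposition, not the actual model or numbers behind The Uncertain Future; the real application also combines many uncertain quantities at once (the "high-dimensional" part), while this sketch shows only the heavy-tailed part with a single variable.

    # A minimal Monte Carlo sketch of heavy-tailed timeline forecasting.
    # The lognormal model and all parameters are hypothetical illustrations,
    # NOT the actual model behind The Uncertain Future.
    import math
    import random

    def sample_years_to_agi(n=100_000, median=30.0, sigma=1.0):
        """Draw hypothetical years-until-AGI from a lognormal distribution.

        A lognormal is heavy-tailed on the right: most probability mass
        sits near the median, but the long tail keeps real probability
        on much later dates, so the mean ends up above the median.
        """
        mu = math.log(median)  # a lognormal's median is exp(mu)
        return [random.lognormvariate(mu, sigma) for _ in range(n)]

    draws = sorted(sample_years_to_agi())
    p_within_20 = sum(1 for y in draws if y <= 20) / len(draws)
    year_90th = draws[int(0.9 * len(draws))]
    print(f"P(within 20 years) ~ {p_within_20:.2f}")
    print(f"90th percentile    ~ {year_90th:.0f} years out")

The point of the heavy tail is that the mean arrival time sits well above the median, so a single point estimate of "when" understates the spread of outcomes a serious forecast has to represent.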
Grant Proposals
For our future work, we have prepared the following grant proposals. Each of these requires funding from individual donors — possibly you. Select a grant below to put your money toward:
Total raised so far: $100,000, with $0 remaining
General Fund: $78,455
- Peter Platzer Popular Book Planning Project: Complete
- Peter Platzer Existential Risks Conference Grants: Fully funded, work in progress
- Academic Paper Grant: Digital Intelligences and the Evolution of Superorganisms: Fully funded, work in progress; we have now completed a working paper and are proceeding toward a journal article
- Academic Paper Grant: Machine Ethics and Superintelligence: Complete
- Academic Paper Grant: AI Risks Philanthropy: How Many Lives Can We Save per Dollar?: Fully funded, work in progress
- Academic Paper Grant: Whole Brain Emulation and Ab Initio AI Risks: an Integrated Picture: Fully funded, work in progress
- Landing Pages Grant: $7,200 donated, with $2,100 remaining
- Academic Paper Grant: Strategies for Transparency and Cooperation in AI Development: $1,000 donated, with $4,900 remaining
- Writing a Comprehensive Singularity FAQ: $970 donated, with $4,030 remaining
- Academic Paper Grant: The Coherence of Human Goals: $640 donated, with $3,760 remaining
- Academic Paper Grant: Software Minds and Endogenous Growth: $500 donated, with $3,100 remaining
- Academic Paper Grant: Containing Superintelligence: Feasibility and Strategies: $135 donated, with $4,265 remaining
- Academic Paper Grant: Existential Risk and Unknown Unknowns: $100 donated, with $4,300 remaining
- Pilot YouTube Video Contest: Making Core Content Memorable: $100 donated, with $5,900 remaining
- Academic Paper Grant: Collective Action Problems and AI Risk: $45 donated, with $7,155 remaining
- Academic Paper Grant: "Probabilities of AI — Unseeing the Evidence": $15 donated, with $5,945 remaining
- Academic Paper Grant: Anthropic Reasoning and Decision Theory: What We Don't Know, and Why It Matters: $10 donated, with $5,950 remaining
- Improving the Uncertain Future web application: $4,800 remaining
- Singularity Institute Visitors Grant: Fully funded for three visitors; still fundable with $650 remaining
- Academic Paper Grant: Why AI Risk Demands Better Scientific Methodologies: $5,900 remaining
Or, if you'd like to jump-start efforts to reduce human extinction risk beyond the projects above, and you're interested in donating $1,000 or more, email Anna Salamon at annasalamon at singinst dot org. We'd love to hear your thoughts, and to work with you to create an effective new project.
Donor Comments
- "Good stuff SIAI, keep up the great work." - Adam Ford
- "We need to sort out the mess that is human morality before any attempt to make AGI is succesful. The complexity and subtlety of human wishes should be common knowledge in the entire AGI community." - Johan Edström
- "Many groups admirably aim to reduce suffering, but SIAI is one of the few that seriously examines the most cost-effective ways of doing so in light of physics, computation, and cognitive science." - Brian Tomasik
- "Work related to the singularity is humanity's greatest hope to solve the unignorable problems of our world. I donate for our future." - Diane Parish
- "Great to see the articles, conference talks, and new plans!" - Joshua Fox
- "The SIAI may be the only thing that saves humanity." - Robin Powell
- "Here's to the future. May it be better than the past." - Paul Gentemann
- "If I had to donate to a single non profit organization, this would be it! The work being done here will undoubtedly greatly benefit this and future generations." - L'Emir-Nader Chehab
- "Working on Friendly AI is solving every human problem by other means." - Scott Dickey
- "Of the many organizations trying to do good, SIAI has probably most rigorously considered how to really maximize good." - Alex Edelman
- "A donation to the SIAI should be part of every investment portfolio." - Arthur Breitman