Singularity Summit 2007
Held September 8 and 9, 2007 in San Francisco, CA, the Singularity Summit brought together thinkers and pioneers to discuss the future of Artificial General Intelligence and its impact on humanity. Thanks to The Singularity Institute for Artificial Intelligence, hosts of the summit, for providing this audio.
In this talk from the 2007 Singularity Summit, James Hughes predicts that while artificial general intelligence is likely, it is also likely to seem alien to our way of thinking and difficult to control. He also discusses some of the rarely mentioned negative impacts AGI could have on society.
As Christine Peterson, cofounder of the Foresight Nanotech Institute, puts it, "It is a scary world ahead." With threats arising from traditional, biological, and nanotechnological means, Peterson questions current approaches to security in this speech from the 2007 Singularity Summit. Instead, she proposes that applying lessons from the open source software model in a bottom-up approach might provide more effective security sensing.
Design or evolution? In building complex artificial intelligence systems, is it best to use top-down design, a gradual evolutionary process, or a combination of the two in order to maintain some level of control? Steve Jurvetson, Managing Director of Draper Fisher Jurvetson, is placing his money on iterative evolutionary algorithms as the best path to the future of artificial intelligence.
Dr. Charles L. Harper, Jr. asks some "off the wall questions" to challenge the readiness of the scientific community to recognize the potential risks and implications of rapid human technological development. Where should our concerns lie given the potential of superintelligent machines that could far exceed human intellectual capabilities? Are we up to the task of proper stewardship of such powerful new advances in technology, or, more significantly, will that role even be ours?
Could Hammurabi have written laws to prevent the Enron scandal? J. Storrs Hall, scientist and author of Beyond AI, poses this question to demonstrate the near-impossible challenge confronting scientists in the current discussion of machine ethics. The future of AI envisions machines with the capacity to far exceed humans in knowledge and intelligence. It is a far greater problem than the one for which Isaac Asimov originally wrote the Three Laws of Robotics.
How do you create a friendly Artificial Intelligence? Eliezer Yudkowsky, Co-Founder & Research Fellow at the Singularity Institute for Artificial Intelligence, has focused his work on overcoming some of the mathematical impediments to building a self-improving AI. In this presentation he discusses the very speculative possibilities of creating an artificial mind infused with a sense of direction, and capable of learning from its own mistakes.
The Singularity is near; it will arrive in 10, 50, or 100 years depending on whom you talk to. Peter Norvig, Director of Research at Google, examines the value of expertise in predicting the future, and discusses his thoughts on artificial general intelligence, based on his past experiences at NASA and current work with Google.
Do you appreciate it when someone brings a fresh perspective to a complex and daunting issue? Well, can you imagine an issue more impenetrable or discouraging than the Singularity? From the "How Far are We from Advanced AI?" session of the 2007 Singularity Summit, Paul Saffo offers some new advice. He recommends that we find some poets and novelists and whisper in their ears about this stuff. Then, hopefully, they will help shape what the Singularity should be, rather than what we hope it will not be.
"Nine years to the Singularity, if we really, really, try," says Dr. Ben Goertzel, chief science officer and acting CEO of Novamente. Is this really possible? Dr. Goertzel believes the path to the development of Artificial General Intelligence - a real thinking machine with human-level intelligence and beyond - can be accelerated through the use of virtual worlds as incubators for nascent artificial intelligence systems.
Despite misconceptions to the contrary, early-stage AI systems are working in the real world and creating a lot of value for the companies that use them. Neil Jacobstein discusses practical uses of AI across many diverse industries and tasks. He compares the technology around early AI systems with the modern ones being developed today, and explains what has and hasn't worked, drawing on 50 years of perspective.