The Prospects for AI Panel takes the form of four individual presentations by the panellists followed by a lengthy Q&A session.
Neil Jacobstein is first up, with a talk that ranges from a look at the design and build of a classic Frank Lloyd Wright house to the innovative approach used to design new Lexus cars. Since the early 1980s the systematic codification of knowledge in computer languages has enabled a wide range of useful applications in industry and government. These applications may include the performance of complex tasks but none has really exhibited general intelligence. However, for all the technical and cultural limitations manifested by these applications, each has contributed incrementally to our ability to harness the power of knowledge.
Jacobstein sees the future for AI becoming much brighter, thanks to the confluence of factors such as advances in neurosciences, the advent of large scale ontologies and the semantic web, as well as the emerging development of nanotechnology and molecular manufacturing, and the exponential increases in computing hardware speed and memory. There still remains a need, however, for a systematic approach to the cultural and organizational problems involved in the co-evolution of machines and humans. To this end, Jacobstein ends his session by listing a set of ten rules that he believes provide fertile ground for a successful implementation of complex AI projects.
Next up to the microphone is Patrick Lincoln. He believes that the primary purpose of AI is to augment intelligence. Although the value of the IT portion of products, services, and the entire economy increases steadily, this has come with an increasing reliance on automated computing systems. At the same time, the critical properties of these systems are becoming less visible to both users and designers. Lincoln posits a new law, Moore's Wall: human capabilities are growing, but not fast enough to fully control what we build. It will become beneficial, therefore, to provide designers and users with tools and methods that enable them to understand and improve the trustworthiness of complex digital systems.
Lincoln proposes that we should enable rapid analysis and understanding of the critical properties of complex systems, even when the complex systems under study involve tight interactions with human components. More importantly, we should do this before we align our interests strongly with automated systems. Although recent rapid advances in automated reasoning make this plausible, it still requires a greater and more focused effort to make it reality.
Peter Norvig proposes the slogan 'AI in the middle', meaning that AI technology becomes a mediator between authors and readers. History has so far produced exactly one system in which trillions of facts are transmitted to billions of learners: the system of publishing the written word. No other system comes within a factor of a million of this performance benchmark, despite the fact that the written word is notoriously imprecise and ambiguous.
In the early days of AI, most work was on creating a new system of transmission - a new representation language, and/or a new axiomatization of a domain. Well-structured data was manipulated by sound means. Although it will remain expensive to create knowledge in any formal language, AI can leverage the work of millions of authors by understanding, classifying, prioritizing, translating, summarizing and presenting the written word on an intelligent, just-in-time basis to billions of potential readers.
Bruno Olshausen believes that, despite much effort in the engineering and mathematics communities over the past 40 years, there has been little progress in emulating even the most elementary aspects of intelligence. This lack of progress is especially striking considering that, in the past two decades alone, we have seen a 1000-fold increase in computing power. The actual intelligence of computers, on the other hand, has improved only moderately by comparison.
If we are to make progress in building truly intelligent systems, Olshausen says we need to turn our efforts toward understanding how intelligence arises within the brain. Neuroscience has produced vast amounts of data about the structure and function of neurons but what is missing is a theoretical framework for linking these details to intelligence. Theoretical neuroscience seeks to bridge this gap by constructing mathematical and computational models of the underlying neurobiological mechanisms involved in perception, cognition, learning, and motor function.
Neil Jacobstein is President and CEO of Teknowledge Corporation, a 24-year-old Nasdaq small cap software company that focuses on knowledge-based computer systems and services for commercial and government applications. Neil has been a technical consultant on software research and development projects for DARPA, the U.S. Air Force, Army, Navy, and Marines, NASA, NIH, EPA, NSF, DOE, NRO, NIST, GM, Ford, P&G, Boeing, Applied Materials, and many others. He has developed and delivered tutorials and seminars on knowledge-based systems and applications of artificial intelligence techniques. Neil chaired the American Association for Artificial Intelligence's 17th Innovative Applications of Artificial Intelligence conference in 2005.
Patrick Lincoln is Director of the Computer Science Laboratory at SRI International in Menlo Park, CA. He has a Ph.D. in Computer Science from Stanford University. Before coming to SRI in 1989, he worked at the Los Alamos National Laboratory and MCC Software Technology (STP). He has published numerous articles and is currently preparing three papers: "Nonlithographic, Nanoscale Memory Density Prospects", "Interactive Proof-Carrying Code", and "Towards a Semantic Framework for Secure Agents".
Peter Norvig has been at Google Inc since 2001 as the Director of Machine Learning, Search Quality, and Research. He is a Fellow of the American Association for Artificial Intelligence and co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field. Previously he was the senior computer scientist at NASA and head of the 200-person Computational Sciences Division at Ames Research Center. Before that he was Chief Scientist at Junglee, Chief Designer at Harlequin Inc, and Senior Scientist at Sun Microsystems Laboratories.
Bruno Olshausen's research attempts to unravel how the brain constructs meaningful representations of sensory information. Much of his work has focused on developing probabilistic models of natural images, and relating these models to the sorts of representations found in the cerebral cortex. Bruno is director of the Redwood Center for Theoretical Neuroscience, established in July 2005 as one of four research centers administered by the Helen Wills Neuroscience Institute at the University of California at Berkeley.
This free podcast is from our Accelerating Change series.