Isaac Asimov envisioned machines that would be completely safe and subservient to humans. In the first robot story he wrote, "Robbie," the robot's owner defends the machine to his wife: "He just can't help being faithful and loving and kind. He's a machine - made so." The ethics implied in this first story were later defined more specifically as the Three Laws of Robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
According to Dr. Hall, any discussion of machine ethics starts with the Three Laws but soon departs from them. It leads to the prospect of building superhuman, intelligent machines - machines that can learn and grow, no longer servants but potentially masters. That is a completely different problem from the one that concerned Asimov.
Dr. J. Storrs Hall is an independent scientist and author whose research focuses on AI and machine ethics. His latest book, in press (Prometheus, 2007), is Beyond AI: Creating the Conscience of the Machine. His previous book, Nanofuture: What's Next for Nanotechnology (Prometheus, 2005), won the Foresight Institute's Communications Prize and Drew University's Bela Kornitzer Prize. Previously, Hall was the founding chief scientist of Nanorex Inc., a software company developing computational modeling tools for the design and analysis of productive nanosystems. His research background includes microprocessor design, compilers, massively parallel processor design, CAD software, and automated multi-level design. His inventions include swarm robotic systems, self-bootstrapping automated manufacturing systems, adiabatic logic, and agoric operating systems.
This free podcast is from our Singularity Summit series.