www.VancouverSun.com
Mix | Last updated: Friday, July 13, 2001

Is there a ghost in the machine?

Steven Spielberg's A.I.: Artificial Intelligence raises the enduring question of whether a robot has a soul

Peter T. Chattaway, Special to the Sun

Photos, clockwise from top left: Jude Law plays Gigolo Joe, literally a sex machine, in A.I.: Artificial Intelligence; the computer HAL 9000 adopts the human trait of paranoia in 2001: A Space Odyssey; and Futura (right), in Fritz Lang's Metropolis, one of the first movie robots to blur the line between mechanical and organic.

I like to think I'm a fairly tolerant and broad-minded person, so it came as a complete surprise when, several years ago, I stumbled into a lengthy argument with some online friends who accused me, in effect, of dehumanizing an entire class of people. The subject of our discussion? Robots. Specifically, C-3PO and R2-D2, the fussy, neurotic droids who provide much of the comic relief in the Star Wars movies.

Because these robots showed the same signs of emotion and self-awareness that we could see in the human characters, one of my friends said she found it disturbing that everyone -- even the heroes of these films -- regarded the droids as slaves that could be bought, sold and swapped. I replied that I could see her point, but, if we were going to look at this realistically, machines are created to perform certain tasks, and they remain, essentially, the property of whoever makes them, or whomever they are sold or given to. Even if the droids were programmed to mimic emotion -- and I had to admit I couldn't imagine why anyone would make C-3PO, a protocol droid, so peevish and cowardly -- I contended they would not be able to feel the emotion. My friends jumped on this, and it wasn't long before someone brought up the Nazis and said how easy it was to deny people their humanity.

No doubt one could argue that we all took George Lucas's space opera far, far too seriously. But there are other filmmakers who do want their audiences to explore such issues, and I found myself reliving the debate last week as I watched A.I.: Artificial Intelligence, the top-grossing film Steven Spielberg wrote and directed, from a concept developed by the late Stanley Kubrick, about a robot child who has been programmed to love and, after he has been abandoned, looks in vain for a way to become a "real boy."

I particularly cringed during one scene, in which a gang of robophobic rednecks captures David (Haley Joel Osment), the mechanical child in question, and takes him to the Flesh Fair, a rally where humans who feel threatened by sentient machines take sadistic glee in torturing them to death: shooting them from cannons, pouring acid on them, and so on. David cries out for help -- apparently he is the first robot ever to do so -- and one of the fair's ringleaders tells the crowd not to be fooled: no matter how closely it may mimic human emotions, the child is just a machine. But the crowd, swayed by David's cries, turns on the ringleader, and the robot boy goes free.

This scene is layered with ironies. On one level, the ringleader is absolutely correct and the crowd is wrong -- David is a machine, a thing made of fibres and circuit boards, and not a human being. But the crowd may have intuitively caught on to an essential truth: namely that any intelligent being that behaves like a person could be, for all intents and purposes, an actual person. (If it walks like a duck and quacks like a duck, then it is a duck, and all that.) We in the audience are not spared this ambiguity, because even though we know David is a robot, he is played by a very talented young actor, who invests the android with a great deal of soul. But this points up yet another ambiguity -- we are not watching a real android in peril, but an actor who is merely pretending to be in peril. If the movies themselves can trick us into thinking we are sharing an emotional moment with someone, there is no reason a sufficiently complex computer couldn't do the same.

Don't get me wrong -- I have nothing against robots. Indeed, I am tempted to say that some of my best friends have been robots. One of the reasons my argument with my online friends, and the intensity of our debate, came as such a surprise to me was that I had entertained the notion of intelligent, soulful machines for as long as I could remember, thanks to a steady diet of science-fiction books, films and TV shows. Most people know about HAL 9000, the computer that mysteriously kills most of a spaceship's crew in Kubrick's 1968 film 2001: A Space Odyssey. But what they may not know is that, in the novels that follow, by Arthur C. Clarke, HAL lives on as a sort of disembodied spirit; his intelligence survives even after the spaceship housing his circuits is destroyed. When HAL's creator bids him a sad farewell in 2010: The Year We Make Contact, the one sequel that was made into a film, I found the moment genuinely moving.

Similarly, many of my favourite TV movies were about robots, from The Questor Tapes (1974), in which Robert Foxworth plays an emotionless android who searches for his creator, to Prototype (1983), in which David Morse plays a robot who goes into hiding with the scientist who built him. Prototype ended on a particularly striking note, both tender and mysterious. The robot realizes its creator will have to give in to the demands of the military that financed its creation, so it volunteers to destroy itself, thus putting its creator in a position to make demands of his own before building another android. The robot's decision is perfectly logical, and the initiative it shows is a striking sign that it has moved beyond its programming and learned to make choices for itself, choices that even seem to hint that it truly loves its creator.

And then there was Star Trek, which, by the time my friends and I debated the subject, had undergone a significant shift in the way it treated artificial intelligence. In the original series, Captain Kirk was constantly tricking androids, computers, and other sentient machines into short-circuiting themselves -- killing them, in effect, or prompting them to commit suicide. But by the time Star Trek: The Motion Picture kicked off the movie franchise in 1979, Kirk was ready to do business with a "living machine" -- a space probe that had amassed so much knowledge it had gained consciousness, and was now searching for its creator, hoping to find the meaning of life. Since then, Star Trek's prevailing attitude toward sentient mechanisms -- whether represented by androids like The Next Generation's Lieutenant Commander Data or Voyager's Doctor -- has been one of compassion, not antagonism. Captain Picard even went to court to defend Data's right to self-determination, arguing that Data was a new lifeform, and that, conversely, humans are machines too, but of an organic sort.

The idea that humans themselves might be another kind of machine was not entirely new to me. When I was six years old, my parents gave me a book on human anatomy, published by Usborne, called How Your Body Works. Each of the body's systems was portrayed as a network of mechanisms, among them a conveyor-belt tongue and an ovary that looked rather like a gumball machine. But every now and then, the book would use people to fill the gaps. A two-page diagram of the brain portrayed it as a network of control rooms, with operators manning phones and librarians sitting by a stack of books in the memory room. But sometimes, I wondered if each of those people had a little group of men and women running around in their heads, too, or if it all came to an end, somewhere, in a cluster of mental gears, grinding away without any personality or intentionality.

So I was familiar with some of the issues my friends raised, yet I was still unprepared for the utter seriousness with which they asserted that robots might someday have the same basic qualities and rights as human beings. In part, I think I had always kept a distinction in my mind between the fantasy of these stories, however engaging they were, and reality. But my disagreement was also, in some ways, philosophical or theological, reflecting our different understandings of the soul, its origins, and what role it plays in human nature.

There are, it seems to me, two basic ways of looking at human consciousness. You can see it as something that comes from above, so to speak; this is what we see in Disney's version of Pinocchio, where the Blue Fairy gives life to the wooden puppet after Geppetto gives it physical form, and in later films like Short Circuit, in which a lightning bolt hits a military robot and gives it curiosity, a sense of humour and a conscience that prompts it to rebel against its programming and refuse to kill. This suggests the Judeo-Christian creation myth, where God breathes his spirit into the first human, and gives him a creative mind that is unique within creation. By this rationale, I might be able to put some metal and plastic parts together and build an android, but I could never give it life -- though there's nothing preventing God or some similar supernatural agent from doing so.

Alternatively, you can think of human consciousness as an emergent property, something that rises out of more basic elements. If, as some of the scientists of our day claim, all psychology can be chalked up to biology, and all biology to chemistry, and all chemistry to physics, then the mind may be little more than a symptom of basically impersonal natural forces. And if consciousness can arise from putting together bits of the human body, then who knows? It might arise from putting together other bits and pieces too. This is the approach taken by Isaac Asimov's Bicentennial Man (recently made into a film with Robin Williams), about a robot that becomes progressively more human over the centuries; one could also argue that it reflects the original version of Pinocchio, written by Carlo Collodi, in which the title character starts out as a log that can talk and giggle, well before it is carved into a puppet.

My friends argued from this latter position, and I can appreciate their point -- a machine that exhibited the same behaviour as us could be self-aware according to the same natural principles that we are, and thus would make the same moral demands of us that any other person might make.

There are advantages to this perspective, if you're a robot scientist. If a machine goes wild and commits a few crimes, it can be prosecuted for its own sins, and its creators will be let off the hook. But the less appealing aspect of this point of view is that we may begin to diminish our own understanding of what it means to be human. In Errol Morris's mesmerizing recent documentary Fast, Cheap & Out of Control, Rodney Brooks, director of the artificial intelligence lab at MIT, says it is tempting to think of himself as a network of feedback loops, like the robots he builds for his work. But he admits he falls back into more conventional beliefs about human nature. "Otherwise," he says, "I think if you analyze it too much, life becomes almost meaningless."

So are robots human or are they not, and if they are, how can we tell the difference? The ads for A.I. give us a clue; they tell us that David's love is real, even though he himself is not. In the film itself, the scientist who creates David says that he is not only the first robot that can love, but also the first to have dreams, a subconscious mind, and desires that were not built into him.

But is this the case? The one thing that is supposed to distinguish David from these other robots is love. But here, too, the film strikes an ambivalent chord. Gigolo Joe, an android prostitute who can push a woman's buttons and give her sex, but not love, questions whether love really exists. At one point, he tells David that his adoptive parents do not love him, but only what he does for them, and the question hanging over the film is whether all human behaviour can be reduced to this covert form of selfishness.

David, alas, does not give any indication that it can't be. His need for his adoptive mother's affection has been hard-wired into him, and his relentless quest to win her love becomes an obsession that, at one point, even compels him to commit a jealous act of violence against one of his own kind. It also has the effect of dehumanizing his mother and turning her into an object that fulfills his needs. I think the film is right in suggesting that love is the key to the human spirit, but love is more than an emotion, more than getting someone to make you feel good. Love involves free will -- an assertion of self that goes beyond, and sometimes against, the impulses built into us by nature and nurture -- and it involves sacrifice, the ability to place another person's welfare ahead of our own.

I think I could get along just fine with a robot that had these traits. But these are two qualities that David never really displays, so I will continue to look to other movie robots for inspiration, and ponder the question of whether machines in real life might ever learn to love.

Peter T. Chattaway last wrote for Mix about the Christian film Left Behind.
