Should ethicists be inside or outside a profession?

October 21st, 2007, by Eliezer Yudkowsky

Marvin Minsky in an interview with Danielle Egan for New Scientist:

Minsky: The reason we have politicians is to prevent bad things from happening. It doesn’t make sense to ask a scientist to worry about the bad effects of their discoveries, because they’re no better at that than anyone else. Scientists are not particularly good at social policy.

Egan: But shouldn’t they have an ethical responsibility for their inventions?

Minsky: No they shouldn’t have an ethical responsibility for their inventions. They should be able to do what they want. You shouldn’t have to ask them to have the same values as other people. Because then you won’t get them. They’ll make stupid decisions and not work on important things, because they see possible dangers. What you need is a separation of powers. It doesn’t make any sense to have the same person do both.

The Singularity Institute was recently asked to comment on this interview – which by the time it made it through the editors at New Scientist, contained just the unvarnished quote “Scientists shouldn’t have an ethical responsibility for their inventions. They should be able to do what they want. You shouldn’t have to ask them to have the same values as other people.” Nice one, New Scientist. Thanks to Egan for providing the original interview text.

This makes an interesting contrast with what I said in my “Cognitive biases” chapter for Bostrom’s Global Catastrophic Risks:

Someone on the physics-disaster committee should know what the term “existential risk” means; should possess whatever skills the field of existential risk management has accumulated or borrowed. For maximum safety, that person should also be a physicist. The domain-specific expertise and the expertise pertaining to existential risks should combine in one person. I am skeptical that a scholar of heuristics and biases, unable to read physics equations, could check the work of physicists who knew nothing of heuristics and biases.

Should ethicists be inside or outside a profession?

It seems to me that trying to separate ethics and engineering is like trying to separate the crafting of paintings into two independent specialties: a profession that’s in charge of pushing a paintbrush over a canvas, and a profession that’s in charge of artistic beauty but knows nothing about paint or optics.

The view of ethics as a separate profession is part of the problem. It arises, I think, from the same deeply flawed worldview that sees technology as something foreign and distant, something opposed to life and beauty. Technology is an expression of human intelligence, which is to say, an expression of human nature. Hunter-gatherers who crafted their own bows and arrows didn’t have cultural nightmares about bows and arrows being a mechanical death force, a blank-faced System. When you craft something with your own hands, it seems like a part of you. It’s the Industrial Revolution that enabled people to buy artifacts which they could not make or did not even understand.

Ethics, like engineering and art and mathematics, is a natural expression of human minds.

Anyone who gives a part of themselves to a profession discovers a sense of beauty in it. Writers discover that sentences can be beautiful. Programmers discover that code can be beautiful. Architects discover that house layouts can be beautiful. We all start out with a native sense of beauty, which already responds to rivers and flowers. But as we begin to create – sentences or code or house layouts or flint knives – our sense of beauty develops with use.

Like a sense of beauty, one’s native ethical sense must be continually used in order to develop further. If you’re just working at a job to make money, so that your real goal is to make the rent on your apartment, then neither your aesthetics nor your morals are likely to get much of a workout.

The way to develop a highly specialized sense of professional ethics is to do something, ethically, a whole bunch, until you get good at both the thing itself and the ethics part.

When you look at the “bioethics” fiasco, you discover bioethicists writing mainly for an audience of other bioethicists. Bioethicists aren’t writing to doctors or bioengineers, they’re writing to tenure committees and journalists and foundation directors. Worse, bioethicists are not using their ethical sense in bio-work, the way a doctor whose patient might have incurable cancer must choose how and what to tell the patient.

A doctor treating a patient should not try to be academically original, to come up with a brilliant new theory of bioethics. As I’ve written before, ethics is not supposed to be counterintuitive, and yet academic ethicists are biased to be just exactly counterintuitive enough that people won’t say, “Hey, I could have thought of that.” The purpose of ethics is to shape a well-lived life, not to be impressively complicated. Professional ethicists, to get paid, must transform ethics into something difficult enough to require professional ethicists.

It’s, like, a good idea to save lives? “Duh,” the foundation directors and the review boards and the tenure committee would say.

But there’s nothing duh about saving lives if you’re a doctor.

A book I once read about writing – I forget which one, alas – observed that there is a level of depth beneath which repetition ceases to be boring. Standardized phrases are called "clichés" (said the book on writing), but murder and love and revenge can be woven into a thousand plots without ever becoming old. "You should save people's lives, mmkay?" won't get you tenure – but as a theme of real life, it's as old as thinking, and no more obsolete.

Boringly obvious ethics are just fine if you’re using them in your work rather than talking about them. The goal is to do it right, not to do it originally. Do your best whether or not it is “original”, and originality comes in its own time; not every change is an improvement, but every improvement is necessarily a change.

At the Singularity Summit 2007, several speakers argued that we should "reach out" to artists and poets to encourage their participation in the Singularity dialogue. And then a woman went to a microphone and said: "I am an artist. I want to participate. What should I do?"

And there was a long, delicious silence.

What I would have said to a question like that, if someone had asked it of me in the conference lobby, was: “You are not an ‘artist’, you are a human being; art is only one facet in which you express your humanity. Your reactions to the Singularity should arise from your entire self, and it’s okay if you have a standard human reaction like ‘I’m afraid’ or ‘Where do I send the check?’, rather than some special ‘artist’ reaction. If your artistry has something to say, it will express itself naturally in your response as a human being, without needing a conscious effort to say something artist-like. I would feel patronized, like a dog commanded to perform a trick, if someone presented me with a painting and said ‘Say something mathematical!’”

Anyone who calls on “artists” to participate in the Singularity clearly thinks of artistry as a special function that is only performed in Art departments, an icing dumped onto cake from outside. But you can always pick up some cheap applause by calling for more icing on the cake.

Ethicists should be inside a profession, rather than outside, because ethics itself should be inside rather than outside. It should be a natural expression of yourself, like math or art or engineering. If you don’t like trudging up and down stairs you’ll build an escalator. If you don’t want people to get hurt, you’ll try to make sure the escalator doesn’t suddenly speed up and throw its riders into the ceiling. Both just natural expressions of desire.

There are opportunities for market distortions here, where people get paid more for installing an escalator than installing a safe escalator. If you don’t use your ethics, if you don’t wield them as part of your profession, they will grow no stronger. But if you want a safe escalator, by far the best way to get one – if you can manage it – is to find an engineer who naturally doesn’t want to hurt people. Then you’ve just got to keep the managers from demanding that the escalator ship immediately and without all those expensive safety gadgets.

The first ironclad steamships were actually much safer than the Titanic: they were built by engineers without much management supervision, who could design in safety features to their hearts' content. The Titanic was built in an era of cutthroat price competition between ocean liners. The grand fanfare about its being unsinkable was a marketing slogan, like "World's Greatest Laundry Detergent", not a failure of engineering prediction.

Yes, safety inspectors, yes, design reviews; but these just verify that the engineer put forth an effort of ethical design intelligence. Safety-inspecting doesn't build an escalator. Ethics, to be effective, must be part of the intelligence that expresses those ethics – you can't add it in like icing on a cake.

Which leads into the question of the ethics of Artificial Intelligence. “Ethics, to be effective, must be part of the intelligence that expresses those ethics – you can’t add it in like icing on a cake.” My goodness, I wonder how I could have learned such Deep Wisdom?

Because I studied Artificial Intelligence, and the Art spoke to me.  Then I translated it back into English.

The truth is that I can’t inveigh properly on bioethics, because I am not myself a doctor or a bioengineer. If there is a special ethic of medicine, beyond the obvious, I do not know it. I have not worked enough healing for that Art to speak to me.

What I do know a thing or two about is Artificial Intelligence. There I can testify definitely and from direct knowledge that anyone who sets out to study "AI ethics" without a technical grasp of cognitive science is absolutely doomed.

It’s the technical knowledge of AI that forces you to deal with the world in its own strange terms, rather than the surface-level concepts of everyday life. In everyday life, you can take for granted that “people” are easy to identify; if you look at the modern world, the humans are easy to pick out, to categorize. An unusual boundary case, like Terri Schiavo, can throw a whole nation into a panic: Is she “alive” or “dead”? Artificial Intelligence explodes the language in which people are described, unbundles the properties that are always together in human beings. Losing the standard view, throwing away the human conceptual language, forces you to think for yourself about ethics, rather than parroting back things that sound Deeply Wise.

All of this comes of studying the math, nor may it be divorced from the math. That’s not as comfortably egalitarian as my earlier statement that ethics isn’t meant to be complicated. But if you mate ethics to a highly technical profession, you’re going to get ethics expressed in a conceptual language that is highly technical.

The technical knowledge provides the conceptual language in which to express ethical problems, ethical options, ethical decisions. If politicians don’t understand the distinction between terminal value and instrumental value, or the difference between a utility function and a probability distribution, then some fundamental problems in Friendly AI are going to be complete gibberish to them – never mind the solutions. I’m sorry to be the one to say this, and I don’t like it either, but Lady Reality does not have the goal of making things easy for political idealists.

If it helps, the technical ethical thoughts I’ve had so far require only comparatively basic math like Bayesian decision theory, not high-falutin’ complicated damn math like real mathematicians do all day. Hopefully this condition does not hold merely because I am stupid.
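The distinction above – between a utility function (what an agent values) and a probability distribution (what an agent believes) – can be made concrete with a minimal sketch of Bayesian expected-utility choice. This is an illustration of mine, not from the original post, and the escalator scenario and all the numbers in it are hypothetical toy values:

```python
# Illustrative sketch: Bayesian decision theory keeps two ingredients
# strictly separate -- a probability distribution P(outcome | action),
# which encodes beliefs, and a utility function U(outcome), which
# encodes values. Expected utility combines them; the agent picks the
# action with the highest expected utility.

def expected_utility(action, beliefs, utility):
    """Sum over outcomes of P(outcome | action) * U(outcome)."""
    return sum(p * utility[outcome]
               for outcome, p in beliefs[action].items())

def best_action(beliefs, utility):
    """Return the action maximizing expected utility."""
    return max(beliefs, key=lambda a: expected_utility(a, beliefs, utility))

# Hypothetical toy scenario: ship an escalator now, or add safety features.
# P(outcome | action) -- shared beliefs about what each action leads to.
beliefs = {
    "ship_now":   {"works": 0.90, "injury": 0.10},
    "add_safety": {"works_costly": 0.99, "injury": 0.01},
}

# Two agents with IDENTICAL beliefs but different utility functions.
profit_only    = {"works": 100, "works_costly": 60, "injury": -50}
safety_valuing = {"works": 100, "works_costly": 60, "injury": -10000}

print(best_action(beliefs, profit_only))     # ship_now
print(best_action(beliefs, safety_valuing))  # add_safety
```

The point of the sketch: the two agents disagree about what to *do* while agreeing completely about what will *happen*, because they differ only in their utility functions. Conflating the two objects makes that disagreement impossible to even state.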

Several of the responses to Minsky’s statement that politicians should be the ones to “prevent bad things from happening” were along the lines of “Politicians are not particularly good at this, but neither necessarily are most scientists.” I think it’s sad but true that modern industrial civilization, or even modern academia, imposes many shouting external demands within which the quieter internal voice of ethics is lost. It may even be that a majority of people are not particularly ethical to begin with; the thought seems to me uncomfortably elitist, but that doesn’t make it comfortably untrue.

It may even be true that most scientists, say in Artificial Intelligence, haven’t really had a lot of opportunity to express their ethics and so the Art hasn’t said anything in particular to them.

If you talk to some Artificial Intelligence scientists about the Singularity / Intelligence Explosion they may say something cached like, “Well, who’s to say that humanity really ought to survive?” This doesn’t sound to me like someone whose Art is speaking to them. But then Artificial Intelligence is not the same as Artificial General Intelligence; and, well, to be brutally honest, I think a lot of people who claim to be working in AGI haven’t really gotten all that far in their pursuit of the Art.

So, if I listen to the voice of experience, rather than to the voice of comfort, I find that most people are not very good at ethical thinking. Even most doctors – who ought properly to be confronting ethical questions in every day of their work – don’t go on to write famous memoirs about their ethical insights. The terrifying truth may be that Sturgeon’s Law applies to ethics as it applies to so many other human endeavors: “Ninety percent of everything is crap.”

So asking an engineer an ethical question is not a sure-fire way to get an especially ethical answer. I wish it were true, but it isn’t.

But what experience tells me is that there is no way to obtain the ethics of a technical profession except by being ethical inside that profession. I’m skeptical enough of non-doctors who propose to tell doctors how to be ethical, but I know it’s not possible in AI. There are all sorts of AI-ethical questions that anyone should be able to answer, like “Is it good for a robot to kill people? No.” But if a dilemma requires more than this, the specialist ethical expertise will only come from someone who has practiced expressing their ethics from inside their profession.

This doesn’t mean that all AI people are on their own. It means that if you want to have specialists telling AI people how to be ethical, the “specialists” have to be AI people who express their ethics within their AI work, and then they can talk to other AI people about what the Art said to them.

It may be that most Artificial Intelligence people will not be above-average at AI ethics, but without technical knowledge of AI you don’t even get an opportunity to develop ethical expertise because you’re not thinking in the right language. That’s the way it is in my profession. Your mileage may vary.

In other words:  To get good AI ethics you need someone technically good at AI, but not all people technically good at AI are automatically good at AI ethics. The technical knowledge is necessary but not sufficient to ethics.

What if you think there are specialized ethical concepts, typically taught in philosophy classes, which AI ethicists will need? Then you need to make sure that at least some AI people take those philosophy classes. If there is such a thing as special ethical knowledge, it has to combine in the same person who has the technical knowledge.

Heuristics and biases are critically important knowledge relevant to ethics, in my humble opinion. But if you want that knowledge expressed in a profession, you’ll have to find a professional expressing their ethics and teach them about heuristics and biases – not pick a random cognitive psychologist off the street to add supervision, like so much icing slathered over a cake.

My nightmare here is people saying, “Aha! A randomly selected AI researcher is not guaranteed to be ethical!” So they turn the task over to professional “ethicists” who are guaranteed to fail: who will simultaneously try to sound counterintuitive enough to be worth paying for as specialists, while also making sure to not think up anything really technical that would scare off the foundation directors who approve their grants.

But even if professional “AI ethicists” fill the popular air with nonsense, all is not lost. AIfolk who express their ethics as a continuous, non-separate, non-special function of the same life-existence that expresses their AI work, will yet learn a thing or two about the special ethics pertaining to AI. They will not be able to avoid it. Thinking that ethics is a separate profession which judges engineers from above, is like thinking that math is a separate profession which judges engineers from above. If you’re doing ethics right, you can’t separate it from your profession.


Comments (13)

Comment by Nato Welch
Oct 22, 2007 1:23 am

More important than whether scientists or engineers are ethicists is that ethicists be scientists and engineers that are well-informed about the subjects they examine the ethics of.

But I wonder if Minsky is talking about whether scientists/engineers must *necessarily* be ethicists. It’s one thing to say all scientists don’t have to be ethicists; it’s another to say no ethicists should be scientists.

 
Comment by Roko
Oct 22, 2007 5:13 am

“My nightmare here is people saying, “Aha! A randomly selected AI researcher is not guaranteed to be ethical!” So they turn the task over to professional “ethicists” who are guaranteed to fail”

Indeed – this is a good point, I completely agree.

Of course this is not likely to happen in a fast-takeoff scenario, because in such a scenario, no one (including professional ethicists and politicians) will take any notice of AGI until it’s already too late to do anything about it.

In a slow takeoff scenario, you could end up with technically illiterate “professional ethicists” icing a cake of AGI researchers. What would the result of such a system be? Anyone got any ideas? I somehow suspect that the ethicists’ main job would be rationalizing whatever the most politically popular position was.

Comment by tony
Dec 27, 2008 7:17 pm

I seem to recall in RoboCop 2 an OCP public relations board filling Murphy with a huge number of conflicting directives. I think any governmental response would be roughly like that.

 
 
Comment by Tom McCabe
Oct 22, 2007 12:38 pm

“Losing the standard view, throwing away the human conceptual language, forces you to think for yourself about ethics, rather than parroting back things that sound Deeply Wise.”

Perhaps we should invent a new dialect of English, specifically for avoiding cognitive biases and anthropomorphisms.

“What would the result of such a system be? Anyone got any ideas?”

AGI research will get tangled up in political “ethical issues”, like the current stem cell nonsense. None of the politicians and “ethicists” will realize how important AGI is, and they will seek to get it bogged down for years. Real AGI research will continue under cover of darkness, probably under a different label.

Comment by Roko
Oct 24, 2007 9:32 am

“Real AGI research will continue under cover of darkness, probably under a different label.”

Yes, that does sound plausible to me. And I think that would probably be the worst possible outcome!

 
 
Comment by Keith Elis
Oct 23, 2007 2:19 pm

Eliezer, do you think it is possible for a non-professional to obtain adequate mastery of a subject matter such that his or her ethical commentary is not irrelevant (say, a patent clerk mastering physics)?

If so, then this post doesn’t seem to support its answer to the title question.

 
Comment by Josh Brighton
Oct 23, 2007 3:08 pm

How is it that someone, such as yourself, who knows nothing about “art” or painting, as one particular art, writes about art?

In fact, I recall having heard you claim art was useless and a waste of time.

Comment by Peter de Blanc
Oct 24, 2007 1:41 am

In fact, I recall having heard you claim art was useless and a waste of time.

That would really surprise me if true. Can you provide a reference?

 
 
Comment by Jeffrey Herrlich
Oct 24, 2007 1:22 pm

“Should ethicists be inside or outside a profession?”

Considering that, in the future, a single researcher could ruin the entire Universe, I’d say that researchers should definitely consider the implications of what they are doing. If you’d like to label that consideration as “ethics” then go ahead. I used to spend more time thinking about what is “ethical” and what isn’t. But when I realized that someone could potentially ruin the Universe forever, I decided that pragmatic thinking was more important at this point in time.

 
Comment by Jeffrey Herrlich
Oct 25, 2007 6:26 pm

I’ve recently become completely convinced that there is no such thing as “objective human morality”. There are existence proofs of this, literally everywhere you look. So, since I’ve recently rejected “objective morality” I’ve also recently rejected “objective immorality”. If it is not objective, then does immorality, in the platonic realm, have any meaning? I actually don’t think so. To borrow a line from the movie “3:10 to Yuma” : “Every man is right by his own mind”. I think there is great wisdom here. Before people misunderstand, I’m not saying that morality is unimportant. Just that we should be focusing on the pragmatic, not the romantic. That what matters most at this time is the *actions* of intelligent agents. Not the emotional/philosophical ruminations that lead humans to act and to claim what is ethical and what is unethical. Minds are deterministic. If people need someone to blame occasionally for the terrible injustices of the world, let them blame the mechanics of this Universe – for not having been a nicer, just place to begin with. Let’s now focus on the technical problems of reshaping the Universe into a good place. The good place that we should have had from the beginning.

 
Comment by Joe Hunkins
Oct 26, 2007 1:52 am

I think this is an important early question to ask the first conscious computers, and I suspect they’ll say “inside or outside, it’s all the same to us”. I’ll be OK with that answer.

Of course if they say “What is ethics?” do we need to pull the plug before they figure out how to keep us from doing that?

 
Comment by Eliezer Yudkowsky
Oct 27, 2007 6:51 pm

Robin Hanson had a nice way of putting this: We need experts at AI ethics, not experts on AI ethics.

 
Dec 3, 2007 10:49 pm

[...] about the “wisdom of repugnance”. Eliezer Yudkowsky dumps on those same bioethicists here, but he and I apparently disagree (I am not sure exactly how we differ) on how we regard ethics in [...]

 
