Deontology is a bug

Politics is about what should happen. Should the US army invade Elbonia, or stay at home? Should there be state schools and hospitals? Should we have a government at all?

It behooves us to reduce the concept “should”. I’ll let E. Yudkowsky explain reductionism:

I [define] the reductionist thesis as follows: human minds create multi-level models of reality in which high-level patterns and low-level patterns are separately and explicitly represented.  A physicist knows Newton’s equation for gravity, Einstein’s equation for gravity, and the derivation of the former as a low-speed approximation of the latter.  But these three separate mental representations, are only a convenience of human cognition.  It is not that reality itself has an Einstein equation that governs at high speeds, a Newton equation that governs at low speeds, and a “bridging law” that smooths the interface.  Reality itself has only a single level, Einsteinian gravity.  It is only the Mind Projection Fallacy that makes some people talk as if the higher levels could have a separate existence—different levels of organization can have separate representations in human maps, but the territory itself is a single unified low-level mathematical object.

A Boeing 747 isn’t ontologically fundamental; it can be explained in terms of something simpler (atoms, quarks, quantum amplitudes…). The aeroplane nonetheless exists implicitly in any sufficiently detailed physical description of the region it occupies.

Jumbo jets can be reduced, and so can words, concepts and mental algorithms. Consider Yudkowsky’s reduction of the word “arbitrary”:

A piece of cognitive content feels “arbitrary” if it is the kind of cognitive content that we expect to come with attached justifications, and those justifications are not present in our mind.

The concept has been simplified—he has opened the black box labelled “arbitrary”, exposing some gears and two smaller black boxes. He might continue and reduce the concepts “justification” and “cognitive content”, until eventually these abstract concepts can be described (at great length) as basic physical processes involving atoms in the brain.

So what is “shouldness”, really?

Should-statements come in two common forms:

  • Type-1 should-statements: I should choose to do Y, because that would optimise the environment towards terminal goal Z, via instrumental goals A—X.
  • Type-2 should-statements: I should choose to do Y, because I am morally required to do so (regardless of my terminal goal Z).

For example:

  • Type-1: I should buy a burger, because I am hungry.
  • Type-2: I should not steal a burger, because it is morally wrong.

Type-2 shouldness (also known as “deontology”) is supposed to override personal desires. To quote Richard Joyce’s The Evolution of Morality:

Research reveals that “common-sense morality” does include certain claims to objectivity. One study (Nichols and Folds-Bennett 2003; Nichols 2004) looked at young children’s responses concerning properties such as icky, yummy, and boring and compared them with their attitudes toward moral and aesthetic properties. […]

The children treated the instantiation of all properties as existentially independent of humans (i.e., before anyone was around, grapes were yummy, roses were beautiful, and so on), yet made a striking distinction between properties that depend on preferences and those that did not: Things that are yummy or icky are yummy and icky for some people, whereas things that are good are good “for real.” Having reviewed such evidence, Shaun Nichols (2004: 176) comes to this conclusion: “The data on young children thus suggest, as a working hypothesis, that moral objectivism is the default setting on common-sense metaethics.” […]

Larry Nucci (1986, 2001) has even found that among Mennonite and Amish children and adolescents God’s authority does not determine moral wrongness. When asked whether it would be OK to work on a Sunday if God said so, 100 percent said “Yes”; when asked whether it would be OK to steal if God said so, over 80 percent said “No.” Such findings contribute to a compelling body of evidence that moral prescriptions and values are experienced as “objective” in the sense that they don’t seem to depend on us, or on any authoritative figure.

It is peculiar that “should” has two contradictory meanings. Let’s examine each type of should-statement, starting with type-1, and decide which is worth keeping.

A terminal goal is something done for its own sake—a basic objective that evolution and enculturation have put in a brain. It is just something that the brain does. Consider: “I want to feel happy”. If someone were to describe my brain in detail, they would note a tendency for my brain to optimise its environment towards the state, “experiences qualia of happiness”. The degree of optimisation tends to increase as the brain’s intelligence increases, and as its beliefs become more accurate. Tendency-to-optimise is the only sensible meaning that can be attached to the concept “goal”—otherwise, how would we learn of a goal’s existence?

An instrumental goal is a pinned-down objective that the brain expects to further its terminal goal. “I want to possess a new leather jacket” could be an instrumental goal for “I want to feel happy”. Instead of repeatedly deducing “I want a leather jacket” from “I want to feel happy”, the brain may save time by caching “I want a leather jacket”. Over time, an instrumental goal may crystallise into a new terminal goal.
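To make the terminal/instrumental distinction concrete, here is a toy sketch in Python. The goal names, the belief table, and the use of memoisation to stand in for goal-caching are all illustrative assumptions, not a model anyone has proposed:

```python
from functools import lru_cache

# Toy agent with one terminal goal, which derives and caches
# instrumental goals rather than re-deriving them on every decision.

TERMINAL_GOAL = "feel happy"

# The agent's (invented) world-model: which actions it believes
# further which goals.
BELIEFS = {
    "feel happy": ("possess a new leather jacket", "buy a burger"),
}

@lru_cache(maxsize=None)  # the cache plays the role of stored instrumental goals
def instrumental_goals(terminal_goal: str) -> tuple[str, ...]:
    """Expensive deliberation, performed once and then remembered."""
    return BELIEFS.get(terminal_goal, ())

def should(action: str) -> bool:
    """Type-1 shouldness: the action furthers the terminal goal."""
    return action in instrumental_goals(TERMINAL_GOAL)

print(should("buy a burger"))    # True: linked to "feel happy"
print(should("steal a burger"))  # False: no link in the world-model;
                                 # a Type-2 prohibition is not
                                 # representable in this picture at all
```

Note the design point: nothing in the sketch mentions morality. A Type-1 “should” falls out of beliefs plus a terminal goal, which is why Type-2 statements, claiming force regardless of the agent’s goals, resist the same reduction.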

“I should buy a leather jacket”, in this context, reduces to “buying a leather jacket is likely to further my terminal goal ‘experience happiness’.” That explains Type-1 should-statements, which are quite simple. Type-2 should-statements are another matter. I may have the terminal goal, “I want to feel happy”, and yet elect not to do something that would make me happy, because the required act is “immoral”.

All true statements are reducible to claims about physics and logic. This is because reality appears to consist of (at most) these two things, and a statement’s truth is determined by comparison to reality. Thus, “I should do Y because it is morally required” ought to reduce to a physical or logical belief, just as “I should do Y because I desire Z” reduces to a belief about the configuration of atoms in my brain.

There are two possibilities:

1. Objects and events have physical properties, to which we refer when we speak of morally “good” and “bad” actions.

2. Type-2 shouldness is a bug.

Those who favour explanation #1 face a serious problem. If the physical act of murder has a little physical tag attached to it, made of atoms, which says “bad”, then the Universe’s complexity is vastly increased. Occam’s razor (formalised as Solomonoff induction) strongly prefers simple theories, which don’t posit these little tags littering the world.
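To make the simplicity penalty quantitative: Solomonoff induction assigns each hypothesis h a prior probability proportional to 2^−K(h), where K(h) is the length in bits of the shortest program that generates h. A world-model that additionally specifies a “bad” tag on every wrong act must spend extra bits saying where each tag sits, so its prior shrinks exponentially relative to the otherwise-identical tag-free model. (The prior is the standard Solomonoff one; applying it to moral tags is just an unpacking of the argument above.)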

In addition, putative moral properties are suspiciously anthropomorphic. Why should features of the external environment, independent of human brains, be such a good fit for our contingent, evolved intuitions?

The challenge for adherents of explanation #2 is to explain why this bug exists. Why do humans intuitively accept moral shouldness-claims, if these cannot be reduced to a sensible statement about physics? Richard Joyce addresses this question:

What are the practical advantages of moral thinking? Irrespective of one’s views on the evolution of a moral sense, this is a good question to ponder. It is important that the question be tensed correctly here, for we are not trying to discover how morality is adaptive, but rather how it might have come to be an adaptation—that is, how it was adaptive. […]

I think it clarifies matters to begin by subdividing the problem into distinct questions […] there are four possible questions worthy of investigation:

  • In what way might judging others in moral terms benefit one’s group?
  • In what way might judging others in moral terms benefit oneself?
  • In what way might judging oneself in moral terms benefit one’s group?
  • In what way might judging oneself in moral terms benefit oneself?

Joyce gives a little credence to David Sloan Wilson’s multi-level selection idea, but a plausible account of morality’s evolution need not invoke it, so the group-benefit questions (#1 and #3) can be set aside.

Why might judging oneself in moral terms—that is, having a conscience—enhance one’s reproductive fitness relative to competitors lacking such a trait? […] Suppose there was a realm of action of such recurrent importance that nature did not want practical success to depend on the frail caprice of ordinary human practical intelligence. That realm might, for example, pertain to certain forms of cooperative behavior toward one’s fellows. The benefits that may come from cooperation—enhanced reputation, for example—are typically long-term values, and merely to be aware of and desire these long-term advantages does not guarantee that the goal will be effectively pursued, any more than the firm desire to live a long life guarantees that a person will give up fatty foods. The hypothesis, then, is that natural selection opted for a special motivational mechanism for this realm: moral conscience. If you are thinking of an outcome in terms of something that you desire, you can always say to yourself “But maybe forgoing the satisfaction of that desire wouldn’t be that terrible.” If, however, you are thinking of the outcome as something that is desirable—as having the quality of demanding desire—then your scope for rationalizing a spur-of-the-moment devaluation narrows. When a person believes that an act of cooperation is morally required—that it must be performed whether he likes it or not—then the possibilities for further internal negotiation on the matter diminish. If a person believes an action to be required by an authority from which he cannot escape, if he believes that in not performing it he will not merely frustrate himself, but will become reprehensible and deserving of disapprobation—then he is more likely to perform the action. The distinctive value of imperatives imbued with practical clout is that they silence further calculation, which is a valuable thing when our prudential calculations can so easily be hijacked by interfering forces and rationalizations. What is being suggested, then, is that self-directed moral judgments can act as a kind of personal commitment, in that thinking of one’s actions in moral terms eliminates certain practical possibilities from the space of deliberative reasoning in a way that thinking “I just don’t like X” does not.

Developing the theme of pre-commitment:

Which kind of person would you want as a companion in a dangerous cooperative venture: someone whose cooperative behavior is governed by an ongoing prudent deliberative procedure, or someone who can commit to cooperating and will continue to do so even when it may be prudentially irrational? […] If your survival depends on your being selected as a partner in cooperative ventures (including your being selected as a mate), then it will be rational for you to choose to be the second kind of person. In other words, in circumstances where cooperative exchanges are important it is often rational to choose to have a faculty that urges you to what would otherwise be irrational behavior.
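A toy simulation can make the selection pressure vivid. The payoff numbers, the frequency of temptation, and the two agent types below are illustrative assumptions, not Joyce’s:

```python
import random

# Toy partner-choice game: a chooser picks a companion for repeated
# cooperative ventures. All payoffs and probabilities are invented.

COOPERATION_PAYOFF = 3  # chooser's payoff when the partner cooperates
# (a defecting partner grabs everything, leaving the chooser with 0)

class CommittedCooperator:
    """Cooperates even when defection would pay more in the moment."""
    def act(self, temptation_present: bool) -> str:
        return "cooperate"

class PrudentDeliberator:
    """Recalculates each time; defects whenever defection pays."""
    def act(self, temptation_present: bool) -> str:
        return "defect" if temptation_present else "cooperate"

def average_payoff_to_chooser(partner, trials: int = 10_000) -> float:
    """Expected payoff to whoever teams up with `partner`."""
    total = 0
    for _ in range(trials):
        tempted = random.random() < 0.5  # temptation arises half the time
        if partner.act(tempted) == "cooperate":
            total += COOPERATION_PAYOFF
    return total / trials

for partner in (CommittedCooperator(), PrudentDeliberator()):
    print(type(partner).__name__, average_payoff_to_chooser(partner))
# The committed cooperator yields roughly double the payoff, so choosers
# select for commitment: "irrational" steadfastness pays by getting chosen.
```

The deliberator does better in any single tempted round, yet worse overall, because the chooser screens on expected behaviour; that is exactly the sense in which it is rational to possess a faculty urging otherwise-irrational behaviour.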

Joyce then turns to the remaining question, “In what way might judging others in moral terms benefit oneself?”:

[A] moral judgment affects motivation not by giving an extra little private mental nudge in favor of certain courses of action, but by providing a deliberative consideration that (putatively) cannot be legitimately ignored, thus allowing moral judgments—even self-directed ones—to play a justificatory role on a social stage in a way that unmediated desires cannot.

This reasoning leads me to supplement the simple hypothesis with which we started (i.e., that the evolutionary function of moral judgment is to provide added motivation in favor of certain adaptive social behaviors). Morally disapproving of one’s own action (or potential action)—as opposed to disliking that action—provides a basis for corresponding other-directed moral judgments. No matter how much I dislike something, this inclination alone is not relevant to my judgments concerning others pursuing that thing: “I won’t pursue X because I don’t like X” makes perfect sense, but “You won’t pursue X because I don’t like X” makes little sense. By comparison, the assertion of “The pursuit of X is morally wrong” demands both my avoidance of X and yours. Near the beginning of this chapter, I distinguished between the question of what benefits self-directed moral judgments bring and the question of what benefits other-directed moral judgments bring. These two questions, I am now observing, are not independent of each other—which should hardly be surprising. To be sure, we should now see that one of the adaptive advantages of moral judgment is precisely its capacity to unite these two matters. By providing a framework within which both one’s own actions and others’ actions may be evaluated, moral judgments can act as a kind of “common currency” for collective negotiation and decision making.

There you have it: Type-2 shouldness (“deontology”) is an evolutionarily useful bug. Henceforth, let all of our should-statements be Type-1 (“consequentialist”).
