
Thursday, June 5, 2014

Political Animals



"I wish nothing but good; therefore, everyone who does not agree with me is a traitor and a scoundrel."

King George III of the United Kingdom, born June 4, 1738



What motivates this blog post is the quote from King George III above, which appeared on one of my Pinterest boards: Dobson's Improbable Quote of the Day for June 4. It had a lot of resonance for me: while George III said it aloud, a lot of people — including me — feel that way, at least from time to time. I didn't aim it at left or right, because that particular feeling is independent of which side of the fence you fall on.

Within a minute or two of sharing the link on Facebook, a comment appeared: "Substitute 'racist' for 'traitor,' and you have Barack Obama." I deleted it at once and sent a message to the poster explaining my desire to keep politics off my wall.

It was some hours before I checked Facebook again. Obviously, that quote had triggered strong political reactions, because now I had two political comments: "Didn't realize the tea party was so old," and "Still the official motto of the Republican party?" People on both sides clearly saw this quote as describing the attitudes of their political opponents.

- * -

There are, I think, few things in the world more useless than having a political argument on Facebook. There are never any winners, only losers to one degree or another. Those who've known me for a long time know I am a man of passionate political opinions, with an unfortunate tendency to use scorched earth rhetoric when provoked — and I provoke more easily than I should. This tends not to end well.

One of the most popular posts in the history of this blog appeared March 2, 2010: You're Not Being Reasonable. In that post, I tried to establish some objective standards for reasonableness in political discussions, virtually all of which I've violated at one time or another. The cost of those violations has been high: I've lost several friendships I valued. Mind you, it takes two to tango, but my own behavior is the only thing I can control, and I look at my own lapses of reasonableness and decorum as cause for shame and embarrassment.

My obsession with cognitive biases (collected in my personal magazine Random Jottings 6) and argumentative fallacies (only partially done, and on my ever-growing list of projects to complete One Of These Days) has done a lot to convince me of the futility of political argument. Confirmation bias alone, the tendency to interpret information in a way that conforms to your preexisting beliefs, derails most discussions before they get started. None of us can shake personal bias altogether, but we can work to limit its effects on our thinking.

Because of the operation of cognitive bias (I include my own bias as well as that of others), I can't think of a single situation in which someone's mind has been changed through a Facebook argument, or indeed through an argument of any kind. When I have changed my mind, it's not because of someone's argument but rather someone's behavior or personal actions, or my evolving understanding of the world around me. If I have changed someone else's mind, it is for the same reasons.

- * -

My social media activity is a combination of personal and professional. A lot of my Facebook friends are actually friends, or at least friendly acquaintances, but many are professional relationships (LinkedIn has been pretty useless in that regard), and a surprising number are people I've actually met and gotten to know through Facebook itself. (Having grown up in science fiction fandom, I'm used to having friends I've never actually met in person.) Some of my Facebook friends have political opinions I find congenial, but a whole lot of them don't.

Mixing the personal and the professional has its dangers. For the first few years I was on Facebook, I mostly posted a daily "Dobson's Law" of project management (collected here). I learned a lot from the discussions that followed. For both personal and professional reasons, I began wishing people happy birthday on Facebook (sadly, I've neglected that in recent months), and started adding a list of shared birthdays. That led to my second blog, Dobson's Improbable History (pace Peabody), a more-or-less daily post of events and people associated with each day, which in turn supports a series of books I'm writing: The Story of a Special Day, one for every day of the year. (Only $7.95 print, $2.99 ebook — it's like a birthday card they'll never throw away!™ — adv.)

In other words, there are plenty of reasons for me to stay out of politics on Facebook.

- * -

My formative political experience was Alabama during the civil rights years. We supported civil rights in a time and place where that was an extreme minority position. My father considered marching in Selma, and only refrained because he was told his job was on the line. He was generally unafraid of confrontation and had a take-no-prisoners attitude toward those with whom he disagreed, and in that, I took after him.  I believed — and still believe — that the segregation side was not merely wrong, but evil. That's not to say I thought the people who held those beliefs were necessarily evil, but when you call someone's beliefs evil, it's hard for them not to take it personally. I didn't have a lot of friends in those days.

Strangely, however, I now call lots of the same people my friends, though in many cases my feelings about their political opinions remain unchanged. Perhaps it's the shared Stockholm syndrome experience of high school; perhaps it's simply the realization that personal history matters. I can get along with people of dramatically different beliefs, but the way to do that is to focus on what we agree on and what we share, not what separates us. At least that's the plan — the execution has been less than perfect.

Yes, I occasionally find political pieces I want to repost, or comments I feel compelled to make. I created a Facebook list of people likely to find my political positions congenial, and when I can't refrain from political comment, at least I don't feel the need to rub it in the noses of those who disagree. I also feel more free to jump into other people's Facebook discussions, especially when I agree with the original poster. After all, they brought up the subject, not me. It's rare for me to comment when someone posts an opinion I find repugnant; instead, I simply hide the most regular offenders from my timeline and thus keep my blood pressure under control.

- * -

I can never tell what posts will draw political reactions. In addition to Dobson's Improbable History and Dobson's Improbable Quote of the Day, I also explore my fascination with maps on a Pinterest board called, oddly enough, More Fun With Maps! I generally share them without much in the way of comment: a map should speak for itself. Some maps, however, lend themselves to one political position or another. I avoid sharing some maps because of the likelihood of triggering a political argument, but that doesn't always do the trick. Maps I think of as fairly neutral have drawn sharply partisan responses. The same thing is true of quotes. Honestly, I didn't think the King George III quote was going to provoke this kind of reaction.

It is much easier, of course, to see the mote in thy brother's eye than the beam that is in thine own eye. We notice the horrible and disgusting things people on the other side say about our side much more clearly than the horrible and disgusting things that our fellow travelers say about them. Whatever the merits of one position over the other, none of us have clean hands when it comes to rhetorical excess. While I'm wary of false equivalence, or "both sides do it," as a general argument, in this particular case, I think it's a fair observation. I wrote this blog post, and then shared it.

The next comment was political as well: "I stand firmly for the Mugwump party."

I appreciated the sentiment. After all, politics is a Mug's game.

••••

Tuesday, September 25, 2012

Goldfinger Takes Fort Knox! (Propositional Fallacies, Part 2)

Bond villain Auric Goldfinger
In propositional calculus, we can describe certain arguments in mathematical terms. The truth of a compound statement depends on the truth values of its component statements. The statement “It is raining here now, and it is raining where you are now as well” can be written as P⋀Q. It is true if both its component statements are true. On the other hand, “It is raining here now OR it is raining where you are now” (written as P⋁Q) is true as long as at least one of the statements is true.

Propositional fallacies are errors in this kind of formal reasoning: the argument form itself is invalid, regardless of the truth values of the component statements. Last time, we discussed affirming a disjunct, the fallacy of turning an inclusive OR into an exclusive one. The two remaining propositional fallacies are known as affirming the consequent and denying the antecedent.

Affirming the Consequent

If Auric Goldfinger owned Fort Knox, then he would be rich. Auric Goldfinger is rich. Therefore, Auric Goldfinger owns Fort Knox. Even if the first two statements are true, the conclusion is invalid because there are other ways to be rich besides owning Fort Knox.

Here's how to cast the argument in propositional calculus:

P→Q
Q
∴ P

(If P, then Q. Q is true. Therefore, P.)

This is different from the argument "if and only if." If Auric Goldfinger is rich if and only if he owns Fort Knox, then the statement "Auric Goldfinger is rich" makes "Auric Goldfinger owns Fort Knox" necessarily true. But that's the case only if the first statement is true — which it isn't. In propositional calculus, we'd write that:

P⟷Q
Q
∴ P

Affirming the consequent is sometimes called converse error.

Denying the Antecedent

The opposite fallacy, denying the antecedent, is also known as inverse error.

If Auric Goldfinger owned Fort Knox, then he would be rich. Auric Goldfinger does not own Fort Knox. Therefore, Auric Goldfinger is not rich. This is wrong for the same reason as the previous argument was wrong: there are other ways to be rich.

In propositional calculus, this takes the form:

P→Q 
 ¬P
∴ ¬Q

If P, then Q. P is false (not-P). Therefore, Q is false (not-Q). As in the previous case, the rules for if and only if are different from if alone.
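
(A small aside, not part of the original post: these forms can be checked mechanically. The illustrative Python sketch below enumerates every assignment of truth values and looks for a counterexample, a row where all the premises hold but the conclusion fails. Affirming the consequent and denying the antecedent both have counterexamples; the biconditional version does not.)

from itertools import product

def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid only if no assignment of truth values
    # makes every premise true while making the conclusion false.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# Affirming the consequent: P -> Q, Q, therefore P
print(valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))           # False
# Denying the antecedent: P -> Q, not P, therefore not Q
print(valid([lambda p, q: implies(p, q), lambda p, q: not p], lambda p, q: not q))   # False
# Biconditional: P <-> Q, Q, therefore P
print(valid([lambda p, q: p == q, lambda p, q: q], lambda p, q: p))                  # True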



Tuesday, September 11, 2012

Propositional Fallacies, Part 1


There’s a branch of math known as propositional calculus that treats arguments like mathematical propositions. Using propositional calculus, you can demonstrate the truth or falsity of certain arguments.

Take the statement “It is raining here now.” Depending on when you make the statement, it can be either true or false. In propositional calculus, you’d represent the statement as “P,” and the opposite, “It is not raining here now” as “¬P.” If P is true, then ¬P has to be false; if ¬P is true, then P has to be false.

You can link together statements with connectors. Common connectors are AND, OR, NOT, ONLY IF, and IF AND ONLY IF. If we say “It is raining here now, and it is raining where you are now as well,” we can label the second statement as Q. Represent AND with the symbol ⋀, and we can write “It is raining here now, and it is raining where you are now as well” as P⋀Q.

Of course, maybe it is raining here or it isn’t; maybe it’s raining at your house and maybe it isn’t. Because the individual statements can be true or false, we can prepare a truth table.

P        Q        P⋀Q
True     True     True
True     False    False
False    True     False
False    False    False

With AND as a connector, the proposition P⋀Q is true only if both statements are true.

The connector OR (represented as “⋁”), on the other hand, makes the proposition true as long as at least one of the statements is true. “It is raining here now OR it is raining where you are now” results in the following truth table.

P        Q        P⋁Q
True     True     True
True     False    True
False    True     True
False    False    False

Notice that OR is used here inclusively rather than exclusively. That is, P doesn’t exclude Q from being true. If it’s raining at my house, that doesn’t mean it’s not raining at yours.

Given the idea of propositional logic, it's easy to conclude that there are fallacies to go with it. The first of these is known as affirming a disjunct.

Affirming a Disjunct

Also known as the fallacy of the alternative disjunct, or the false exclusionary disjunct, this particular fallacy occurs when you change an inclusive OR into an exclusive one. “It is raining here now or it is raining where you are now” gets interpreted as “If it is raining here now, then it isn’t raining where you are now.”

In our symbolic structure, that gets represented as the following argument (with “therefore” represented by ∴).

P⋁Q
P
∴¬Q

That’s a fallacy because it could be raining both places. One doesn’t preclude the other.

While OR in logic always means an inclusive “or,” that doesn’t mean you don’t sometimes want to be more concrete. The logical operator XOR is an exclusive or. When you use it, you’re saying “one or the other, but not both.” The symbol for that is ⊻.
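
(An illustrative sketch, not part of the original post: a few lines of Python will print these truth tables, including XOR, and show the counterexample behind affirming a disjunct.)

from itertools import product

print("P      Q      AND    OR     XOR")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:6} {q!s:6} {(p and q)!s:6} {(p or q)!s:6} {(p != q)!s}")

# Affirming a disjunct: P OR Q, P, therefore NOT Q.
# The first row (P=True, Q=True) is the counterexample: both premises
# are true, but the conclusion NOT Q is false, so the inference is invalid.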

More next week.

Tuesday, August 14, 2012

The Drake Equation (Formal Fallacies, Part 1)

Frank Drake
In February, I completed a 25-part series on red herrings, a category of argumentative fallacies that are intended to distract from the argument, rather than address it directly. That's only one category of argumentative fallacy. In this series, we'll look at formal fallacies. Formal fallacies are errors in basic logic. You don't even need to understand the argument to know that it is fallacious. Let's start with the appeal to probability.

Appeal to Probability

If I play the lottery long enough, I'm bound to win, and I can live on the prize comfortably for the rest of my life! Yes, it's possible that if you play the lottery, you'll win. Somebody has to. The logical fallacy here is to confuse the possibility of winning with the inevitability of winning. Of course, that doesn't follow.
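
(To put a number on the gap between possible and inevitable, here is an illustrative calculation; the one-in-300-million odds per ticket are an assumption made for the example, not real lottery odds. No matter how many tickets you buy, the probability of winning at least once grows, but it never reaches certainty.)

p_win = 1 / 300_000_000   # assumed odds per ticket, for illustration only

for plays in (1, 52 * 50, 1_000_000, 100_000_000):   # from a single ticket up to 100 million tickets
    p_at_least_once = 1 - (1 - p_win) ** plays
    print(f"{plays:>11,} plays: P(win at least once) = {p_at_least_once:.6f}")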

In our study of cognitive bias (also available in compiled form here), we learned that numerous biases result from the misapplication or misunderstanding of probability in a given situation. Examples include the base rate effect, the gambler's fallacy, the hindsight bias, the ludic fallacy, and overall neglect of probability. Use the tag cloud to the right to learn more about each. We are, as a species, generally bad at estimating probability, especially when it affects us personally.

Various arguments about the Drake equation can fall into this trap. The Drake equation, developed by astrophysicist Frank Drake in 1961, provides a set of guidelines for estimating the number of potential alien civilizations that might exist in the Milky Way galaxy. Here's the formula:


N = R* · fp · ne · fℓ · fi · fc · L

in which:
N = the number of civilizations in our galaxy with which communication might be possible;
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fℓ = the fraction of the above that actually go on to develop life at some point
fi = the fraction of the above that actually go on to develop intelligent life
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space
There are various arguments about the Drake equation. Some argue for additional terms in the equation; others point out that the values of many of the equation's terms are fundamentally unknown. There's a reasonable argument to be made that "N" has to be a fairly low number, on the simple grounds that we have not yet detected any extraterrestrial civilizations. Depending on the assumed values of the terms in the equation, you can derive conclusions that range from the idea that we're alone in the galaxy (see the Fermi Paradox) to an estimate that there may be as many as 182 million alien civilizations awaiting our discovery (or their discovery of us).
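
(An illustrative sketch of just how sensitive the result is to those assumed values; the two parameter sets below are invented for the example, not anyone's published estimates.)

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    # N = R* · fp · ne · fl · fi · fc · L
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

optimistic  = drake(r_star=10, f_p=0.5, n_e=2,   f_l=1,    f_i=0.5,  f_c=0.5, lifetime=10_000)
pessimistic = drake(r_star=1,  f_p=0.2, n_e=0.1, f_l=0.01, f_i=0.01, f_c=0.1, lifetime=100)

print(f"Optimistic inputs:  N is roughly {optimistic:,.0f}")    # tens of thousands of civilizations
print(f"Pessimistic inputs: N is roughly {pessimistic:.6f}")    # effectively zero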

From a fallacies standpoint, however, the problem comes when people argue that the vast number of stars makes it certain that alien civilizations exist. As much as I'd personally prefer to believe this, the logic here is fallacious. Probable — even highly probable — doesn't translate to certainty.

That's not an argument against the Drake equation per se, but merely a problem with an extreme conclusion drawn from it. The Drake equation was never intended to be science, but rather a way to stimulate dialogue on the question of alien civilizations.

Tuesday, May 15, 2012

Why We Need Hokum (Part 3)

The third and final installment of "Why We Need Hokum." Part one is here. Part two is here.

We often assume that the human mind is designed to be rational, and failures of rationality are seen as defects in human thought. In my long study of cognitive biases and decision fallacies, that’s been a continuing theme: here’s why your mind isn’t working right.

But as it turns out, that may be the wrong way to look at it. Dr. Steven Pinker, a Harvard professor specializing in evolutionary psychology and the computational theory of mind, argues that the process of natural selection is not concerned with the truth per se, and in many cases actually disfavors a truth-seeking mind. In an emergency, factual truth-seeking may be way too slow; a fast approximation, even if of questionable accuracy, can promote survival.

Even more importantly, non-factual or even counter-factual beliefs play an important social role. Believing that your own social group is better than other social groups helps you and your group be more successful. Believing that your romantic partner is unique, amazing, and special — even if objective evidence argues otherwise — contributes to a successful relationship.

In fact, some evolutionary psychologists argue that cognitive biases and decision fallacies exist to protect your mind against challenges that would weaken your non-rational beliefs. You want the truth? You can’t handle the truth, and thanks to cognitive biases and decision fallacies, you don’t have to. Beliefs are deeply rooted in the human psyche, and it’s not an accident that they are so resistant to reason.

If a belief increases the survival potential of you and your group, on some level it’s irrelevant whether it’s actually correct. Beliefs that are clearly and immediately contra-survival (“Look, ma, I can fly!”) are evolutionarily self-correcting. It’s not logical argument and rational thinking that changes your mind, but rather the impact of your belief when it collides with the cold, hard ground. When cause and effect is less clear and less immediate, the lesson tends not to sink in.

We are not rational animals. We are thinking animals, but much of our thought isn’t rational at all. We eagerly consume self-admitted lies whenever we read fiction, and as legions of media fans can attest, the imaginary worlds we enter are often more satisfying and fulfilling than the one in which we officially live.

Hokum can expand our universe or contract it. From David Copperfield, who cheerfully admits that what he’s doing is trickery, it’s one small step to Uri Geller, who turns an otherwise unremarkable spoon-bending trick into claims of telekinesis, and not a giant leap to conclude that Area 51 is hiding a world of alien powers right under our very noses. Conspiracy theories and supernatural beliefs of all sorts give us a simple, fulfilling way to eliminate complexity and ambiguity in our lives.

Rationality, enshrined above all in the scientific method, has been under continuing and unremitting attack ever since the scientific revolutions of the 16th and 17th centuries. As soon as Copernicus dethroned the earth as center of the cosmos, the backlash began. In fighting the battle for truth, justice, and the scientific way, we’ve tended to assume that we shared a mutual goal with our opposition: a desire for truth. But that, as we've seen, is not a good assumption.

Which brings up the obvious question, pace Pilate: what is truth? Gandhi argues that truth is self-evident; Churchill argues that it is incontrovertible. But Mark Twain says it best:

“Truth is mighty and will prevail. There is nothing the matter with this, except that it ain’t so.”


Tuesday, February 7, 2012

Wrong is Right (Red Herrings, Part 25)

The 25th and final installment of Red Herrings (a subset of Fallacies) ends with the red herring known as “two wrongs make a right.” There’s actually one more red herring in the Wikipedia list, previously covered in my series on cognitive biases: the Texas sharpshooter fallacy. Next week, something completely different. Or maybe not.

Two Wrongs Make a Right

Given the reality that no side is completely innocent of all wrongdoing, the “two wrongs make a right” argument crops up with surprising frequency. When Side A is accused of some misdeed, the response all too often becomes “Side B is even worse!” But the sins of Side B, no matter how true or how severe, don’t excuse Side A.

The fundamental test of any red herring fallacy is that the truth or falsehood of the counterclaim is irrelevant to the merit of the primary claim. Whether a different group is as bad or worse changes nothing. Two wrongs, as we all know (or should know), don’t make a right.

But that doesn’t stop people from trying.

Tuesday, January 31, 2012

Drawing Straws (Red Herrings Part 24)

Part 24 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, the straw man.

Straw Man

The Economist reports Mitt Romney’s stump speech about President Obama, in which Romney says,
Just a couple of weeks ago in Kansas, President Obama lectured us about Teddy Roosevelt’s philosophy of government. But he failed to mention the important difference between Teddy Roosevelt and Barack Obama. Roosevelt believed that government should level the playing field to create equal opportunities. President Obama believes that government should create equal outcomes.
Of course, Mitt Romney is lying. As the article goes on to say, “Barack Obama doesn't ‘believe that government should create equal outcomes’ any more than Mitt Romney believes that 1% of Americans should have all the wealth while the rest get nothing, or that companies should fire all their American workers and send their jobs to China because Americans are overpaid and lazy.” Instead, Romney is attempting to reframe a discussion on income inequality in a way that is more advantageous to his own arguments.

This is an example of the Straw Man fallacy. Instead of arguing with what a person actually says or believes, the responder creates a distorted version of that argument and attacks that, instead.

Straw man arguments take the following form:

1. Person A holds position X.
2. Person B disregards certain key points of X and instead presents the superficially similar position Y. Position Y is distorted from position X in varying ways, including:
a. A direct misrepresentation of the opponent's position. 
b. Quoting an opponent's words out of context — i.e. choosing quotations that misrepresent the opponent's actual intentions.
c. Presenting someone who defends a position poorly as the main defender, then refuting that person's arguments — thus giving the appearance that every upholder of that position (and thus the position itself) has been defeated. 
d. Inventing a fictitious persona with actions or beliefs which are then criticized, implying that the person represents a group of whom the speaker is critical. 
e. Oversimplifying an opponent's argument, then attacking this oversimplified version.
3. Person B attacks position Y, concluding that X is false/incorrect/flawed.

There may well be honest and legitimate reasons to attack Position X, but attacking Position Y instead is always dishonest.

Tuesday, January 24, 2012

Hume’s Guillotine (Red Herrings Part 23)


Part 23 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, the is-ought problem.

Is-Ought Problem

The “is-ought problem” is also known as Hume’s Law or Hume’s Guillotine, first articulated by Scottish philosopher David Hume in his 1739 work A Treatise of Human Nature.
In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, 'tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.
As in the case of many logical fallacies, it’s not necessarily the case that “is” precludes “ought,” but rather that “is” doesn’t constitute a sufficient proof by itself. To reach a conclusion of “ought” requires additional argument.

The problem is easier in goal-setting than in morality. For example, if you want to win a race, then you ought to run quickly. However, whether you ought to want to win in the first place is a different question.

Tuesday, January 17, 2012

All-Natural (Red Herrings Part 22)

Part 22 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, the appeal to nature.

Appeal to Nature

There’s nothing inherently wrong in appealing to nature as part of an argument, but it becomes a red herring logical fallacy when it turns into an unwarranted assumption. The form of the logical fallacy is:

N is natural.
Therefore, N is good or right. 
U is unnatural.
Therefore, U is bad or wrong.

A medicine made with “all natural” ingredients could well contain arsenic and uranium, which hardly qualify as safe. The definition of “natural” itself can be twisted using judgmental language, which you’ll see in various arguments against homosexuality.

But that’s just a load of santorum.

Tuesday, January 10, 2012

I Am Curious (Yellow) (Red Herrings Part 21)


Part 21 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, the naturalistic fallacy.

Naturalistic fallacy

The naturalistic fallacy was first described and named by British philosopher G. E. Moore in his 1903 book Principia Ethica. It describes the problem of trying to prove an ethical claim by appealing to a definition of “good” in terms of natural properties such as “pleasant,” “more evolved,” or “desirable.” For example, if something is both pleasant and good, inferring that “pleasant” and “good” are therefore the same quality is one bridge too far.

Moore stands in contrast to philosophers who argue that “good” can be defined in terms of natural properties we already understand. Instead, Moore argues that properties are either simple or complex, and complex properties are made from simple ones. Complex properties can be defined by breaking out their simple components, but the simple ones are indefinable. You can define “yellow” as the color of a ripe lemon, or as the primary color between green and orange on the visible spectrum, or as electromagnetic radiation with a wavelength between 570 and 590 nanometers, but none of those are sufficient to help a blind person perceive what you’re talking about. As Justice Potter Stewart famously observed about pornography, “I know it when I see it.” But that doesn’t enable one to produce a meaningful operational definition.

Tuesday, January 3, 2012

Thank God It's Friday! (Red Herrings, Part 20)


Part 20 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, judgmental language.


Judgmental language

You can’t take a management or communications course without hearing about the dangers of judgmental language, but it’s a red herring fallacy as well. Like all red herrings, judgmental language skips over the actual descriptive part of the argument and rushes right to the conclusion.

“He hates me” is judgmental language, not because it’s necessarily false, but because it cites the conclusion without citing the evidence. What’s the evidence that he does, indeed, hate you? Perhaps he called you names, he told someone else that he hated you, he went to your boss to complain about you, or he let the air out of your tires. Those are descriptions of behavior, and at some point they add up to sufficient evidence to draw the conclusion that yes, indeed, he does hate you. Alternatively, it could mean something else altogether, such as a desire to take your job or win a promotion. It’s not personal, it’s strictly business.

Judgmental language criticizes or praises, condemns or applauds, evaluates or interprets the behaviors (actions, deeds, sayings) of human individuals and groups. “Obama is a Kenyan socialist” neatly avoids the requirement of citing evidence and goes right for the jugular. Political propaganda tends to consist of nothing but.

There are a number of problems with judgmental language. First, your conclusion may be wrong. Second, judgmental language tends to be resisted and to provoke arguments. Avoid “red flag” words and phrases. Try to stay away from “should” and “ought.” Go with the Joe Friday technique: just the facts, ma’am.

Tuesday, December 27, 2011

A Cute Angle (Fallacies/Red Herrings Part 19)


Part 19 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week, the Genetic Fallacy.

Genetic fallacy

A guy I went to high school with used to claim that calling someone “cute” was an insult, because “cute” originally meant “bowlegged.” According to the OED, that’s wrong. “Cute” is an aphetic (initial vowel dropped) version of “acute,” and originally referred to someone sharp, clever, or quick-witted. A secondary meaning lists “cute” as a synonym of cur, a worthless dog.

Of course, that’s not the sense in which we most commonly use the word today. If I refer to someone as “cute,” I don’t mean bowlegged, clever, or dog-like — I’m usually referring to someone’s physical appearance. (There’s a sarcastic version — “Are you being cute?” — that refers back to the original meaning, but tone of voice usually makes it clear which meaning we intend.)

The genetic fallacy is the argument that because a word or phrase once meant something different, it continues to mean the same thing, regardless of how usage has evolved. The legend that a “rule of thumb” describes the maximum thickness of a stick with which it was permissible for a man to beat his wife has been debunked repeatedly, but even if it were true, it’s not how we use the term today.

Genetic fallacies aren’t always about language. Australia may have been settled in part by British criminals (as was Georgia), but any conclusion about today’s Australians can’t rest on the legal status of its founders.

Tuesday, December 20, 2011

We Stand on the Shoulders of Pygmies (Fallacies/Red Herrings Part 18)


Part 18 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly.

Chronological snobbery

We stand, Newton observed, on the shoulders of giants, but intellectually we tend to treat them as pygmies. The practice, for example, of using leeches in medicine isn’t just medieval, it’s positively ancient, with citations going back 2,500 years. When leeching went out of fashion (the late 19th century), it was obvious in retrospect how stupid these ancients were. After all, in those days they still believed the earth was flat.

The red herring of chronological snobbery is the argument that because A is an old argument, dating back to when people believed the obviously-false B, A must therefore also be false. The fact that some ancients (though fewer than you’d suppose) believed the earth was flat doesn’t in itself constitute a valid argument against any other ancient idea. If you want to discredit A, you have to show it’s false: the proof that B is false may be valid, but utterly beside the point.

Leeches, after all, came back into medical fashion in the 1980s. It turns out that leeches are helpful in the aftermath of microsurgeries, promoting healing by allowing fresh, oxygenated blood to reach the area. The fallacy of chronological snobbery would have led investigators away from looking at a clearly outmoded idea.

Tuesday, December 13, 2011

Anything You *Don’t* Say May Be Held Against You! (Red Herrings, Part 17)


Part 17 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week we cover the argument from silence.

Argumentum e Silentio

If I refuse to answer the question, what conclusions can you draw? Unlike most red herring fallacies, this one depends on the circumstances. In purely deductive reasoning, a conclusion drawn from the silence of another is automatically fallacious. In abductive reasoning (inference to the best explanation from the available evidence), however, it is sometimes quite reasonable to draw meaning from the absence of communication.

If I claim to be an expert speaker of German, but won’t tell you the proper German phrase for happy birthday, which is more probable: (a) I know, but won’t tell you out of spite, or (b) I really don’t know and my claims to be a German linguist are overblown? Common sense suggests (b) is more likely than (a) — after all, it would be so easy for me to just tell you and move on. On the other hand, if I claim to know my wife’s password but refuse to tell you, it’s unreasonable to conclude that because I won’t tell you, I don’t know.

A fallacious use of the argument from silence is to shift the burden of proof. Bertrand Russell wrote that if he claimed a teapot were orbiting the Sun somewhere between the Earth and Mars, it would be unreasonable to expect others to believe him on the grounds they couldn’t prove him wrong. In other words, your silence in being unable to disprove an argument doesn’t constitute proof that the argument was right all along.

Historians sometimes use the argument from silence in drawing conclusions about what one group of people knew about another, on the grounds that some facts are so natural that their omission legitimately implies ignorance.

In American jurisprudence, the right to silence has the effect of barring the argument from silence, although there are some subtle ways to work it in.

Tuesday, December 6, 2011

Impure Motives (Part 16 of Fallacies/Red Herrings)


A short installment this week. Part 16 of Red Herrings covers still more responses to arguments that distract from the argument rather than address it directly. This week we cover the appeal to motive.

Appeal to Motive

Scientist A says that man-made climate change is a real threat. Scientist A is a candidate for tenure at a well-known university, and the head of his department believes the same thing. This means that Scientist A’s motives are impure, and therefore his argument is false.

This is the appeal to motive, another subcategory of the argumentum ad hominem. Its special feature is that it’s only necessary to show that there is a possibility of a motive, however small. What’s missing is proof that (a) the motive actually exists, and (b) if it does exist, it actually played a role in formulating the argument and conclusion. The fallacy also assumes (c) that any other proof or evidence offered is ipso facto invalid.

It’s related to general claims of conflict of interest. In the run-up to the Supreme Court hearings on the Affordable Care Act, accusations of conflict of interest have been leveled at three justices, two conservative and one liberal. Showing the potential for a conflict of interest isn’t the same thing as demonstrating that the votes of these justices are necessarily corrupt. In the particular instance, it’s more likely the case that the predispositions of the justices predate the events and actions leading to the charge.

Tuesday, November 29, 2011

Repugnicans and Libtards (Fallacies, Part 15)


Returning now to those thrilling days of yesteryear, the fifteenth installment of my series on argumentative fallacies continues our list of red herrings — responses to arguments that distract from the argument rather than address it directly. This week we cover the abusive fallacy, the appeal to equality, and the appeal to accomplishment.

Abusive Fallacy

Repugnicans and libtards — who could possibly take seriously anything they have to say? The abusive fallacy is an extreme form of argumentum ad hominem in which name-calling overcomes every other part of the discussion. The objective is to smear the individual and group so completely that anything they have to say is discredited.

When someone surrenders to the abusive fallacy, any pretense of rational discussion goes out the window. If Occupy Wall Street protestors are “dirty, smelly hippies,” there’s no reason to address the substance of any of their arguments. Similarly, if there’s nothing to the Tea Party but astroturf racists, then nothing they say need be taken seriously either.

Appeal to Equality

What does “equal rights” mean? After all, people aren’t “equal” in most normative senses. We aren’t all of the same height, or the same age, or the same weight, or the same IQ, or the same income, or the same education. Though I subscribe fully to the moral concept of equal rights, the logical issues of equality are more complex. For example, I believe in equality of marriage rights, but I don’t accept that a fetus should be considered legally equal to a human being. I believe in freedom, but I’m willing for society to impose imprisonment or other penalties for people who commit certain offenses. Is this logically inconsistent?

No. It’s an example of a logical fallacy known as the “appeal to equality.” In other words, citing “equality” as proof that Person A should be treated the same as Person B is insufficient to make the case.

The argument that gays and lesbians should be permitted to marry, or that a fetus deserves the civil rights of a person, requires additional reasoning. This is important, because we all — properly — make distinctions. The murderer does not have the same rights as a non-murderer; a child does not have the same rights as an adult.

It’s not inconsistent or logically inappropriate to make judgments and distinctions; in fact, it’s required.

Appeal to Accomplishment

In 1970, the Nobel Prize-winning chemist Linus Pauling published Vitamin C and the Common Cold, in which he claimed that very large doses of Vitamin C had a variety of health effects. It’s probably fair to say that if I had written that book, no one would have taken it seriously.

The appeal to accomplishment is the logical fallacy that the accomplishments of the arguer serve as evidence in favor of his or her claim, whether or not the claim is necessarily related to the area of accomplishment.

While it’s reasonable to take a close look at a proposition because Expert A claims it’s true, it’s important not to confuse that with the belief that Expert A is necessarily right.

Tuesday, October 4, 2011

Riddikulus! (Part 14 of Fallacies)

More red herrings, argumentative fallacies that distract from the argument rather than address it directly. This week, the final three appeals to emotion: the appeal to ridicule, the appeal to spite, and wishful thinking.

Reductio Ad Ridiculum

Riddikulus! As all Harry Potter fans know, the way to defeat a boggart is to convert it from an object of terror to an object of mockery. While the spell clearly works, in real life, the appeal to ridicule is a type of red herring fallacy in which the opponent presents the original argument in a way that turns it into a mockery of itself, either by emphasizing the counter-intuitive aspects of the original argument, or by creating a straw man to debunk it.

An example of the first approach is the argument, “If Einstein's theory of relativity is right, that would mean that when I drive my car it gets shorter and more massive the faster I go. That's crazy!” It’s also true. The problem is that the effects are not easily measured at automobile speeds, but only become significant as the object nears the speed of light.
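
(A quick illustrative calculation of why: the Lorentz factor, which governs length contraction and mass increase, is indistinguishable from 1 at highway speeds and only departs from 1 appreciably near the speed of light. The speeds below are example values.)

import math

C = 299_792_458.0   # speed of light in m/s

def lorentz_factor(v):
    # gamma = 1 / sqrt(1 - v^2/c^2); lengths contract by a factor of 1/gamma.
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(lorentz_factor(30.0))        # ~1.000000000000005 at highway speed (about 108 km/h)
print(lorentz_factor(0.9 * C))     # ~2.29 at 90% of the speed of light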

The second approach misrepresents the argument in order to ridicule it. “If evolution were true, that would mean that all the apes wouldn't be here any more, since they all would have evolved into humans!” That’s ridiculous indeed — but it’s not actually implied or stated in the Theory of Evolution.

Argumentum Ad Odium

The appeal to spite exploits existing bitterness or dislike in its attack. The various attacks on union benefits (such as retirement), particularly for government workers, rely on the negative emotions aimed at the target group as the primary justification for cutting back or cancelling previously agreed-upon benefits. “Why should people enjoy a comfortable retirement with my tax dollars?”

Wishful Thinking

Wishful thinking is based on the premise “I wish P were true/false, therefore P is true/false.” You see this in a lot of superstitious behavior, from chain letters to the belief in UFOs. Personally, I think it would be really cool if aliens did in fact visit Earth — but that doesn’t make it true.

Tuesday, September 20, 2011

Dives and Lazarus (Part 13 of Fallacies)

More red herrings, argumentative fallacies that distract from the argument rather than address it directly.

Argumentum ad crumenam

If you’re so damn smart, why ain’t you rich?

The argumentum ad crumenam, or argument to the purse, suggests that the truth of the proposition can be supported by the wealth of the speaker. If you’re so smart, why ain’t you rich? In other words, if you’re rich, you must be smart.

The rebuttal to this argumentative red herring can be made in only two words:

Donald Trump.

Argumentum ad lazarum

The reverse is known as the appeal to poverty. It takes its name from the parable of the rich man and Lazarus (Luke 16:19-31), in which the rich man suffers the torments of Hades while the beggar Lazarus enjoys the delights of heaven.

While there’s significant Biblical support for the comparative virtue of poor versus rich (see Matthew 19:24, “And again I say to you, It is easier for a camel to go through the eye of a needle, than for a rich man to enter into the kingdom of God.”), virtue and logical argument don’t necessarily correlate. If it’s not necessarily true because a rich person says it, it’s no more true if a poor one does.

Tuesday, September 13, 2011

Known by the Company We Keep (Part 12 of Fallacies)

In the next part of our continuing survey of red herrings (responses to arguments that don’t address the actual argument but merely distract from it), we’ll look at the two types of association fallacies: guilt by association and honor by association. Depending on your point of view, they can be one and the same.

Guilt by Association

Association fallacies take the following form: (1) A is a B. (2) A is also a C. (3) Therefore, all Bs are also Cs. (More formally, (∃x ∈ S : φ(x)) → (∀x ∈ S : φ(x)), which means “if there exists any x in the set S so that a property φ is true for x, then for all x in S the property φ must be true.”)

Of course, that’s not at all a necessary condition. The classic rebuttal goes like this:

(1) All dogs have four legs.
(2) My cat has four legs.
(3) Therefore, my cat is a dog.
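
(The quantifier point can be shown in a few lines of Python; the set and the property below are toy examples for illustration.)

S = ["dog", "cat", "snake"]

def has_four_legs(animal):
    # Illustrative property φ: true for some members of S, not for all.
    return animal in ("dog", "cat")

print(any(has_four_legs(x) for x in S))   # True:  some member of S has the property
print(all(has_four_legs(x) for x in S))   # False: it does not follow that all members do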

The PolitiFact Truth-O-Meter recently gave a “Pants On Fire” rating to an August 17 blog post by Texas radio host Dan Cofall, which read in part, “The magic number ‘70’ is the number of members of the 111th Congress who are members of the Democratic Socialists of America (DSA). These are not just politicians who vote left of center; these are card-carrying members of ‘The Democratic Socialists of America.’” (The "70" to which Cofall refers is the membership of the Congressional Progressive Caucus — though I do not believe they actually issue membership cards.)

There are two guilt-by-association attacks here, one direct and one indirect. The indirect attack is in the term “card-carrying,” an echo of McCarthy-era HUAC anti-communist campaigning. The implication goes like this:

(1) Dues-paying members of the Communist Party carry cards.
(2) Members of Group X (Democratic Socialists, ACLU, etc.) carry cards.
(3) Therefore, members of Group X are Communists.

The direct attack takes this form:

(1) The  Democratic Socialists of America have a platform with a number of ideas.
(2) Some members of the Congressional Progressive Caucus have ideas that overlap with some items on the DSA agenda.
(3) Therefore, all members of the Congressional Progressive Caucus are "card carrying" members of the Democratic Socialists of America.

Of course, liberal Democrats and Tea Party members also share some specific ideas (they both like the idea of voting, for example), but it hardly follows that all liberal Democrats are Tea Party members, or vice versa.

The Democratic Socialists are somewhat chagrined. “If we had formal political relationships with 70-odd members [of Congress], we would be making a lot more money” from dues. And as far as they’re concerned, the problem with the Congressional Progressive Caucus is that the members aren’t nearly socialist enough — they prefer a third party movement.

Whether it’s “guilt by association” or “honor by association” may depend on your point of view. When Bill O’Reilly said on his January 19, 2005, broadcast, “Hitler would be a card-carrying ACLU member. So would Stalin. Castro probably is. And so would Mao Zedong,” I decided to look at it this way:

(1) Bill O’Reilly and George Bush say bad things about “card carrying” ACLU members.
(2) I think O’Reilly and his fellow-travelers are jackasses.
(3) Therefore, I joined the ACLU…just so I can carry my card.

Tuesday, August 30, 2011

Queen for a Day (Part 11 of Fallacies)


Fallacies involve incorrect or invalid reasoning. Red herrings are a category of fallacy in which the response to an argument doesn’t address the argument, but rather offers a distraction from it. One class of red herrings consists of appeals to emotion, in which a given feeling is used as the evidence for or against a given proposition.

Argumentum Ad Misericordiam

“Would YOU like to be Queen for a day?”

The forerunner to today’s reality show epidemic, Queen for a Day, premiered as a radio show in 1945, moved to television in 1956, and lasted until 1964. The format involved three different women talking about financial, health, or other emotionally gripping hard times they had recently experienced, and what they most needed to deal with them — medical care, therapeutic equipment, or a major appliance. An applause meter registered the level of sympathy, and the winner had her wish granted, along with other merchandise. (The runners-up also received prizes; no one went away empty-handed.)

The appeal to pity (argumentum ad misericordiam) is a red herring fallacy because it doesn’t in itself prove or disprove the proposition at hand. The contestant with the worst problems or greatest need isn’t necessarily the winner — the winner is the one most able to win the audience’s sympathy.

Pleading with the teacher for a better grade because an “F” means you can’t be on the football team doesn’t mean you deserve a better grade — but that’s not to say the appeal to pity isn’t effective, or that it’s automatically wrong to make or change a decision because of pity. As dustman-philosopher Alfred P. Doolittle so artfully argues in Pygmalion:
I ask you, what am I? I'm one of the undeserving poor: that's what I am. Think of what that means to a man. It means that he's up agen middle class morality all the time. If there's anything going, and I put in for a bit of it, it's always the same story: 'You're undeserving; so you can't have it.' But my needs is as great as the most deserving widow's that ever got money out of six different charities in one week for the death of the same husband. I don't need less than a deserving man: I need more. I don't eat less hearty than him; and I drink a lot more. I want a bit of amusement, cause I'm a thinking man. I want cheerfulness and a song and a band when I feel low. Well, they charge me just the same for everything as they charge the deserving. What is middle class morality? Just an excuse for never giving me anything. Therefore, I ask you, as two gentlemen, not to play that game on me. I'm playing straight with you. I ain't pretending to be deserving. I'm undeserving; and I mean to go on being undeserving. I like it; and that's the truth.
He won’t win Queen for a Day, but it’s a fine argument nonetheless.

In the law, appeals to pity are not supposed to be made during the trial (though you can sneak it in if you can camouflage it as part of another argument), but they're completely appropriate during sentencing.