The following is an extract from The Demon-Haunted World by Carl Sagan (pp. 196-204)[1]

In science we may start with experimental results, data, observations, measurements, 'facts'.  We invent, if we can, a rich array of possible explanations and systematically confront each explanation with the facts.  In the course of their training, scientists are equipped with a baloney detection kit.  The kit is brought out as a matter of course whenever new ideas are offered for consideration.  If the new idea survives examination by the tools in our kit, we grant it warm, although tentative, acceptance.  If you're so inclined, if you don't want to buy baloney even when it's reassuring to do so, there are precautions that can be taken;  there's a tried-and-true, consumer-tested method.

What is in the kit?  Tools for skeptical thinking.

What skeptical thinking boils down to is the means to construct, and to understand, a reasoned argument and, especially important, to recognize a fallacious or fraudulent argument.  The question is not whether we like the conclusion that emerges out of a train of reasoning, but whether the conclusion follows from the premise or starting point and whether that premise is true.

Among the tools:


The reliance on carefully designed and controlled experiments is key, as I tried to stress earlier.  We will not learn much from mere contemplation.  It is tempting to rest content with the first candidate explanation we can think of.  One is much better than none.  But what happens if we can invent several?  How do we decide among them?  We don't.  We let experiment do it.  Francis Bacon provided the classic reason:


"Argumentation cannot suffice for the discovery of new work, since the subtlety of Nature is greater 

many times than the subtlety of argument."


Control experiments are essential.  If, for example, a new medicine is alleged to cure a disease 20 percent of the time, we must make sure that a control population, taking a dummy sugar pill which as far as the subjects know might be the new drug, does not also experience spontaneous remission of the disease 20 percent of the time.
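Sagan's point can be made concrete with a small simulation (a hypothetical sketch, not from the text): if the disease remits on its own 20 percent of the time, a drug with no effect at all will still appear to "cure" about 20 percent of patients, and only the placebo control reveals this.

```python
import random

random.seed(42)

def count_recoveries(n_patients, remission_rate):
    """Count patients who recover spontaneously at the given rate."""
    return sum(random.random() < remission_rate for _ in range(n_patients))

# A useless "drug" tested without a control group looks 20% effective,
# because 20% of patients recover on their own anyway.  The rates and
# group sizes here are illustrative assumptions.
n = 10_000
drug_group = count_recoveries(n, 0.20)     # drug has no real effect
placebo_group = count_recoveries(n, 0.20)  # sugar-pill control

print(f"drug: {drug_group / n:.1%}, placebo: {placebo_group / n:.1%}")
# Both rates hover near 20%: compared against the control, the drug adds nothing.
```

The design lesson is that the drug's apparent success rate is meaningless on its own; it only becomes evidence when compared against the control group's rate.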

Variables must be separated.  Suppose you're seasick, and given both an acupressure bracelet and 50 milligrams of meclizine.  You find the unpleasantness vanishes.  What did it: the bracelet or the pill?  You can tell only if you take the one without the other, next time you're seasick.  Now imagine that you're not so dedicated to science as to be willing to be seasick.  Then you won't separate the variables.  You'll take both remedies again.  You've achieved the desired practical result; further knowledge, you might say, is not worth the discomfort of attaining it.

Often the experiment must be done "double-blind", so that those hoping for a certain finding are not in the potentially compromising position of evaluating the results.  In testing a new medicine, for example, you might want the physicians who determine which patients' symptoms are relieved not to know which patients have been given the new drug.  The knowledge might influence their decision, even if only unconsciously. Instead the list of those who experienced remission of symptoms can be compared with the list of those who got the new drug, each independently ascertained. Then you can determine what correlation exists.  Or in conducting a police lineup or photo identification, the officer in charge should not know who the prime suspect is, so as not consciously or unconsciously to influence the witness.

In addition to teaching us what to do when evaluating a claim to knowledge, any good baloney detection kit must also teach us what not to do.  It helps us recognize the most common and perilous fallacies of logic and rhetoric.  Many good examples can be found in religion and politics, because their practitioners are so often obliged to justify two contradictory propositions.  Among these fallacies are:


Knowing the existence of such logical and rhetorical fallacies rounds out our toolkit.  Like all tools, the baloney detection kit can be misused, applied out of context, or even employed as a rote alternative to thinking.  But applied judiciously, it can make all the difference in the world, not least in evaluating our own arguments before we present them to others.


My favorite example is this story, told about the Italian physicist Enrico Fermi, newly arrived on American shores, enlisted in the Manhattan nuclear weapons project, and brought face-to-face in the midst of World War Two with US flag officers:

So-and-so is a great general, he was told.
"What is the definition of a great general?" Fermi characteristically asked.
"I guess it's a general who's won many consecutive battles"
"How many?"
After some back and forth they settled on five.
"What fraction of American generals are great?"
After some more back and forth, they settled on a few per cent.

But imagine, Fermi rejoined, that there is no such thing as a great general, that all armies are equally matched, and that winning a battle is purely a matter of chance.  Then the chance of winning one battle is one out of two, or 1/2; two battles 1/4, three 1/8, four 1/16 and five consecutive battles 1/32, which is about three per cent.  You would expect a few per cent of American generals to win five consecutive battles, purely by chance.  Now has any of them won ten consecutive battles...?
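Fermi's coin-flip arithmetic is easy to verify (a sketch of his reasoning, not a computation from the text): under the null hypothesis that battles are fair coin flips, the probability of a run of consecutive wins halves with each battle.

```python
# Fermi's null hypothesis: battles are coin flips, so the chance of
# winning k consecutive battles is (1/2)**k.
p_five = 0.5 ** 5
print(f"P(5 straight wins) = 1/{int(1 / p_five)} = {p_five:.1%}")   # 1/32, about 3.1%

# Ten straight wins by luck alone would be far rarer:
# roughly one general in a thousand.
p_ten = 0.5 ** 10
print(f"P(10 straight wins) = 1/{int(1 / p_ten)} = {p_ten:.2%}")    # 1/1024, about 0.10%
```

This is why Fermi's closing question bites: a few per cent of "great" generals is exactly what blind chance predicts for five wins, but chance alone would almost never produce ten.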

[1]    Sagan, C. 1997, The Demon-Haunted World, Headline Book Publishing, London.