
Sunday, November 1, 2009

“Looking for the Pony” — Cognitive Biases, Part 2

Welcome back to part two of our discussion of cognitive and decision-making biases. The series begins here.

Everyone's subject to cognitive biases of one sort or another. None of us is capable of pure objectivity; we cannot see reality without distortion. But we can try.

There are around 100 different identified cognitive and decision biases, and some of them have subsets, as we'll see shortly. Today, we'll cover three more: the base rate fallacy, congruence bias, and everyone's traditional favorite, experimenter's bias.

Base rate fallacy. There are 100 terrorists trying to sneak through airline security for every one million non-terrorists. TSA has set up an automated face recognition system that has 99% accuracy. The alarm goes off, and trained Homeland Security agents swoop down. What is the probability their captive is really a terrorist?

Well, if the failure rate is 1%, that means there’s a 99% chance the person is a terrorist, and a 1% chance that he or she is not, right? That justifies a significant assumption of guilt.

But this actually gets it backward. The chance the person isn't a terrorist is far greater — in fact, it's 99.02% likely that the new prisoner is completely innocent!

The mistake that leads to the first conclusion is called the base rate fallacy. It occurs when you don't notice that the failure rate (1 in 100) is not the same as the false alarm rate. The false alarm rate is completely different, because there are, after all, far more non-terrorists than terrorists. Let's imagine that we walk everyone — 100 terrorists and 1 million non-terrorists, for a total of 1,000,100 people — in front of the face recognition tool. A 1% failure rate means the system errs on roughly one passenger in every hundred. It will catch 99 terrorists and miss one, but it will also flag 10,000 innocent non-terrorists, for a total of 10,099 alarms. The odds are actually 99 out of 10,099, a minuscule 0.98% chance that the person caught is really a terrorist.

This does not argue against the value of screening. Screening might be perfectly reasonable. Overreaction, however, is not. If you’re 99% sure you’ve caught a terrorist, you will behave differently than if you’re only 1% sure.

To avoid the base rate fallacy, look at the “prior probability.” If there were no terrorists, what would the face recognition system produce? With a 1% failure rate, it would never pick a real terrorist (there would be none), but it would trigger 10,000 false positives. Now you’ve found the missing fact.

(Footnote: Notice that the base rate fallacy only produces a badly incorrect analysis when the populations are unbalanced, as in our case of 100 terrorists in a population of one million. As the populations approach 50/50, the failure rate and the false alarm rate converge. Mind you, we'd have different problems then.)
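For those who like to see the arithmetic spelled out, here is a minimal Python sketch of the calculation above. The function and variable names are mine, purely for illustration; the figures — 100 terrorists, one million non-terrorists, 99% accuracy — come from the example.

# A minimal sketch of the base-rate arithmetic above. The function name is
# illustrative only; the numbers are the ones used in the example.

def chance_flagged_is_terrorist(n_terrorists, n_innocents, accuracy):
    """Probability that a person who sets off the alarm is really a terrorist."""
    false_alarm_rate = 1.0 - accuracy              # 1% of innocents trip the alarm
    true_alarms = n_terrorists * accuracy          # 99 terrorists correctly flagged
    false_alarms = n_innocents * false_alarm_rate  # 10,000 innocents incorrectly flagged
    return true_alarms / (true_alarms + false_alarms)

p = chance_flagged_is_terrorist(100, 1_000_000, 0.99)
print(f"Flagged person is a terrorist: {p:.2%}")      # about 0.98%
print(f"Flagged person is innocent:    {1 - p:.2%}")  # about 99.02%

# And, as the footnote notes, the gap disappears as the populations balance out:
print(f"With a 50/50 split: {chance_flagged_is_terrorist(500_000, 500_000, 0.99):.2%}")  # 99.00%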

Congruence bias. In congruence bias, you only test your hypothesis directly, potentially missing alternative explanations. In the famous Hawthorne studies, conducted at Western Electric's Hawthorne Works in the 1920s, researchers wanted to test whether improved lighting in factories would increase worker productivity. They performed a direct test: they measured productivity, installed better lighting, and measured productivity again. Productivity went up. If you are falling into congruence bias, you're done. Experiment confirmed; case closed.

But the researchers avoided the trap. They tested the hypothesis indirectly. If improved lighting increased productivity, they reasoned, then worse lighting should lower it. So they tested that proposition as well. They took out a lot of lights and measured again, and to everyone's surprise, productivity went up again! A deeper analysis revealed what is now known as the Hawthorne Effect: when people feel others are paying attention to them, their productivity tends to go up, at least temporarily. (It's a huge benefit of management consultants; just by showing up, we're likely to make things better.)

To avoid congruence bias, don't be satisfied with direct reasoning alone. Direct confirmation asks, “If I behaved in accordance with my hypothesis, what would I expect to occur?” Indirect confirmation asks, “If I acted in conflict with my hypothesis, what would I expect to occur?” If the researchers had stopped with the first question, we'd all be fiddling with the lights. Only the second question allowed them to discover the deeper truth.
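To make the two questions concrete, here is a hypothetical sketch of the same experimental logic. The productivity numbers are invented placeholders, not the actual Hawthorne data; the point is the structure of the test, not the values.

# A hypothetical sketch of direct vs. indirect confirmation.
# The numbers below are invented placeholders, not the Hawthorne data.

def direction(before, after):
    """Did the measure go up, down, or stay flat?"""
    return "up" if after > before else "down" if after < before else "flat"

# Hypothesis: better lighting raises productivity.
baseline = 100      # placeholder productivity measure
more_light = 110    # placeholder: productivity with improved lighting
less_light = 112    # placeholder: productivity with worse lighting

# Direct confirmation alone (congruence bias): test only the favorable condition.
print("More light:", direction(baseline, more_light))  # "up" -- looks confirmed; case closed?

# Indirect confirmation: test what the hypothesis says should make things worse.
print("Less light:", direction(baseline, less_light))  # also "up" -- the surprise

if direction(baseline, more_light) == "up" and direction(baseline, less_light) == "up":
    print("Both conditions improved; lighting does not explain the change. Look deeper.")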

Experimenter’s bias. This bias is well known to anyone in scientific fields. It’s the tendency for experimenters to believe and trust data that agrees with their hypothesis, and to disbelieve and distrust data that doesn’t. It’s a natural enough feeling; there’s a price to pay if we’re wrong, even if it’s only a hit to our egos. It’s impossible for any human being to be completely objective. Our perceptions and intelligence are constrained, and we are looking from the inside, not the outside.

Experimenter’s bias can’t be avoided; it has to be managed instead. Last week, we discussed the “bias blind spot,” the recursive bias of failing to recognize that you have biases. Self-awareness helps. Another good technique is the “buddy system.” I frequently work with co-authors so I have someone to challenge my thinking. That reduces the problem, though it doesn’t eliminate it — wherever my co-author and I see it the same way, the risk remains.

The best technique is to understand the components of the bias. A 1979 study of sampling and measurement biases listed 56 different experimenter’s biases: the “all’s well” literature bias, the referral filter bias, the volunteer bias, the insensitive measure bias, the end-digit preference bias, and my favorite, the data dredging bias, also known as “looking for the pony.”

More next week...

Saturday, October 24, 2009

Unknown Knowns — A Survey of Assumptions, Biases, and Bigotry

“The bigot is not he who knows he is right; every sane man knows he is right. The bigot is he whose emotions and imagination are too weak to feel how it is that other men go wrong.”

- G. K. Chesterton, Alarms and Discursions, 1910

Last week, we explored Donald Rumsfeld’s observation about “unknown unknowns.” Unknown unknowns aren't just about what you don't know; they're about what you don't even know that you don't know. The other categories, of course, are known knowns (things you know and know you know) and known unknowns (things you know that you don't know).

But there is one missing combination: unknown knowns, the things you don't know that you really do know. How could you not know something that you actually do know? The answer involves cognitive biases, the ways in which your mind deceives you. Cognitive biases can blind you to what is in fact right in front of you, and also can make you see things that really aren't there.

In looking at cognitive bias, the essential first step is to realize that no one is immune. It's easier to see the mote of self-deception in someone else's eye than it is to see the big heavy curtains that are draped over our own perceptions. None of us can completely escape the trap, but we can (and must) stay aware that what we think isn't necessarily the whole or complete picture. As once was famously said of Vietnam, "Anybody who knows what's going on clearly doesn't understand the situation."

Project managers are taught how important it is to document the range of assumptions on a project, but the PMBOK® Guide doesn't go into much detail about how to discover them or what to do about them. And it's wrong to assume (*ahem*) that all assumptions are bad for your project. They don't always make an "ass + u + me."

Some assumptions are, of course, clearly bad. Common project assumptions include the idea that everybody's on board; that people will always play nice; and that the proposed project will actually solve the underlying problem. "Bad" in this context doesn't mean these assumptions are necessarily or always wrong; it means it's dangerous to take for granted that they're right.

Other assumptions are more useful: if you see a gun, it's wise to assume it's loaded and act accordingly, even if you have good reason to believe it probably isn't. The consequences of an error in one direction don't have the same impact as the consequences of an error in the other. Still other assumptions may change over time. Assume the gun is loaded unless you need to use it; in the latter case, it might be safer to assume it isn't loaded and check to make sure there's a round in the chamber.

The big problem in assumptions comes from assumptions that are held so deeply in the subconscious mind that we (or other stakeholders) aren't even aware they exist -- the “unknown knowns” of our title.

Prejudices and biases are a normal part of the makeup of human beings. They have a certain utility; they permit us to filter and organize and simplify the complex flood of data we get from everyday existence. The danger comes when prejudices are confused with facts. A good general assumption turns into an iron-clad rule; “some” is equated with “all,” and it’s one short step to the idea that if someone sees it differently, they must be either stupid or venal. That, as G. K. Chesterton points out, is the essence of bigotry.

It is both humbling and fascinating to read the extensive and exhaustive lists of biases and cognitive distortions that have been identified over the years. There are far too many for a single blog post, so we'll have fun with these for the next few weeks. If you'd like to jump into the discussion, please feel free. SideWise thinkers know they have to battle their own biases as well as those of others, and understanding the list is the essential first step.

Let’s start with five common biases.

Decision-Making and Behavioral Biases

Bias Blind Spot — "Bias blind spot" is a recursive bias, the bias of failing to compensate for one's own cognitive biases. Some 80% of drivers think they are substantially better than the average driver. That's called the "better than average effect." Here, the vast majority of people think they are less subject to bias than the average person.

Confirmation Bias — Evidence is seldom completely clean and clear. If a mass of facts argue against our position and one fact supports it, guess which fact we focus on? When confronted by a mass of data, we tend to be selective in the evidence we collect; we tend to interpret the evidence in a biased way; and when we recall evidence, we often do so selectively. This is why a search for facts isn't as persuasive as logic might suggest.

Déformation professionnelle — Your training as a professional carries with it an intrinsic bias that's often expressed by the phrase "When the only tool you have is a hammer, all problems look like nails." We probably know IT professionals who think every problem can be best solved with software, HR professionals who think every problem yields to training and human capital development, and project managers who think all problems lie inside the confines of the triple constraints. Each profession, of course, provides enormous value, but no single profession has all the answers.

Denomination Effect — One way to limit your daily spending is to carry only large denomination bills. Research shows that people are less likely to spend larger bills than their equivalent value in smaller ones. (This could also be called the Starbucks Effect.)

Moral Credential Effect — If you develop a track record as a moral and ethical person, you can actually increase your likelihood of making less ethical decisions in the future, as if you had given yourself a "Get out of jail free" card. For example, in a 2001 study, individuals who had been given the opportunity to recruit a woman or an African-American in one setting were more likely to say later that a different job would be better suited for a man or a Caucasian.

More next week...

[Illustration © 2009 Mark Hill, used with permission.]