Monday, December 3, 2012

Three theories of why we see ideological polarization on facts

Explaining the phenomenon of political conflict over risk and related policy-consequential facts is, of course, the whole point -- or at least the starting point, since the "whole point" includes counteracting it too -- of the Cultural Cognition Project.

But what's being explained admits of diverse characterizations, and in fact tends to get framed and reframed across studies in a way that enables the basic conjecture to be probed and re-probed over & over from a variety of complementary angles (and supplementary ones, too).

Yes, simple obsessiveness is part of what's going on. But so is validity. 

No finding is ever uniquely determined by a single theory. One makes a study as singularly supportive as possible of the "basic account" from which the study hypothesis derived. But corroboration of the hypothesis can't by itself completely rule out the possibility that something else might have generated the effect observed in a particular study.

The way to deal with this issue is not to argue until one's blue in the face w/ someone who says, "well, yes, but maybe it was ..."; rather it is to do another study, and another, and another, in which the same basic account is approached and re-approached from different related angles, enabled by slightly different reframings of the basic conjecture. Yes, in each instance, "something else" -- that is, something other than the conjecture you meant to be testing -- might "really" explain the result. But the likelihood that "something else" was "really" going on -- and necessarily something different every time; if there's one particular alternative theory that fits every single one of your results just as well as your theory, then you are doing a pretty bad job in study design! -- becomes ever more remote as more and more studies that all reflect your conjecture, reframed in yet another way, keep piling up.

The framing of the latest CCP study, Ideology, Motivated Reasoning, and Cognitive Reflection, is a reframing in that spirit.

The study presents experimental evidence that supports the hypotheses that ideologically motivated reasoning is symmetric or uniform across different systems of political values and that it increases in intensity as individuals' disposition to engage in conscious and systematic information processing -- as opposed to intuitive, heuristic-driven information processing -- increases.

Those findings lend support to the "basic account" of cultural cognition: that political polarization over risk reflects the entanglement of policy-relevant facts in antagonistic social meanings; fixing the science communication problem, then, depends on disentangling meaning and fact.

But the story is told here as involving a competition between three "theories" of how dual-process reasoning, motivated cognition, and ideology relate to each other.  That story is meant to be interesting in itself, even if one hasn't tuned into all the previous episodes.

Here is the description of those theories in the paper; see if you can guess which one is really "cultural cognition"!

a. Public irrationality thesis (PIT). PIT treats dual-process reasoning as foundational and uses motivated cognition to explain individual differences in risk perception. The predominance of heuristic or System 1 reasoning styles among members of the general public, on this view, accounts for the failure of democratic institutions to converge reliably on the best available evidence as reflected in scientific consensus on issues like climate change (Weber 2006). Dynamics of motivated cognition, however, help to explain the ideological character of the resulting public controversy over such evidence. Many of the emotional resonances that drive System 1 risk perceptions, it is posited, originate in (or are reinforced by) the sorts of affinity groups that share cultural or ideological commitments. Where the group-based resonances that attach to putative risk sources (guns, say, or nuclear power plants) vary, then, we can expect to see systematic differences in risk perceptions across members of ideologically or culturally uniform groups (Lilienfeld, Ammirati & Landfield 2009; Sunstein 2007).

b. Republican Brain hypothesis (RBH). RBH—so designated here in recognition of the synthesis constructed in Mooney (2012); see also Jost & Amodio (2011)—treats the neo–authoritarian personality findings as foundational and links low-quality information processing and motivated cognition to them. Like PIT, RBH assumes motivated cognition is a heuristic-driven form of reasoning. The mental dispositions that the neo–authoritarian personality research identifies with conservative ideology—dogmatism, need for closure, aversion to complexity, and the like—indicate a disposition to rely predominantly on System 1. Accordingly, the impact of ideologically motivated cognition, even if not confined to conservatives, is disproportionately associated with that ideology by virtue of the negative correlation between conservatism and the traits of open-mindedness and critical reflection—System 2, in Kahneman's terms—that would otherwise check and counteract it (e.g., Mooney 2012; Jost, Glaser, Kruglanski & Sulloway 2003; Kruglanski 2004; Thórisdóttir & Jost 2011; Feygina, Jost & Goldsmith 2010; Jost, Nosek & Gosling 2008).

It is primarily this strong prediction of asymmetry in motivated reasoning that distinguishes RBH from PIT. PIT does predict that motivated reasoning will be correlated with the disposition to use System 1 as opposed to System 2 forms of information processing. But nothing intrinsic to PIT furnishes a reason to believe that these dispositions will vary systematically across persons of diverse ideology.

c.  Expressive rationality thesis (ERT). ERT lays primary emphasis on identity-protective motivated reasoning, which it identifies as a form of information processing that rationally advances individual ends (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012). The link it asserts between identity-protective cognition, so conceived, and dual-process reasoning creates strong points of divergence between ERT and both PIT and RBH.

One conception of “rationality” applies this designation to mental operations when and to the extent that they promote a person’s ends defined with reference to some appropriate normative standard. When individuals display identity-protective cognition, their processing of information will more reliably guide them to perceptions of fact congruent with their membership in ideologically or culturally defined affinity groups than to ones that reflect the best available scientific evidence. According to ERT, this form of information processing, when applied to the sorts of facts at issue in polarized policy disputes, will predictably make ordinary individuals better off. Any mistake an individual makes about the science on, say, the reality or causes of climate change will not affect the level of risk for her or for any other person or thing she cares about: whatever she does—as consumer, as voter, as participant in public discourse—will be too inconsequential to have an impact. But insofar as opposing positions on climate change have come to express membership in and loyalty to opposing self-defining groups, a person’s formation of a belief out of keeping with the one that predominates in her group could mark her as untrustworthy or stupid, and thus compromise her relationships with others. It is therefore “rational” for individuals in that situation to assess information in a manner that aligns their beliefs with those that predominate in their group whether or not those beliefs are correct—an outcome that could nevertheless be very bad for society at large (Kahan 2012b).

It is important to recognize that nothing in this account of the individual rationality of identity-protective cognition implies that this process is conscious. Indeed, the idea that people will consciously manage what they believe about facts in order to promote some interest or goal extrinsic to the truth of their beliefs reflects a conceptually incoherent (and psychologically implausible) picture of what it means to “believe” something (Elster 1983). Rather the claim is simply that people should be expected to converge more readily on styles of information processing, including unconscious ones, that promote rather than frustrate their individual ends. At least in regard to the types of risks and policy-relevant facts typically at issue in democratic political debate, ordinary people’s personal ends will be better served when unconscious modes of cognition reliably focus their attention in a manner that enables them to form and maintain beliefs congruent with their identity-defining commitments. They are thus likely to display that form of reasoning at the individual level, whether or not it serves the collective interest for them to do so (Kahan et al. 2012).

Individuals disposed to resort to low-level, System 1 cognitive processing should not have too much difficulty fitting in. Conformity to peer influences, receptivity to “elite” cues, and sensitivity to intuitions calibrated by the same will ordinarily guide them reliably to stances that cohere with and express their group commitments.

But if individuals are adept at using high-level, System 2 modes of information processing, then they ought to be even better at fitting their beliefs to their group identities. Their capacity to make sense of more complex forms of evidence (including quantitative data) will supply them with a special resource that they can use to fight off counterarguments or to identify what stance to take on technical issues more remote from the ones that figure in the most familiar and accessible public discussions.

ERT thus inverts the relationship that PIT posits between motivated cognition and dual-process reasoning. Whereas PIT views ideological polarization as evidence of a deficit in System 2 reasoning capacities, ERT predicts that the reliable employment of higher-level information processing will magnify the polarizing effects of identity-protective cognition (Kahan et al. 2012).

Again, the argument is not that such individuals will be consciously managing the content of their beliefs. Rather it is that individuals who are disposed and equipped to make ready use of high-level, conscious information processing can be expected to do so in the service of their unconscious motivation to form and maintain beliefs that foster their connection to identity-defining groups.

ERT’s understanding of the source of ideologically motivated reasoning also puts it into conflict with RBH. To begin, identity-protective cognition—the species of motivated reasoning that ERT understands to be at work in such conflicts—is not a distinctively political phenomenon. It is likely to be triggered by other important affinities, too—such as the institutional affiliations of college students or the team loyalties of sports fans. Unless there is something distinctive about “liberal” political groups that makes them less capable of underwriting community attachment than all other manner of group, it would seem odd for motivated reasoning to display the asymmetry that RBH predicts when identity-protective cognition operates in the domain of politics.

In addition, because RBH, like PIT, assumes motivated reasoning is a feature of low-level, System 1 information processing, ERT calls into question the theoretical basis for RBH’s expectation of asymmetry. Like PIT, ERT in fact suggests no reason to believe that low-level, System 1 reasoning dispositions will be correlated with ideological or other values. But because ERT asserts that high-level, System 2 reasoning dispositions magnify identity-protective cognition, the correlations featured in the neo–authoritarian-personality research would, if anything, imply that liberals—by virtue of their disposition to use systematic reasoning—are all the more likely to succeed in resisting evidence that challenges the factual premises of their preferred policy positions. Again, however, because ERT is neutral on how System 1 and System 2 dispositions are in fact distributed across the population, it certainly does not entail such a prediction.
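
To make the competing predictions a bit more concrete, here is a minimal, purely illustrative sketch (in Python, with simulated data and hypothetical variable names -- it is not the study's actual analysis or dataset) of how the three accounts could be told apart in a simple regression with an ideology-by-reflection interaction:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical, standardized variables: "conserv" for political outlook
    # (higher = more conservative), "reflection" for a CRT-like reasoning
    # disposition, "risk_skept" for an ideologically congruent risk perception.
    rng = np.random.default_rng(0)
    n = 1000
    conserv = rng.normal(size=n)
    reflection = rng.normal(size=n)

    # Simulate an ERT-like world: identity-congruent responding grows with
    # reflection, symmetrically at both ends of the ideology scale.
    risk_skept = conserv * (1 + 0.5 * reflection) + rng.normal(size=n)

    df = pd.DataFrame({"conserv": conserv, "reflection": reflection,
                       "risk_skept": risk_skept})
    fit = smf.ols("risk_skept ~ conserv * reflection", data=df).fit()
    print(fit.params)  # main effects plus the conserv:reflection interaction

Read against the three accounts, very roughly: PIT implies a conserv:reflection interaction at or below zero (more reflective subjects should be less polarized); RBH implies the polarizing effect is concentrated among conservatives; and ERT implies a positive interaction on both sides of the ideological spectrum -- reflection magnifying rather than muting the gap, which is what the simulated coefficients above recover.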

 

Reader Comments (1)

I believe that you have just described the psychological functioning of tribalism. But I'm not sure how one goes about countering it. Perhaps by aggressively targeting thought leaders of the tribal factions who find reality threatening. Reframing the issues in ways which resonate with core identity markers may also help. For example, I read an essay today which listed all the ways people were kinder to the environment in the 50s and 60s (many bottles were recyclable, one TV per household, line-dried clothes, kids walked and rode bikes to school, etc). A retro agenda may be more appealing to conservatives than one dependent on accepting science which is seen as hopelessly tied to liberalism? There are a few voices on the right who are already involved in critiquing the consumerist bent of America for the ways it destroys traditional communities and values (Rod Dreher is one who comes to mind). Converging along these lines may be a good opening. With some practice, I think academia could become more skilled at presenting evidence in ways which work with tribal thinking rather than challenging group identity.

December 3, 2012 | Unregistered Commenter Rebecca Trotter