Sex differences in brain size

Next time someone asks you “Are men and women’s brains different?”, you can answer, without hesitation, “Yes”. Not only do they tend to be found in different types of bodies, but they are different sizes. Men’s are typically larger by something like 130 cubic centimeters.

Not only are they actually larger, but they are larger even once you take body size into account (i.e. men’s brains are bigger even when accounting for the fact that heavier and/or taller people tend to have bigger heads and brains, and that men tend to be heavier and taller than women). And this is despite the fact that there is no difference in brain size at birth – the sex difference in brain volume seems to begin to develop around age two. (Side note: there is no difference in brain volume between male and female cats.)

But is this difference in brain volume a lot? There’s substantial variation in brain volume between individuals, both within and between the sexes. What does ~130cc mean in the context of this variation? One way of thinking about it is in terms of standardised effect size, which expresses the difference between two population averages in standard units, based on the variation within those populations.

Here’s a good example – we all know that men are taller than women. Not all men are taller than all women, but men tend to be taller. With an effect size, we can express this vague idea of ‘tend to be’ precisely. The effect size (Cohen’s d) of the height difference between men and women is ~1.72.
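For anyone who wants to see the mechanics, here’s a minimal sketch of how Cohen’s d is computed: the difference between the two group means divided by their pooled standard deviation. The means and SD in the simulation are made-up round numbers, chosen only so the result lands near d ≈ 1.7; they are not real population statistics.

```python
import numpy as np

# Simulated heights (cm). These means and SD are illustrative assumptions,
# picked so the result comes out close to the d ~= 1.72 quoted above.
rng = np.random.default_rng(0)
men = rng.normal(loc=174, scale=7, size=100_000)
women = rng.normal(loc=162, scale=7, size=100_000)

def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(f"Cohen's d for the simulated heights: {cohens_d(men, women):.2f}")  # ~1.71
```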

What this means is that the distribution of heights in the two populations can be visualised like this:

[Figure: overlapping distributions of male and female heights]

With this spread of heights, the average man is taller than 95.7% of women.

Estimates of the effect size of total brain volume vary, but a reasonable value is ~1.3, which looks like this:

[Figure: overlapping distributions of male and female brain volumes]

This means that the average man has a larger brain, by volume, than 90% of the female population.

For reference, psychology experiments typically look at phenomena with effect sizes of the order of ~0.4, which looks like this:

[Figure: overlapping distributions of groups A and B with d = 0.4]

This means that the average of group A exceeds 65.5% of group B.
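(If you want to check those percentages yourself, here’s a quick sketch. Assuming both groups are normally distributed with equal spread, the proportion of group B falling below the average member of group A is simply the standard normal CDF evaluated at d.)

```python
from scipy.stats import norm

# Proportion of group B below the average member of group A, assuming two
# equal-variance normal distributions whose means sit d standard deviations apart.
for d in (1.72, 1.3, 0.4):
    print(f"d = {d:4}: average of A exceeds {norm.cdf(d):.1%} of B")
# d = 1.72: average of A exceeds 95.7% of B
# d =  1.3: average of A exceeds 90.3% of B
# d =  0.4: average of A exceeds 65.5% of B
```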

In this context, human sexual dimorphism in brain volume is an extremely large effect.

So when they ask “Are men and women’s brains different?”, you can unhesitatingly say, “yes”. And when they ask “And what does that mean for differences in how they think?”, you can say “Ah, now that’s a different issue”.

Link: meta-analysis of male-female differences in brain structure

Kristoffer Magnusson’s awesome interactive effect size visualisation

Previously: gendered brain blogging

Edit 8/2/17: Andy Fugard pointed out that there are many different measures of effect size, and I only discuss/use one: the Cohen’s d effect size. I’ve edited the text to make this clearer.

Edit 2 (8/2/17): Kevin Mitchell points out this paper that claims sex differences in brain size are already apparent in neonates

How to overcome bias

How do you persuade somebody of the facts? Asking them to be fair, impartial and unbiased is not enough. To explain why, psychologist Tom Stafford analyses a classic scientific study.

One of the tricks our mind plays is to highlight evidence which confirms what we already believe. If we hear gossip about a rival we tend to think “I knew he was a nasty piece of work”; if we hear the same about our best friend we’re more likely to say “that’s just a rumour”. If you don’t trust the government then a change of policy is evidence of their weakness; if you do trust them the same change of policy can be evidence of their inherent reasonableness.

Once you learn about this mental habit – called confirmation bias – you start seeing it everywhere.

This matters when we want to make better decisions. Confirmation bias is OK as long as we’re right, but all too often we’re wrong, and we only pay attention to the deciding evidence when it’s too late.

How we should protect our decisions from confirmation bias depends on why, psychologically, confirmation bias happens. There are, broadly, two possible accounts, and a classic experiment from researchers at Princeton University pits the two against each other, revealing in the process a method for overcoming bias.

The first theory of confirmation bias is the most common. It’s the one you can detect in expressions like “You just believe what you want to believe”, or “He would say that, wouldn’t he?”, or when someone is accused of seeing things a particular way because of who they are, what their job is or which friends they have. Let’s call this the motivational theory of confirmation bias. It has a clear prescription for correcting the bias: change people’s motivations and they’ll stop being biased.

The alternative theory of confirmation bias is more subtle. The bias doesn’t exist because we only believe what we want to believe, but instead because we fail to ask the correct questions about new information and our own beliefs. This is a less neat theory, because there could be one hundred reasons why we reason incorrectly – everything from limitations of memory to inherent faults of logic. One possibility is that we simply have a blindspot in our imagination for the ways the world could be different from how we first assume it is. Under this account the way to correct confirmation bias is to give people a strategy to adjust their thinking. We assume people are already motivated to find out the truth, they just need a better method. Let’s call this the cognition theory of confirmation bias.

Thirty years ago, Charles Lord and colleagues published a classic experiment which pitted these two methods against each other. Their study used a persuasion experiment which previously had shown a kind of confirmation bias they called ‘biased assimilation’. Here, participants were recruited who had strong pro- or anti-death penalty views and were presented with evidence that seemed to support the continuation or abolition of the death penalty. Obviously, depending on what you already believe, this evidence is either confirmatory or disconfirmatory. Their original finding showed that the nature of the evidence didn’t matter as much as what people started out believing. Confirmatory evidence strengthened people’s views, as you’d expect, but so did disconfirmatory evidence. That’s right, anti-death penalty people became more anti-death penalty when shown pro-death penalty evidence (and vice versa). A clear example of biased reasoning.

For their follow-up study, Lord and colleagues re-ran the biased assimilation experiment, but testing two types of instructions for assimilating evidence about the effectiveness of the death penalty as a deterrent for murder. The motivational instructions told participants to be “as objective and unbiased as possible”, to consider themselves “as a judge or juror asked to weigh all of the evidence in a fair and impartial manner”. The alternative, cognition-focused, instructions were silent on the desired outcome of the participants’ consideration, instead focusing only on the strategy to employ: “Ask yourself at each step whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue.” So, for example, if presented with a piece of research that suggested the death penalty lowered murder rates, the participants were asked to analyse the study’s methodology and imagine the results pointed the opposite way.

They called this the “consider the opposite” strategy, and the results were striking. Instructed to be fair and impartial, participants showed the exact same biases when weighing the evidence as in the original experiment. Pro-death penalty participants thought the evidence supported the death penalty. Anti-death penalty participants thought it supported abolition. Wanting to make unbiased decisions wasn’t enough. The “consider the opposite” participants, on the other hand, completely overcame the biased assimilation effect – they weren’t driven to rate the studies which agreed with their preconceptions as better than the ones that disagreed, and didn’t become more extreme in their views regardless of which evidence they read.

The finding is good news for our faith in human nature. It isn’t that we don’t want to discover the truth, at least in the microcosm of reasoning tested in the experiment. All people needed was a strategy which helped them overcome the natural human short-sightedness to alternatives.

The moral for making better decisions is clear: wanting to be fair and objective alone isn’t enough. What’s needed are practical methods for correcting our limited reasoning – and a major limitation is our imagination for how else things might be. If we’re lucky, someone else will point out these alternatives, but if we’re on our own we can still take advantage of crutches for the mind like the “consider the opposite” strategy.

This is my BBC Future column from last week. You can read the original here. My ebook For argument’s sake: Evidence that reason can change minds is out now.

Can boy monkeys throw?

[Image: white-fronted capuchin monkey (Cebus albifrons)]

Aimed throwing is a gendered activity – men are typically better at it than women (by about 1 standard deviation, some studies claim). Obviously this could be due to differential practice, which is in turn due to cultural bias in what men vs women are expected to be good at and enjoy (some say “not so” to this practice-effect explanation).

Monkeys are interesting because they are close evolutionary relatives, but don’t have human gender expectations. So we note with interest this 2000 study which claims no difference in throwing accuracy between male and female Capuchin monkeys. In fact, the female monkeys were (non-significantly) more accurate than the males (perhaps due to throwing as part of Capuchin female sexual displays?).

Elsewhere, a review of cross-species gender differences in spatial ability finds “most of the hypotheses [that male mammals have better spatial ability than females] are either logically flawed or, as yet, have no substantial support. Few of the data exclusively support or exclude any current hypotheses”.

Chimps are closer relatives to humans than monkeys, but although there is a literature on gendered differences in object use/preference among chimps, I couldn’t immediately find anything on gendered differences in throwing among chimps. Possibly because few scientists want to get near a chimp when it is flinging sh*t around.

Cite: Westergaard, G. C., Liv, C., Haynie, M. K., & Suomi, S. J. (2000). A comparative study of aimed throwing by monkeys and humans. Neuropsychologia, 38(11), 1511-1517.

Previously: gendered brain blogging

Gender brain blogging

I’ve started teaching a graduate seminar on the cognitive neuroscience of sex differences. The ambition is to carry out a collective close reading of Cordelia Fine’s “Delusions of Gender: The Real Science Behind Sex Differences” (US subtitle: “How Our Minds, Society, and Neurosexism Create Difference”). Week by week the class is going to extract the arguments and check the references from each chapter of Fine’s book.

I mention this to explain why there is likely to be an increase in the number of gender-themed posts by me to mindhacks.com.

Here’s Fine summarising her argument in the introduction to the 2010 book:

There are sex differences in the brain. There are also large […] sex differences in who does what and who achieves what. It would make sense if these facts were connected in some way, and perhaps they are. But when we follow the trail of contemporary science we discover a surprising number of gaps, assumptions, inconsistencies, poor methodologies and leaps of faith.

This is a book about how science works, and how it is made to work, as much as it is a book about gender. It’s the Bad Science of cognitive neuroscience. Essential.

The troubled friendship of Tversky and Kahneman

Daniel Kahneman, by Pat Kinsella for the Chronicle Review (detail)

Writer Michael Lewis’s new book, “The Undoing Project: A Friendship That Changed Our Minds”, is about two of the most important figures in modern psychology, Amos Tversky and Daniel Kahneman.

In this extract for the Chronicle of Higher Education, Lewis describes the emotional tension between the pair towards the end of their collaboration. It’s a compelling ‘behind the scenes’ view of the human side to the foundational work of the heuristics and biases programme in psychology, as well as being brilliantly illustrated by Pat Kinsella.

One detail that caught my eye is this response by Amos Tversky to a critique of the work he did with Kahneman. As well as being something I’ve wanted to write myself on occasion, it illustrates the forthrightness which made Tversky a productive and difficult colleague:

the objections you raised against our experimental method are simply unsupported. In essence, you engage in the practice of criticizing a procedural departure without showing how the departure might account for the results obtained. You do not present either contradictory data or a plausible alternative interpretation of our findings. Instead, you express a strong bias against our method of data collection and in favor of yours. This position is certainly understandable, yet it is hardly convincing.

Link: A Bitter Ending: Daniel Kahneman, Amos Tversky, and the limits of collaboration

Annette Karmiloff-Smith has left the building

The brilliant developmental neuropsychologist Annette Karmiloff-Smith has passed away, and one of the brightest lights in the psychology of children’s development has been dimmed.

She actually started her professional life as a simultaneous interpreter for the UN and then went on to study psychology and trained with Jean Piaget.

Karmiloff-Smith went into neuropsychology and started rethinking some of the assumptions about how cognition is organised in the brain – assumptions which, until then, had been based almost entirely on studies of adults with brain injury.

These studies showed that some mental abilities could be independently impaired after brain damage suggesting that there was a degree of ‘modularity’ in the organisation of cognitive functions.

But Karmiloff-Smith investigated children with developmental disorders, like autism or Williams syndrome, and showed that what seemed to be the ‘natural’ organisation of the brain in adults was actually a result of development itself – an approach she called neuroconstructivism.

In other words, developmental disorders were not ‘knocking out’ specific abilities but affecting the dynamics of neurodevelopment as the child interacted with the world.

If you want to hear more about Karmiloff-Smith’s life and work, her interview on BBC Radio 4’s The Life Scientific is well worth a listen.

Link to page of remembrance for Annette Karmiloff-Smith.