Brain Watch

Is It Really True That Watching Porn Will Shrink Your Brain?

A hundred years ago they said that masturbating would make you go blind. We’ve progressed. Today, we’re told that watching moderate amounts of pornography will shrink your brain. The claim arrives courtesy of a brain imaging paper published last month in JAMA Psychiatry, a respected medical journal.

Among the global hyperbolic headlines that followed, my favourite was from a German site: “Pea brain: watching porn online will wear out your brain and make it shrivel.” Others included “Viewing porn shrinks the brain” (from the reliably untrustworthy Daily Mail) and “Watching Porn Linked To Less Gray Matter In The Brain” (from Huffington Post).

The study that triggered all this concern was published by a German pair: Simone Kühn, a psychologist, and Jürgen Gallinat, a psychiatrist. They scanned the brains of 64 healthy men (average age 29) in three ways. Note the word healthy. In fact, all the men who participated were free from any psychiatric or neurological disorders. So if they had shrunken brains (we’ll come onto that later), it wasn’t causing them any major problems.

The first scan was a simple structural brain scan. The second looked at patterns of brain activation when the men viewed sexual or neutral images. The third scan looked at brain activity while the men relaxed in the scanner for five minutes (a so-called resting-state scan). The men also answered questions about how much porn they watch. They averaged four hours per week, and none of them met the criteria for Internet sex addiction according to the “Internet Sex Screening Test”.

Here’s what’s caused all the fuss. The researchers found that the number of hours spent watching porn was negatively correlated with the amount of grey matter in a subcortical region near the front of the brain – the right striatum – that’s known to be involved in the processing of reward (as well as lots of other things). In other words, men who said they spent more time watching porn tended to have a smaller amount of grey matter in this part of their brain. Also, the more avid porn viewers showed less activation in their left striatum when they looked at racy images, and they appeared to have reduced connectivity between their right striatum and their left dorsolateral prefrontal cortex.

So, does watching porn shrink your brain? The researchers think it probably does. “One may be tempted,” they wrote, “to assume that the frequent brain activation caused by pornography exposure might lead to wearing and down regulation of the underlying brain structure, as well as function …”.

One may be tempted, but one should really know better. The most glaringly obvious problem with this study is of course its cross-sectional methodology. It’s just as likely that men with less grey matter in their striatum are more attracted to porn, as opposed to porn causing that brain profile. The researchers know this. “It’s not clear … whether watching porn leads to brain changes or whether people born with certain brain types watch more porn,” Kühn told The Daily Telegraph (and yet that paper still ran the headline: “Watching pornography damages men’s brains”).

A further problem with correlational studies is not just that the causal direction can run either way, but that an unknown or uncontrolled third factor (or several) could be causally involved. In the case of this study, the elephant in the room is personality. Unsurprisingly, personality is linked with media use (including porn consumption) and with brain characteristics. A man’s answer to how much porn he watches is also a crude indicator of his extraversion, (lower) conscientiousness and appetite for sensation seeking. For instance, men who watch porn in work hours tend to be less conscientious and more impulsive. Last year, a study reported: “Neuroticism, agreeableness, conscientiousness, and obsessional checking all significantly correlated with a latent measure of compulsive behavior upon which use of Internet pornography use also loaded.”
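To see how a hidden third factor can manufacture exactly this kind of correlation, here is a toy simulation in Python. Every number in it is invented purely for illustration: a single hypothetical “sensation seeking” trait drives both weekly porn hours and a grey-matter proxy, with no direct link between the two, yet the two measures still come out strongly negatively correlated.

```python
import random

random.seed(42)

# Toy model (all figures invented): one hidden trait drives both variables,
# and there is NO direct causal path from porn hours to grey matter.
n = 10_000
hours, volume = [], []
for _ in range(n):
    sensation_seeking = random.gauss(0, 1)                 # hidden trait
    h = 4 + 2 * sensation_seeking + random.gauss(0, 1)     # porn hours/week
    v = 100 - 3 * sensation_seeking + random.gauss(0, 2)   # grey-matter proxy
    hours.append(h)
    volume.append(v)

def pearson(x, y):
    """Pearson correlation, computed by hand to stay dependency-free."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(hours, volume)
print(round(r, 2))  # strongly negative, despite zero direct causation
```

A cross-sectional scan of these simulated men would “find” that heavier porn use goes with less grey matter, even though, by construction, the one has no effect on the other.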

Amazingly, although Kühn and Gallinat checked their participants were free from depression and addiction, they otherwise failed to measure their participants’ personality traits. Had they done so, they would likely have found strong associations between personality and brain structure and function. Past research has already shown that high sensation seekers have reduced sensitivity to high arousal pictures (including nudity and gore). Other research has documented differences in resting-state brain activity according to personality. Still further research has shown how extraverts, and those more open to experience, are more persuaded by advertising that uses sexual imagery.

Because the researchers failed to measure or control for personality, the results of this study are virtually meaningless. The men’s self-reported time spent watching porn is little more than a rough proxy for their personality profile, including their willingness to divulge details about their private habits. And we already know that key personality traits such as extraversion and sensation seeking are linked with distinct patterns of brain structure and response. Because the researchers also failed to follow up participants over time, the study provides no evidence that watching porn has any effects whatsoever. Moreover, because they neglected to measure any other media consumption, even if before/after evidence were available, we wouldn’t know whether any changes were due to porn consumption or to other media activities correlated with that porn use, such as watching violent movies and online gambling (to be fair, the findings did still hold after the researchers controlled for overall levels of internet use).

The researchers have witnessed newspapers spread headlines of brain shrinkage and brain harm, and yet they know that they specifically recruited psychologically and neurologically healthy men. In fact, therein lies the only really meaningful insight from this study. Look at it this way. In a survey of 64 men who answered recruitment adverts for a brain scanning study, it was found that they viewed an average of four hours of porn a week. They did so with no apparent ill consequence – screening confirmed no psychiatric, medical or neurological problems. Of course there is a debate to be had about the merits and harms of porn for individuals and society. This study does not make a helpful contribution. Suggested new headline: “Watching moderate amounts of porn won’t hurt your brain”.

What Happens If You Apply Electricity to the Brain of a Corpse?

A model of a human brain. Image: Science Museum, London/ Wellcome Images/CC

Some habits die hard. Like humans zapping their brains. We did this back in Ancient Greece, when medics used electric fish to treat headaches and other ailments. Today we’re still at it, as neuroscientists apply electric currents to people’s brains to boost their mental function, treat depression, or give them lucid dreams.

Subjecting the brain to external electricity has an influence on mental function because our neurons communicate with each other using electricity and chemicals. This has become relatively common knowledge today, but only two centuries ago scientists were still quite baffled by the mystery of nerve communication.

Isaac Newton and others suggested that our nerves communicate with each other, and with the muscles, via vibrations. Another suggestion of the time was that the nerves emit some kind of fluid. Most opaque, and still popular, was the idea – first mooted in ancient times – that the brain and nerves are filled with mysterious “animal spirits”.

“Animal electricity”

During the eighteenth century our understanding of electricity was growing apace, and the use of electricity to treat a range of physical and mental ailments, known as electrotherapy, was incredibly popular. But still it wasn’t obvious to scientists at the time that the human nervous system produces its own electric charge, and that the nerves communicate using electricity.

Among the first scientists to make this proposal was the Italian physician Luigi Galvani (1737-1798). Most of Galvani’s experiments were with frogs’ legs and nerves, and he was able to show that lightning or man-made electrical machines could cause the frogs’ muscles to twitch. He subsequently came up with the idea of “animal electricity” – that animals, humans included, have their own intrinsic electricity.

“I believe it has been sufficiently well established that there is present in animals an electricity which we … are wont to designate with the general term ‘animal’ … “ he wrote. “It is seen most clearly … in the muscles and nerves.”

Neuroscience’s macabre past

However, to Galvani’s frustration, he failed to show that zapping the brain had an effect on the facial or peripheral muscles. Here, he was helped in dramatic, macabre fashion by his nephew Giovanni Aldini (1762-1834).

In 1802, Aldini zapped the brain of a decapitated criminal by placing a metal wire into each ear and then flicking the switch on the attached rudimentary battery. “I initially observed strong contractions in all the muscles of the face, which were contorted so irregularly that they imitated the most hideous grimaces,” he wrote in his notes. “The action of the eyelids was particularly marked, though less striking in the human head than in that of the ox.”

During this era, there was fierce scientific debate about the role of electricity in human and animal nervous systems. Galvani’s influential rival, Alessandro Volta, for one, did not believe in the notion that animals produce their own electricity. In this context, the rival camps engaged in public relations exercises to promote their own views. This played to Aldini’s strengths. Something of a showman, he took his macabre experiments on tour. In 1803, he performed a sensational public demonstration at the Royal College of Surgeons, London, using the dead body of Thomas Forster, a murderer recently executed by hanging at Newgate. Aldini inserted conducting rods into the deceased man’s mouth, ear, and anus.

One member of the large audience later observed: “On the first application of the process to the face, the jaw of the deceased criminal began to quiver, the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process, the right hand was raised and clenched, and the legs and thighs were set in motion. It appeared to the uninformed part of the bystanders as if the wretched man was on the eve of being restored to life.”

Although Frankenstein author Mary Shelley was only five when this widely reported demonstration was performed, it’s obvious that she was inspired by contemporary scientific debates about electricity and the human body. Indeed, publication of her novel coincided with another dramatic public demonstration performed in 1818 in Glasgow by Andrew Ure, in which application of electric current to a corpse appeared to cause it to resume heavy breathing, and even to point its fingers at the audience.

Death is a process

If a body is dead, how come its nerves are still responsive to external electric charge? In 1818, one popular but mistaken suggestion was that electricity is the life force, and that the application of electricity to the dead could literally bring them back to life. Indeed, so disturbed were many members of the audience at Ure’s demonstration that they had to leave the building. One man reportedly fainted. Modern scientific understanding of the way nerves communicate undermines such supernatural interpretations, but you can imagine that witnessing such a spectacle as performed by Ure or Aldini would even today be extremely unnerving (excuse the pun). A pithy explanation of why electricity appears to animate a dead body comes courtesy of Frances Ashcroft’s wonderful book The Spark of Life:

“The cells of the body do not die when an animal (or person) breathes its last breath, which is why it is possible to transplant organs from one individual to another, and why blood transfusions work,” she writes. “Unless it is blown to smithereens, the death of a multicellular organism is rarely an instantaneous event, but instead a gradual closing down, an extinction by stages. Nerve and muscle cells continue to retain their hold on life for some time after the individual is dead and thus can be ‘animated’ by application of electricity.” [see also "What is Brain Death?"]

The grisly experiments of Aldini and Ure seem distasteful by today’s standards, but they were historically important, stimulating the imagination of novelists and scientists alike.

Sources:
Minds Behind the Brain: A History of Pioneers and Their Discoveries by Stanley Finger.
The Spark of Life by Frances Ashcroft.
Galvanic Cultures: Electricity and Life in the Early Nineteenth Century by Iwan Rhys Morus.

Psychologists Give People Control of Their Dreams Using Brain Stimulation. Really?

Image: Old Visuals Everett Collection/Getty

My dreams are often like a bad TV night – full of repeats that I’ve slept through many times before. Other people are luckier. Their dreams are more like a movie experience, but one where they not only get to choose the film, they can also take directorial control and influence the course of events. This is known as “lucid dreaming” and is considered a halfway house between sleep and wakefulness.

In a study out this week, a team of psychologists led by Ursula Voss at the J.W. Goethe University in Frankfurt claim to have given non-lucid dreamers the power of lucid dreaming by applying weak electrical current to the surface of their scalps and into their brains.

The rationale behind the study is simple. Past research has associated lucid dreaming with electrical brain activity in the low gamma range – around 40Hz. Voss and colleagues therefore used transcranial alternating current stimulation (tACS) to promote gamma activity in frontal and temporal regions of their participants’ brains, in the hope that this would provoke lucid dreaming (tACS is similar to tDCS, which I’ve written about on this blog before).

I have to admit this reasoning tickled my BS-detector a little. Neurobunk research conducted in the 1960s made the mistake of assuming that because experienced meditators exhibit brain activity in the alpha range (around 10Hz), then teaching people to express alpha brainwaves would give them a shortcut to the peace and enlightenment associated with years of meditative practice. It was an elementary case of confusing correlation for causality and results were disappointing.

Despite my initial skepticism, it turns out that, aside from a small sample, this new dream research is well conducted. Voss and her team tested 27 healthy volunteers (15 women, 12 men, none of whom usually have lucid dreams) on four successive nights. Each night, the participants were zapped with electricity in a different frequency range or – and it’s important they included this condition – with no electricity at all (known as a “sham” treatment). The stimulation was delivered after between two and three minutes of uninterrupted REM sleep. Shortly afterwards the participants were woken and they answered questions about the dream they’d just had.

The main result is that stimulation specifically delivered in the low gamma range, at 40Hz, and to a lesser extent at 25Hz, was associated with a greater experience of lucid dreaming, as compared to stimulation at other higher and lower frequencies or to sham treatment. “Our experiment is, to the best of our knowledge, the first to demonstrate altered conscious awareness as a direct consequence of induced gamma-band oscillations during sleep,” the researchers concluded. Excitable headlines have followed, such as “Brain Zap Could Help You Control Your Dreams” and “Having Nightmares? Control Your Dreams With Electric Currents”.

Despite the robust methodology, I think these headlines are getting carried away. Here’s why. Lucid dreaming was defined by higher scores in participants’ feelings of insight (knowing that they were dreaming); dissociation (taking a third person perspective); and control (being able to shape events). I looked up the paper where the researchers first described their scale for measuring these factors. If I understand correctly, the participants rated their experience of these three factors on a scale of 0 (strongly disagree that I had such an experience) to 5 (strongly agree). Now if we look to see the scores they gave for how much dream insight, dissociation and control they had, we find that the averages for the gamma stimulation condition are around 0.6, 1.3, and 0.5 respectively.

Yes, these scores are significantly higher compared with stimulation at other frequencies and with sham treatment, but they are nonetheless incredibly low. A real life creation of the dream control depicted in the movie Inception, this is not! I suppose this study is a proof of principle, so let’s wait and see what comes from future research.

But actually one more thing – these kinds of studies that examine the impact of brain stimulation seem so crude. Do the researchers really know what neural effect the stimulation is having and why? I don’t think they do – the explanation in this paper is typically sketchy. “We assume that lower gamma activity is mediated by activation of fast-spiking interneurons that are known to generate gamma oscillations in cortical networks … These networks have been proposed to gate sensory processing, which might also enable lucid dreaming in a temporarily specific manner.” Got that? No, me neither.

Does This Brain Research Prove That Humiliation Is the Most Intense Human Emotion?

Image: Tim Sheerman-Chase/Flickr

I must have been about seven years old, a junior in my prep school. I was standing in the dining hall surrounded by over a hundred senior boys and schoolmasters, all looking at me, some with pity, others with disdain.

It was unheard of for a junior boy to be present in the dining room by the time the seniors had filed in. “What on earth do you think you’re doing Jarrett?” asked the headmaster with mock outrage. I was there because, by refusing to finish my rhubarb crumble, I’d broken a cardinal school rule. All pupils were to eat all they were given. But after vomiting up some of my rhubarb – a flesh-like fruit that still disgusts me to this day – I simply refused to eat on. Keeping me behind in the dining room as the seniors arrived was my punishment. I wanted to explain this to the assembled crowd. Yet speech completely failed me and I began to sob openly and uncontrollably, my humiliation sealed.

This was an intense emotional experience for me, and as you can probably tell, the memory remains sore to this day. But is humiliation any more intense than the other negative emotions, such as anger or shame? If it were, how would psychologists and neuroscientists demonstrate that this was the case?

You might imagine that the most effective method would be to ask people to rate and describe different emotional experiences – after all, to say that an emotion is intense is really to say something about how it feels, and how it affects you.

Yet in a paper published earlier this year, a pair of psychologists – Marte Otten and Kai Jonas – have taken a different approach. Inspired by claims that humiliation is an unusually intense emotion, responsible even for war and strife in the world, the researchers have turned to brain-based evidence. They claim to have provided the “first empirical, neurocognitive evidence for long-standing claims in the humiliation literature that humiliation is a particularly intense emotion.”

The researchers conducted two studies in which dozens of male and female participants read short stories involving different emotions, and had to imagine how they’d feel in the described scenarios. The first study compared humiliation (e.g. your internet date takes one look at you and walks out), anger (e.g. your roommate has a party and wrecks the room while you’re away) and happiness (e.g. you find out a person you fancy likes you). The second study compared humiliation with anger and shame (e.g. you said some harsh words to your mother and she cried).

Throughout, the researchers used EEG (electroencephalography) to record the surface electrical activity of their participants’ brains. They were interested in two measures in particular: a large positive spike (known as the “late positive potential” or LPP), and evidence of “event-related desynchronization,” which is a marker of reduced activity in the alpha range. Both these measures are signs of greater cognitive processing and cortical activation.

The take-home result was that imagining being humiliated led to larger LPPs and more event-related desynchronization than the other emotions. This means, Otten and Jonas said, that humiliation, more than the other emotions they studied, leads to a mobilization of more processing power and a greater consumption of mental resources. “This supports the idea that humiliation is a particularly intense and cognitively demanding negative emotional experience that has far-reaching consequences for individuals and groups alike,” they concluded.

I’ve written with skepticism on this blog before about the current (in)ability for neuroscience to add greatly to our understanding of psychological processes. I feel the same way about this paper. For instance, if you look at one of the main brain-based measures used in this study – the LPP (which was higher when imagining humiliation) – the researchers admitted that it really remains unknown what mental processes underlie this marker. The brain seems to be doing more when you’re feeling humiliated, they’re effectively saying, but we don’t really know what. One possibility, which in fairness they acknowledge, is that humiliation requires more mental processing, not because it’s so intense, but because it’s a complex social emotion that involves monitoring loss of social status.

I’m unconvinced this study provides meaningful evidence for the unique intensity of humiliation. It provides a crude neural correlate of people imagining feeling the emotion. But surely the proof of humiliation’s intensity is in the subjective feeling of it, in the personal and public stories we share. Why does this need persist, to find visible markers in the brain for something that we already knew from the inside?

Neuroscientists Conduct the Most Frustrating Brain Scanning Study Ever

Image: madstreetz/Flickr

Missing a bus is so much more frustrating if you’re late by a fraction of a second than by half an hour, even though the outcome is the same in both cases. It’s most frustrating of all if you first run all the way to the bus and then only just miss it by a whisker. In the scientific jargon these two factors are “proximity” and “effort expended”, and being thwarted in the context of both of them is enough to unleash your inner HULK.

Now, for the first time, neuroscientists have looked at what’s happening in your brain when this kind of extreme frustration fills you with rage (or minor annoyance, at least). They think they may have found the HULK brain circuit – OK, those are my words not theirs, but that’s basically what they’re saying.

To research this topic, Rongjun Yu and his colleagues must have had great fun devising an incredibly frustrating task for their participants. I was hoping they might have introduced an itch in the middle of their participants’ backs, or taunted them with chocolates tied to a piece of string – SNAP, “no you can’t have it!”

Unfortunately it was a lot drier than that. Participants had to click the correct key, within a limited time window, according to whether arrows on a screen pointed left or right. If they succeeded at this enough times, they would win two pounds (about $3.30). A progress bar on screen showed them how many more trials they had to complete to win their reward (the most was four). Participants had several attempts to win two pounds, but the whole thing was actually fixed so that much of the time they were thwarted at various stages of proximity to the reward, and after expending various degrees of effort.

The set-up worked, in the sense that both proximity and expended effort led participants to report feeling more frustrated, and to show more frustration. The display of frustration was measured by how hard participants pressed a key to confirm whether they’d won the money or failed. After getting really close, but missing out, or after completing more trials and then missing out, the participants pressed the confirmation key harder. There’s no mention of any participants hurling their response pads onto the ground, which is disappointing.

Next Yu and his team repeated the whole exercise with more participants and scanned their brains at the same time. Again, proximity and effort expended both increased the frustration caused by failing to win. Both these factors led to increased activity in the same circuit of brain structures, including the left amygdala, left midbrain PAG (periaqueductal gray), left anterior insula, and dorsal anterior cingulate cortex.

What does all that mean? Well, the researchers point out that this spread of activity is very similar to a “rage circuit” that’s been identified in rats. According to Rongjun Yu and his team, the evolutionary purpose of this brain response following frustration might be to “transfer unfulfilled motivation into subsequent behavioral vigour to overcome goal-blocking obstacles.” In plain English, being thwarted in tantalizing fashion gives you a jolt of energy so you try harder to get what you want.

But here’s why this is possibly the most frustrating brain scanning study ever – not only because it’s about frustration, but also because we haven’t learned much. The network of brain regions activated by frustration was relatively large and varied, and includes structures previously associated with a range of emotional functions. It would have been pretty amazing if these structures hadn’t shown some kind of response when participants were frustrated. The only really surprising result was that both greater “proximity” and “effort expended” led to the exact same activation patterns (although it remains possible that more detailed/powerful analysis in the future will uncover differences). I was left wondering how much the results would have differed if the researchers had looked at rising anger, rather than rising frustration – what exactly is the distinction?

The authors admit in their discussion of the study that many questions remain unanswered about the psychology of frustration, including: why do proximity and effort expended lead us to feel more angry and explosive? Yu and his team make a few suggestions, including counterfactual thinking (i.e. getting so close probably encourages us to imagine “what if” we had succeeded), and there’s also probably a progressive build up of “anticipated regret” – fear of failing – the closer we get to a reward. Those are sound suggestions, but it’s a frustrating shame the brain scanning doesn’t give us any extra insight. HULK says do more psychology research next time, only then scan brain.

What Is the Sexiest Part of the Brain?

Shaped like a sea horse, the hippocampus is the height of neural elegance. Image: Andreas März / Flickr

Imagine a beauty contest for brain parts. We’d all feel a little awkward when the cerebellum made her entrance. Rotund and cauliflower-like in appearance, she can barely climb onto the stage. The hippocampus, by contrast, is the height of neural elegance – her name comes from the Greek for seahorse. But wait a minute, look at the horns on the basal ganglia. He not only looks macho but of course he’s a great mover too (the basal ganglia are involved in the control of movement – see what I did there?).

Perhaps this isn’t the best way to judge the sexiest brain part. What about looking at the popularity of different brain areas in the research literature? That’s exactly what a team led by Timothy Behrens at the University of Oxford did last year. Using the BrainMap database of 7342 brain imaging studies published between 1985 and 2008, they looked to see if papers that reported activation in certain brain areas were more likely to be published in higher impact (more prestigious) journals.

Behrens’ team found that studies featuring the fusiform gyrus, thought to be involved in face-processing, tended to appear in swanky journals; as did papers reporting activity in the rostral medial prefrontal cortex, the anterior insular cortex, the anterior cingulate gyrus, and the amygdala. Everyone knows that the amygdalae look like a pair of old nuts, so it’s probably not their visual appeal that makes them popular but the fact that, like all these other high-status brain parts, they have a role in emotion.

A look at the key words associated with publication in top journals supports this interpretation – Face, Fear, Emotion, and Cognition were some of the hottest terms. It seems academics like brain areas that are in touch with their feelings. The anterior cingulate, for one, is taking advantage of this popularity. As Sally Satel and Scott Lilienfeld write in their 2013 book Brainwashed, “the anterior cingulate cortex is one of the most promiscuously excitable structures in the brain.” What a naughty neural structure.

Faring less well in the grey matter popularity contest is the secondary somatosensory area. It leads “the way in ignominy” was how Behrens and his colleagues put it. “But the supplementary motor area was almost equally disgraced,” they added. Harsh. What’s unappealing about these brain parts? Looking at the key terms associated with publication in less prestigious journals, perhaps it’s the fact these areas are involved in touch and movement. So it seems brain researchers are more attracted to emotional brain parts than those with a bit of dynamism. How unadventurous.

Of course the world of academic neuroscience is just one cultural niche. A brain part considered dowdy by brain researchers may well have popular appeal among the general public and vice versa. To test this possibility, I performed a highly scientific investigation on Twitter. I looked to see how many mentions the promiscuous anterior cingulate cortex (ACC) received recently and compared its profile with a couple of other brain areas.

In fact the ACC has been mentioned just twice since yesterday. “Anterior cingulate cortex and right pre-frontal cortex & ventral striatum are two main regions of our brain related to our exploration,” tweeted @wpaek3, perfectly managing to contain his/her enthusiasm. And look what else I found. “…the cingulate cortex, particularly the anterior cingulate cortex, was the hottest thing on the block, it was so trendy u couldn’t believe” tweeted @heyshakera yesterday, but then she adds, “now it’s out of fashion and nobody talks about it anymore it’s very sad”. Pass me the tissues, this is like a neuronal soap opera.

So what brain areas might have more popular appeal for the twittering public? Forgive my childishness, I had to search for the mammillary bodies, which play a role in memory. “my neuro lecturer referred to the mamillary bodies in the brain as ‘small boob’ and then laughed at his own joke,” tweeted @oh_pies recently. Oh dear. OK, what about the poor cerebellum, who I was so rude about at the start? Wow, masses of twitter activity about her. “I don’t care what’s on your mind I just want your cerebellum!!” tweeted @Nathennavarro. Now I’m seeing this brain region in a sexier light. In fact, thanks to @Spspoke I also learned that the cerebellum is now starring in her own adverts. This one for Hendrick’s Gin has the strapline: “May your cucumbers be pickled but not your cerebellum.”

There’s a lesson here. She may not have the physique of a supermodel. She may not be one of the frontal, emotional brain areas courted by brain imagers, but it’s the spongy old cerebellum hanging off the back of the brain who’s got her own advertising deal. You know, maybe the phrenologists were onto something. Guess what they considered the function of the cerebellum to be? They called it the centre for amativeness – otherwise known as sexual desire. A more pronounced cerebellum in man or beast was taken as an indicator of strong libido. “When the cerebellum is really large …” wrote George Combe in 1853, “the individual becomes distinguished from his fellows by the predominance of his amorous propensities.” That seals it. Forget the brain imagers. Based on my in-depth Twitter investigation combined with this historical perspective, I declare the cerebellum the sexiest part of the brain.

Homepage image: Dr A. Irving / University of Dundee / Wellcome Images

The Neuroscience of Decision Making Explained in 30 Seconds

Is it possible to explain the neuroscience of decision making in 30 seconds? I had a go as one of my contributions to a new book called 30-Second Brain that’s released in the USA today. Here’s what I wrote:

From Plato’s charioteer controlling the horse of passion, to Freud’s instinctual id suppressed by the ego, there’s a long tradition of seeing reason and emotion as being in opposition to one another. Translating this perspective to neuroscience, one might imagine that successful decision making depends on the rational frontal lobes controlling the animalistic instincts arising from emotional brain regions that evolved earlier (including the limbic system, found deeper in the brain). But the truth is quite different—effective decision making is not possible without the motivation and meaning provided by emotional input. Consider Antonio Damasio’s patient, “Elliott.” Previously a successful businessman, Elliott underwent neurosurgery for a tumor and lost a part of his brain—the orbitofrontal cortex—that connects the frontal lobes with the emotions. He became a real-life Mr. Spock, devoid of emotion. But rather than this making him perfectly rational, he became paralyzed by every decision in life. Damasio later developed the somatic marker hypothesis to describe how visceral emotion supports our decisions. For instance, he showed in a card game that people’s fingers sweat prior to picking up from a losing pile, even before they recognize at a conscious level that they’ve made a bad choice.

If 30 seconds is too long for you, here’s the message in 3 seconds:

Feelings provide the basis for human reason—brain-damaged patients left devoid of emotion struggle to make the most elementary decisions.

Alternatively, if you fancy digging a little deeper, here’s what the book calls my “3-minute brainstorm”:

Although we need emotions to make decisions, their input means we’re not the cold rational agents that traditional economics assumes us to be. For instance, Daniel Kahneman demonstrated with Amos Tversky that the negative emotional impact of losses is roughly twice as intense as the positive effect of gains, which affects our decision making in predictable ways. For one thing, it explains our stubborn reluctance to write off bad investments.
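To make that asymmetry concrete, here’s a minimal sketch of the Kahneman-Tversky value function; the exponent and loss-aversion parameter below are the commonly cited textbook estimates, used purely for illustration, not figures from the book:

```python
# Sketch of the prospect-theory value function (Kahneman & Tversky).
# alpha ~ 0.88 and loss-aversion lambda ~ 2.25 are textbook estimates,
# used here only to illustrate the gains/losses asymmetry.

def subjective_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain = subjective_value(100)    # felt impact of winning $100
loss = subjective_value(-100)   # felt impact of losing $100
print(abs(loss) / gain)         # losses loom roughly twice as large
```

The ratio printed at the end is exactly the loss-aversion parameter: losing $100 feels about twice as bad as winning $100 feels good.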

30-Second Brain applies this fun approach to more than 50 topics in neuroscience (my other contributions cover topics such as mirror neurons and the left-brain right-brain myth). In the publisher’s words: “30-Second Brain is here to fill your mind with the science of exactly what’s happening inside your head. Using no more than two pages, 300 words, and a single picture [for each topic], this is the quickest way to understand the wiring and function of the most complex and intricate mechanism in the human body.”

I’m thrilled to have contributed to the book alongside editor and lead author Anil Seth, and co-contributors Tristan Bekinschtein, Daniel Bor, Chris Frith, Ryota Kanai, Michael O’Shea, and Jamie Ward.

Homepage image: Alexander Boden/Flickr

How Your Season of Birth Is Etched in Your Brain

Image: dobrych / flickr

The season we’re born in can have far-reaching consequences. For instance, Spring babies are more likely than others to develop schizophrenia later in life, whereas Summer babies tend to grow up to be more sensation seeking. There are many more of these so-called season of birth effects. Scientists aren’t sure, but they think such patterns could be due (among other things) to mothers’ and infants’ exposure to viruses over the Winter period, or to the amount of daylight they’re exposed to, either or both of which could influence genetic expression during early development. Now Spiro Pantazatos, a neuroscientist at Columbia University Medical Center, has studied links between season of birth and brain structure in healthy adults. He thinks the association between season of birth and psychiatric and behavioural outcomes later in life could be mediated by genetic factors that affect the growth of the brain.

Pantazatos has analysed MRI brain scans taken from 550 healthy men and women at hospitals in London, England. In one analysis he looked to see if there were any particular areas of the brain that differed between people according to the season they were born in. He defined the seasons as follows: Winter (Dec 23 to March 19); Spring (March 22 to June 19); Summer (June 22 to September 21); and Fall (September 24 to December 20). For the men only, he found that those born in the Fall and Winter tended to have more grey matter in a region known as the left superior temporal gyrus (STG), as compared with men born in Spring and Summer. Looking month by month, men born at the end of December tended to have the most grey matter in this region; men born at the end of June tended to have the least.

It’s interesting that Pantazatos found a specific link with this brain region. The amount of grey matter in the superior temporal gyrus – a region that includes the auditory cortex – has previously been linked with schizophrenia, with patients tending to have reduced volume in this area.

“The current results suggest that developmental gene x environment interactions, possibly via perinatal photoperiod effects on circadian clock genes in the suprachiasmatic nuclei and elsewhere, influence developmental patterning of the STG, ultimately resulting in gross morphological differences of this region,” says Pantazatos.

The association between STG volume and season of birth may not apply to men only. Making multiple statistical comparisons across the brain carries the risk of discovering associations by chance, and Pantazatos was careful to control for this risk. When he used more liberal statistical tests, he also found an association between grey matter volume in the superior temporal gyrus and season of birth in women, but this time the seasonal effect was reversed: women born in the Summer had more grey matter than those born in Winter. This actually fits with other research showing that season of birth effects can be different for men and women. For example, female Winter babies tend to be less sensation seeking as adults, whereas male Winter babies grow into adults with a penchant for risk.
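To get a feel for why such corrections matter, here is a minimal sketch (the numbers are purely hypothetical, for illustration) of how many chance “findings” uncorrected tests would generate across the brain:

```python
# Why multiple comparisons matter: run enough uncorrected tests at
# p < .05 and spurious "findings" are guaranteed.
# The numbers below are hypothetical, for illustration only.

n_tests = 10_000          # e.g. voxel- or region-wise comparisons
alpha = 0.05              # conventional uncorrected threshold

# Even if NO real effect exists anywhere, we still expect this many
# comparisons to come out "significant" by chance alone:
expected_false_positives = n_tests * alpha
print(expected_false_positives)   # about 500 chance associations

# A Bonferroni correction divides alpha by the number of tests,
# making each individual test far more stringent:
bonferroni_alpha = alpha / n_tests
print(bonferroni_alpha)           # a much tinier per-test threshold
```

This is why a finding that only appears under “more liberal statistical tests”, as with the women here, deserves more cautious interpretation.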

Pantazatos also performed another kind of analysis. He looked to see if it were possible to predict which season a person was born in, purely from looking at differences in grey matter volume across their brains. This time he found a significant result for women but not men. An algorithm picking up differences across a swathe of brain regions in the frontal cortex, parietal lobe, and cerebellum was able to categorise a woman’s season of birth with 35 per cent accuracy. Not great, but more than you’d expect based on a guess. The null result here for men could be due to lack of statistical power. One limitation of the study is that there was no data on the participants’ location at birth. Season of birth effects related to daylight are reversed in the Southern Hemisphere, and it’s possible that there were a disproportionate number of immigrant participants in some of the seasonal groups. If so, this could have diluted the signal in the analysis (meaning the observed effects between season of birth and brain structure are likely underestimates).
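With four seasons, a random guess is right 25 per cent of the time, so 35 per cent is a genuine, if modest, improvement. As a rough sanity check – a sketch only, and the roughly even male/female split assumed below is hypothetical, not taken from the paper – a simple binomial calculation shows how unlikely that accuracy would be under pure guessing:

```python
# How unlikely is 35% accuracy under pure guessing among 4 seasons?
# Assumes (hypothetically) ~275 women out of the 550 participants.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more correct guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 275                      # hypothetical number of women scanned
k = round(0.35 * n)          # 35 per cent classified correctly
p_value = binom_tail(n, k, 0.25)   # 0.25 = chance level for 4 seasons
print(p_value)               # well below .05: hard to achieve by luck
```

Even a modest-sounding 35 per cent is very hard to reach by guessing once the sample is a few hundred people, which is why such accuracies can still be statistically meaningful.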

These are intriguing results and a next step is to carry out a similar analysis with large samples of patients. Pantazatos’ prediction is that season of birth effects related to disorders like schizophrenia will be mediated by differences in grey matter volume in relevant parts of the brain. He calls these brain differences an “intermediate phenotype” – an observable half-way house between genetic vulnerability and behavioural outcomes. “These results imply that environmental variables associated with season of birth impact human brain development,” says Pantazatos, “which subsequently exerts influences on brain structure that persist through adulthood.” Ultimately, this research helps contribute to our understanding of how environmental and genetic risk factors influence development and lead towards psychiatric disorders later in life.

Pantazatos, S.P. (2014). Prediction of individual season of birth using MRI. NeuroImage.

Homepage image: ashokboghani/Flickr

Can Neuroscience Really Help Us Understand the Nuclear Negotiations With Iran?

The P5+1 talks on Iran’s Nuclear programme held in Geneva last November. Image: US Mission Geneva / Flickr

A few days ago, Iran began multilateral talks in Vienna in search of a compromise about its nuclear ambitions. I’m no expert on international politics, but I’m sure that these vital negotiations will be influenced by the history of the relationships between the countries involved, the state of nuclear proliferation around the world, as well as by local political factors in Iran and its region. Psychology will also come into play in terms of the negotiation partners’ perceptions about what’s fair, their interpretations of historical events and their beliefs about the future. Recent research into the psychology of morality, fairness, punishment and cooperation will be especially relevant.

In stark contrast, I’m not convinced that findings from contemporary neuroscience have any relevance to our understanding of these negotiations whatsoever. There are interesting papers on the neural correlates of fairness judgments, rejection, cooperation and revenge, but I don’t believe they add practical insights for the way these negotiations will be conducted or understood. So I was shocked and amused to discover that last month The Atlantic published an article, apparently in all seriousness, headlined: “The Neuroscience Guide to Negotiations With Iran,” and subtitled: “Wondering whether the historic nuclear talks will succeed or fail? Study the brain.”

Co-written by a neuroscientist (Nicholas Wright) and an expert on Iranian and International politics (Karim Sadjadpour), the article marks the high point (or low point, depending on your view) in recent attempts by many journalists and scholars to show that the science of the brain has important lessons for everyday affairs. The Atlantic article makes some excellent points about the Iranian negotiations, especially in terms of historical events and people’s perceptions of fairness, but it undermines its own credibility by labelling these insights as neuroscience or by gratuitously referencing the brain. It’s as if the authors drank brain soup before writing their article, and just as they’re making an interesting historical or political point, they hiccup out a nonsense neuro reference.

Unfortunately, this clumsy repackaging of psychology as neuroscience is part of a wider cultural theme. Today university psychology departments are changing their names to market themselves as neuroscience departments. Politicians are invoking the brain to justify arguments more sensibly grounded on social or psychological principles. And new business sub-disciplines are continually emerging, using the allure of the brain to impress – such as neuroleadership and neuromarketing, to name but two.

Let’s take a look at The Atlantic article in more detail. The authors’ first point, relevant to the Iranian perspective, is that “Humans pay high costs to reject unfairness”. This is true. Research using economic games shows that people will often sacrifice their own rewards if that means they can punish another person who they feel has treated them unfairly. This isn’t neuroscience, it’s psychology and behavioural economics. Unfortunately, Wright and Sadjadpour spin this point as a neuroscience insight. Here’s their neuro belch: “A decade of studies using brain imaging shows that human neural activity, particularly in the insula cortex region, reflects the precise degree of unfairness in social interactions.” But what does this brain reference add? Would it matter if it were some other brain region that mirrored levels of perceived injustice? We already knew that people have a sense of fairness before the neural correlates of this process were documented. Nor is the specific finding that the insular cortex is active in such situations surprising – as neuroscientist Russ Poldrack has pointed out, the anterior portion of the insula is active in nearly one third of all brain imaging studies!

A related argument that runs through the Atlantic article is that neuroimaging studies of perceived fairness, and the desire to punish, show that such judgments are grounded in biology and are therefore the same the world over. “The fundamental biology of social motivations is the same in Tokyo, Tehran, and Tennessee,” say Wright and Sadjadpour. This is an example where, in their desire to invoke the brain, the authors undermine their own credibility. They seem to be implying that particular patterns of brain activity cause specific fairness judgments and behaviours, rendering them equivalent across cultures. But this is false. For example, there’s research showing how willingness to punish varies across cultures. Identifying the neural correlate of a fairness judgment doesn’t explain how that judgment was made in the first place. In the complex dance between psychology and biology, brain changes are just as likely the consequence of fairness judgments as the cause. I’m sure that an understanding of cross-cultural differences in judgments about fairness and interpretations of past events could be useful to the current Iran negotiations – but no brain scanner is required!

Wright and Sadjadpour’s final lesson from neuroscience is that “Unexpected conciliatory gestures are more effective”. For a historical example they point to Egyptian President Anwar Sadat’s surprise trip to Israel in 1977. “The more surprising a reward or punishment is, the bigger the event’s impact is on our decision-making,” they write. OK, but what’s this got to do with neuroscience? Here comes another neuro burp. Many scanning studies have revealed that “the brain has sophisticated machinery to compute the crucial difference between what is expected to happen and what actually happens,” the authors say. But again I’m left wondering what that adds to our understanding of the Iranian negotiations. We already knew, without the help of a brain scanner, that people form expectations and are surprised when those expectations are inaccurate. And yes, there’s interesting psychology research showing how we respond to surprises – for example, we edit our understanding of the world to imply that we knew the surprise was going to happen all along (known as hindsight bias). But again, no brain scanner required for this. And actually, I’m not convinced that the psychology of surprise carries any straightforward lessons for the Iran negotiations either. Wright and Sadjadpour explain the recent thawing of relations between the West and Iran in terms of a series of unexpected conciliatory gestures on both sides, but we can’t know for sure the causal role that these events played.

Here’s a line from the penultimate paragraph of the Atlantic article: “… a comprehensive resolution to the nuclear conflict will be extremely challenging, not least because it must accommodate Israeli security concerns, Iranian ideology, and U.S. domestic politics.” Hard to disagree with that, but then another neuro-burp. “As the parties try to negotiate a final deal,” the authors add, “these lessons from neuroscience can help us comprehend the nature of the enormous trust deficit between Washington and Tehran.”

I’d love to know the motivations behind articles like these. Are the authors hoping that dressing their article up in neuroscience will help them get attention for the political points they want to make? Or are they trying to show that neuroscience is relevant to today’s most pressing international affairs? I fear that such articles could be harmful. Exaggerating the relevance of neuroscience to everyday matters risks breeding skepticism and apathy towards the field as a whole.

Have I been too harsh? What did you think of The Atlantic’s neuroscience guide to the Iranian negotiations?

What Is Brain Death?

Brain death is a tragic topic where neuroscience, ethics and philosophy collide. Two recent cases have sent this sensitive and thorny issue once again into the media spotlight.

Last November, 14 weeks into her pregnancy, 33-year-old Marlise Munoz collapsed at home from a suspected pulmonary embolism. The next day doctors declared that she was brain dead. However, against her own and her family’s wishes, John Peter Smith Hospital in Fort Worth, Texas, chose to maintain Munoz’s body on ventilators because they said they had a legal duty of care to her unborn fetus. On Sunday, January 26, following a successful lawsuit brought by her family, the hospital finally turned off the ventilators.

Meanwhile, teenager Jahi McMath was declared brain dead last December following complications that ensued after a tonsillectomy. In this case, her hospital wanted to turn off her artificial life support, but her family resisted this move and she has been transferred to another facility where her body is being maintained by mechanical respirator.

These contrasting cases provide a glimpse into the tragedy and ethical sensitivities surrounding the issue of brain death. Before we go any further, what are your first reactions to the stories? Do you believe that Marlise Munoz was dead after doctors declared her brain dead? What about Jahi McMath?

According to accepted medical and legal criteria, both Munoz and McMath were officially dead from the moment of brain death. The Uniform Determination of Death Act (UDDA), drafted in 1981, is accepted by all 50 US States. It determines that a person is dead if either their cardiovascular functioning has ceased or their brain has irreversibly stopped functioning. The precise methods and criteria for determining brain death vary from hospital to hospital, but the American Academy of Neurology states that three criteria must be fulfilled to confirm the diagnosis: “coma (with a known cause), absence of brainstem reflexes, and apnea [the cessation of breathing without artificial support].” In practice, clinicians will also look for an absence of motor responses (movement) and will rule out any other possible explanations for loss of brain function, such as drugs or hypothermia. The assessment will also be repeated after several hours. For more details, the NHS website has a description of the diagnostic tests used for brain death in the UK.

The UDDA concept of brain death has its roots in a 1968 definition composed by medics and scholars at Harvard Medical School that outlines how death can be defined in terms of irreversible coma. Steven Laureys (of the Coma Science Group at Liège University Hospital) explains that even earlier, in 1959, a pair of French neurologists used the term “coma dépassé” (irretrievable coma) to refer to the same concept.

In contrast to the unequivocal contemporary official medical and legal position on brain death, surveys show widespread misunderstanding among the US public about what the term means. In 2003, in a survey of 1,000 households, James DuBois and T. Schmidt found that 47 percent agreed wrongly that “a person who is declared brain dead by a physician is still alive according to the law.” In 2004, a survey of 1,351 residents of Ohio found that 28 percent believed that brain dead people can hear. Yet another study, from 2003, found that only 16 percent of 403 surveyed families equated brain death with death.

This confusion is reflected in recent media coverage of the cases of Munoz and McMath. On January 26, reporting on the case of Marlise Munoz, the BBC stated: “A brain dead woman kept alive by a hospital in Texas because she was pregnant has been taken off life support [emphasis added].” In fact Munoz was not “kept alive” by the hospital – she was legally dead the moment that doctors determined that she was brain dead. Or consider an essay in American Thinker published on January 28: “Jahi McMath is alive [emphasis added]” declares its headline. And finally, from just a few days ago in Hollywood Life: “Brain dead woman to be kept alive until baby’s birth [emphasis added].”

These deviations from accepted medical understanding are not new or unusual. In an article published last year, Ariane Daoust and Eric Racine surveyed media coverage of brain death in US and Canadian newspapers between 2005 and 2009. They found few accurate definitions of brain death, together with many contradictory and colloquial uses of the term. Not only is “brain dead” used as a slang derogatory term for stupid politicians and celebrities, it’s also used erroneously to refer to people in a persistent vegetative state (PVS is characterised by a complete lack of awareness, but unlike brain death, this is sometimes potentially reversible, and some brain activity remains including brainstem function; Terri Schiavo was diagnosed as being PVS). Daoust and Racine also cited examples of news reports that implied a person could die a second time – once from brain death, and then a second death after life support is removed. For example, this is from The New York Times in 2005: “That evening Mrs. Cregan was declared brain-dead. The family had her respirator disconnected the next morning, and she died almost immediately.”

Surveys show that even medical professionals often lack understanding of the concept. In 2012, for example, a Spanish survey of graduating medical students found that only two-thirds believed that brain death is the same as death. Earlier, in 1989, Youngner et al surveyed 195 US physicians and nurses and found that only 38 percent correctly understood the legal and medical criteria for death. In an overview of surveys of the public and medical personnel, James DuBois and colleagues in 2006 concluded that “studies consistently show that the general public and some medical personnel are inadequately familiar with the legal and medical status of brain death.”

Perhaps the most alarming example of misunderstanding of brain death by a medical professional comes from a 2007 paper by Professor of Medical Ethics Robert Truog (pdf). He describes the time that Dr. Sanjay Gupta (a neurosurgeon and Senior Medical Correspondent for CNN) appeared on Larry King in 2005 to discuss the tragic case of Susan Torres, another pregnant woman declared brain dead. “Well, you know, a dead person really means that the heart is no longer beating,” Gupta said. “I mean, that’s going to be the strict definition of it […] people do draw a distinction between brain dead and dead.” Here, in front of a massive mainstream audience, Dr. Gupta profoundly misrepresented the medical and legal facts around the criteria for death.

It is easy to understand why there is so much confusion. Many people implicitly associate life with breathing and heart function, and to see a person breathing (albeit with artificial support) and to be told they are in fact dead can be difficult to comprehend. The ability of a brain-dead body to carry a fetus, to heal wounds, and to undergo sexual maturation also adds to many people’s incomprehension at the notion that brain dead means dead. But for those more persuaded by the idea of death as irrevocably linked, not with brain function, but with the end of heart and lung activity, consider this unpleasant thought experiment (borrowed from LiPuma and DeMarco). If a decapitated person’s body could be maintained on life support – with beating heart and circulating, oxygenated blood – would that person still be “alive” without their brain? And consider the converse – the classic “brain in a vat”. Would a conscious, thinking brain, sustained this way, though it had no breath and no beating heart, be considered dead? Surely not. Such unpalatable thought experiments demonstrate how brain death can actually be a more compelling marker of the end of life than any perspective that focuses solely on bodily function.

Let’s be clear – there is continuing expert and public debate and controversy around how to define death, including brain death (to give you a taster, scholarly articles published over the last decade include “The death of whole-brain death” and “The incoherence of determining death by neurological criteria“). It is right that this debate and discussion continues. However, it’s also important that the public understand the existing consensus that is founded on the latest medical evidence and deliberation – that brain death means death. It’s not a preliminary or unfinished form of death. It’s not a persistent vegetative state. It is final. It is death. Families and medical professionals caring for brain dead patients are involved in terribly difficult decisions about organ donation and it is especially crucial that they know what the current medical and legal consensus is, and that they understand brain death means a permanent end of the person’s mental processing and consciousness, and therefore the end of life. Unsurprisingly, surveys show that people’s decisions about organ donation are affected by their understanding of what brain death means – people who think that brain death isn’t equivalent to death are less likely to agree to donation.

Of course, some people will have personal, spiritual or religious beliefs that contradict the current medical and legal position on brain death (such is the case with McMath’s family), and respect and sensitivity are important in these cases. Note, however, that both mainstream Judaism and Islam have accepted the concept of brain death. And, according to Steven Laureys writing in 2005, the Catholic Church has also stated that “the moment of death is not a matter for the church to resolve.”

I hope I have presented a fair and clear explanation of the current US medical and legal consensus on brain death. This is a tragic and sensitive issue and my heart goes out to the families of Munoz and McMath and others in similar situations.

Homepage image: Joachim Böttger via Ars Electronica/Flickr