False-Positive fMRI Hits The Mainstream

By Neuroskeptic | July 7, 2016 10:37 am

A new paper in PNAS has made waves. The article, called Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates, comes from Swedish neuroscientists Anders Eklund, Tom Nichols, and Hans Knutsson.

According to many of the headlines that greeted “Cluster failure”, the paper is a devastating bombshell that could demolish the whole field of functional magnetic resonance imaging (fMRI):

Bug in fMRI software calls 15 years of research into question (Wired)

A bug in fMRI software could invalidate 15 years of brain research. This is huge. (ScienceAlert)

New Research Suggests That Tens Of Thousands Of fMRI Brain Studies May Be Flawed (Motherboard)


So what’s going on here, and is it really this serious?

The first thing to note is that the story isn’t really new. I’ve been covering Eklund et al.’s work on the false-positive issue since 2012 (1,2,3,4). Over that time, Eklund and his colleagues have developed the argument that several commonly used fMRI analysis software tools suffer from a basic flaw which leads to elevated false-positive rates when it comes to finding activations associated with tasks or stimuli, i.e. finding which brain area ‘lights up’ during particular tasks.

The new paper is just the culmination of this program, and the results – that up to 70% of analyses produce at least one false positive, depending on the software and conditions – won’t come as a surprise to anyone who has been following the issue.

Although there is one unexpected point in “Cluster failure”: Eklund et al. reveal that they discovered a different kind of bug in one of the software packages, called AFNI:

A 15-y-old bug was found in [AFNI’s tool] 3dClustSim while testing the three software packages (the bug was fixed by the AFNI group as of May 2015, during preparation of this manuscript). The bug essentially reduced the size of the image searched for clusters, underestimating the severity of the multiplicity correction and overestimating significance (i.e., 3dClustSim FWE P values were too low)

This is a new and important issue, but this new bug only applies to AFNI, not other widely-used packages such as FSL and SPM.

As to the question of how serious this is, in my view, it’s very serious, but it doesn’t “invalidate 15 years of brain research” as the headline had it. For one thing, the issue only affects fMRI, and most brain research does not use fMRI. Moreover, Eklund et al.’s findings don’t call all fMRI studies into question – the problem only affects activation mapping studies. Yet while these experiments are common, they are far from the only application of fMRI. Studies of functional connectivity or multi-voxel pattern analysis (MVPA) are increasingly popular and they’re not, as far as I can see, likely to be affected.

Finally, it’s important to remember that “70% chance of finding at least one false positive” does not imply that “70% of positives are false”. If there are lots of true positives, only a minority of positives will be false. It’s impossible to directly know the true positive rate, however.
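To see that distinction in numbers, here’s a toy simulation (illustrative figures only, not anything from the paper): even if false clusters appear often enough that around 70% of analyses contain at least one, a literature in which most reported clusters are genuine ends up with only a small fraction of false clusters overall.

```python
# Toy numbers, not from the paper: a ~70% chance of at least one false positive
# cluster per analysis can coexist with only a small share of all reported
# clusters being false, provided most reported clusters are genuine.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 100_000
true_clusters_per_study = 20                            # assumed number of genuine activations
false_clusters = rng.poisson(lam=1.2, size=n_studies)   # assumed false-cluster count per study

fwer = np.mean(false_clusters >= 1)   # chance of >= 1 false positive: ~ 1 - exp(-1.2) ~ 70%
frac_false = false_clusters.sum() / (false_clusters.sum() + true_clusters_per_study * n_studies)

print(f"Familywise error rate: {fwer:.0%}")                                # ~70%
print(f"Share of all reported clusters that are false: {frac_false:.0%}")  # ~6%
```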

Update 15th July 2016: Tom Nichols, one of the authors of ‘Cluster Failure’, reports that he’s requested some corrections to the paper in order to remove some of the statements that led to “misinterpretations” of the study (i.e. to those hyped headlines). However, PNAS did not agree to the correction, so Nichols has posted it on PubMed Commons, here.

Eklund A, Nichols TE, & Knutsson H (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences of the United States of America. PMID: 27357684

  • LCND

    Thank you, Neuroskeptic, for making that final point; it is something I have not seen anyone else discuss:

    Finally, it’s important to remember that “70% chance of finding at least one false positive” does not imply that “70% of positives are false”. If there are lots of true positives, only a minority of positives will be false. It’s impossible to directly know the true positive rate, however.

    I am working on estimating the false positive rate; my rough estimate is that it is less than 5%.

  • LCND

    Thank you for being the first person I have seen to point out that family wise error rate is not the same thing as false positive rate. Given that the average fMRI paper reports 36 significant activations (based on Neurosynth), the false positive rate is likely to be quite low.
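    A rough back-of-the-envelope version of that argument (toy arithmetic only, assuming false clusters arrive roughly as a Poisson process):

```python
# Toy arithmetic, not a formal estimate: if a 70% familywise error rate comes from
# false clusters arriving as a Poisson process, the expected number of false clusters
# per analysis is -ln(1 - 0.70) ~ 1.2, which is small next to ~36 reported activations.
import math

fwer = 0.70               # worst-case familywise error rate from Eklund et al.
clusters_per_paper = 36   # average reported activations per paper (Neurosynth figure above)

expected_false = -math.log(1 - fwer)
print(f"Expected false clusters per analysis: {expected_false:.2f}")   # ~1.2
print(f"Rough share of reported clusters that are false: "
      f"{expected_false / clusters_per_paper:.1%}")                    # ~3%
```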

  • Neurosiscientist

    That paper has at least one flaw. In the abstract it claims 40,000 studies are doubtful. In the main text they state that there are 40,000 fMRI studies and that those using cluster-based statistics are doubtful, not those using voxel-based statistics. However, in my experience, most studies use voxel-based statistics. I would guess that only <5% of papers actually use cluster-based statistics, probably because it is common knowledge that it produces too many significant results. At least, the supervisor of my first fMRI study told me in 2005 that I should not use cluster-based statistics for that reason.

    So already the abstract of that paper seems to exaggerate the problem considerably compared to what the rest of the paper suggests. And the media do their best to blow it up even more.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Hmm, for what it’s worth I was always taught to use cluster statistics! I think there are different conventions in different centres.

    • LCND

      Tom Nichols recently looked at this: http://blogs.warwick.ac.uk/nichols/entry/bibliometrics_of_cluster

      It is about 11% of fMRI papers.

      • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

        Thanks. Although he goes on to say that “I frankly thought this number would be higher, but didn’t realise the large proportion of studies that never used any sort of multiple testing correction. (Can’t have inflated corrected significances if you don’t correct!). These calculations suggest 13,000 papers used no multiple testing correction.”

        Which I find worrying in itself.

        • LCND

          I agree completely. Considering this is something that we have known about for many years, I don’t understand how/why reviewers would let that slide. It is worth noting, though, that not correcting was the norm roughly pre-2000. I would be curious to know the proportion of non-correcting studies pre- and post-2000.

          That said, the familywise error rate for that kind of uncorrected analysis is about 50% based on the Eklund paper. Given that this is not the false positive rate, we again have no idea if this is a little bad or a lot bad (or potentially not bad at all).

  • Anders Eklund

    In the published paper we also explain why FLAME1 is so conservative for rest data, and show that FLAME1 is *not* conservative for task data.

    • son_of_stone

      I’m so relieved that FLAME1 is okay. That’s what I’ve been using on my resting state data.

  • Thomas Nichols

    Just to clarify, I feel that saying our charge is “that commonly used fMRI analysis software tools suffer from a basic flaw” paints it a bit broad. Of the two main types of inference (cluster vs voxel), the one type (cluster) can have dramatically inflated false positives *if* a particular free parameter is set too low (the cluster defining threshold, or CDT); if the CDT is set higher there is only modest inflation of false positives.

    Fortunately, a quick bibliometric analysis ( http://blogs.warwick.ac.uk/nichols/entry/bibliometrics_of_cluster/ ) shows that a low CDT is not used that often, accounting for only ~1/10 of the literature.

    While it is depressing that almost none of the peer-reviewed archive has any shared data to re-analyze, we can do meta-analyses of the coordinate data reported in tables, and these should discount false positives and identify consistent effects.

    • Avniel Ghuman

      With all due respect, you said this: “we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.” and “It is not feasible to redo 40,000 fMRI studies”

      It is not clear why you feel that “saying our charge is ‘that commonly used fMRI analysis software tools suffer from a basic flaw’ paints it a bit broad.” How else would you like people to interpret the statements quoted above?

      • Thomas Nichols

        As I spelled out in the blog post, these numbers are careless over-estimates. We’re working to correct this.

        A “false positive rate” is only uniquely defined for a single outcome (i.e. one test, one conclusion). In fMRI, we have ~50,000 tests, one at each voxel. The familywise error rate is the chance of one or more false positives across the brain.
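        For intuition, here is a toy calculation under the (unrealistic) assumption of independent voxels; it shows why a per-test rate and a familywise rate are very different quantities:

```python
# Toy calculation assuming independent tests (real fMRI voxels are spatially
# correlated, so treat this only as intuition, not as an fMRI-accurate model).
m = 50_000       # approximate number of voxel-wise tests in a whole-brain analysis
alpha = 0.05     # per-test false positive rate

fwer_uncorrected = 1 - (1 - alpha) ** m       # ~1.0: some false positive is essentially certain
fwer_bonferroni = 1 - (1 - alpha / m) ** m    # ~0.049: familywise rate held near alpha

print(fwer_uncorrected, fwer_bonferroni)
```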

        I suppose in the general sense you want to know: “Are the conclusions of the article incorrect?”. That cannot be told from a simple summary of a statistic image. That depends on the strength of the signal (e.g. even if a P-value is biased, if the effect is huge it will likely be still significant after the bias is corrected), and how the authors use the pattern of brain activity to conclude something about the brain. You simply have to review each study.

        At the end of the day, the practical way forward is to summarise the literature with meta-analysis. If the results aren’t consistent across studies, then you won’t see an effect in a meta-analysis.

        • Avniel Ghuman

          I hope you use the words “careless over-estimate” with regards to the original paper in both interactions with the press and your efforts to correct it. Considering you feel that way, I think making that absolutely explicit would help people react more appropriately to the work.

        • Avniel Ghuman

          Apologies, I believe the correct term would be that we need to know the false discovery rate in the context of real data where there are multiple true activations.

          Still, it appears to me that where we currently are is:

          1. Between 20-40% of about 11% of fMRI papers have more errors than expected due to improper spatial assumptions in the cluster statistic model (figure 1). However, we do not have an estimate of the proportion of the findings in those papers that might be false. It could be quite low, it could be high.

          2. Around 50% of about 32.5% of the fMRI papers have more errors than expected due to not properly correcting for multiple comparisons (figure 2), likely biased towards older papers where neglecting this correction was the norm. However, again, we do not have an estimate of the proportion of the findings in those papers that might be false. It could be quite low, it could be high.

          The above is a very very different message (and perhaps one that should be delivered to reporters/the public) than “we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.” and “It is not feasible to redo 40,000 fMRI studies.”

          • Sys Best

            No, it’s not a VERY VERY different message. The 40k should be corrected, but the actual message is the same: 5k would still be too much, and the bottom line is that thousands of research assistants, associates and postdocs are hired by irresponsible scientists to pass data through analysis pipelines and generate massive amounts of meaningless results. I wanted to say useless, but the research exercise is actually useful for the trainees. And it grew too big to fail, and too much work and too much money went down the drain. Neuroimaging techniques are tools used by universities to attract students to life sciences departments, to the detriment of the math, physics and engineering departments. And NIH funds nonsense research proposals just because they have some clinical relevance, without questioning the methods and the expertise of the team.

        • jdmuuc

          First off, your paper is going to come in handy any time I need to refuse to loosen the CDT for an investigator who is new to neuroimaging, so thanks!

          I’m interested to know if you looked at the association between familywise error rate and the expected value of the number of false activation clusters. If Nclusters approaches 1 as FWE approaches 100%, then the false positive rate of most published works will not be too inflated due to the number of true positives.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      The basic flaw I was referring to is the Gaussian assumption for the spatial autocorrelation. But it’s true, as you say, that whether this flaw translates into a seriously elevated false positive rate depends on other parameters!

    • Emmanuel Goldstein

      I’m confused about CDT and FWE (I thought they were the same thing). Is CDT the threshold used to create the statistical images that generate those clusters (so the first statistical parameter), while FWE is the threshold used to pick the cluster size from the cluster distribution? Thanks!

      • Thomas Nichols

        CDT, cluster defining threshold, is a tuning parameter used to create clusters, the spatial blobs, in the first place. You can’t have clusters without a CDT. Go very low and you’ll be able to catch very subtle effects that are spatially extended and couldn’t be detected any other way; go very high and eventually you’ll only catch the highest peaks, and you might as well be doing voxel-wise inference.

        FWE, familywise error rate, is the chance of one or more false positives. It’s simply a yardstick, a way of measuring false positives when you have 2 or more tests. You could swap out FWE for FDR, the false discovery rate, another way of measuring false positives when you have 2 or more tests.
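        To make the two steps concrete, here is a minimal sketch of cluster-extent inference on simulated smooth noise (illustrative only; real packages derive the critical cluster size from smoothness estimates, Monte Carlo simulation or permutation rather than the arbitrary cutoff used here):

```python
# Minimal sketch of cluster-extent inference on simulated smooth noise
# (illustrative only: real packages derive the critical cluster size from
# smoothness estimates, Monte Carlo simulation or permutation, not a made-up cutoff).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
z_map = ndimage.gaussian_filter(rng.standard_normal((64, 64, 32)), sigma=2)
z_map /= z_map.std()                  # re-standardise the smoothed noise to unit variance

cdt_z = 2.3                           # cluster-defining threshold (roughly p < 0.01, one-sided)
critical_extent = 50                  # assumed FWE-corrected cluster-size cutoff, in voxels

supra = z_map > cdt_z                             # step 1: apply the CDT -> binary map
labels, n_clusters = ndimage.label(supra)         # step 2: connected components = clusters
sizes = np.bincount(labels.ravel())[1:]           # voxel count of each cluster

n_significant = int((sizes >= critical_extent).sum())
print(f"{n_clusters} clusters above the CDT, {n_significant} exceed the extent cutoff")
```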

        • Emmanuel Goldstein

          Thanks, I understand now that it is a parameter to create the blobs. The number of voxels in a cluster is determined by the FWE rate. So then, what does the CDT do to create the blobs? As I understand it, the clusters are naturally there when thresholding a spatial map, and then the FWE is used to select the number of voxels of the cluster. I may be getting something wrong here though.

    • Trojan

      Don’t you think that peer review should make open access to data a requirement of participation – that way, selection and funding bias against negative data would be much smaller?

  • https://forbetterscience.wordpress.com Leonid Schneider

    Too many scientists do not care about the reliability of their measurements, but about achieving exciting results with fancy methods. Extrapolating from other fields of life science, I hypothesise that many fMRI users have little clue about this technology and twist and tweak until they see what they wanted to see. So please take this into account when discussing the real false positive ratios. I would not be surprised if a number of researchers stick with the old AFNI plug-in for many years to come, with the approval of their peer-reviewing colleagues, all because the old version delivers “better” results.

    • http://nonsignificance.blogspot.com non_sig

      I think the same… At least it is like that here… There are no discussions about which correction for multiple comparisons is “best” (or about statistical methods generally)… At “best” there are suggestions about what else (programs, tests, methods, etc.) could be tried to get the desired result(s)…

    • Avniel Ghuman

      I highly doubt people will stick to the old methods because reviewers will call them out for it. After the circularity, motion issues in resting state, and multiple comparisons points became widely known, they quickly mostly dropped out of the literature.

      P-hacking is a serious problem, but one that is orthogonal to this one.

  • DS

    Nothing gets better until the field is forced to focus on the data and the veracity of processing methods. Stats are low on my list of concerns presently.

  • Neurosis

    I’d like to point out that Guillaume Flandin and Karl Friston have posted a reply: https://arxiv.org/abs/1606.08199 . As remarked earlier, you’re fine if you use high initial thresholding when using cluster-based thresholding in SPM. Moreover, the two sample t tests reported by Eklund et al. are flawed, according to Flandin and Friston.

    • Anders Eklund

      No, they criticize the one-sample t-test.

  • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

    Yeah we saw the same phenomenon with the Dead Salmon and Voodoo Correlations issues. Media hyped those fMRI problems out of proportion, but then went straight back to over-hyping fMRI studies afterwards.

  • http://asif.cc/ Asif J

    It probably does have some influence, but on the other hand, funding bodies should be composed of researchers who we would expect to be well-informed about fMRI anyway. I think it will just take time for the implications of this study to propagate and affect funding decisions.

  • Ravi Menon

    It is about time that the discussion around fMRI statistics involved MRI physicists, because the assumptions in random field theory and other noise models do not mirror reality. Play a cine loop of any fMRI study and tell me that what you see conforms to simple noise models where blurring can impose statistical sanity. It can’t. Add in multiband, SMS, GRAPPA and the like and the noise models truly fail. Heck, measuring even the simple raw signal-to-noise in multichannel accelerated acquisitions is not a solved problem. So how can we model the fMRI noise when the spatial autocorrelation from the imaging techniques is so variable? It can be done, but we have to stop treating fMRI data like PET data.

    • Anders Eklund

      I completely agree; I don’t like that new multiband sequences are being used despite the fact that no one has looked into what the new sequences mean for the statistical algorithms.

      • Avniel Ghuman

        Yes, until the effects on the statistical model are known, permutation testing should almost always be used with new acquisition methods.

        This is one thing it would have been nice if you had pointed out: many of the assumptions of the models are carryovers from older scan parameters and may have been reasonably valid at the time. It is likely that the failures of these assumptions are in large part a result of newer, more sensitive scan parameters and people not rechecking if the assumptions are still valid. This would have helped contextualize the results as in part due to the stats lagging behind the MR physics.

      • jwirsich

        I would not say no one 😉
        http://www.ncbi.nlm.nih.gov/pubmed/26749161

        • Anders Eklund

          True, but they do not seem to look into the spatial autocorrelation

    • Avniel Ghuman

      I completely agree, and this accounts for some of the problem. Indeed, some of these assumptions are carryovers from when fMRI was done with low channel numbers and older acquisition parameters. That said, a large part of the issue is that there are spatial autocorrelations in real brain activity that need to be accounted for in the models. This is both why you would expect to find an elevated familywise error rate in a phantom and why it is far more elevated in real data.

      • Ravi Menon

        Spatial correlation in activity is known and varies depending on whether veins are present, how the smoothing is done (surface-based or volume-based) and the T2* blurring, amongst a myriad of parameters. I’ve been doing fMRI since 1991 and have never seen these dealt with properly in a statistical sense. The correlation depends on where you are in space, so no spatially invariant model can treat this properly. One size does not “fit” all. Add accelerated imaging and it is a true cluster fck.

      • Vince Calhoun

        Another reason to consider complementing one’s analysis with a multivariate data-driven approach? We can easily show that spatial autocorrelation is naturally captured in a clustering/ICA type analysis and as such won’t inflate false positives if you do component-wise statistical testing. 😉

        • jwirsich

          or with another modality…

        • Avniel Ghuman

          I would argue yes and no. The bigger-picture point here is that all statistical models, univariate or multivariate, make assumptions about the structure of the underlying data. If they are reasonable, this will be fine; if they are not, there are problems. The spatial autocorrelation issue is an important one, and tends to be dealt with if one uses multivariate/multivoxel analyses, but I guarantee this is not going to be the last assumption that proves to be inaccurate. Multivariate analyses can sometimes be worse because there are often extra steps such as feature selection/dimensionality reduction that introduce additional assumptions (non-independence of feature selection, for example, was how we ended up with circularity problems).

          From that perspective, I personally agree with the premise that now that it is computationally feasible, permutation testing is probably the best thing one can do. This is because the only assumption in permutation testing is that the permutations have the same structure as your original data. As long as you are permuting your original data in an appropriate way (e.g. maintaining the temporal and spatial structure of the original data) the assumptions of permutation testing should be ok.
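          A minimal sketch of the kind of resampling test being described, with toy data only (not any particular package’s implementation): a group-level one-sample test where the sign of each whole subject map is flipped, so the spatial structure inside each map is never broken, and the maximum statistic across voxels provides familywise control:

```python
# Toy sketch of a sign-flipping permutation test: whole subject maps keep their
# internal spatial structure, and the max statistic over voxels gives FWE control.
# Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_voxels = 20, 5000
maps = rng.standard_normal((n_subjects, n_voxels))    # toy subject-level contrast maps

def one_sample_t(data):
    return data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(data.shape[0]))

observed_t = one_sample_t(maps)

n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))   # flip whole maps, not single voxels
    max_null[i] = np.abs(one_sample_t(maps * signs)).max()  # max over voxels -> FWE control

fwe_threshold = np.quantile(max_null, 0.95)
print(f"FWE-corrected |t| threshold: {fwe_threshold:.2f}")
print(f"Voxels surviving (pure-noise data, should be ~0): "
      f"{(np.abs(observed_t) > fwe_threshold).sum()}")
```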

          We haven’t even started discussing/addressing the temporal autocorrelation issue sufficiently (I’ve only seen a couple of papers) in the context of resting-state (including ICA-resting state) analyses. It pains me that people are not using AR (autoregressive) models to do resting state analyses, particularly in fMRI if you are going to use a .1 Hz low pass filter. Again though, this could be addressed either with an AR model on the front end or by using permutation testing to do the stats on the back end.
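          To see why the temporal autocorrelation matters for the degrees of freedom, here is a toy AR(1) sketch (not any specific published correction): two completely independent but autocorrelated time series yield far more variable correlation estimates than i.i.d. theory predicts, roughly in line with a Bartlett-style effective sample size:

```python
# Toy AR(1) illustration of the degrees-of-freedom problem with autocorrelated
# (e.g. low-pass filtered) time series. A sketch only, assuming a simple AR(1) model.
import numpy as np

rng = np.random.default_rng(3)
n_timepoints, phi, n_sims = 200, 0.8, 2000

def ar1(n, phi, rng):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

r = np.array([np.corrcoef(ar1(n_timepoints, phi, rng), ar1(n_timepoints, phi, rng))[0, 1]
              for _ in range(n_sims)])

naive_sd = 1 / np.sqrt(n_timepoints)                    # spread expected for i.i.d. samples
n_eff = n_timepoints * (1 - phi**2) / (1 + phi**2)      # Bartlett-style effective N for AR(1)
corrected_sd = 1 / np.sqrt(n_eff)

print(f"Empirical SD of r between independent series: {r.std():.3f}")
print(f"Naive i.i.d. prediction: {naive_sd:.3f}; effective-N prediction: {corrected_sd:.3f}")
```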

          • Anders Eklund
          • Avniel Ghuman

            As have Georgopoulos and Huppert (in the context of NIRS). The issue is that it continues to be ignored.

            In particular (and Vince can correct me if I am wrong), I don’t believe that Vince used the .1 Hz low pass filter that is common for connectivity analyses. This can make the issue way worse and make it difficult to estimate the true degrees of freedom of the data. Also, the fact that it inflates the correlation value is not trivial, as this is sometimes treated as an effect size estimate.

            One thought though: wouldn’t an alternative solution be spatial whitening/a spatial AR model to deal with the spatial autocorrelation issue?

          • Anders Eklund

            Or you can model the spatial autocorrelation like we do here

            http://arxiv.org/abs/1606.00980

          • Vince Calhoun

            We actually did look at .1 Hz, as well as a host of other things; you should check it out. The good news is that group-level testing was somewhat immune from the bias, as both the correlation and the standard deviation were biased and they mostly cancelled one another out. But that’s not the end of the story either; we have more recent work where we are looking at autocorrelation as a variable of interest (it changes in schizophrenia vs controls, for example), calling this autoconnectivity. 😉

          • Avniel Ghuman

            Thanks for the pointer. It makes sense that the variance increase mostly cancels out the correlation increase. Of course any within subject analyses have nothing to offset the correlation bias, but I suppose within subject studies are a relatively small part of the literature (though things like the cortical segmentation using rest connectivity might be an issue). Still, I worry that inflated correlation (and correlation differences) provide a false sense of the effect size.

            Interesting about the autocorrelation differences. I would be curious if you looked at where the autocorrelation differences arise. It seems like there are a bunch of different pathologies that are characterized by differences in the variance (Marlene Behrmann has a paper like that for autism, I recently heard about an effect like this in Alzheimer’s animal models, etc.). Could what you are seeing be similar/related to variability differences?

          • Vince Calhoun

            I don’t think it’s due to variance… but variance changes as well; we have looked separately at variance differences (cf http://www.ncbi.nlm.nih.gov/pubmed/26106217 & http://www.ncbi.nlm.nih.gov/pubmed/27013947) and the regions implicated are quite different.

          • Vince Calhoun

            On your earlier comment, permutation testing isn’t a golden hammer either. I agree permutation testing is a good thing, but it’s merely a nonparametric way to generate a null given a prior set of modeling assumptions and certainly is not immune to the assumptions made by the underlying model. I think we are making the same points. 😉

          • Avniel Ghuman

            Oh, absolutely, permutation testing is not a panacea. The only thing that permutation testing ensures is that the final test is statistically valid, assuming your permuted data have the same structure as your original data (i.e. don’t permute across TRs within a trial or across space, because then your permutations have different temporal and spatial autocorrelation). However, statistical validity is not the same thing as actual validity, hah!

            And yes, I believe we are making the same points.

  • Avniel Ghuman

    It also might be worth noting that 70% was the high end of the range. The mean familywise error rate was around 15-35% (figure 1). Based on the data and code Eklund et al. provide as part of their work, my coarse analysis suggests this is on the order of 0.3-1.5 extra false positive clusters per contrast on average. Not exactly a “set the field on fire” number given the large number of significant clusters per contrast reported in the average paper.

  • http://www.mazepath.com/uncleal/qz4.htm Uncle Al

    It is unfair to condemn a diagnostic procedure of such tremendous predictive power just because its outputs are imaginary. fMRI data matter.

  • https://www.linkedin.com/in/curtcorum Curt Corum

    While an MRI physicist in academia, I listened to researchers complain about multiple aspects of fMRI for over a decade.

    One of the first complaints I can recall, from the early 2000s, is that fMRI studies could produce differing results when done at 1.5 Tesla vs. the then-emerging 3.0 Tesla scanners. This was in an introductory graduate-level course on fMRI in the Psychology department, so no one was deluding themselves then or now.

    In my mind there has always been some ambivalence in the research community (users of fMRI or experts) about fMRI. It is one of the only methods to non-invasively (without injection of a tracer or inserting electrodes) get an image of (nearly) whole brain function at high spatial resolution. It is BOLD (blood oxygen level dependent) contrast, not neural activity like MEG. It has issues, especially when performed by inexperienced research groups. It is often difficult to replicate experiments even when as many acquisition and processing parameters are kept the same as possible, and more difficult if not.

    The question I have asked myself is why, after 25 years, fMRI hasn’t made it into more clinical applications. Why can’t it be used for the diagnosis of individual patients (neurological, psychiatric)? There is progress toward such clinical application, and some past and present trials, but it seems very slow.

    In my mind it is not that the analysis software is flawed (except for explicit bugs). It is doing the best it can given the data.

    There seem to be multiple bottlenecks in the fMRI pipeline, starting with the raw MRI data acquisition. The various processing methods have evolved to deal with the data and the MRI acquisition has evolved as well. The main bottleneck in the acquisition is “physiological noise” and unfortunately does not get better as much with improved technology as other MRI methods seem to (like diffusion tensor imaging or standard T1, T2 and proton density).

    The “Cluster Failure” article is one of many that are taking advantage of “big data” archives to replicate and reanalyze fMRI studies. Hopefully there will be more and improved methods will come out of them.

    The MRI community has identified some alternate strategies to obtain functional brain images, such as fQSM, T1rho, fADC etc. These can potentially lead to less physiological noise or more direct measurement of neural activity, but are in very early stages.

    Disclosure: My company, Champaign Imaging LLC, is working on what we feel are some high-reward technologies in this area… the need for better and/or new methods is certainly out there!

    • Vince Calhoun

      One good use of big data IMO is enabling us to compare one scan with another in a larger set of data. See this article (in particular figures 6 & 8) comparing results from 10,000 scans… http://www.ncbi.nlm.nih.gov/pubmed/27014049

      • https://www.linkedin.com/in/curtcorum Curt Corum

        Vince,
        Thanks for the pubmed link!
        Curt

  • Trojan

    No more than the BBC admitting that they are biased in favor of vaccination and the EU.

  • Defenestrator

    In case you hadn’t seen yet: http://huff.to/2anCRTC

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Hey, many thanks, I hadn’t seen that. It seems accurate, although it somewhat glosses over the fact that many studies only avoid the Eklund et al. problem by not using multiple comparisons correction at all (13,000 fMRI studies according to Tom Nichols), which is arguably worse.

  • David Chorley

    It’s quite a lesson to learn about fMRI software bugs when you realise that the climate-change software that predicts global warming is considered “proprietary” and not open to public scrutiny.
