fMRIs sound pretty scientific, right?
But what if it turns out that some scientific results, backed by fMRI data, may be unreliable? That’s what Dr. Thomas Nichols, Professor and Head of Neuroimaging Statistics at the University of Warwick, has discovered in his recently published research: about 10% of the scientific literature that relies on fMRI data is contaminated with false positives. But how significant is that number, really? Keep reading (or listening) to find out.
fMRIs and The Brain
fMRI stands for functional Magnetic Resonance Imaging, a neuroimaging procedure that measures brain activity by measuring changes in blood flow and blood volume in the brain. The basic concept is this: when neurons in a certain area of the brain are firing, more blood will flow to that area to support the increased activity. So, more blood flow = more neuronal activity.
It’s a relatively easy and sensitive way of measuring brain activity, which is why it remains popular. However, fMRI measurements aren’t perfectly precise — they measure a secondary signal (blood flow), not the primary one (neuronal activity). fMRI data can be easily misinterpreted, particularly if neuronal activity happens very quickly, since rapid neuronal firing won’t pull any additional blood into the area.
False Alarms
Dr. Nichols’ work is concerned with finding undiscovered false alarms using statistical analysis. In his own words, the majority of his work has been studying the “really boring cases where there’s absolutely nothing going on” and the results are valid. But, sometimes, he discovers some major problems with the accuracy of statistical methods for fMRI studies (in this case, task fMRI studies, where researchers look at the brains of people performing tasks).
Now here’s where we get into some statistical jargon — deep breath! Dr. Nichols does a great job of explaining terms like “P-value” and “noise” in an easy-to-digest way. Here are the Cliff’s Notes:
- Statistical significance: Quantifies how confident you can be in a result. If findings are statistically significant, it means they are unlikely to have occurred by chance.
- P-value: Helps scientists determine whether results are significant. P-values fall between 0 and 1 (or 0% and 100%) and tell you how strong the evidence is, one way or the other. The lower the number, the stronger the evidence that the result isn’t just chance.
- 5%: The commonly accepted P-value threshold for statistical significance. At this threshold, a test run on pure noise will raise a false alarm about 1 time in 20, i.e. roughly 5% of “significant” findings can be “wrong” without invalidating the method. Keep in mind, a P-value right at 5% is considered very weak evidence.
- Noise: A statistical term for unexplained variation in data, such as measurement error.
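To see why a 5% threshold means roughly 1 false alarm in 20, here’s a minimal simulation (not from the episode — the numbers and setup are illustrative): we run thousands of experiments on pure noise, so the null hypothesis is always true, and count how often a standard significance test nevertheless declares a “discovery.”

```python
# Sketch: simulating the false-alarm rate of a 5% significance threshold.
# Every experiment here samples pure noise, so any "significant" result
# is, by construction, a false positive.
import math
import random

random.seed(42)

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under a standard normal null."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_experiments = 10_000
n_samples = 30
false_alarms = 0

for _ in range(n_experiments):
    # Draw samples from N(0, 1): there is genuinely "nothing going on."
    sample = [random.gauss(0, 1) for _ in range(n_samples)]
    mean = sum(sample) / n_samples
    z = mean * math.sqrt(n_samples)  # std. error of the mean is 1/sqrt(n)
    if two_sided_p(z) < 0.05:
        false_alarms += 1

print(f"False-alarm rate: {false_alarms / n_experiments:.3f}")  # close to 0.05
```

The rate hovers around 0.05 — exactly the 1-in-20 figure above. The trouble Dr. Nichols studies arises when the statistical method is miscalibrated, so the real false-alarm rate is higher than the 5% the researcher believes they are paying.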
Statistical Findings of the Study
In his recently published study, Dr. Nichols found that about 1/10th of the literature relying on task fMRIs (about 3,500 fMRI studies in total) has been affected by false positives and faulty data.
But! That doesn’t mean all 3,500 are wrong. If results have very low P-values (i.e. a very low probability that the findings are random), the statistical significance reported may be incorrect, but the findings will still stand. On the other hand, if the statistical significance is weak (i.e. right at the 5% P-value threshold), the results might be invalid.
Further Reading
- Dr. Nichols’ study — only for the serious statistician
- Blog post by Dr. Nichols on the significance of his findings
- A more layperson-friendly write-up on the study from Science Daily
PS: We don’t need statistical analysis to know it would be an error for you to miss signing up for our weekly Brain Breakfast.
Show Notes

00:00:22
Functional magnetic resonance imaging

00:02:04
This Week in Neuroscience: Mystery of what sleep does to our brains may finally be solved

00:04:40
Audience interaction section

00:06:25

00:06:53
Introduction to Dr. Thomas Nichols

00:08:14
What is fMRI?

00:09:37
Misinterpreted fMRI data

00:10:09
The blood–brain barrier

00:11:46
fMRI analysis

00:16:46
How to know if scientific methods are calibrated correctly

00:21:42
Voxelwise versus clusterwise analysis

00:27:08
Are there any specific studies that have been called into question?

00:28:07
False positives versus false negatives

00:33:35
Ruthless Listener-Retention Gimmick: The Science Behind Slurpees And 'Brain Freeze'