Neuro-Tech,
Sci + Society,
39 MINS

#138: How Reliable is fMRI Data?

July 22, 2016

fMRIs sound pretty scientific, right?

But what if it turns out that some scientific results, backed by fMRI data, may be unreliable?  That’s what Dr. Thomas Nichols, Professor and Head of Neuroimaging Statistics at the University of Warwick, has discovered in his recently published research:  about 10% of the scientific literature that relies on fMRI data is contaminated with false positives.  But how significant is that number, really?  Keep reading (or listening) to find out.

fMRIs and The Brain

fMRI stands for functional Magnetic Resonance Imaging, a neuroimaging procedure that measures brain activity by detecting changes in blood flow and blood volume in the brain.  The basic concept is this:  when neurons in a certain area of the brain are firing, more blood will flow to that area to support the increased activity.  So, more blood flow = more neuronal activity.

It’s an accessible and relatively sensitive way to measure brain activity, which is why it remains a popular one.  However, fMRI measurements aren’t perfectly precise — they measure a secondary signal (blood flow), not the primary one (neuronal activity).  fMRI data can be easily misinterpreted, particularly when neuronal activity happens very quickly, since a brief burst of firing may not draw enough additional blood to the area to register.
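
To make that indirectness concrete, here is a minimal Python sketch (ours, not from the episode) that assumes the textbook double-gamma approximation of the hemodynamic response.  It shows how a brief neural event gets smeared into a slow, delayed blood-flow signal, which is all the scanner actually sees.

```python
# Minimal sketch: brief neural events are smeared out by the slow hemodynamic response.
import numpy as np
from scipy.stats import gamma

dt = 0.1                      # time step in seconds
t = np.arange(0, 30, dt)      # 30 seconds of simulated time

# Double-gamma hemodynamic response function (a common textbook approximation)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# A very brief burst of neural activity at t = 2 s
neural = np.zeros_like(t)
neural[int(2 / dt)] = 1.0

# The blood-flow signal fMRI actually measures: neural activity convolved with the HRF
bold = np.convolve(neural, hrf)[: len(t)]

print("Neural spike at t = 2.0 s; blood-flow signal peaks at t =",
      round(float(t[bold.argmax()]), 1), "s")
```

The point of the sketch:  the measured signal peaks several seconds after the neural event and is stretched out in time, which is why fast or brief neuronal activity is easy to misread.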

False Alarms

Dr. Nichols’ work is concerned with finding undiscovered false alarms using statistical analysis.  In his own words, the majority of his work has been studying the “really boring cases where there’s absolutely nothing going on” and the results are valid.  But, sometimes, he discovers some major problems with the accuracy of statistical methods for fMRI studies (in this case, task fMRI studies, where researchers look at the brains of people performing tasks).

Now here’s where we get into some statistical jargon — deep breath!  Dr. Nichols does a great job of explaining terms like P-value and “noise” in an easy-to-digest way.  Here are the CliffsNotes:

  • Statistical significance:  Quantifies how confident you can be in a result.  If findings are statistically significant, it means they are unlikely to have occurred by chance.
  • P-value:  Helps scientists determine whether results are significant.  P-values range between 0 and 1 (or 0 and 100%) and tell you how strong the evidence is, one way or the other.  The lower the number, the stronger the evidence that the effect is real rather than chance.
  • 5%:  The commonly accepted P-value threshold for statistical significance.  Using it means accepting a false alarm roughly 1 out of 20 times when there is actually nothing going on (see the sketch after this list).  Keep in mind, a result sitting right at 5% is considered fairly weak evidence.
  • Noise:  A term in statistics for unexplained variation in data, such as measurement error.
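
Here is a small Python simulation (ours, not Dr. Nichols’) of his “really boring case where there’s absolutely nothing going on”:  generate pure noise many times, run a standard one-sample t-test each time, and count how often the result still comes back “significant” at p < 0.05.

```python
# Sketch of the 5% false-alarm idea: test pure noise many times and
# count how often p < 0.05 anyway.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_experiments = 10_000
n_subjects = 20

false_alarms = 0
for _ in range(n_experiments):
    # Pure noise: no real effect at all
    data = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    _, p_value = ttest_1samp(data, popmean=0.0)
    if p_value < 0.05:        # declared "significant" even though nothing is there
        false_alarms += 1

print(f"False-alarm rate: {false_alarms / n_experiments:.1%}")  # roughly 5%
```

That 5% is the price of the standard threshold when everything works as advertised; the problem Dr. Nichols found is what happens when the methods let that rate creep higher.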

Statistical Findings of the Study

In his recently published study, Dr. Nichols found that roughly one-tenth of the literature relying on task fMRI (about 3,500 studies) has been affected by false positives and faulty data.

But!  That doesn’t mean all 3,500 are wrong.  If results have very low P-values (i.e. a very small chance that the findings are random), the statistical significance reported may be slightly off, but the findings will likely still stand.  On the other hand, if the statistical significance is weak (i.e. right at the 5% P-value threshold), the results might be invalid.
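
As a rough illustration (the numbers below are made up for the example, not taken from the study):  suppose the flaw means a nominal p < 0.05 cutoff should really behave more like p < 0.01.  A result reported at p = 0.0001 sails through, while one sitting right at p = 0.049 is now in doubt.

```python
# Hypothetical illustration: how an effectively stricter cutoff affects
# reported results.  The 0.01 "effective" threshold is assumed purely for
# illustration; it is not a figure from Dr. Nichols' paper.
reported_p_values = {"Study A": 0.0001, "Study B": 0.03, "Study C": 0.049}

nominal_alpha = 0.05     # what the studies claimed to control
effective_alpha = 0.01   # assumed stricter cutoff once the flaw is accounted for

for study, p in reported_p_values.items():
    verdict = "still stands" if p < effective_alpha else "now in doubt"
    print(f"{study}: reported p = {p} (significant at {nominal_alpha}) -> {verdict}")
```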

Further Reading

PS:  We don’t need statistical analysis to know it would be an error for you to miss signing up for our weekly Brain Breakfast.


