
Fire or false alarm? Restoring confidence in fMRI research

By Ekaterina Dobryakova, PhD, research scientist in Traumatic Brain Injury Research at Kessler Foundation.

During the past few weeks I have been bombarded with text messages, Facebook posts, and face-to-face questions and remarks from friends, colleagues, and relatives. The theme I heard from these very different people was strikingly consistent: several Internet posts with a very glum message were asserting that years of neuroimaging research had been invalidated. Some posts, like this one from the UK's Register, "fMRI bugs could upend years of research," sounded an alarm about a bug in analysis software that neuroimaging researchers use to analyze brain data, while others called into question the whole analysis strategy, e.g., "A bug in fMRI software could invalidate 15 years of brain research."
These posts were prompted by a recent paper by Eklund et al., "Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates," published in the Proceedings of the National Academy of Sciences (PNAS), a prominent U.S. journal. Because neuroimaging techniques are an integral aspect of the research conducted at Kessler Foundation, I would like to take a moment to address the controversy, which I believe is unfounded.
While Eklund's article did mention a bug in one of the software packages, it is worth clarifying that the bug affected only one of several popular packages. Moreover, it was fixed a year ago, and scientists have had ample opportunity to consider its implications and correct their findings. Rather than sensationalizing or questioning the hard work of thousands of bright-minded researchers, the PNAS article aimed at something completely different: to draw attention to more rigorous analysis techniques and the availability of open data.

Essentially, the brain produces myriad patterns of activation that can be studied with fMRI. Drawing on the expertise of statisticians and software engineers, the larger neuroimaging community developed tools to make sense of this activity. For someone like me, a researcher studying brain activation, these tools attempt to screen out "noise" (false positives) that is not associated with the particular cognitive activity of interest. The brain is virtually divided into many thousands of three-dimensional pixels, or voxels, and the software performs a statistical test on each voxel to identify groups of voxels related to the specific task the person is performing in the scanner.
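To make the "one test per voxel" idea concrete, here is a minimal sketch in Python. It is not any particular software package's code: the data are random toy numbers, the grid size and threshold are illustrative, and a simple one-sample t-test stands in for the full statistical model that real analyses fit at each voxel.

```python
# A toy sketch of voxelwise ("massively univariate") testing.
# Assumptions: hypothetical random data, a 10x10x10 voxel grid, and an
# illustrative p < 0.001 threshold; real analyses fit a full model per voxel.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: one contrast value per voxel for each of 20 subjects.
n_subjects = 20
data = rng.standard_normal((n_subjects, 10, 10, 10))

# One statistical test per voxel: is the mean contrast different from zero?
t_values, p_values = stats.ttest_1samp(data, popmean=0.0, axis=0)

# Keep only voxels that pass the (uncorrected) significance threshold.
active = p_values < 0.001
print(f"{active.sum()} of {active.size} voxels pass p < 0.001 uncorrected")
```

With thousands of voxels tested at once, some will pass the threshold by chance alone, which is exactly why the correction methods discussed next exist.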

According to Eklund et al., a particular analysis technique, namely clusterwise inference (there are many other analysis approaches), can yield familywise false-positive rates of up to 70%, far above the nominal 5%. Now, a method yielding 70% false positives sounds like a universally bad method, until one takes a closer look and understands that in the vast majority of situations the actual false-positive impact is not nearly as severe. Much depends on the analysis setup: the inflation Eklund et al. report is most pronounced under lenient settings, and only a fraction of research situations are affected. Thus, here we have a prime example of how important it is not to generalize, but to think critically and pay attention to detail.
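For readers curious what clusterwise inference looks like in practice, here is a simplified sketch building on the toy data above. It groups suprathreshold voxels into contiguous clusters and compares the largest cluster against a permutation-based null distribution, the kind of nonparametric approach Eklund et al. found to control false positives well. Again, the data, thresholds, and helper function are illustrative assumptions, not the code of any real package.

```python
# A toy sketch of cluster-extent inference with a permutation null.
# Assumptions: random toy data, illustrative thresholds, and a hypothetical
# helper max_cluster_size; scipy.ndimage.label groups contiguous voxels.
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
n_subjects, shape = 20, (10, 10, 10)
data = rng.standard_normal((n_subjects, *shape))

def max_cluster_size(sample, p_thresh=0.001):
    """Size of the largest contiguous cluster passing the voxelwise threshold."""
    _, p = stats.ttest_1samp(sample, popmean=0.0, axis=0)
    labels, n_clusters = ndimage.label(p < p_thresh)
    if n_clusters == 0:
        return 0
    return int(np.bincount(labels.ravel())[1:].max())  # skip background label 0

# Null distribution: randomly flip each subject's sign and re-measure the
# largest cluster. If there is no true effect, sign flips leave the data's
# distribution unchanged, so these sizes show what chance alone produces.
null_sizes = []
for _ in range(500):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1, 1, 1))
    null_sizes.append(max_cluster_size(data * signs))

# A cluster counts as significant only if it beats 95% of null clusters.
cluster_cutoff = np.percentile(null_sizes, 95)
observed = max_cluster_size(data)
print(f"observed max cluster: {observed}, permutation cutoff: {cluster_cutoff}")
```

The trouble Eklund et al. identified arises when the cluster-size cutoff comes from theoretical assumptions about the data that do not hold, rather than from a data-driven null like the one sketched here.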

Methodological shortcomings notwithstanding, it is our current analysis tools that have enabled the young field of functional neuroimaging to show, for example, that visual areas of the brain activate during the presentation of visual stimuli, and that the amygdala activates during emotional stimuli. These reproducible findings were obtained with today's tools, imperfect as they may be. For now, these are the tools we have to make sense of brain activation, and they are constantly being developed and improved, even as you are reading this blog.