During the past few weeks I have been bombarded with text messages, Facebook posts and face-to-face questions and remarks from friends, colleagues and relatives. The theme I heard from very different people was strikingly similar: several Internet posts with a very glum message were asserting that years of neuroimaging research had been invalidated. Some posts, like one from the UK's Register, sounded an alarm about a bug in analysis software that neuroimaging researchers use to analyze brain data, while others called the entire analysis strategy into question.
These posts were prompted by a recent paper by Eklund et al. published in the Proceedings of the National Academy of Sciences (PNAS), a prominent U.S. journal. Because neuroimaging techniques are an integral aspect of the research conducted at Kessler Foundation, I would like to take a moment to address the controversy, which I believe is unfounded.
While Eklund's article did mention a bug in one of the software packages, it is worth clarifying that this affected only one of several popular packages. In addition, the bug was fixed a year ago, and scientists have had ample opportunity to consider the implications and correct their findings. Rather than sensationalizing and questioning the hard work of thousands of bright researchers, the aim of the PNAS article was something completely different: to draw attention to more rigorous analysis techniques and the availability of open data.
Essentially, the brain produces myriad patterns of activation that can be studied with fMRI. Drawing on expertise from statisticians and software engineers, the larger neuroimaging community developed tools to make sense of this activity. For someone like me, a researcher studying brain activation, these tools attempt to screen out "noise" (false positives) that is not associated with a particular cognitive activity of interest. The brain is digitally divided into many thousands of three-dimensional pixels, or voxels. The software performs a statistical test on each of these voxels in order to identify groups of voxels related to a specific task the person is performing in the scanner.
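The voxelwise testing described above can be sketched in a few lines. This is a toy illustration only, not a real fMRI pipeline: the grid size, subject count and planted effect are made-up numbers, and a simple one-sample t-test with Bonferroni correction stands in for the more sophisticated corrections actual packages use.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical toy data: 20 subjects, a 10x10x10 grid of voxels,
# each voxel holding a task-vs-baseline contrast value.
n_subjects, shape = 20, (10, 10, 10)
data = rng.normal(0.0, 1.0, size=(n_subjects, *shape))

# Plant a genuine "activation" in one small region so something survives.
data[:, 2:4, 2:4, 2:4] += 1.5

# One-sample t-test at every voxel: is the mean contrast nonzero?
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)

# Bonferroni correction: divide alpha by the number of voxels tested,
# so that noise voxels rarely cross the threshold by chance.
alpha = 0.05
n_voxels = int(np.prod(shape))
significant = p_vals < (alpha / n_voxels)

print("voxels tested:", n_voxels)
print("voxels surviving correction:", int(significant.sum()))
```

The key point is the sheer number of simultaneous tests: even this small toy grid involves a thousand of them, which is why the choice of correction method matters so much.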
According to Eklund et al., one particular analysis technique, namely clusterwise inference (one of many available approaches), can yield up to 70% false positives. A method yielding 70% false positives sounds universally bad, until one takes a closer look and understands that in the vast majority of situations the actual false-positive impact is nowhere near as severe. Much depends on the analysis setup, and only a fraction of research situations are affected. Thus, here we have a prime example of how important it is not to generalize, but to think critically and pay attention to detail.
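A small simulation can show why uncorrected multiple testing inflates false positives, and why a proper correction tames them. This is a hedged sketch of the general multiple-comparisons problem, not a reproduction of Eklund et al.'s cluster-level analysis; all the counts below are arbitrary toy values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Monte Carlo sketch: pure-noise "experiments" with no true effect,
# each running one t-test per voxel.
n_experiments, n_subjects, n_voxels = 200, 20, 1000
alpha = 0.05

uncorrected_hits = 0  # experiments reporting >= 1 voxel with p < alpha
corrected_hits = 0    # same, after Bonferroni (p < alpha / n_voxels)

for _ in range(n_experiments):
    noise = rng.normal(size=(n_subjects, n_voxels))
    _, p = stats.ttest_1samp(noise, 0.0, axis=0)
    uncorrected_hits += (p < alpha).any()
    corrected_hits += (p < alpha / n_voxels).any()

# With 1000 tests per experiment, an uncorrected analysis almost always
# flags at least one "significant" voxel in pure noise; Bonferroni keeps
# the family-wise error rate near alpha.
print("uncorrected family-wise rate:", uncorrected_hits / n_experiments)
print("corrected family-wise rate:", corrected_hits / n_experiments)
```

The inflated rates Eklund et al. report arise from subtler assumptions inside clusterwise inference, but the underlying lesson is the same one this toy shows: how severe the problem is depends entirely on the setup and the correction applied.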