Influence of multiple hypothesis testing on reproducibility in neuroimaging research: a simulation study and Python-based software

Puoliväli, T., Palva, S. and Palva, J. M. (2020) Influence of multiple hypothesis testing on reproducibility in neuroimaging research: a simulation study and Python-based software. Journal of Neuroscience Methods, 337, 108654. (doi: 10.1016/j.jneumeth.2020.108654) (PMID:32114144)

Abstract

BACKGROUND: The reproducibility of research findings has recently been questioned in many fields of science, including psychology and the neurosciences. One factor influencing reproducibility is the simultaneous testing of multiple hypotheses, which leads to false positive findings unless the analyzed p-values are carefully corrected. While this multiple testing problem is well known and well studied, it remains a problem in both theory and practice.

NEW METHOD: Here we assess reproducibility in simulated experiments in the context of multiple testing. We consider methods that control either the family-wise error rate (FWER) or the false discovery rate (FDR), including techniques based on random field theory (RFT), cluster-mass based permutation testing, and adaptive FDR. Several classical methods are also considered. The performance of these methods is investigated under two different models.

RESULTS: We found that permutation testing is the most powerful of the considered approaches to multiple testing, and that grouping hypotheses based on prior knowledge can improve statistical power. We also found that giving equal emphasis to primary and follow-up studies produced the most reproducible outcomes.

COMPARISON WITH EXISTING METHOD(S): We have extended the use of the two-group and separate-classes models for analyzing reproducibility, and we provide new open-source software, "MultiPy", for multiple hypothesis testing.

CONCLUSIONS: Our simulations suggest that performing strict corrections for multiple testing is not by itself sufficient to improve the reproducibility of neuroimaging experiments. The methods are freely available as the Python toolkit "MultiPy". We intend this study to help improve statistical data analysis practices and to assist in conducting power and reproducibility analyses for new experiments.
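As a concrete illustration of the kind of correction discussed in the abstract, the sketch below implements the classical Benjamini-Hochberg linear step-up FDR procedure in plain NumPy/SciPy. This is a minimal sketch of the standard procedure, not the MultiPy API itself (the record does not give the toolkit's function names); the simulated p-values and the level q = 0.05 are illustrative assumptions.

import numpy as np
from scipy import stats

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg linear step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    while controlling the false discovery rate at level q.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = pvals.size
    order = np.argsort(pvals)            # indices that sort the p-values ascending
    ranked = pvals[order]
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * q.
    thresholds = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # last 0-based index meeting the criterion
        rejected[order[: k + 1]] = True   # reject all hypotheses up to that rank
    return rejected

# Example: 1000 tests, 100 of which come from a shifted alternative.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=900)                          # true nulls: uniform p-values
alt_p = stats.norm.sf(rng.normal(loc=3.0, size=100))    # one-sided p-values under H1
pvals = np.concatenate([null_p, alt_p])
print(benjamini_hochberg(pvals, q=0.05).sum(), "hypotheses rejected")

Compared with a Bonferroni (FWER) correction at the same level, this procedure typically rejects more of the shifted-alternative tests, which is the power difference between FDR and FWER control that the simulations in the paper examine.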

Item Type: Articles
Additional Information: Funding: Academy of Finland (grants 266745 and 281414) and Instrumentarium Science Foundation.
Keywords: False discovery rate, family-wise error rate, multiple hypothesis testing, neurophysiological data, Python, reproducibility.
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Palva, Professor Satu and Palva, Professor Matias
Authors: Puoliväli, T., Palva, S., and Palva, J. M.
College/School: College of Medical, Veterinary and Life Sciences > School of Psychology & Neuroscience
Journal Name: Journal of Neuroscience Methods
Publisher: Elsevier
ISSN: 0165-0270
ISSN (Online): 1872-678X
Published Online: 27 February 2020
Copyright Holders: Copyright © 2020 Elsevier B.V.
First Published: First published in Journal of Neuroscience Methods 337: 108654
Publisher Policy: Reproduced in accordance with the publisher copyright policy