Robust inter-subject audiovisual decoding in functional magnetic resonance imaging using high-dimensional regression

Raz, G. et al. (2017) Robust inter-subject audiovisual decoding in functional magnetic resonance imaging using high-dimensional regression. NeuroImage, 163, pp. 244-263. (doi: 10.1016/j.neuroimage.2017.09.032) (PMID:28939433)

Abstract

Major methodological advancements have recently been made in the field of neural decoding, which is concerned with the reconstruction of mental content from neuroimaging measures. However, in the absence of a large-scale examination of the validity of decoding models across subjects and content, the extent to which these models can be generalized is not clear. This study addresses the challenge of producing generalizable decoding models, which allow the reconstruction of perceived audiovisual features from human functional magnetic resonance imaging (fMRI) data without prior training of the algorithm on the decoded content. We applied an adapted version of kernel ridge regression combined with temporal optimization on data acquired during film viewing (234 runs) to generate standardized brain models for sound loudness, speech presence, perceived motion, face-to-frame ratio, lightness, and color brightness. The prediction accuracies were tested on data collected from different subjects watching other movies, mainly in another scanner.

Substantial and significant (Q(FDR) < 0.05) correlations between the reconstructed and the original descriptors were found for the first three features (loudness, speech, and motion) in all of the 9 test movies (R = 0.62, R = 0.60, R = 0.60, respectively), with high reproducibility of the predictors across subjects. The face ratio model produced significant correlations in 7 out of 8 movies (R = 0.56). The lightness and brightness models did not show robustness (R = 0.23, R = 0). Further analysis of additional data (95 runs) indicated that loudness reconstruction veridicality can consistently reveal relevant group differences in musical experience.

The findings point to the validity and generalizability of our loudness, speech, motion, and face ratio models for complex cinematic stimuli (as well as for music in the case of loudness). While future research should further validate these models using controlled stimuli and explore the feasibility of extracting more complex models via this method, the reliability of our results indicates the potential usefulness of the approach and the resulting models in basic scientific and diagnostic contexts.

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Svanera, Dr Michele
Authors: Raz, G., Svanera, M., Singer, N., Gilam, G., Bleich Cohen, M., Lin, T., Admon, R., Gonen, T., Thaler, A., Granot, R. Y., Goebel, R., Benini, S., and Valente, G.
College/School: College of Medical Veterinary and Life Sciences > School of Psychology & Neuroscience
Journal Name: NeuroImage
Publisher: Elsevier
ISSN: 1053-8119
ISSN (Online): 1095-9572
Published Online: 20 September 2017
Copyright Holders: Copyright © 2017 Elsevier Inc.
First Published: First published in NeuroImage 163:244-263
Publisher Policy: Reproduced in accordance with the publisher copyright policy
