Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex

Park, H., Ince, R. A. A., Schyns, P. G., Thut, G. and Gross, J. (2018) Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex. PLoS Biology, 16(8), e2006558. (doi:10.1371/journal.pbio.2006558) (PMID:30080855) (PMCID:PMC6095613)

Full text: 166813.pdf - Published Version (7MB), available under a Creative Commons Attribution License.

Abstract

Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3–7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior—i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Schyns, Professor Philippe and Thut, Professor Gregor and Park, Dr Hyojin and Ince, Dr Robin and Gross, Professor Joachim
Authors: Park, H., Ince, R. A. A., Schyns, P. G., Thut, G., and Gross, J.
College/School: College of Medical, Veterinary and Life Sciences > Institute of Neuroscience and Psychology
Journal Name: PLoS Biology
Publisher: Public Library of Science
ISSN: 1544-9173
ISSN (Online): 1545-7885
Copyright Holders: Copyright © 2018 Park et al.
First Published: First published in PLoS Biology 16(8):e2006558
Publisher Policy: Reproduced under a Creative Commons License

Funding:

Project Code: 597051
Project Name: Natural and modulated neural communication: State-dependent decoding and driving of human Brain Oscillations
Principal Investigator: Joachim Gross
Funder's Name: Wellcome Trust (WELLCOTR)
Funder Ref: 098433/Z/12/Z
Lead Dept: INP - CENTRE FOR COGNITIVE NEUROIMAGING

Project Code: 597911
Project Name: Natural and modulated neural communication: State-dependent decoding and driving of human Brain Oscillations
Principal Investigator: Gregor Thut
Funder's Name: Wellcome Trust (WELLCOTR)
Funder Ref: 098434/Z/12/Z
Lead Dept: INP - CENTRE FOR COGNITIVE NEUROIMAGING

Project Code: 698281
Project Name: Brain Algorithmics: Reverse Engineering Dynamic Information Processing Networks from MEG time series
Principal Investigator: Philippe Schyns
Funder's Name: Wellcome Trust (WELLCOTR)
Funder Ref: 107802/Z/15/Z
Lead Dept: INP - CENTRE FOR COGNITIVE NEUROIMAGING