Incongruence effects in cross-modal emotional processing in autistic traits: an fMRI study

Liu, P., Sutherland, M. and Pollick, F. E. (2021) Incongruence effects in cross-modal emotional processing in autistic traits: an fMRI study. Neuropsychologia, 161, 107997. (doi: 10.1016/j.neuropsychologia.2021.107997) (PMID:34425144)


Abstract

In everyday life, emotional information is often conveyed by both the face and the voice. Consequently, information presented by one source can alter the way in which information from the other source is perceived, leading to emotional incongruence. Here, we used functional magnetic resonance imaging (fMRI) to examine the neural correlates of two different types of emotional incongruence in audiovisual processing, namely incongruence of emotion-valence and incongruence of emotion-presence. Participants were divided into two groups, one with a low Autism Quotient score (LAQ) and one with a high score (HAQ). Each participant experienced emotional (happy, fearful) or neutral faces or voices while concurrently being exposed to emotional (happy, fearful) or neutral voices or faces, and was instructed to attend to either the visual or the auditory track. The incongruence effect of emotion-valence was characterized by activation in a wide range of brain regions in both hemispheres, involving the inferior frontal gyrus, cuneus, superior temporal gyrus, and middle frontal gyrus. The incongruence effect of emotion-presence was characterized by activation in a set of temporal and occipital regions in both hemispheres, including the middle occipital gyrus, middle temporal gyrus, and inferior temporal gyrus. In addition, the present study identified greater recruitment of the right inferior parietal lobule when perceiving audiovisual emotional expressions in HAQ individuals compared with LAQ individuals. Depending on whether the face or the voice was to be attended, different patterns of emotional incongruence emerged between the two groups. Specifically, the HAQ group tended to show more incidental processing of visual information, whereas the LAQ group tended to show more incidental processing of auditory information during crossmodal emotional incongruence decoding. These differences might be attributed to different attentional demands and different processing strategies between the two groups.

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Liu, Peipei and Pollick, Professor Frank and Sutherland, Professor Margaret
Creator Roles:
Liu, P.: Conceptualization, Methodology, Investigation, Formal analysis, Visualization, Writing – original draft, Writing – review and editing
Sutherland, M.: Conceptualization, Supervision, Writing – review and editing
Pollick, F. E.: Conceptualization, Methodology, Formal analysis, Supervision, Writing – review and editing
Authors: Liu, P., Sutherland, M., and Pollick, F. E.
College/School: College of Science and Engineering > School of Psychology; College of Social Sciences > School of Education
Journal Name: Neuropsychologia
Publisher: Elsevier
ISSN: 0028-3932
ISSN (Online): 1873-3514
Published Online: 20 August 2021
Copyright Holders: Copyright © 2021 Elsevier Ltd.
First Published: First published in Neuropsychologia 161: 107997
Publisher Policy: Reproduced in accordance with the publisher copyright policy
