Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients

Barone, P., Chambaudie, L., Strelnikov, K., Fraysse, B., Marx, M., Belin, P. and Deguine, O. (2016) Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients. Cortex, 83, pp. 259-270. (doi:10.1016/j.cortex.2016.08.005) (PMID:27622640)

Full text not currently available from Enlighten.

Abstract

Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by substantial cortical crossmodal reorganisation. Although crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual–auditory processing, such as face–voice interactions, which are important in social communication. We analyse and compare visual–auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual–auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face–voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximal recovery of auditory speech. No such effect is observed in NHS, even under CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face–voice interaction, which is important for the patient's internal supramodal representation.

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Belin, Professor Pascal
Authors: Barone, P., Chambaudie, L., Strelnikov, K., Fraysse, B., Marx, M., Belin, P., and Deguine, O.
College/School: College of Medical Veterinary and Life Sciences > Institute of Neuroscience and Psychology
Journal Name: Cortex
Publisher: Elsevier
ISSN: 0010-9452
Published Online: 26 August 2016


Project Code: 561961
Project Name: Audiovisual integration of identity information from the face and voice: a combined behavioural, fMRI and MEG study
Principal Investigator: Pascal Belin
Funder's Name: Biotechnology and Biological Sciences Research Council (BBSRC)
Funder Ref: BB/I022287/1
Lead Dept: INP - Centre for Cognitive Neuroimaging