Characterizing the manifolds of dynamic facial expression categorization

Delis, I., Jack, R., Garrod, O., Panzeri, S. and Schyns, P. (2014) Characterizing the manifolds of dynamic facial expression categorization. Journal of Vision, 14(10), p. 1384. (doi:10.1167/14.10.1384)


Abstract

Visual categorization seeks to understand how logical combinations of visual cues (e.g. "wide opened left eye" and "opened mouth") provide singly necessary and jointly sufficient conditions to access categories in memory (e.g. "surprise"). Such combinations of visual cues constitute the categorization manifold underlying the information goals of face categorization mechanisms and are therefore critical to their understanding. Yet, no method currently exists to reliably characterize the visual cues of the categorization manifold (i.e. its dimensions, such as "wide opened eyes" and "opened mouth") and how they combine (i.e. the manifold topology, which dictates, for example, whether "wide opened eyes" and "opened mouth" can be used independently or must be used jointly). Here we present a generic method to characterize categorization manifolds and apply it to observers categorizing dynamic facial expressions of emotion. To generate information, we used the Generative Face Grammar (GFG) platform (Yu et al., 2012), which selects on each trial (N = 2,400 trials/observer) a random set of Action Units (AUs) and values for their parametric activation (Jack et al., 2012). We asked 60 naïve Western Caucasian observers to categorize each presented random facial animation according to one of six classic emotions ("happy", "surprise", "fear", "disgust", "anger", "sad", plus "don't know"). For each observer, we used a Non-negative Matrix Factorization (NMF) algorithm to extract AU x Time components of facial expression signals associated with the perceptual categorization of each emotion. We then performed a Linear Discriminant Analysis (LDA) to select the components (i.e. manifold dimensions) that discriminate between the six emotion categories (Quian Quiroga and Panzeri, 2009; Delis et al., 2013). The dimensions of the resultant categorization manifolds thus represent the strategies observers use to categorize the emotions.
Our data show that observers use multiple categorization strategies, which constitute the atomic signals of emotion communication via facial expressions.
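The two-stage analysis described in the abstract (per-observer NMF decomposition of AU x Time signals, followed by LDA over the component activations to find emotion-discriminative dimensions) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the trial count matches the abstract (2,400 per observer), but the number of AUs, time samples, NMF components, and all data are assumed placeholders.

```python
# Hypothetical sketch of the NMF + LDA pipeline described in the abstract.
# Real inputs would be non-negative AU-activation time courses per trial;
# here random data stand in, so only the shapes are meaningful.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_trials, n_aus, n_time = 2400, 42, 30   # assumed dimensions
n_components = 10                        # assumed number of NMF components

# Each trial: an AU x Time activation matrix, flattened to one row.
X = rng.random((n_trials, n_aus * n_time))
# Emotion label chosen by the observer on each trial (6 categories).
y = rng.integers(0, 6, size=n_trials)

# Stage 1: NMF extracts non-negative AU x Time components and, per trial,
# their activation coefficients.
nmf = NMF(n_components=n_components, init="nndsvda",
          max_iter=500, random_state=0)
H = nmf.fit_transform(X)                 # (n_trials, n_components)
components = nmf.components_.reshape(n_components, n_aus, n_time)

# Stage 2: LDA over the component activations identifies which
# components (manifold dimensions) discriminate the six emotions.
lda = LinearDiscriminantAnalysis()
lda.fit(H, y)
print(H.shape, components.shape)
```

With real categorization data, the LDA weights would indicate which NMF components carry emotion-discriminative information; with the random placeholder data above, classification performance is at chance by construction.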

Item Type: Articles
Additional Information: Meeting abstract presented at VSS 2014.
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Garrod, Dr Oliver and Panzeri, Professor Stefano and Jack, Dr Rachael and Delis, Mr Ioannis and Schyns, Professor Philippe
Authors: Delis, I., Jack, R., Garrod, O., Panzeri, S., and Schyns, P.
College/School: College of Medical Veterinary and Life Sciences > Institute of Neuroscience and Psychology
College of Science and Engineering > School of Psychology
Journal Name: Journal of Vision
Publisher: Association for Research in Vision and Ophthalmology
ISSN: 1534-7362
ISSN (Online): 1534-7362
