Perception-driven facial expression synthesis

Yu, H., Garrod, O. G. B. and Schyns, P. G. (2012) Perception-driven facial expression synthesis. Computers and Graphics, 36(3), pp. 152-162. (doi: 10.1016/j.cag.2011.12.002)


Abstract

We propose a novel platform to flexibly synthesize arbitrary meaningful facial expressions in the absence of actor performance data for those expressions. With techniques from computer graphics, we synthesized random dynamic facial expression animations, controlling the synthesis by parametrically modulating Action Units (AUs) taken from the Facial Action Coding System (FACS). We presented these animations to human observers and instructed them to categorize each one as one of six possible facial expressions. With techniques from human psychophysics, we modeled each observer's internal representation of these expressions by extracting the perceptually relevant expression parameters from the random AU noise. We validated these models of facial expressions with naive observers.
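The reverse-correlation logic the abstract describes can be summarized in a few lines. The Python sketch below is an illustrative assumption rather than the authors' implementation: it draws random AU amplitude vectors, collects categorization responses (here from a toy simulated observer, simulate_observer, standing in for human data), and estimates each expression's perceptually relevant AU parameters as the mean deviation of AU amplitudes on trials assigned to that category. The AU subset, expression labels, and decision rule are all hypothetical.

# Minimal reverse-correlation sketch, not the authors' implementation.
# AU_NAMES, EXPRESSIONS, and simulate_observer are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subset of FACS Action Units driving the synthetic animations.
AU_NAMES = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU15", "AU20", "AU26"]
EXPRESSIONS = ["happy", "surprise", "fear", "disgust", "anger", "sad"]

def random_au_vector(n_aus):
    """Draw random AU activation amplitudes in [0, 1) for one animation."""
    return rng.random(n_aus)

def simulate_observer(au):
    """Stand-in for a human categorization response: a toy rule that labels
    an animation 'happy' when AU12 (lip corner puller) is strongly active,
    otherwise picks a random category. Real data would come from observers."""
    if au[AU_NAMES.index("AU12")] > 0.8:
        return "happy"
    return rng.choice(EXPRESSIONS)

# Run many random trials and collect (stimulus, response) pairs.
n_trials = 5000
stimuli = np.array([random_au_vector(len(AU_NAMES)) for _ in range(n_trials)])
responses = [simulate_observer(s) for s in stimuli]

# Classification-image-style estimate: for each expression, the mean AU
# amplitudes on trials categorized as that expression, relative to the
# overall mean, approximate the perceptually relevant AU parameters.
overall_mean = stimuli.mean(axis=0)
for expr in EXPRESSIONS:
    mask = np.array([r == expr for r in responses])
    if mask.any():
        template = stimuli[mask].mean(axis=0) - overall_mean
        top = AU_NAMES[int(np.argmax(template))]
        print(f"{expr:>8}: strongest AU deviation = {top}")

With the toy rule above, only the 'happy' template shows a clear AU12 peak; with real observer data, each expression would recover its own characteristic AU pattern.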

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Garrod, Dr Oliver and Yu, Mr Hui and Schyns, Professor Philippe
Authors: Yu, H., Garrod, O. G. B., and Schyns, P. G.
College/School: College of Medical, Veterinary and Life Sciences > School of Psychology & Neuroscience
Journal Name: Computers and Graphics
Publisher: Elsevier
ISSN: 0097-8493
ISSN (Online): 1873-7684
Published Online: 22 December 2011
