Compound gesture generation: a model based on ideational units

Xu, Y., Pelachaud, C. and Marsella, S. (2014) Compound gesture generation: a model based on ideational units. In: 14th International Intelligent Virtual Agents Conference (IVA 2014), Boston, MA, USA, 27-29 Aug 2014, pp. 477-491. (doi: 10.1007/978-3-319-09767-1_58)



This work presents a hierarchical framework that generates continuous gesture animation for virtual characters. As opposed to approaches that focus on realizing individual gestures, the focus of this work is on the relations between gestures as parts of an overall gesture performance. Following Calbris’ work [3], our approach structures the performance around ideational units and determines gestural features within and across these units. Furthermore, we use Calbris’ work on the relation between form and meaning in gesture to help inform how an individual gesture’s expressivity is manipulated. Our framework takes high-level communicative function descriptions as input, generates behavior descriptions, and realizes them with our character animation engine. We define the specifications for these different levels of description. Finally, we show general results as well as experiments illustrating the impact of the key features.
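The abstract describes a three-stage pipeline: communicative function descriptions are mapped to behavior descriptions grouped into ideational units, which the animation engine then realizes. The following is a minimal sketch of that flow, not the authors' implementation; all class and function names are illustrative assumptions, and the grouping heuristic (consecutive functions with the same intent share a unit) is a simplification for exposition.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# communicative functions -> behavior descriptions (grouped into
# ideational units) -> realized animation commands.
from dataclasses import dataclass


@dataclass
class CommunicativeFunction:
    intent: str        # e.g. "emphasize", "reject" (illustrative values)
    text_span: str     # the speech the gesture co-occurs with


@dataclass
class BehaviorDescription:
    gesture: str
    ideational_unit: int   # gestures in the same unit share features


def plan_behaviors(functions):
    """Group consecutive functions with the same intent into one ideational unit."""
    behaviors, unit, last_intent = [], 0, None
    for f in functions:
        if last_intent is not None and f.intent != last_intent:
            unit += 1          # a change of intent starts a new unit
        behaviors.append(
            BehaviorDescription(gesture=f.intent + "_gesture",
                                ideational_unit=unit))
        last_intent = f.intent
    return behaviors


def realize(behaviors):
    """Stand-in for the animation engine: emit one command per behavior."""
    return [f"unit{b.ideational_unit}:{b.gesture}" for b in behaviors]


funcs = [CommunicativeFunction("emphasize", "this one"),
         CommunicativeFunction("emphasize", "right here"),
         CommunicativeFunction("reject", "not that")]
commands = realize(plan_behaviors(funcs))
# The two "emphasize" functions fall in the same ideational unit,
# so their gestures can share features (e.g. hand shape, hold).
```

The point of the grouping step is the paper's central claim: features are determined within and across ideational units, rather than per gesture in isolation.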

Item Type: Conference Proceedings
Additional Information: First published in Lecture Notes in Computer Science 8637: 477-491
Glasgow Author(s) Enlighten ID: Marsella, Professor Stacy
Authors: Xu, Y., Pelachaud, C., and Marsella, S.
College/School: College of Medical, Veterinary and Life Sciences > School of Psychology & Neuroscience
ISSN (Online): 0302-9743