Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control

Stef, A., Perera, K., Shum, H.P.H. and Ho, E.S.L. (2019) Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control. In: 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA), Phnom Penh, Cambodia, 03-05 December 2018, ISBN 9781538691410 (doi: 10.1109/SKIMA.2018.8631536)


Abstract

Given the complexity of human facial anatomy, animating facial expressions and lip movements for speech is a time-consuming and tedious task. In this paper, a new text-to-animation framework for facial animation synthesis is proposed. The core idea is to improve the expressiveness of lip-sync animation by incorporating facial expressions into 3D animated characters. The framework is realized as a plug-in for Autodesk Maya, one of the most popular animation platforms in the industry, so that professional animators can apply the method directly in their existing workflows. The proposed system is evaluated through two sets of surveys, in which both novice and experienced users took part to provide feedback and evaluations from different perspectives. The survey results highlight the effectiveness of incorporating emotional expressions in creating realistic facial animations.
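The pipeline named in the title (text-to-IPA translation driving lip-sync, with an emotion overlay) could be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the dictionaries, function names, and viseme labels are all hypothetical, and a real system would use a full pronunciation dictionary or grapheme-to-phoneme model and drive Maya blend-shape keyframes rather than returning tuples.

```python
# Hypothetical sketch of a text -> IPA -> viseme pipeline with emotion control.
# All mappings below are illustrative placeholders, not from the paper.

# Tiny word -> IPA lookup (a real system would use a pronunciation
# dictionary or a grapheme-to-phoneme model).
WORD_TO_IPA = {
    "hello": "h\u0259\u02c8lo\u028a",
    "world": "w\u025c\u02d0rld",
}

# Illustrative IPA symbol -> viseme (mouth-shape) mapping.
IPA_TO_VISEME = {
    "h": "breath", "\u0259": "neutral", "l": "tongue_up", "o\u028a": "round",
    "w": "round", "\u025c\u02d0": "open", "r": "tongue_back", "d": "tongue_up",
}

def text_to_visemes(text):
    """Translate text to a viseme sequence via IPA, longest symbol first."""
    visemes = []
    for word in text.lower().split():
        ipa = WORD_TO_IPA.get(word, "")
        i = 0
        while i < len(ipa):
            # Skip stress marks, which carry no mouth shape.
            if ipa[i] in "\u02c8\u02cc":
                i += 1
                continue
            # Prefer two-character IPA symbols (diphthongs, long vowels).
            if ipa[i:i + 2] in IPA_TO_VISEME:
                visemes.append(IPA_TO_VISEME[ipa[i:i + 2]])
                i += 2
            elif ipa[i] in IPA_TO_VISEME:
                visemes.append(IPA_TO_VISEME[ipa[i]])
                i += 1
            else:
                i += 1  # unknown symbol: no keyframe
    return visemes

def blend_emotion(visemes, emotion, weight):
    """Tag each viseme keyframe with an emotion label and blend weight."""
    return [(v, emotion, weight) for v in visemes]

frames = blend_emotion(text_to_visemes("hello world"), "happy", 0.6)
```

In an actual Maya plug-in, each `(viseme, emotion, weight)` tuple would become a keyframe blending a mouth-shape target with an emotion target on the character rig.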

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Ho, Dr Edmond S. L.
Authors: Stef, A., Perera, K., Shum, H.P.H., and Ho, E.S.L.
College/School: College of Science and Engineering > School of Computing Science
ISBN: 9781538691410
