Public Speaking Training with a Multimodal Interactive Virtual Audience Framework

Chollet, M., Stefanov, K., Prendinger, H. and Scherer, S. (2015) Public Speaking Training with a Multimodal Interactive Virtual Audience Framework. In: International Conference on Multimodal Interaction (ICMI '15), Seattle, WA, USA, 09-13 Nov 2015, pp. 367-368. ISBN 9781450339124 (doi: 10.1145/2818346.2823294)

Full text not currently available from Enlighten.

Abstract

We have developed an interactive virtual audience platform for public speaking training. Users' public speaking behavior is automatically analyzed using multimodal sensors, and multimodal feedback is produced by virtual characters and generic visual widgets depending on the user's behavior. The flexibility of our system allows us to compare different interaction media (e.g., virtual reality vs. normal interaction), social situations (e.g., one-on-one meetings vs. large audiences), and trained behaviors (e.g., general public speaking performance vs. specific behaviors).
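To make the sense-analyze-feedback pipeline described above concrete, the following is a minimal sketch of such a loop. All names, feature weights, thresholds, and feedback states here are illustrative assumptions for exposition; they are not the authors' actual implementation.

```python
# Hypothetical sketch of a multimodal sense -> assess -> feedback loop.
# Feature names, weights, and audience behaviors are assumptions, not the
# platform's real design.
from dataclasses import dataclass


@dataclass
class BehaviorFrame:
    """One time-step of multimodal sensor output for the speaker."""
    vocal_energy: float       # normalized loudness, 0..1
    gaze_at_audience: float   # fraction of frame spent facing audience, 0..1
    pause_ratio: float        # fraction of recent audio that is silence, 0..1


def assess(frame: BehaviorFrame) -> float:
    """Collapse multimodal features into a single performance score (0..1)."""
    return (0.4 * frame.vocal_energy
            + 0.4 * frame.gaze_at_audience
            + 0.2 * (1.0 - frame.pause_ratio))


def audience_feedback(score: float) -> str:
    """Map the score to a nonverbal behavior for the virtual characters."""
    if score > 0.7:
        return "lean_forward_and_nod"    # engaged audience
    if score > 0.4:
        return "neutral_posture"
    return "lean_back_and_look_away"     # disengaged audience


if __name__ == "__main__":
    frame = BehaviorFrame(vocal_energy=0.8, gaze_at_audience=0.9, pause_ratio=0.1)
    print(audience_feedback(assess(frame)))  # -> lean_forward_and_nod
```

The same score could equally drive the generic visual widgets mentioned in the abstract (e.g., a gauge), which is what makes comparing different feedback channels straightforward in such an architecture.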

Item Type: Conference Proceedings
Additional Information: This material is partly supported by the JSPS (Japan Society for the Promotion of Science) and the National Science Foundation under Grant No. IIS-1421330.
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Chollet, Dr Mathieu
Authors: Chollet, M., Stefanov, K., Prendinger, H., and Scherer, S.
College/School: College of Science and Engineering > School of Computing Science
Journal Name: ICMI 2015 - Proceedings of the 2015 ACM International Conference on Multimodal Interaction
ISBN: 9781450339124