Cheng, D. S., Salamin, H., Salvagnini, P., Cristani, M., Vinciarelli, A. and Murino, V. (2014) Predicting online lecture ratings based on gesturing and vocal behavior. Journal on Multimodal User Interfaces, 8(2), pp. 151-160. (doi: 10.1007/s12193-013-0142-z)
Text: 100503.pdf - Accepted Version (648kB)
Publisher's URL: http://dx.doi.org/10.1007/s12193-013-0142-z
Abstract
Nonverbal behavior plays an important role in any human–human interaction, and teaching—an inherently social activity—is no exception. So far, the effect of the nonverbal behavioral cues accompanying lecture delivery has been investigated only for traditional ex-cathedra lectures, where students and teachers are co-located. However, it is increasingly common to watch lectures online, and in this new setting the effect of nonverbal communication remains unclear. This article addresses the problem through experiments performed on lectures from a popular web repository ("Videolectures"). The results show that automatically extracted nonverbal behavioral cues (prosody, voice quality, and gesturing activity) predict the ratings that "Videolectures" users assign to the presentations.
| Field | Value |
|---|---|
| Item Type: | Articles |
| Status: | Published |
| Refereed: | Yes |
| Glasgow Author(s) Enlighten ID: | Vinciarelli, Professor Alessandro and Salamin, Mr Hugues |
| Authors: | Cheng, D. S., Salamin, H., Salvagnini, P., Cristani, M., Vinciarelli, A., and Murino, V. |
| College/School: | College of Science and Engineering > School of Computing Science |
| Journal Name: | Journal on Multimodal User Interfaces |
| Publisher: | Springer |
| ISSN: | 1783-7677 |
| ISSN (Online): | 1783-8738 |
| Copyright Holders: | Copyright © 2014 OpenInterface Association |
| First Published: | First published in Journal on Multimodal User Interfaces 8(2):151-160 |
| Publisher Policy: | Reproduced in accordance with the copyright policy of the publisher |