Multimodal Classification of Driver Glance

Baumann, D., Mahmoud, M., Robinson, P., Dias, E. and Skrypchuk, L. (2017) Multimodal Classification of Driver Glance. In: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, USA, 23-26 Oct 2017, pp. 389-394. ISBN 9781538605639 (doi: 10.1109/ACII.2017.8273629)

Full text not currently available from Enlighten.


This paper presents a multimodal approach to in-vehicle classification of driver glances. Driver glance is a strong predictor of cognitive load and a useful input to many applications in the automotive domain. Six descriptive glance regions are defined, and a classifier is trained on video recordings of drivers from a single low-cost camera. Visual features such as head orientation, eye gaze and confidence ratings are extracted; statistical methods are then used to perform failure analysis and calibration on these visual features. Non-visual features such as steering wheel angle and indicator position are extracted from a RaceLogic VBOX system. The approach is evaluated on a dataset containing multiple 60-second samples from 14 participants recorded while driving in a natural environment. We compare our multimodal approach to separate unimodal approaches using both Support Vector Machine (SVM) and Random Forests (RF) classifiers. The RF Mean Decrease in Gini Index is used to rank the selected features, which gives insight into their relative contributions and improves classifier performance. We demonstrate that our multimodal approach yields significantly better results than unimodal approaches. The final model achieves an average F1 score of 70.5% across the six classes.
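The feature-ranking step mentioned above can be illustrated with a minimal sketch. This is not the paper's code: the feature names merely mirror the modalities described in the abstract, the data are synthetic, and scikit-learn's impurity-based `feature_importances_` stands in for the RF Mean Decrease in Gini Index.

```python
# Illustrative sketch (hypothetical features, synthetic data): ranking
# multimodal driver features by Random Forest Gini importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 600  # synthetic samples

# Hypothetical feature set loosely mirroring the paper's modalities:
# visual (head pose, gaze) and non-visual (vehicle signals).
feature_names = ["head_yaw", "head_pitch", "gaze_x", "gaze_y",
                 "steering_angle", "indicator"]

y = rng.integers(0, 6, size=n)            # six glance-region classes
X = rng.normal(size=(n, len(feature_names)))
X[:, 0] += y                               # make head_yaw strongly informative
X[:, 2] += 0.5 * y                         # make gaze_x weakly informative

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features by mean impurity decrease across the forest.
ranking = sorted(zip(feature_names, clf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

Under this construction the head-orientation feature dominates the ranking, mirroring the abstract's point that importance scores give insight into which modalities drive classification.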

Item Type: Conference Proceedings
Additional Information: The work presented in this paper was funded and supported by Jaguar Land Rover, Coventry, UK.
Glasgow Author(s) Enlighten ID: Mahmoud, Dr Marwa
Authors: Baumann, D., Mahmoud, M., Robinson, P., Dias, E., and Skrypchuk, L.
College/School: College of Science and Engineering > School of Computing Science
Published Online: 01 February 2018