Automatic Detection of Self-Adaptors for Psychological Distress

Lin, W., Orton, I., Liu, M. and Mahmoud, M. (2020) Automatic Detection of Self-Adaptors for Psychological Distress. In: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Buenos Aires, Argentina, 16-20 Nov 2020, pp. 371-378. ISBN 9781728130798 (doi: 10.1109/FG47880.2020.00032)




Psychological distress is a significant and growing issue in society. Automatic detection, assessment, and analysis of such distress is an active area of research. Compared to modalities such as the face, head, and voice, research investigating the use of the body modality for these tasks is relatively sparse. This is due, in part, to the lack of available datasets and the difficulty of automatically extracting useful body features. Recent advances in pose estimation and deep learning have enabled new approaches to this modality and domain. We propose a novel method to automatically detect self-adaptors and fidgeting, a subset of self-adaptors that has been shown to correlate with psychological distress. We also propose a multi-modal approach that combines different feature representations using Multi-modal Deep Denoising Auto-Encoders and Improved Fisher Vector encoding. We further demonstrate that our proposed model, which combines audio-visual features with automatically detected fidgeting behavioral cues, can successfully predict distress levels in a dataset labeled with self-reported anxiety and depression levels. To enable this research, we introduce a new dataset containing full-body videos of short interviews together with self-reported distress labels.
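To illustrate the multi-modal fusion idea mentioned in the abstract, the sketch below trains a tiny denoising auto-encoder on the concatenation of two feature streams and reads off the hidden layer as a fused representation. This is a minimal NumPy toy, not the paper's architecture: the modality names, dimensions, noise level, and single hidden layer are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-frame features for two hypothetical modalities (e.g. audio and pose).
audio = rng.normal(size=(200, 8))
pose = rng.normal(size=(200, 6))
x = np.concatenate([audio, pose], axis=1)  # early fusion: shape (200, 14)

d_in, d_hid = x.shape[1], 5
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))
b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))
b2 = np.zeros(d_in)

lr = 0.01
for _ in range(500):
    x_noisy = x + rng.normal(scale=0.3, size=x.shape)  # corrupt the input
    h = np.tanh(x_noisy @ W1 + b1)                     # shared latent code
    x_hat = h @ W2 + b2                                # reconstruction
    err = x_hat - x                                    # target is the CLEAN input
    # Gradient descent on mean squared reconstruction error
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)                   # tanh derivative
    gW1 = x_noisy.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Fused multi-modal representation for downstream distress prediction.
code = np.tanh(x @ W1 + b1)
print(code.shape)
```

Because the auto-encoder must reconstruct the clean signal from a corrupted mix of both modalities, the hidden code is pushed toward features that are robust and shared across streams; a classifier for distress labels would then be trained on `code` rather than on the raw concatenation.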

Item Type:Conference Proceedings
Glasgow Author(s) Enlighten ID:Mahmoud, Dr Marwa
Authors: Lin, W., Orton, I., Liu, M., and Mahmoud, M.
College/School:College of Science and Engineering > School of Computing Science
Published Online:18 January 2021
Copyright Holders:Copyright © 2020 IEEE
First Published:First published in 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020): 371-378
Publisher Policy:Reproduced in accordance with the publisher copyright policy
