Video Transformer for Deepfake Detection with Incremental Learning

Khan, S. A. and Dai, H. (2021) Video Transformer for Deepfake Detection with Incremental Learning. In: MM '21: Proceedings of the 29th ACM International Conference on Multimedia, Online, 20-24 Oct 2021, pp. 1821-1828. ISBN 9781450386517 (doi: 10.1145/3474085.3475332)



Face forgery by deepfake is widespread on the internet, raising serious societal concerns. In this paper, we propose a novel video transformer with incremental learning for detecting deepfake videos. To better align the input face images, we use a 3D face reconstruction method to generate a UV texture map from a single input face image. The aligned face image also provides pose, eye-blink, and mouth-movement cues that cannot be perceived in the UV texture image, so we use both the face images and their UV texture maps to extract image features. We present an incremental learning strategy that fine-tunes the proposed model on a smaller amount of data and achieves better deepfake detection performance. Comprehensive experiments on various public deepfake datasets demonstrate that the proposed video transformer with incremental learning achieves state-of-the-art performance on the deepfake video detection task, with enhanced feature learning from the sequenced data.
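The abstract describes a two-stream design: per-frame features are extracted from both the aligned face image and its UV texture map, and a transformer attends over the resulting frame sequence. The paper itself gives no implementation details here, so the following is only a minimal sketch of that idea under stated assumptions: the "backbone" is a fixed random projection standing in for a real CNN/ViT feature extractor, the transformer is reduced to a single self-attention layer with untrained weights, and the classification head is a plain linear layer with sigmoid. All names (`extract_features`, `self_attention`, shapes, dimensions) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def extract_features(frame, d=16):
    # Stand-in for a pretrained backbone: fixed linear map of flattened pixels.
    flat = frame.reshape(-1)
    W = np.ones((d, flat.size)) / flat.size  # fixed "backbone" weights (assumption)
    return W @ flat

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over the frame sequence (T, d).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)
    return A @ V

# Toy "video": T frames, each with an aligned face image and its UV texture map.
T, H = 4, 8
faces = rng.random((T, H, H, 3))
uv_maps = rng.random((T, H, H, 3))

# Per-frame token = concatenated features from the two streams.
tokens = np.stack([
    np.concatenate([extract_features(f), extract_features(u)])
    for f, u in zip(faces, uv_maps)
])  # shape (T, 32)

d = tokens.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
attended = self_attention(tokens, Wq, Wk, Wv)  # shape (T, 32)

# Mean-pool over time, then a linear head -> real/fake probability.
w_head = rng.standard_normal(d) * 0.1
score = float(1 / (1 + np.exp(-attended.mean(axis=0) @ w_head)))
print(attended.shape, 0.0 < score < 1.0)
```

In practice the backbone and attention weights would be learned end-to-end, and the incremental learning strategy described in the abstract would fine-tune them on each new, smaller dataset rather than training from scratch.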

Item Type: Conference Proceedings
Glasgow Author(s) Enlighten ID: Dai, Dr Hang
Authors: Khan, S. A., and Dai, H.
College/School: College of Science and Engineering > School of Computing Science
