Illumination-aware multi-task GANs for foreground segmentation

Sakkos, D., Ho, E. S. L. and Shum, H. P. H. (2019) Illumination-aware multi-task GANs for foreground segmentation. IEEE Access, 7, pp. 10976-10986. (doi: 10.1109/ACCESS.2019.2891943)

Full text not currently available from Enlighten.

Abstract

Foreground-background segmentation has been an active research area over the years. However, conventional models fail to produce accurate results when faced with videos captured under difficult illumination conditions. In this paper, we present a robust model that accurately extracts the foreground even in exceptionally dark or bright scenes, as well as under continuously varying illumination within a video sequence. This is accomplished by a triple multi-task generative adversarial network (TMT-GAN) that effectively models the semantic relationship between dark and bright images and performs binary segmentation end-to-end. Our contribution is twofold: first, we show that by jointly optimizing the GAN loss and the segmentation loss, our network learns both tasks simultaneously, and the tasks mutually benefit each other. Second, fusing features of images with varying illumination into the segmentation branch vastly improves the performance of the network. Comparative evaluations on highly challenging real and synthetic benchmark datasets (ESI and SABS) demonstrate the robustness of TMT-GAN and its superiority over state-of-the-art approaches.
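The abstract describes jointly optimizing an adversarial (GAN) loss with a binary segmentation loss in a single end-to-end network. The following is a minimal sketch of such a joint objective in PyTorch; the module interfaces, the loss choices, and the weighting factor `lambda_seg` are assumptions for illustration and do not reproduce the paper's exact TMT-GAN formulation.

```python
import torch
import torch.nn as nn

# Assumed loss terms for a joint GAN + segmentation objective.
adv_criterion = nn.BCEWithLogitsLoss()   # adversarial real/fake loss
seg_criterion = nn.BCEWithLogitsLoss()   # pixel-wise binary foreground loss
lambda_seg = 10.0                        # assumed weight balancing the two tasks

def generator_step(generator, discriminator, dark_img, bright_img, gt_mask):
    """One joint-optimization step for the generator/segmentation branch (sketch)."""
    # The generator is assumed to return both a translated image and a
    # foreground mask, so the two tasks share features end-to-end.
    fake_img, pred_mask = generator(dark_img, bright_img)

    # Adversarial term: push the discriminator to label the output as real.
    d_out = discriminator(fake_img)
    adv_loss = adv_criterion(d_out, torch.ones_like(d_out))

    # Segmentation term: binary cross-entropy against the ground-truth mask.
    seg_loss = seg_criterion(pred_mask, gt_mask)

    # Jointly optimized total loss, so both tasks update shared features.
    return adv_loss + lambda_seg * seg_loss
```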

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Ho, Dr Edmond S. L.
Authors: Sakkos, D., Ho, E. S. L., and Shum, H. P. H.
College/School: College of Science and Engineering > School of Computing Science
Journal Name: IEEE Access
Publisher: IEEE
ISSN: 2169-3536
ISSN (Online): 2169-3536