FocusNet++: Attentive Aggregated Transformations for Efficient and Accurate Medical Image Segmentation

Kaul, C., Pears, N., Dai, H., Murray-Smith, R. and Manandhar, S. (2021) FocusNet++: attentive aggregated transformations for efficient and accurate medical image segmentation. In: IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13-16 April 2021, pp. 1042-1046. ISBN 9781665412469 (doi: 10.1109/ISBI48211.2021.9433918)

253894.pdf - Accepted Version (Text, 718kB)

Abstract

We propose a new residual block for convolutional neural networks and demonstrate its state-of-the-art performance in medical image segmentation. We combine attention mechanisms with group convolutions to create our group attention mechanism, which forms the fundamental building block of our network, FocusNet++. We employ a hybrid loss based on balanced cross entropy, the Tversky loss and the adaptive logarithmic loss to improve performance and speed up convergence. Our results show that FocusNet++ achieves state-of-the-art results across various benchmark metrics on the ISIC 2018 melanoma segmentation and cell nuclei segmentation datasets, with fewer parameters and FLOPs.
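The abstract does not give the exact design of the group attention block, so the following is only a minimal sketch of the general idea it describes: grouped convolutions combined with a channel attention gate (here assumed to be squeeze-and-excitation style) inside a residual unit. The class name, layer sizes, `groups` and `reduction` values are hypothetical and not taken from the paper.

```python
# Sketch only: grouped convolutions + channel attention in a residual unit.
# Not the authors' exact FocusNet++ block; see the paper for the real design.
import torch
import torch.nn as nn

class GroupAttentionBlock(nn.Module):
    def __init__(self, channels: int, groups: int = 8, reduction: int = 16):
        super().__init__()
        # Grouped 3x3 convolutions (aggregated transformations).
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        # Channel attention: global pooling -> bottleneck -> sigmoid gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out * self.attn(out)   # re-weight channels by learned attention
        return self.relu(out + x)    # residual connection


if __name__ == "__main__":
    block = GroupAttentionBlock(channels=64)
    y = block(torch.randn(1, 64, 128, 128))
    print(y.shape)  # torch.Size([1, 64, 128, 128])
```

The pairing of grouped convolutions with a lightweight attention gate is what keeps the parameter and FLOP counts low relative to a plain residual block of the same width, which is the efficiency argument made in the abstract.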

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Murray-Smith, Professor Roderick and Dai, Dr Hang and Kaul, Dr Chaitanya
Authors: Kaul, C., Pears, N., Dai, H., Murray-Smith, R., and Manandhar, S.
College/School: College of Science and Engineering > School of Computing Science
ISSN: 1945-8452
ISBN: 9781665412469
Published Online: 25 May 2021
Copyright Holders: Copyright © 2021 IEEE
Publisher Policy: Reproduced in accordance with the publisher copyright policy