The Fully Convolutional Transformer for Medical Image Segmentation

Tragakis, A., Kaul, C., Murray-Smith, R. and Husmeier, D. (2023) The Fully Convolutional Transformer for Medical Image Segmentation. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV2023), Waikoloa, HI, USA, 03-07 Jan 2023, pp. 1022-1031. ISBN 9781665493468 (doi: 10.1109/WACV56688.2023.00365)

282738.pdf - Accepted Version
Available under License Creative Commons Attribution.

Abstract

We propose a novel transformer capable of segmenting medical images of varying modalities. The challenges posed by the fine-grained nature of medical image analysis mean that the adaptation of the transformer for this task is still at a nascent stage. The overwhelming success of the UNet lies in its ability to appreciate the fine-grained nature of the segmentation task, an ability which existing transformer-based models do not currently possess. To address this shortcoming, we propose the Fully Convolutional Transformer (FCT), which builds on the proven ability of Convolutional Neural Networks to learn effective image representations and combines them with the ability of Transformers to effectively capture long-term dependencies in their inputs. The FCT is the first fully convolutional Transformer model in the medical imaging literature. It processes its input in two stages: first, it learns to extract long-range semantic dependencies from the input image, and then it learns to capture hierarchical global attributes from the features. FCT is compact, accurate and robust. Our results show that it outperforms all existing transformer architectures by large margins across multiple medical image segmentation datasets of varying modalities, without the need for any pre-training. FCT outperforms its immediate competitor on the ACDC dataset by 1.3%, on the Synapse dataset by 4.4%, on the Spleen dataset by 1.2% and on the ISIC 2017 dataset by 1.1% on the Dice metric, with up to five times fewer parameters. On the ACDC post-2017-MICCAI-Challenge online test set, our model sets a new state of the art on unseen MRI test cases, outperforming large ensemble models as well as nnUNet with considerably fewer parameters. Our code, environments and models will be available via GitHub.
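The abstract's core idea, computing self-attention on convolutionally projected features rather than on linear projections of raw patches, can be illustrated with a toy, pure-Python sketch. Everything below (function names, the smoothing kernel, the single-channel 1-D setting) is illustrative only and is not taken from the paper's released code, which uses learned depthwise convolutions over 2-D feature maps.

```python
import math

def depthwise_conv1d(x, kernel):
    """Same-padded depthwise 1-D convolution over a sequence of scalars."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

def softmax(row):
    """Numerically stable softmax over a list of scores."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def conv_attention(x, kernel=(0.25, 0.5, 0.25)):
    """Toy convolutional self-attention on a 1-D feature sequence.

    Queries, keys and values are all produced by the same fixed
    depthwise convolution -- a simplification of the learned,
    separate convolutional projections described in the abstract.
    """
    q = depthwise_conv1d(x, list(kernel))
    k = depthwise_conv1d(x, list(kernel))
    v = depthwise_conv1d(x, list(kernel))
    out = []
    for i in range(len(x)):
        # Attention weights from locally aggregated (convolved) features.
        weights = softmax([q[i] * k[j] for j in range(len(x))])
        out.append(sum(w * v[j] for j, w in enumerate(weights)))
    return out

features = [0.0, 1.0, 0.0, 0.0, 2.0, 0.0]
attended = conv_attention(features)
print([round(t, 3) for t in attended])
```

The convolution step mixes neighbouring positions before attention is computed, which is one way to read the abstract's "extract long-range semantic dependencies ... then capture hierarchical global attributes": local aggregation first, global mixing second.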

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Murray-Smith, Professor Roderick; Husmeier, Professor Dirk; Tragakis, Athanasios; Kaul, Dr Chaitanya
Authors: Tragakis, A., Kaul, C., Murray-Smith, R., and Husmeier, D.
College/School: College of Science and Engineering > School of Computing Science
College of Science and Engineering > School of Mathematics and Statistics > Statistics
ISSN: 2642-9381
ISBN: 9781665493468
Copyright Holders: Copyright © 2023 IEEE
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher


Project Code | Award No | Project Name | Principal Investigator | Funder's Name | Funder Ref | Lead Dept
190841 | | UK Quantum Technology Hub in Enhanced Quantum Imaging | Miles Padgett | Engineering and Physical Sciences Research Council (EPSRC) | EP/M01326X/1 | P&S - Physics & Astronomy
300982 | | Exploiting Closed-Loop Aspects in Computationally and Data Intensive Analytics | Roderick Murray-Smith | Engineering and Physical Sciences Research Council (EPSRC) | EP/R018634/1 | Computing Science
308255 | | The SofTMech Statistical Emulation and Translation Hub | Dirk Husmeier | Engineering and Physical Sciences Research Council (EPSRC) | EP/T017899/1 | M&S - Statistics