Dental caries detection using a semi-supervised learning approach

Qayyum, A., Tahir, A., Butt, M. A., Luke, A., Abbas, H. T., Qadir, J., Arshad, K., Assaleh, K., Imran, M. A. and Abbasi, Q. H. (2023) Dental caries detection using a semi-supervised learning approach. Scientific Reports, 13, 749. (doi: 10.1038/s41598-023-27808-9) (PMID:36639724)

289320.pdf - Published Version (3MB)
Available under a Creative Commons Attribution License.

Abstract

Early diagnosis of dental caries progression can prevent invasive treatment and enable preventive treatment. In this regard, dental radiography is a widely used tool for capturing dental visuals that are used for the detection and diagnosis of caries. Different deep learning (DL) techniques have been used to automatically analyse dental images for caries detection. However, most of these techniques require large-scale annotated data to train DL models, whereas in clinical settings such medical images are scarcely available and annotations are costly and time-consuming. To this end, we present an efficient self-training-based method for caries detection and segmentation that leverages a small set of labelled images for training the teacher model and a large collection of unlabelled images for training the student model. We also propose to use centroid-cropped images of the caries region and different augmentation techniques for the training of self-supervised models, which provide computational and performance gains as compared to fully supervised learning and standard self-supervised learning methods. We present a fully labelled dental radiographic dataset of 141 images that is used for the evaluation of baseline and proposed models. Our proposed self-supervised learning strategy has provided performance improvements of approximately 6% and 3% in terms of average pixel accuracy and mean intersection over union, respectively, as compared to standard self-supervised learning. Data and code will be made available to facilitate future research.
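The teacher-student self-training loop described in the abstract can be sketched in miniature. This is purely illustrative: the toy one-dimensional data and the nearest-centroid "model" are stand-ins invented here, not the paper's actual deep learning architecture or dental radiographs; only the three-step pipeline (train teacher on labelled data, pseudo-label the unlabelled pool, train student on the union) mirrors the abstract.

```python
# Minimal sketch of self-training (teacher-student), assuming toy 1-D data
# and a nearest-centroid classifier in place of the paper's DL models.
import numpy as np

rng = np.random.default_rng(0)

# Small labelled set: class 0 clusters near 0, class 1 near 5.
X_labelled = np.array([0.1, 0.3, 4.8, 5.2])
y_labelled = np.array([0, 0, 1, 1])
# Larger unlabelled pool drawn from the same two clusters.
X_unlabelled = rng.normal(loc=[0.0, 0.0, 5.0, 5.0, 5.0], scale=0.3)

def fit_centroids(X, y):
    """'Train' a model: one centroid per class (toy stand-in for a CNN)."""
    return {c: X[y == c].mean() for c in np.unique(y)}

def predict(model, X):
    """Assign each point to the class of its nearest centroid."""
    classes = np.array(sorted(model))
    dists = np.abs(X[:, None] - np.array([model[c] for c in classes]))
    return classes[dists.argmin(axis=1)]

# 1) Train the teacher on the small labelled set.
teacher = fit_centroids(X_labelled, y_labelled)

# 2) Teacher pseudo-labels the unlabelled pool.
pseudo_labels = predict(teacher, X_unlabelled)

# 3) Train the student on labelled + pseudo-labelled data combined.
X_all = np.concatenate([X_labelled, X_unlabelled])
y_all = np.concatenate([y_labelled, pseudo_labels])
student = fit_centroids(X_all, y_all)
```

In the paper's setting, steps 1 and 3 would be full segmentation-model training runs (with centroid cropping and augmentation applied to the inputs); the sketch only shows how the labelled and pseudo-labelled sets flow between teacher and student.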

Item Type:Articles
Additional Information:This work is supported in parts by EPSRC grant no: EP/T021063/1 and Ajman University Internal Research Grants No. [2021-IRGDEN-7 and RTG-2022-DEN-01].
Status:Published
Refereed:Yes
Glasgow Author(s) Enlighten ID:Abbas, Dr Hasan and Abbasi, Professor Qammer and Tahir, Dr Ahsen and Imran, Professor Muhammad and Qayyum, Adnan
Authors: Qayyum, A., Tahir, A., Butt, M. A., Luke, A., Abbas, H. T., Qadir, J., Arshad, K., Assaleh, K., Imran, M. A., and Abbasi, Q. H.
College/School:College of Science and Engineering > School of Engineering > Autonomous Systems and Connectivity
College of Science and Engineering > School of Engineering > Electronics and Nanoscale Engineering
Journal Name:Scientific Reports
Publisher:Nature Research
ISSN:2045-2322
ISSN (Online):2045-2322
Published Online:13 January 2023
Copyright Holders:Copyright © 2023 The Authors
First Published:First published in Scientific Reports 13(1):749
Publisher Policy:Reproduced under a Creative Commons License


Project Code:307826
Project Name:COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids
Principal Investigator:Qammer H Abbasi
Funder's Name:Engineering and Physical Sciences Research Council (EPSRC)
Funder Ref:EP/T021063/1
Lead Dept:ENG - Systems Power & Energy