Automatic detection of tooth-gingiva trim lines on dental surfaces

Chen, G., Qin, J., Amor, B. B., Zhou, W., Dai, H., Zhou, T., Huang, H. and Shao, L. (2023) Automatic detection of tooth-gingiva trim lines on dental surfaces. IEEE Transactions on Medical Imaging, (doi: 10.1109/tmi.2023.3263161) (PMID:37015112) (Early Online Publication)

Full text not currently available from Enlighten.


Detecting the tooth-gingiva trim line on a dental surface plays a critical role in dental treatment planning and aligner 3D printing. Existing methods treat this task as a segmentation problem, addressed with geometric deep learning-based mesh segmentation techniques. However, these methods provide only indirect results (i.e., segmented teeth) and suffer from unsatisfactory accuracy because they cannot make full use of high-resolution dental surfaces. To this end, we propose a two-stage geometric deep learning framework for automatically detecting tooth-gingiva trim lines on dental surfaces. Our framework consists of a trim line proposal network (TLP-Net), which predicts an initial trim line from the low-resolution dental surface, and a trim line refinement network (TLR-Net), which refines the initial trim line using information from the high-resolution dental surface. Specifically, TLP-Net predicts the initial trim line by fusing multi-scale features from a U-Net with a proposed residual multi-scale attention fusion module. Moreover, we propose feature bridge modules and a trim line loss to further improve accuracy. The resulting trim line is then fed to TLR-Net, a deep learning-based LDDMM (large deformation diffeomorphic metric mapping) model that takes the high-resolution dental surface as input. In addition, dense connections are incorporated into TLR-Net for improved performance. Our framework provides an automatic solution to trim line detection by making full use of raw high-resolution dental surfaces. Extensive experiments on a clinical dental surface dataset demonstrate that TLP-Net and TLR-Net outperform cutting-edge trim line detection methods in both qualitative and quantitative evaluations.
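The coarse-to-fine idea behind the two-stage pipeline can be illustrated with a toy sketch (hypothetical code, not the paper's implementation): an initial trim line predicted on a decimated surface is refined against the raw high-resolution point set. Here the refinement is reduced to snapping each coarse vertex to its nearest high-resolution point; the paper's TLR-Net instead learns a diffeomorphic (LDDMM-style) deformation.

```python
import math

def refine_trim_line(coarse_line, high_res_points):
    """Toy refinement stage: snap each coarse trim-line vertex to its
    nearest point on the high-resolution surface. This is a stand-in
    for the learned refinement in TLR-Net, not the actual method."""
    refined = []
    for vertex in coarse_line:
        nearest = min(high_res_points, key=lambda p: math.dist(vertex, p))
        refined.append(nearest)
    return refined

# Toy data: a coarse line predicted on a decimated surface, and a denser
# point set standing in for the raw high-resolution scan.
coarse = [(0.0, 0.0, 0.1), (1.0, 0.0, -0.1)]
dense = [(0.0, 0.05, 0.0), (0.5, 0.0, 0.0), (1.0, -0.05, 0.0)]
print(refine_trim_line(coarse, dense))
# -> [(0.0, 0.05, 0.0), (1.0, -0.05, 0.0)]
```

The point of the two-stage split is that the expensive, detail-sensitive step only ever sees the high-resolution data near the proposed line, rather than segmenting the full mesh at full resolution.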

Item Type:Articles
Additional Information:This work was done when G. Chen, J. Qin, H. Dai, T. Zhou, and L. Shao were with Inception Institute of Artificial Intelligence. It was supported in part by the National Natural Science Foundation of China (No. 62201465, 62276129, and 62172228) and the Fundamental Research Funds for the Central Universities (No. D5000220213).
Status:Early Online Publication
Glasgow Author(s) Enlighten ID:Dai, Dr Hang
Authors: Chen, G., Qin, J., Amor, B. B., Zhou, W., Dai, H., Zhou, T., Huang, H., and Shao, L.
College/School:College of Science and Engineering > School of Computing Science
Journal Name:IEEE Transactions on Medical Imaging
ISSN (Online):1558-254X
Published Online:29 March 2023
