Multimodal Generation of Radiology Reports using Knowledge-Grounded Extraction of Entities and Relations

Dalla Serra, F., Clackett, W., Wang, C., MacKinnon, H., Deligianni, F., Dalton, J. and O'Neil, A. Q. (2022) Multimodal Generation of Radiology Reports using Knowledge-Grounded Extraction of Entities and Relations. In: 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2022), Online, 20-23 November 2022, pp. 615-624. ISBN 9781955917650

281130.pdf - Published Version (813kB)
Available under License Creative Commons Attribution.

Publisher's URL: https://aclanthology.org/2022.aacl-main.47/

Abstract

Automated reporting has the potential to assist radiologists with the time-consuming procedure of generating text radiology reports. Most existing approaches generate the report directly from the radiology image; however, we observe that the resulting reports exhibit realistic style but lack clinical accuracy. Therefore, we propose a two-step pipeline that subdivides the problem into factual triple extraction followed by free-text report generation. The first step comprises supervised extraction of clinically relevant structured information from the image, expressed as triples of the form (entity1, relation, entity2). In the second step, these triples are input to condition the generation of the radiology report. In particular, we focus our work on Chest X-Ray (CXR) radiology report generation. The proposed framework achieves state-of-the-art results on the MIMIC-CXR dataset according to most of the standard text generation metrics that we employ (BLEU, METEOR, ROUGE) and to clinical accuracy metrics (recall, precision and F1, assessed using the CheXpert labeler), also giving a 23% reduction in the total number of errors and a 29% reduction in critical clinical errors, as assessed by expert human evaluation. In future, this solution can easily integrate more advanced model architectures, to improve both the triple extraction and the report generation, and can be applied to other complex image captioning tasks, such as those found in the medical domain.
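The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the function names (`extract_triples`, `generate_report`) and the stub logic are hypothetical and not taken from the paper's implementation; only the (entity1, relation, entity2) triple format is from the abstract. In the actual work, step 1 is a supervised model over image features and step 2 is a conditional text generation model.

```python
# Hypothetical sketch of the paper's two-step pipeline.
# Real systems would use learned models for both steps; here stubs
# illustrate the data flow: image -> triples -> free-text report.

from typing import Any, List, Tuple

# A factual triple as described in the abstract: (entity1, relation, entity2).
Triple = Tuple[str, str, str]

def extract_triples(image: Any) -> List[Triple]:
    """Step 1 (stub): supervised extraction of clinically relevant
    structured information from the image, expressed as triples."""
    # Placeholder output for illustration; a trained model would
    # predict these from CXR image features.
    return [
        ("opacity", "located_at", "left lower lobe"),
        ("pleural effusion", "suggestive_of", "infection"),
    ]

def generate_report(triples: List[Triple]) -> str:
    """Step 2 (stub): condition free-text report generation on the
    extracted triples. Here the triples are simply linearised; the
    paper conditions a generation model on them instead."""
    linearised = " [SEP] ".join(f"{e1} {rel} {e2}" for e1, rel, e2 in triples)
    return f"Report conditioned on: {linearised}"

report = generate_report(extract_triples(image=None))
print(report)
```

Separating the two steps in this way makes the intermediate triples inspectable, which is what allows the clinical content of a report to be supervised and evaluated independently of its surface style.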

Item Type:Conference Proceedings
Status:Published
Refereed:Yes
Glasgow Author(s) Enlighten ID:Deligianni, Dr Fani and Dalton, Dr Jeff and Dalla Serra, Francesco
Authors: Dalla Serra, F., Clackett, W., Wang, C., MacKinnon, H., Deligianni, F., Dalton, J., and O’Neil, A. Q.
College/School:College of Science and Engineering
College of Science and Engineering > School of Computing Science
ISBN:9781955917650
Copyright Holders:Copyright © 2022 Association for Computational Linguistics
First Published:First published in Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers): 615-624
Publisher Policy:Reproduced under a Creative Commons License
