GenKIE: Robust Generative Multimodal Document Key Information Extraction

Cao, P., Wang, Y., Zhang, Q. and Meng, Z. (2023) GenKIE: Robust Generative Multimodal Document Key Information Extraction. In: 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Singapore, 06-10 Dec 2023, pp. 14702-14713. (doi: 10.18653/v1/2023.findings-emnlp.979)

307887.pdf - Published Version (1MB)
Available under a Creative Commons Attribution License.

Abstract

Key information extraction (KIE) from scanned documents has gained increasing attention because of its applications in various domains. Although promising results have been achieved by some recent KIE approaches, they are usually built on discriminative models, which lack the ability to handle optical character recognition (OCR) errors and require laborious token-level labeling. In this paper, we propose a novel generative end-to-end model, named GenKIE, to address the KIE task. GenKIE is a sequence-to-sequence multimodal generative model that utilizes multimodal encoders to embed visual, layout and textual features and a decoder to generate the desired output. Well-designed prompts are leveraged to incorporate the label semantics as weakly supervised signals and to entice the generation of the key information. One notable advantage of the generative model is that it enables automatic correction of OCR errors. Moreover, token-level granular annotation is not required. Extensive experiments on multiple public real-world datasets show that GenKIE effectively generalizes over different types of documents and achieves state-of-the-art results. Our experiments also validate the model's robustness against OCR errors, making GenKIE highly applicable in real-world scenarios.
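To make the architecture described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the general idea: textual, layout, and visual features are embedded and fused by an encoder, and a decoder autoregressively generates the entity value, guided by a prompt carrying the label semantics. All module names, dimensions, and the fusion-by-summation choice here are illustrative assumptions, not GenKIE's actual implementation.

```python
# Hypothetical sketch of prompt-based generative KIE as described in the
# abstract. This is NOT GenKIE's code; it only illustrates the shape of the
# approach: multimodal encoder -> autoregressive decoder -> entity string.
import torch
import torch.nn as nn

class MultimodalKIESketch(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # OCR tokens + prompt tokens
        self.layout_proj = nn.Linear(4, d_model)             # per-token bounding box (x0, y0, x1, y1)
        self.visual_proj = nn.Linear(2048, d_model)          # pooled image-region features (assumed dim)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)        # next-token logits for generation

    def forward(self, token_ids, boxes, visual_feats, decoder_ids):
        # Fuse text and layout by summing their projections (one simple
        # choice; the paper may fuse modalities differently), then prepend
        # the projected visual features to the encoder sequence.
        src = self.token_emb(token_ids) + self.layout_proj(boxes)
        src = torch.cat([self.visual_proj(visual_feats), src], dim=1)
        tgt = self.token_emb(decoder_ids)
        # Causal mask so the decoder only attends to earlier output tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.lm_head(hidden)

# Toy usage with random inputs. In practice a prompt such as "company is?"
# would be tokenized into `token_ids` alongside the OCR text; because the
# decoder *generates* the value rather than tagging input tokens, it can emit
# a corrected string even when the OCR input contains errors.
model = MultimodalKIESketch()
token_ids = torch.randint(0, 30522, (1, 16))   # OCR + prompt tokens
boxes = torch.rand(1, 16, 4)                   # normalized layout boxes per token
visual_feats = torch.rand(1, 4, 2048)          # a few image-region features
decoder_ids = torch.randint(0, 30522, (1, 8))  # shifted targets (teacher forcing)
logits = model(token_ids, boxes, visual_feats, decoder_ids)
print(logits.shape)  # torch.Size([1, 8, 30522])
```

This generative formulation is also why token-level labels are unnecessary: training only needs (document, prompt, answer-string) triples, not a BIO tag for every OCR token.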

Item Type:Conference Proceedings
Status:Published
Refereed:Yes
Glasgow Author(s) Enlighten ID:Meng, Dr Zaiqiao
Authors: Cao, P., Wang, Y., Zhang, Q., and Meng, Z.
College/School:College of Science and Engineering > School of Computing Science
Copyright Holders:Copyright © 2023 Association for Computational Linguistics
First Published:First published in Findings of the Association for Computational Linguistics: EMNLP 2023
Publisher Policy:Reproduced under a Creative Commons license
