Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions

Aliannejadi, M., Kiseleva, J., Chuklin, A., Dalton, J. and Burtsev, M. (2021) Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions. In: 2021 Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic, 07-11 Nov 2021, pp. 4473-4484. ISBN 9781955917094

256990.pdf - Published Version (635kB). Available under a Creative Commons Attribution licence.
Publisher's URL: https://aclanthology.org/2021.emnlp-main.367

Abstract

Enabling open-domain dialogue systems to ask clarifying questions when appropriate is an important direction for improving the quality of system responses. Specifically, when a user request is not specific enough for the conversational system to provide an answer right away, it is desirable to ask a clarifying question to increase the chances of retrieving a satisfying answer. To address the problem of ‘asking clarifying questions in open-domain dialogues’, we (1) collect and release a new dataset focused on open-domain single- and multi-turn conversations, (2) benchmark several state-of-the-art neural baselines, and (3) propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues. These contributions provide a suitable foundation for further research.

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Dalton, Dr Jeff
Authors: Aliannejadi, M., Kiseleva, J., Chuklin, A., Dalton, J., and Burtsev, M.
College/School: College of Science and Engineering > School of Computing Science
ISBN: 9781955917094
Published Online: 01 November 2021
Copyright Holders: Copyright © 2021 Association for Computational Linguistics
Publisher Policy: Reproduced under a Creative Commons licence
Related URLs:
