Too Much, Too Little, or Just Right? Ways Explanations Impact End Users' Mental Models

Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I. and Wong, W.-K. (2013) Too Much, Too Little, or Just Right? Ways Explanations Impact End Users' Mental Models. In: 2013 IEEE Symposium on Visual Languages and Human Centric Computing, San Jose, CA, USA, 15-19 Sep 2013, pp. 3-10. ISBN 9781479903696 (doi: 10.1109/VLHCC.2013.6645235)

Full text not currently available from Enlighten.

Abstract

Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, focusing especially on how the soundness and completeness of the explanations impact the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as practiced by many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, reducing the likelihood that users will pay attention to such explanations at all.

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Stumpf, Dr Simone
Authors: Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I. and Wong, W.-K.
College/School: College of Science and Engineering > School of Computing Science
ISSN: 1943-6092
ISBN: 9781479903696
Published Online: 24 October 2013
