Mini-crowdsourcing End-user Assessment of Intelligent Assistants: a Cost-benefit Study

Shinsel, A., Kulesza, T., Burnett, M., Curran, W., Groce, A., Stumpf, S. and Wong, W.-K. (2011) Mini-crowdsourcing End-user Assessment of Intelligent Assistants: a Cost-benefit Study. In: 2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Pittsburgh, PA, USA, 18-22 Sep 2011, pp. 47-54. ISBN 9781457712470 (doi: 10.1109/VLHCC.2011.6070377)


Abstract

Intelligent assistants sometimes handle tasks too important to be trusted implicitly. End users can establish trust via systematic assessment, but such assessment is costly. This paper investigates whether, when, and how bringing a small crowd of end users to bear on the assessment of an intelligent assistant is useful from a cost/benefit perspective. Our results show that a mini-crowd of testers supplied many more benefits than the obvious decrease in workload, but these benefits did not scale linearly as mini-crowd size increased: there was a point of diminishing returns beyond which the cost-benefit ratio became less attractive.

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Stumpf, Dr Simone
Authors: Shinsel, A., Kulesza, T., Burnett, M., Curran, W., Groce, A., Stumpf, S., and Wong, W.-K.
College/School: College of Science and Engineering > School of Computing Science
ISSN: 1943-6092
ISBN: 9781457712470
Published Online: 10 November 2011
