Evaluating epistemic uncertainty under incomplete assessments

Baillie, M., Azzopardi, L. and Ruthven, I. (2008) Evaluating epistemic uncertainty under incomplete assessments. Information Processing and Management, 44(2), pp. 811-837. (doi: 10.1016/j.ipm.2007.04.002)

Full text not currently available from Enlighten.

Abstract

This study proposes an extended methodology for laboratory-based Information Retrieval evaluation under incomplete relevance assessments. The new methodology aims to identify potential uncertainty during system comparison that may result from incompleteness. Adopting this methodology is advantageous because detecting epistemic uncertainty (the amount of knowledge, or ignorance, we have about the estimate of a system's performance) during the evaluation process can guide and direct researchers when evaluating new systems over existing and future test collections. Across a series of experiments we demonstrate how this methodology can lead to a finer-grained analysis of systems. In particular, we show through experimentation how the current practice in Information Retrieval evaluation of using a measurement depth larger than the pooling depth increases uncertainty during system comparison.
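To illustrate the pooling-depth issue the abstract raises, the following is a minimal sketch (not taken from the paper) of how incomplete judgments translate into an interval of possible scores. It assumes a simple Precision@k measure and hypothetical document identifiers: documents ranked below the pool depth may be unjudged, so the true score lies between a pessimistic bound (unjudged treated as non-relevant) and an optimistic bound (unjudged treated as relevant), and a wider interval signals greater epistemic uncertainty.

```python
# Hypothetical sketch: bounding Precision@k when relevance judgments come from
# a pool built only to depth d < k. Unjudged documents create epistemic
# uncertainty about the measured score.

def precision_bounds(ranked_docs, judgments, k):
    """Return (lower, upper) bounds on Precision@k given partial judgments.

    ranked_docs: list of doc ids in rank order for one topic.
    judgments: dict mapping doc id -> 1 (relevant) or 0 (non-relevant);
               docs missing from the dict are unjudged.
    """
    top_k = ranked_docs[:k]
    judged_relevant = sum(1 for d in top_k if judgments.get(d) == 1)
    unjudged = sum(1 for d in top_k if d not in judgments)
    lower = judged_relevant / k                # unjudged assumed non-relevant
    upper = (judged_relevant + unjudged) / k   # unjudged assumed relevant
    return lower, upper


if __name__ == "__main__":
    # Pool built to depth 3, but the measure is taken at depth 5,
    # so ranks 4-5 contain unjudged documents.
    run = ["d1", "d7", "d3", "d9", "d5"]
    qrels = {"d1": 1, "d7": 0, "d3": 1}        # only pooled documents were judged
    lo, hi = precision_bounds(run, qrels, k=5)
    print(f"P@5 lies in [{lo:.2f}, {hi:.2f}]")  # wider interval = more uncertainty
```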

Item Type: Articles
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Azzopardi, Dr Leif
Authors: Baillie, M., Azzopardi, L., and Ruthven, I.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
Journal Name: Information Processing and Management
ISSN: 0306-4573
ISSN (Online): 1873-5371
Published Online: 25 May 2007
