CrowdTerrier: automatic crowdsourced relevance assessments with Terrier

McCreadie, R., Macdonald, C. and Ounis, I. (2012) CrowdTerrier: automatic crowdsourced relevance assessments with Terrier. In: SIGIR 2012: 35th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Portland, OR, USA, 12-16 Aug 2012, p. 1005. (doi: 10.1145/2348283.2348430)

Full text not currently available from Enlighten.

In this demo, we present CrowdTerrier, an infrastructure extension to the open source Terrier IR platform that enables the semi-automatic generation of relevance assessments for a variety of document ranking tasks using crowdsourcing. The aim of CrowdTerrier is to reduce the time and expertise required to effectively crowdsource relevance assessments by abstracting away the complexities of the crowdsourcing process. It achieves this by automating the assessment process as much as possible, via a close integration of the IR system that ranks the documents (Terrier) and the crowdsourcing marketplace that is used to assess those documents (Amazon's Mechanical Turk).
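The general workflow the abstract describes — take ranked documents from the IR system, batch them into crowdsourcing tasks, and aggregate worker judgements into relevance assessments — can be sketched as follows. This is a hypothetical illustration only: the function names, batch size, and majority-vote aggregation are assumptions for illustration, not CrowdTerrier's actual API.

```python
from collections import Counter

# Hypothetical sketch of the pipeline CrowdTerrier automates:
#   1) split each topic's top-ranked documents into fixed-size
#      assessment tasks (HITs, in Mechanical Turk terminology),
#   2) collect one relevance label per worker per document,
#   3) aggregate labels by majority vote into TREC-style qrels lines.

def batch_into_hits(ranked_docs, docs_per_hit=5):
    """Split a topic's ranked document IDs into fixed-size tasks."""
    return [ranked_docs[i:i + docs_per_hit]
            for i in range(0, len(ranked_docs), docs_per_hit)]

def majority_label(labels):
    """Pick the most common relevance label among worker judgements."""
    return Counter(labels).most_common(1)[0][0]

def build_qrels(topic_id, worker_labels):
    """worker_labels: {doc_id: [label, ...]} -> TREC qrels lines
    of the form 'topic 0 doc_id relevance'."""
    return [f"{topic_id} 0 {doc_id} {majority_label(labels)}"
            for doc_id, labels in sorted(worker_labels.items())]

if __name__ == "__main__":
    print(batch_into_hits(["d1", "d2", "d3", "d4", "d5", "d6", "d7"], 3))
    print(build_qrels("201", {"d1": [1, 1, 0], "d2": [0, 0, 1]}))
```

In the real system, step 2 would go through the Mechanical Turk API rather than an in-memory dictionary; the sketch only shows the data flow from ranked lists to assessment output.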

Item Type: Conference Proceedings
Additional Information: ISBN: 9781450314725
Glasgow Author(s) Enlighten ID: McCreadie, Dr Richard and Macdonald, Professor Craig and Ounis, Professor Iadh
Authors: McCreadie, R., Macdonald, C., and Ounis, I.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
College/School: College of Science and Engineering > School of Computing Science
