Evaluating aggregated search pages

Zhou, K., Cummins, R., Lalmas, M. and Jose, J. M. (2012) Evaluating aggregated search pages. In: SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, Portland, OR, USA, 12-16 Aug 2012, pp. 115-124. (doi: 10.1145/2348283.2348302)

Full text not currently available from Enlighten.

Publisher's URL: http://dx.doi.org/10.1145/2348283.2348302

Abstract

Aggregating search results from a variety of heterogeneous sources or verticals such as news, image and video into a single interface is a popular paradigm in web search. Although various approaches exist for selecting relevant verticals or optimising the aggregated search result page, evaluating the quality of an aggregated page is an open question.

This paper proposes a general framework for evaluating the quality of aggregated search pages. We evaluate our approach by collecting annotated user preferences over a set of aggregated search pages for 56 topics and 12 verticals. We empirically demonstrate the fidelity of metrics instantiated from our proposed framework by showing that they strongly agree with the annotated user preferences of pairs of simulated aggregated pages.

Furthermore, we show that our metrics agree with the majority preference more often than current diversity-based information retrieval metrics. Finally, we demonstrate the flexibility of our framework by showing that personalised historical preference data can be used to improve the performance of our proposed metrics.
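The abstract evaluates page-quality metrics by how often they agree with annotated pairwise user preferences over aggregated pages. The sketch below illustrates that agreement computation in general terms; it is not the authors' code, and all names (PagePair, metric_score, relevant_blocks) are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' implementation): measuring how often a
# page-quality metric agrees with annotated pairwise user preferences.
# All names and the page representation are assumptions for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PagePair:
    page_a: Dict          # one aggregated search result page (representation assumed)
    page_b: Dict          # the alternative aggregated page for the same topic
    preferred: str        # majority user preference: "a" or "b"


def agreement_rate(pairs: List[PagePair],
                   metric_score: Callable[[Dict], float]) -> float:
    """Fraction of annotated pairs on which the metric prefers the same page as users."""
    agreements = 0
    for pair in pairs:
        metric_prefers = "a" if metric_score(pair.page_a) > metric_score(pair.page_b) else "b"
        agreements += int(metric_prefers == pair.preferred)
    return agreements / len(pairs)


if __name__ == "__main__":
    # Toy metric: count relevant vertical blocks on the page (purely illustrative).
    toy_metric = lambda page: page.get("relevant_blocks", 0)
    pairs = [
        PagePair({"relevant_blocks": 3}, {"relevant_blocks": 1}, preferred="a"),
        PagePair({"relevant_blocks": 0}, {"relevant_blocks": 2}, preferred="b"),
    ]
    print(f"agreement = {agreement_rate(pairs, toy_metric):.2f}")  # 1.00
```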

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Jose, Professor Joemon and Zhou, Mr Ke and Lalmas, Professor Mounia and Cummins, Dr Ronan
Authors: Zhou, K., Cummins, R., Lalmas, M., and Jose, J. M.
College/School: College of Science and Engineering > School of Computing Science
Related URLs: