DiffIR: Exploring Differences in Ranking Models' Behavior

Jose, K. M., Nguyen, T., MacAvaney, S., Dalton, J., and Yates, A. (2021) DiffIR: Exploring Differences in Ranking Models' Behavior. In: SIGIR 2021: 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 11-15 July 2021, pp. 2595-2599. (doi: 10.1145/3404835.3462784)



Understanding and comparing the behavior of retrieval models is a fundamental challenge that requires going beyond average effectiveness and per-query metrics, because these measures do not reveal how differences in model behavior affect individual results. DiffIR is a new open-source web tool that assists with qualitative ranking analysis by visually 'diffing' system rankings at the individual-result level for queries where behavior diverges significantly. Using one of several configurable similarity measures, it identifies queries on which the compared models produce substantially different rankings and provides a web interface to examine those rankings side by side. DiffIR additionally supports a model-specific visualization approach based on custom term-importance weight files, which supports studying the behavior of interpretable models, such as neural retrieval methods that score documents from a similarity matrix or from a single document passage. Observations from this tool can complement neural probing approaches like ABNIRML to generate quantitative tests. We provide an illustrative use case of DiffIR by studying the qualitative differences between recently developed neural ranking models on a standard TREC benchmark dataset.
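The abstract does not specify which similarity measures DiffIR implements, but the core idea of flagging queries whose rankings diverge can be sketched as follows. This is an illustrative example, not DiffIR's actual code: it uses top-k Jaccard overlap between two systems' ranked document lists as one plausible configurable measure, and the function and parameter names (`topk_jaccard`, `divergent_queries`, `threshold`) are hypothetical.

```python
def topk_jaccard(run_a, run_b, k=10):
    """Jaccard overlap between the top-k document IDs of two ranked lists."""
    a, b = set(run_a[:k]), set(run_b[:k])
    return len(a & b) / len(a | b) if a | b else 1.0

def divergent_queries(runs_a, runs_b, k=10, threshold=0.5):
    """Return query IDs whose top-k rankings overlap less than `threshold`.

    `runs_a` / `runs_b` map a query ID to a ranked list of document IDs,
    as would be parsed from two systems' TREC-style run files.
    """
    shared = runs_a.keys() & runs_b.keys()
    return sorted(
        qid for qid in shared
        if topk_jaccard(runs_a[qid], runs_b[qid], k) < threshold
    )

# Toy example: the systems agree on q1 but return disjoint results for q2.
runs_a = {"q1": ["d1", "d2", "d3"], "q2": ["d4", "d5", "d6"]}
runs_b = {"q1": ["d1", "d2", "d3"], "q2": ["d9", "d8", "d7"]}
print(divergent_queries(runs_a, runs_b, k=3, threshold=0.5))  # ['q2']
```

A rank-sensitive measure such as rank-biased overlap could be swapped in where top-k Jaccard ignores ordering within the cutoff.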

Item Type: Conference Proceedings
Glasgow Author(s) Enlighten ID: Dalton, Dr Jeff and MacAvaney, Dr Sean
Authors: Jose, K. M., Nguyen, T., MacAvaney, S., Dalton, J., and Yates, A.
College/School: University Services > IT Services > Computing Service
Published Online: 11 July 2021
