On enhancing the robustness of timeline summarization test collections

McCreadie, R. , Rajput, S., Soboroff, I., Macdonald, C. and Ounis, I. (2019) On enhancing the robustness of timeline summarization test collections. Information Processing and Management, (doi:10.1016/j.ipm.2019.02.006) (Early Online Publication)

180205.pdf - Accepted Version
Restricted to Repository staff only until 25 March 2020.
Available under License Creative Commons Attribution Non-commercial No Derivatives.



Timeline generation systems are a class of algorithms that produce a sequence of time-ordered sentences or text snippets extracted in real-time from high-volume streams of digital documents (e.g. news articles), focusing on retaining relevant and informative content for a particular information need (e.g. a topic or event). These systems have a range of uses, such as producing concise overviews of events for end-users (human or artificial agents). To advance the field of automatic timeline generation, robust and reproducible evaluation methodologies are needed. To this end, several evaluation metrics and labeling methodologies have recently been developed, based on information-nugget and cluster-based ground truth representations, respectively. These methodologies rely on human assessors manually mapping timeline items (e.g. sentences) to an explicit representation of what information a ‘good’ summary should contain. However, while these evaluation methodologies produce reusable ground truth labels, prior works have reported cases where such evaluations fail to accurately estimate the performance of new timeline generation systems due to label incompleteness. In this paper, we first quantify the extent to which existing timeline summarization test collections fail to generalize to new summarization systems, then we propose, evaluate and analyze new automatic solutions to this issue. In particular, using a depooling methodology over 19 systems and across three high-volume datasets, we quantify the degree of system ranking error caused by excluding those systems when labeling. We show that when considering lower-effectiveness systems, the test collections are robust (the likelihood of systems being mis-ranked is low). However, we show that the risk of systems being mis-ranked increases as the effectiveness of systems held out from the pool increases.
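The depooling idea described above can be illustrated with a small synthetic sketch (this is not the paper's code; the systems, items, relevance set, and effectiveness measure below are all invented for illustration). The ground-truth pool is rebuilt without one system's contributions, every system is re-scored against the reduced pool, and the number of system pairs that swap order relative to the full-pool ranking is counted:

```python
# Synthetic depooling sketch: hold one system out of the label pool,
# re-score all systems, and count pairwise rank swaps vs. the baseline.

# Hypothetical data: items each system returned, plus a hidden set of
# items that assessors would label relevant if pooled.
SYSTEMS = {
    "A": ["n1", "n2", "n3", "n4"],
    "B": ["n2", "n5", "n6", "n7"],
    "C": ["n5", "n8", "n9", "n10"],
}
TRULY_RELEVANT = {"n1", "n2", "n3", "n5", "n8"}

def build_pool(contributing):
    """Labels only exist for relevant items that a contributing system returned."""
    pooled = set().union(*(set(SYSTEMS[s]) for s in contributing))
    return pooled & TRULY_RELEVANT

def score(items, pool):
    """Toy effectiveness: fraction of returned items with a positive label."""
    return sum(1 for it in items if it in pool) / len(items)

def ranking(pool):
    return sorted(SYSTEMS, key=lambda s: -score(SYSTEMS[s], pool))

def pair_swaps(r1, r2):
    """Number of system pairs ordered differently in the two rankings."""
    pos = {s: i for i, s in enumerate(r2)}
    return sum(
        1
        for i in range(len(r1))
        for j in range(i + 1, len(r1))
        if pos[r1[i]] > pos[r1[j]]
    )

baseline = ranking(build_pool(SYSTEMS))
for held_out in SYSTEMS:
    depooled = build_pool([s for s in SYSTEMS if s != held_out])
    print(f"held out {held_out}: {pair_swaps(baseline, ranking(depooled))} pair swaps")
```

In this toy setup, only system A, which contributes relevant content no other system finds, gets mis-ranked when its contributions are depooled, mirroring the abstract's observation that higher-effectiveness held-out systems are the ones at risk.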
To reduce the risk of mis-ranking systems, we also propose a range of different automatic ground truth label expansion techniques. Our results show that the proposed expansion techniques can be effective at increasing the robustness of the TREC-TS test collections, as they are able to generate large numbers of missing matches with high accuracy, markedly reducing the number of mis-rankings by up to 50%.
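One way automatic label expansion could work can be sketched as follows (a hypothetical illustration, not the paper's proposed techniques): an unlabeled sentence inherits a nugget label when it is sufficiently similar to a sentence already matched to that nugget. Here similarity is plain token-overlap (Jaccard), and the threshold is an invented parameter:

```python
# Hypothetical label-expansion sketch: propagate an existing nugget label
# to unlabeled sentences that closely resemble an already-labeled one.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def expand_labels(labeled, unlabeled, threshold=0.5):
    """labeled: {sentence: nugget_id}; returns inferred labels for unlabeled."""
    inferred = {}
    for cand in unlabeled:
        # find the most similar already-labeled sentence
        best = max(labeled, key=lambda s: jaccard(cand, s))
        if jaccard(cand, best) >= threshold:
            inferred[cand] = labeled[best]
    return inferred

labeled = {"the earthquake struck the city at dawn": "nugget-1"}
unlabeled = ["the earthquake struck at dawn", "rescue teams arrived later"]
print(expand_labels(labeled, unlabeled))
```

A real expansion technique would need a stronger similarity model than token overlap, but the structure is the same: candidate matches above a confidence threshold are added to the ground truth, filling in the missing labels that cause new systems to be mis-ranked.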

Item Type: Articles
Additional Information: This work was funded as part of the Incident Streams Project within the Public Safety Communications Research Program, by the National Institute of Standards and Technology (U.S.).
Status: Early Online Publication
Glasgow Author(s) Enlighten ID: Mccreadie, Dr Richard and Ounis, Professor Iadh and Macdonald, Dr Craig
Authors: McCreadie, R., Rajput, S., Soboroff, I., Macdonald, C., and Ounis, I.
College/School: College of Science and Engineering > School of Computing Science
Journal Name: Information Processing and Management
ISSN (Online): 1873-5371
Published Online: 25 March 2019
