Examining the Coherence of the Top Ranked Tweet Topics

Fang, A., Macdonald, C., Ounis, I. and Habel, P. (2016) Examining the Coherence of the Top Ranked Tweet Topics. In: SIGIR 2016, Pisa, Italy, 17-21 July 2016, ISBN 9781450340694 (doi:10.1145/2911451.2914731)


Abstract

Topic modelling approaches help scholars to examine the topics discussed in a corpus. Due to the popularity of Twitter, two distinct methods have been proposed to accommodate the brevity of tweets: the tweet pooling method and Twitter LDA. Both methods produce more interpretable topics than standard Latent Dirichlet Allocation (LDA) when applied to tweets. However, while various metrics have been proposed to estimate the coherence of the topics generated from tweets, the coherence of the top ranked topics, those that are most likely to be examined by users, has not been investigated. In addition, the effect of the number of generated topics K on the topic coherence scores has not been studied. In this paper, we conduct large-scale experiments using three topic modelling approaches over two Twitter datasets, and apply a state-of-the-art coherence metric to study the coherence of the top ranked topics and how K affects such coherence. Inspired by ranking metrics such as precision at n, we use coherence at n to assess the coherence of a topic model. To verify our results, we conduct a pairwise user study to obtain human preferences over topics. Our findings are threefold: we find evidence that Twitter LDA outperforms both LDA and the tweet pooling method because the top ranked topics it generates have more coherence; we demonstrate that a larger number of topics (K) helps to generate topics with more coherence; and finally, we show that coherence at n is more effective when evaluating the coherence of a topic model than the average coherence score.
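The coherence-at-n idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the per-topic coherence scores have already been computed by some coherence metric and that the topics are supplied in rank order (highest-ranked topic first), by analogy with precision at n.

```python
def coherence_at_n(topic_coherences, n):
    """Average coherence of the top-n ranked topics.

    topic_coherences: coherence scores in topic-rank order
        (highest-ranked topic first), as produced by any
        coherence metric of choice.
    n: rank cutoff, analogous to the n in precision at n.
    """
    if n <= 0 or not topic_coherences:
        raise ValueError("need n >= 1 and a non-empty score list")
    top = topic_coherences[:n]  # truncate the ranking at depth n
    return sum(top) / len(top)
```

Unlike the average coherence over all K topics, this score depends only on the topics a user would actually inspect, which is the abstract's motivation for preferring it when comparing topic models.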

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Habel, Dr Philip and Ounis, Professor Iadh and Macdonald, Dr Craig
Authors: Fang, A., Macdonald, C., Ounis, I., and Habel, P.
College/School: College of Science and Engineering > School of Computing Science
College of Social Sciences > School of Social and Political Sciences > Politics
ISBN: 9781450340694
Copyright Holders: Copyright © 2016 Association for Computing Machinery
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher