A Systematic Review and Replicability Study of BERT4Rec for Sequential Recommendation

Petrov, A. and Macdonald, C. (2022) A Systematic Review and Replicability Study of BERT4Rec for Sequential Recommendation. In: ACM Conference on Recommender Systems (RecSys 2022), Seattle, USA, 18-23 Sep 2022, pp. 436-447. ISBN 9781450392785 (doi: 10.1145/3523227.3548487)


Abstract

BERT4Rec is an effective model for sequential recommendation based on the Transformer architecture. In the original publication, BERT4Rec claimed superiority over other available sequential recommendation approaches (e.g. SASRec), and it is now frequently used as a state-of-the-art baseline for sequential recommendation. However, not all subsequent publications have confirmed this superiority, and some have proposed other models that were shown to outperform BERT4Rec in effectiveness. In this paper, we systematically review all publications that compare BERT4Rec with another popular Transformer-based model, namely SASRec, and show that BERT4Rec results are not consistent across these publications. To understand the reasons behind this inconsistency, we analyse the available implementations of BERT4Rec and show that we fail to reproduce the results of the original BERT4Rec publication when using their default configuration parameters. However, we are able to replicate the reported results with the original code when training for much longer (up to 30x) than the default configuration specifies. We also propose our own implementation of BERT4Rec based on the HuggingFace Transformers library, which we demonstrate replicates the originally reported results on 3 out of 4 datasets, while requiring up to 95% less training time to converge. Overall, from our systematic review and detailed experiments, we conclude that BERT4Rec does indeed exhibit state-of-the-art effectiveness for sequential recommendation, but only when trained for a sufficient amount of time. Additionally, we show that our implementation can further benefit from adapting other Transformer architectures that are available in the HuggingFace Transformers library (e.g. using disentangled attention, as provided by DeBERTa, or a larger hidden layer size, as in ALBERT). For example, on the MovieLens-1M dataset, we demonstrate that both of these models can improve BERT4Rec performance by up to 9%. Moreover, we show that an ALBERT-based BERT4Rec model achieves better performance on that dataset than the state-of-the-art results reported in the most recent publications.
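To illustrate the general idea behind a HuggingFace-based BERT4Rec, the sketch below (not the authors' code) treats item IDs as vocabulary tokens and trains a standard masked language model with a Cloze-style masked-item objective. The item count, layer sizes and sequence length are illustrative assumptions rather than the configurations used in the paper; swapping BertConfig/BertForMaskedLM for the DeBERTa or ALBERT equivalents in the same library corresponds to the architecture variations discussed in the abstract.

```python
# Minimal sketch of a BERT4Rec-style model on top of HuggingFace Transformers.
# All hyperparameter values below are assumptions for illustration only.
import torch
from transformers import BertConfig, BertForMaskedLM

NUM_ITEMS = 3416                      # assumed item-catalogue size (e.g. MovieLens-1M)
PAD_ID, MASK_ID = 0, NUM_ITEMS + 1    # reserve IDs for [PAD] and [MASK]

config = BertConfig(
    vocab_size=NUM_ITEMS + 2,         # items + [PAD] + [MASK]
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    max_position_embeddings=200,      # maximum interaction-sequence length
)
model = BertForMaskedLM(config)

# A toy batch: one user's item sequence with the last position masked.
input_ids = torch.tensor([[12, 7, 305, MASK_ID]])
labels = torch.full_like(input_ids, -100)   # -100 = ignored by the loss
labels[0, -1] = 42                          # ground-truth item at the masked position

out = model(input_ids=input_ids, labels=labels)
out.loss.backward()                         # gradients for one training step
```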

Item Type: Conference Proceedings
Status: Published
Refereed: Yes
Glasgow Author(s) Enlighten ID: Macdonald, Professor Craig and Petrov, Aleksandr
Authors: Petrov, A., and Macdonald, C.
College/School: College of Science and Engineering > School of Computing Science
ISBN: 9781450392785
Copyright Holders: Copyright © 2022 Association for Computing Machinery
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher