RSS: Effective and Efficient Training for Sequential Recommendation using Recency Sampling

Petrov, A. and Macdonald, C. (2023) RSS: Effective and Efficient Training for Sequential Recommendation using Recency Sampling. ACM Transactions on Recommender Systems, (doi: 10.1145/3604436) (Early Online Publication)


Abstract

Many modern sequential recommender systems use deep neural networks, which can effectively estimate the relevance of items but require a lot of time to train. Slow training increases the cost of training, lengthens product development timescales, and prevents the model from being regularly updated to adapt to changing user preferences. Training such sequential models involves appropriately sampling past user interactions to create a realistic training objective. Existing training objectives have limitations. For instance, next-item prediction never uses the beginning of the sequence as a learning target, thereby potentially discarding valuable data. On the other hand, the item masking used by the state-of-the-art BERT4Rec recommender model is only weakly related to the goal of sequential recommendation; therefore, it requires much more time to obtain an effective model. Hence, we propose a novel Recency-based Sampling of Sequences (RSS) training objective (parameterized by a choice of recency importance function) that addresses both limitations. We apply our method to various recent and state-of-the-art model architectures – such as GRU4Rec, Caser, and SASRec – and show that models enhanced with our method can achieve performance exceeding, or very close to, that of the effective BERT4Rec, but with much less training time. For example, on the MovieLens-20M dataset, RSS applied to the SASRec model results in a 60% improvement in NDCG over vanilla SASRec and a 16% improvement over a fully-trained BERT4Rec model, despite taking 93% less training time than BERT4Rec. We also experiment with two families of recency importance functions and show that they perform similarly. We further empirically demonstrate that RSS-enhanced SASRec successfully learns to distinguish between recent and older interactions – a property that the original SASRec model does not exhibit.
Overall, we show that RSS is a viable (and frequently better) alternative to existing training objectives: it is both effective and efficient for training sequential recommender models when the computational resources for training are limited.
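The core idea described above – sampling training targets from a user's interaction sequence with probability weighted by a recency importance function – can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the `alpha` parameter, and the choice of an exponential importance function are assumptions (the paper parameterizes RSS by a choice of importance function).

```python
import random

def recency_sampled_training_pair(sequence, alpha=0.8, rng=random):
    """Illustrative sketch of Recency-based Sampling of Sequences (RSS).

    Sample one target position with probability proportional to an
    exponential recency importance function, so that recent interactions
    are more likely (but, unlike next-item prediction, not guaranteed)
    to serve as the learning target. The items preceding the sampled
    position form the model input.
    """
    n = len(sequence)
    # Exponential importance: position k gets weight alpha^(n - 1 - k),
    # so the most recent position has weight 1.0 and older positions decay.
    weights = [alpha ** (n - 1 - k) for k in range(n)]
    # The target needs at least one preceding item, so skip position 0.
    target_idx = rng.choices(range(1, n), weights=weights[1:], k=1)[0]
    return sequence[:target_idx], sequence[target_idx]

# Example usage: draw one (input, target) pair from an interaction sequence.
inputs, target = recency_sampled_training_pair([10, 20, 30, 40, 50])
```

Because every position after the first can be drawn as the target, the beginning of the sequence also contributes learning targets, while the decaying weights keep the objective aligned with predicting recent behaviour.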

Item Type: Articles
Status: Early Online Publication
Refereed: Yes
Glasgow Author(s) Enlighten ID: Macdonald, Professor Craig and Petrov, Aleksandr
Authors: Petrov, A., and Macdonald, C.
College/School: College of Science and Engineering > School of Computing Science
Research Centre: College of Science and Engineering > School of Computing Science > IDA Section > GPU Cluster
Journal Name: ACM Transactions on Recommender Systems
Publisher: Association for Computing Machinery
ISSN: 2770-6699
Published Online: 23 June 2023
Copyright Holders: Copyright © 2023 The Authors
First Published: First published in ACM Transactions on Recommender Systems 2023
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher
