Personalised multi-modal interactive recommendation with hierarchical state representations

Wu, Y., Macdonald, C. and Ounis, I. (2024) Personalised multi-modal interactive recommendation with hierarchical state representations. ACM Transactions on Recommender Systems, (doi: 10.1145/3651169) (Early Online Publication)

320766.pdf - Accepted Version (2MB)
Available under License Creative Commons Attribution.

Abstract

Multi-modal interactive recommender systems (MMIRS) can effectively guide users towards their desired items through multi-turn interactions by leveraging the users’ real-time feedback (in the form of natural-language critiques) on previously recommended items (such as images of fashion products). In this scenario, the users’ preferences can be expressed both by their past interests from their historical interactions and by their current needs from the real-time interactions. However, it is typically challenging to make satisfactory personalised recommendations across multi-turn interactions due to the difficulty of balancing the users’ past interests and their current needs when generating the users’ state (i.e. current preferences) representations over time. Meanwhile, hierarchical reinforcement learning has been successfully applied in various fields by decomposing a complex task into a hierarchy of more easily addressed subtasks. In this journal article, we propose a novel personalised multi-modal interactive recommendation model (PMMIR) using hierarchical reinforcement learning to more effectively incorporate the users’ preferences from both their past and real-time interactions. In particular, PMMIR decomposes the personalised interactive recommendation process into a sequence of two subtasks with hierarchical state representations: a first subtask, where a history encoder learns the users’ past interests from the hidden states of their history to provide personalised initial recommendations, and a second subtask, where a state tracker estimates the users’ current needs from the real-time estimated states to update the subsequent recommendations. The history encoder and the state tracker are jointly optimised with a single objective by maximising the users’ future satisfaction with the recommendations.
Following previous work, we train and evaluate our PMMIR model using a user simulator that can generate natural-language critiques about the recommendations as a surrogate for real human users. Experiments conducted on two derived fashion datasets from two well-known public datasets demonstrate that our proposed PMMIR model yields significant improvements in comparison to the existing state-of-the-art baseline models.
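The two-subtask decomposition described in the abstract can be illustrated with a minimal sketch. Note that this is an illustrative assumption, not the paper's implementation: the function names (`history_encoder`, `state_tracker`), the mean-pooling encoder, and the convex-blend state update are all placeholders standing in for the learned, jointly optimised components that PMMIR trains with hierarchical reinforcement learning.

```python
import numpy as np

# Hypothetical sketch of PMMIR's hierarchical state representations.
# All architectural choices below are illustrative stand-ins.

def history_encoder(past_item_embs: np.ndarray) -> np.ndarray:
    """Subtask 1: summarise the user's past interests into an initial
    state for personalised initial recommendations (mean pooling here
    stands in for a learned sequence encoder)."""
    return past_item_embs.mean(axis=0)

def state_tracker(state: np.ndarray, critique_emb: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Subtask 2: update the current-preference state with the user's
    embedded natural-language critique on the latest recommendation
    (a convex blend stands in for a learned recurrent update)."""
    return (1.0 - alpha) * state + alpha * critique_emb

rng = np.random.default_rng(0)
past_items = rng.normal(size=(5, 8))   # 5 historical item embeddings, dim 8
state = history_encoder(past_items)    # personalised initial state

for _ in range(3):                     # three interaction turns
    critique = rng.normal(size=8)      # critique embedding (e.g. from a
                                       # user simulator, as in the paper)
    state = state_tracker(state, critique)

print(state.shape)
```

In the actual model, both components would be trained end-to-end with a single reinforcement-learning objective rather than hand-specified as above.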

Item Type: Articles
Additional Information: The authors acknowledge support from EPSRC grant EP/R018634/1 entitled Closed-Loop Data Science for Complex, Computationally- and Data-Intensive Analytics.
Keywords: interactive recommendation, multi-modal, personalisation, reinforcement learning.
Status: Early Online Publication
Refereed: Yes
Glasgow Author(s) Enlighten ID: Wu, Mr Yaxiong and Ounis, Professor Iadh and Macdonald, Professor Craig
Authors: Wu, Y., Macdonald, C., and Ounis, I.
College/School: College of Science and Engineering > School of Computing Science
Research Centre: College of Science and Engineering > School of Computing Science > IDA Section > GPU Cluster
Journal Name: ACM Transactions on Recommender Systems
Publisher: Association for Computing Machinery
ISSN: 2770-6699
ISSN (Online): 2770-6699
Published Online: 04 March 2024
Copyright Holders: Copyright © 2024 The Authors
First Published: First published in ACM Transactions on Recommender Systems 2024
Publisher Policy: Reproduced in accordance with the copyright policy of the publisher


Project Code: 300982
Award No:
Project Name: Exploiting Closed-Loop Aspects in Computationally and Data Intensive Analytics
Principal Investigator: Roderick Murray-Smith
Funder's Name: Engineering and Physical Sciences Research Council (EPSRC)
Funder Ref: EP/R018634/1
Lead Dept: Computing Science