Academic Article

Continuous-Time User Preference Modelling for Temporal Sets Prediction
Document Type
Periodical
Source
IEEE Transactions on Knowledge and Data Engineering, 36(4):1475-1488, Apr. 2024
Subject
Computing and Processing
Behavioral sciences
Predictive models
Collaboration
Training
Task analysis
Sun
Market research
Temporal sets prediction
continuous-time representation learning
user modelling
Language
English
ISSN
1041-4347
1558-2191
2326-3865
Abstract
Given a sequence of sets, where each set has a timestamp and contains an arbitrary number of elements, temporal sets prediction aims to predict the elements in the subsequent set. Previous studies on temporal sets prediction mainly focus on modelling the elements and only implicitly represent each user's preference through the elements he/she has interacted with. However, user preferences often evolve continuously, and this evolving trend cannot be fully captured by such an indirect learning paradigm. To this end, we propose a continuous-time user preference modelling framework for temporal sets prediction, which explicitly models the evolving preference of each user by maintaining a memory bank that stores the states of all users and elements. Specifically, we first construct a universal sequence by arranging all user-set interactions in non-descending temporal order, and then learn from each interaction chronologically. For each interaction, we continuously update the memories of the involved user and elements based on their currently encoded messages and past memories. Moreover, we present a personalized user behavior learning module that discovers user-specific characteristics from each user's historical sequence by aggregating the previously interacted elements from dual perspectives, namely those of the user and of the elements. Finally, we develop a set-batch algorithm that improves model efficiency by creating time-consistent batches in advance, yielding average speedups of 3.5× in training and 3.0× in evaluation. Experiments on four real-world datasets demonstrate the superiority of our approach over state-of-the-art methods under both transductive and inductive settings, and also show the good interpretability of our method.
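
As a rough, hypothetical sketch of the memory mechanism summarized in the abstract (assuming PyTorch; the GRUCell-based updaters, the mean-pooled messages, and names such as PreferenceMemory and step are illustrative assumptions, not the authors' implementation), the memory bank can be maintained and the universal sequence consumed chronologically as follows:

```python
# A minimal, illustrative sketch (assumptions: PyTorch, GRUCell memory updaters,
# mean-pooled messages; gradient flow across the sequence is simplified by
# detaching memory writes). Not the authors' released code.
import torch
import torch.nn as nn


class PreferenceMemory(nn.Module):
    def __init__(self, num_users: int, num_elements: int, dim: int = 64):
        super().__init__()
        # Memory bank holding one state vector per user and per element.
        self.register_buffer("user_mem", torch.zeros(num_users, dim))
        self.register_buffer("elem_mem", torch.zeros(num_elements, dim))
        self.msg_encoder = nn.Linear(2 * dim + 1, dim)  # (user state, element state, time gap) -> message
        self.user_updater = nn.GRUCell(dim, dim)        # fuses new message with past user memory
        self.elem_updater = nn.GRUCell(dim, dim)

    def step(self, user: int, elements: list, delta_t: float):
        """Consume one user-set interaction from the universal chronological sequence."""
        u = self.user_mem[user].unsqueeze(0)            # (1, dim)
        e = self.elem_mem[elements]                     # (|set|, dim)
        dt = torch.full((len(elements), 1), delta_t)
        # Encode one message per (user, element) pair in the interacted set.
        msg = torch.tanh(self.msg_encoder(torch.cat([u.expand(len(elements), -1), e, dt], dim=-1)))
        # Update memories from the currently encoded messages and the past memories.
        new_u = self.user_updater(msg.mean(dim=0, keepdim=True), u)
        new_e = self.elem_updater(msg, e)
        self.user_mem[user] = new_u.squeeze(0).detach()
        self.elem_mem[elements] = new_e.detach()
        return new_u, new_e                             # states usable by a downstream set predictor


# Usage: arrange all user-set interactions in non-descending temporal order first.
events = [(0.0, 2, [5, 9]), (1.5, 0, [3]), (2.0, 2, [1, 5])]  # (timestamp, user, element set)
model = PreferenceMemory(num_users=4, num_elements=10)
last_seen = {}
for t, user, elems in sorted(events, key=lambda x: x[0]):
    model.step(user, elems, t - last_seen.get(user, t))
    last_seen[user] = t
```

The per-event loop is kept only for clarity; a set-batch-style optimization, as described in the abstract, would instead group time-consistent interactions into batches in advance so that each batch can be processed in parallel without violating the chronological order of memory updates.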