U-BERT: Pre-training User Representations for Improved Recommendation

Thirty-Fifth AAAI Conference on Artificial Intelligence, Thirty-Third Conference on Innovative Applications of Artificial Intelligence and the Eleventh Symposium on Educational Advances in Artificial Intelligence (2021)

Abstract
Learning user representations is a critical task for recommendation systems, as these representations encode user preferences for personalized services. User representations are generally learned from behavior data, such as clicking interactions and review comments. However, for less popular domains, the behavior data is insufficient to learn precise user representations. A natural way to deal with this problem is to leverage content-rich domains to complement the user representations. Inspired by the recent success of BERT in NLP, we propose U-BERT, a novel pre-training and fine-tuning based approach. Unlike typical BERT applications, U-BERT is customized for recommendation and uses different frameworks in pre-training and fine-tuning. In pre-training, U-BERT focuses on content-rich domains and introduces a user encoder and a review encoder to model users' behaviors; two pre-training strategies are proposed to learn general user representations. In fine-tuning, U-BERT focuses on the target content-insufficient domains. In addition to the user and review encoders inherited from the pre-training stage, U-BERT introduces an item encoder to model item representations. Moreover, a review co-matching layer is proposed to capture richer semantic interactions between the reviews of the user and those of the item. Finally, U-BERT combines the user representations, item representations and review interaction information to improve recommendation performance. Experiments on six benchmark datasets from different domains demonstrate the state-of-the-art performance of U-BERT.
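The review co-matching idea described above can be sketched as a simple cross-attention between user-review and item-review embeddings. The following is a minimal illustrative sketch, not the paper's actual layer: the function name, the use of raw dot-product affinities, and the dimensions are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def review_co_matching(user_reviews, item_reviews):
    """Hypothetical co-matching between review embeddings.

    user_reviews: (m, d) array of m user-review embeddings.
    item_reviews: (n, d) array of n item-review embeddings.
    Returns, for each side, features matched against the other side.
    """
    # Pairwise affinity between every user review and every item review.
    affinity = user_reviews @ item_reviews.T                  # (m, n)
    # Each user review attends over the item's reviews, and vice versa.
    user_matched = softmax(affinity, axis=1) @ item_reviews   # (m, d)
    item_matched = softmax(affinity, axis=0).T @ user_reviews  # (n, d)
    return user_matched, item_matched

rng = np.random.default_rng(0)
u = rng.normal(size=(3, 8))   # 3 user reviews, 8-dim embeddings
v = rng.normal(size=(5, 8))   # 5 item reviews, 8-dim embeddings
um, im = review_co_matching(u, v)
print(um.shape, im.shape)  # (3, 8) (5, 8)
```

The matched features on each side could then be concatenated with the user and item representations before the final rating prediction, as the abstract's last step suggests.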