Improve Temporal Awareness of LLMs for Sequential Recommendation
arXiv (2024)
Abstract
Large language models (LLMs) have demonstrated impressive zero-shot abilities
in solving a wide range of general-purpose tasks. However, it is empirically
found that LLMs fall short in recognizing and utilizing temporal information,
leading to poor performance in tasks that require an understanding of sequential
data, such as sequential recommendation. In this paper, we aim to improve
temporal awareness of LLMs by designing a principled prompting framework
inspired by human cognitive processes. Specifically, we propose three prompting
strategies to exploit temporal information within historical interactions for
LLM-based sequential recommendation. In addition, we emulate divergent thinking
by aggregating the LLM ranking results derived from these strategies. Evaluations on
MovieLens-1M and Amazon Review datasets indicate that our proposed method
significantly enhances the zero-shot capabilities of LLMs in sequential
recommendation tasks.
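The abstract describes aggregating ranking results produced by different prompting strategies into a single recommendation list. The paper does not specify the aggregation rule here; as a minimal sketch, assuming a simple Borda count over per-strategy rankings (the function name and example item IDs are hypothetical):

```python
from collections import defaultdict

def aggregate_rankings(rankings):
    """Merge several candidate rankings with a Borda count.

    Each ranking is a list of item IDs ordered from most to least
    relevant. An item at position p in a ranking of length n earns
    n - p points; ties in total score are broken by item ID so the
    result is deterministic.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position  # top-ranked item gets n points
    return sorted(scores, key=lambda item: (-scores[item], item))

# Hypothetical outputs of three prompting strategies over the same
# candidate set, merged into one consensus ranking.
rankings = [
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["A", "C", "B", "D"],
]
print(aggregate_rankings(rankings))  # -> ['A', 'B', 'C', 'D']
```

Other rank-fusion rules (e.g. reciprocal rank fusion) drop in by changing only the scoring line.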