Rethinking Large Language Model Architectures for Sequential Recommendations

Hanbing Wang, Xiaorui Liu, Wenqi Fan, Xiangyu Zhao, Venkataramana Kini, Devendra Yadav, Fei Wang, Zhen Wen, Jiliang Tang, Hui Liu

CoRR (2024)

Abstract
Recently, sequential recommendation has been adapted to the LLM paradigm to enjoy the power of LLMs. LLM-based methods usually formulate recommendation information as natural language, and the model is trained to predict the next item in an auto-regressive manner. Despite their notable success, the substantial computational overhead of inference poses a significant obstacle to their real-world applicability. In this work, we endeavor to streamline existing LLM-based recommendation models and propose a simple yet highly effective model, Lite-LLM4Rec. The primary goal of Lite-LLM4Rec is to achieve efficient inference for the sequential recommendation task. Lite-LLM4Rec circumvents beam search decoding by using a straight item projection head to generate ranking scores. This design stems from our empirical observation that beam search decoding is ultimately unnecessary for sequential recommendations. Additionally, Lite-LLM4Rec introduces a hierarchical LLM structure tailored to efficiently handle the extensive contextual information associated with items, thereby reducing computational overhead while enjoying the capabilities of LLMs. Experiments on three publicly available datasets corroborate the effectiveness of Lite-LLM4Rec in both performance and inference efficiency (notably a 46.8% improvement on ML-1m) over existing LLM-based methods. Our implementations will be open-sourced.
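
To make the decoding shortcut concrete, below is a minimal sketch (ours, not the authors' released implementation) of how a direct item projection head can replace beam search: the LLM's final hidden state is mapped linearly to one ranking score per catalog item, so recommendation reduces to a single forward pass plus a top-k sort. All names and dimensions here (ItemProjectionHead, hidden_dim, num_items) are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of a direct item projection head replacing
# beam-search decoding, as described in the abstract. Names and
# dimensions are illustrative, not the authors' released code.

import torch
import torch.nn as nn

class ItemProjectionHead(nn.Module):
    """Maps the LLM's final hidden state to a ranking score per item."""
    def __init__(self, hidden_dim: int, num_items: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_items)

    def forward(self, last_hidden: torch.Tensor) -> torch.Tensor:
        # last_hidden: (batch, hidden_dim), e.g. the hidden state at the
        # final position of the encoded user interaction sequence.
        return self.proj(last_hidden)  # (batch, num_items) ranking scores

# Usage: score every candidate item in one forward pass, then take top-k.
head = ItemProjectionHead(hidden_dim=768, num_items=10_000)
h = torch.randn(4, 768)                   # stand-in for LLM sequence output
scores = head(h)                          # (4, 10000)
topk = scores.topk(k=10, dim=-1).indices  # top-10 recommended item ids
```

Because every candidate item is scored in a single matrix multiplication, inference cost no longer scales with the number of autoregressive decoding steps, which is the source of the efficiency gain the abstract highlights.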