Towards Efficient and Effective Unlearning of Large Language Models for Recommendation
CoRR (2024)
Abstract
The significant advancements in large language models (LLMs) give rise to a
promising research direction, i.e., leveraging LLMs as recommenders (LLMRec).
The efficacy of LLMRec arises from the open-world knowledge and reasoning
capabilities inherent in LLMs. LLMRec acquires the recommendation capabilities
through instruction tuning based on user interaction data. However, in order to
protect user privacy and optimize utility, it is also crucial for LLMRec to
intentionally forget specific user data, which is generally referred to as
recommendation unlearning. In the era of LLMs, recommendation unlearning poses
new challenges for LLMRec in terms of inefficiency and
ineffectiveness. Existing unlearning methods require updating billions
of parameters in LLMRec, which is costly and time-consuming. Moreover, they
often degrade model utility during the unlearning process. To this end, we
propose E2URec, the first Efficient and
Effective Unlearning method for LLMRec. Our
proposed E2URec enhances the unlearning efficiency by updating only a few
additional LoRA parameters, and improves the unlearning effectiveness by
employing a teacher-student framework, where we maintain multiple teacher
networks to guide the unlearning process. Extensive experiments show that
E2URec outperforms state-of-the-art baselines on two real-world datasets.
Specifically, E2URec can efficiently forget specific data without affecting
recommendation performance. The source code is at .
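To make the abstract's teacher-student idea concrete, the following is a minimal sketch of what a two-teacher unlearning objective could look like: a "remembering" teacher (the original model) guides the student on retained data, while a "forgetting" teacher (one never exposed to the forgotten data) guides the student's outputs on the forget set. All function names, the `alpha` weighting, and the KL-based losses are illustrative assumptions, not the paper's actual formulation; in E2URec, only the small set of additional LoRA parameters would be updated by gradients on such a loss.

```python
# Hypothetical sketch of a two-teacher unlearning objective (assumed, not
# the paper's exact loss): KL terms pull the student toward the forgetting
# teacher on forgotten samples and toward the remembering teacher on
# retained samples.
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions; terms with p_i = 0 contribute 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def unlearning_loss(student_forget_logits, forget_teacher_logits,
                    student_retain_logits, retain_teacher_logits,
                    alpha=0.5):
    """Combine a forgetting term and a remembering term.

    alpha trades off how aggressively the student forgets versus how
    faithfully it preserves recommendation utility (illustrative knob).
    """
    loss_forget = kl_divergence(softmax(forget_teacher_logits),
                                softmax(student_forget_logits))
    loss_retain = kl_divergence(softmax(retain_teacher_logits),
                                softmax(student_retain_logits))
    return alpha * loss_forget + (1 - alpha) * loss_retain
```

When the student already matches both teachers, the loss is zero; any divergence from either teacher increases it, which is the behavior a distillation-style unlearning loss needs.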