Digital Forgetting in Large Language Models: A Survey of Unlearning Methods
arXiv (2024)
Abstract
The objective of digital forgetting is, given a model with undesirable
knowledge or behavior, to obtain a new model where the detected issues are no
longer present. The motivations for forgetting include privacy protection,
copyright protection, elimination of biases and discrimination, and prevention
of harmful content generation. Digital forgetting has to be effective (the new
model must have actually forgotten the undesired knowledge/behavior), it has to
retain the performance of the original model on desirable tasks, and it has to
be scalable (in particular, forgetting has to be more efficient than retraining
from scratch on just the tasks/data to be retained).
This survey focuses on forgetting in large language models (LLMs). We first
provide background on LLMs, including their components, the types of LLMs, and
their usual training pipeline. Second, we describe the motivations, types, and
desired properties of digital forgetting. Third, we introduce the approaches to
digital forgetting in LLMs, among which unlearning methodologies stand out as
the state of the art. Fourth, we provide a detailed taxonomy of machine
unlearning methods for LLMs, and we survey and compare current approaches.
Fifth, we detail datasets, models and metrics used for the evaluation of
forgetting, retaining and runtime. Sixth, we discuss challenges in the area.
Finally, we provide some concluding remarks.