Offset Unlearning for Large Language Models
CoRR (2024)
Abstract
Despite the strong ability of Large Language Models (LLMs) to acquire
knowledge from their training corpora, the memorization of sensitive
information in those corpora, such as copyrighted, harmful, and private
content, has led to ethical and legal concerns. In response to these challenges,
unlearning has emerged as a potential remedy for LLMs affected by problematic
training data. However, previous unlearning techniques either are
inapplicable to black-box LLMs, because they require access to internal model
weights, or violate data protection principles by retaining sensitive data for
inference-time correction. We propose δ-unlearning, an offset unlearning
framework for black-box LLMs. Instead of tuning the black-box LLM itself,
δ-unlearning learns the logit offset needed for unlearning by
contrasting the logits from a pair of smaller models. Experiments demonstrate
that δ-unlearning can effectively unlearn target data while maintaining
similar or even stronger performance on general out-of-forget-scope tasks.
δ-unlearning also effectively incorporates different unlearning
algorithms, making our approach a versatile solution for adapting various
existing unlearning algorithms to black-box LLMs.
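
To make the contrastive logit-offset idea concrete, below is a minimal sketch of how the offset could be applied at decoding time. The model names, the greedy-decoding loop, and the use of an off-the-shelf small model pair are illustrative assumptions, not the paper's exact setup; in the actual framework, the second small model would be fine-tuned with an unlearning objective on the forget data.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choices (assumptions): a large "black-box" model whose
# weights are never updated, plus a pair of smaller models sharing its
# vocabulary -- the original, and a copy fine-tuned with an unlearning
# objective (here an untrained stand-in for that copy).
tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
large = AutoModelForCausalLM.from_pretrained("gpt2-xl")        # black-box LLM
small_base = AutoModelForCausalLM.from_pretrained("gpt2")      # original small model
small_unlearned = AutoModelForCausalLM.from_pretrained("gpt2") # stand-in for the unlearned copy

@torch.no_grad()
def offset_next_token_logits(input_ids):
    """Next-token logits of the large model, shifted by the offset learned
    by contrasting the small model pair:
        logits = large(x) + (small_unlearned(x) - small_base(x))
    """
    l_large = large(input_ids).logits[:, -1, :]
    l_unl = small_unlearned(input_ids).logits[:, -1, :]
    l_base = small_base(input_ids).logits[:, -1, :]
    return l_large + (l_unl - l_base)

# Greedy decoding with the offset-corrected logits.
ids = tokenizer("The secret ingredient is", return_tensors="pt").input_ids
for _ in range(20):
    next_id = offset_next_token_logits(ids).argmax(dim=-1, keepdim=True)
    ids = torch.cat([ids, next_id], dim=-1)
print(tokenizer.decode(ids[0]))
```

Because only the small pair is ever fine-tuned, the large model can remain a black box that merely exposes token logits, which is what allows the approach to adapt existing unlearning algorithms to black-box LLMs.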