Stochastic Gradient Langevin Unlearning
CoRR (2024)
Abstract
"The right to be forgotten," guaranteed by data-privacy laws, is becoming increasingly important. Machine unlearning aims to efficiently remove the effect of certain data points on the trained model parameters, so that the result is approximately the same as if the model had been retrained from scratch. This work proposes stochastic gradient Langevin unlearning, the first unlearning framework based on noisy stochastic gradient descent (SGD) with privacy guarantees for approximate unlearning problems under convexity assumptions. Our results show that mini-batch gradient updates provide a superior privacy-complexity trade-off compared to their full-batch counterpart. Our unlearning approach offers numerous algorithmic benefits, including complexity savings compared to retraining and support for sequential and batch unlearning. To examine the privacy-utility-complexity trade-off of our method, we conduct experiments on benchmark datasets and compare against prior works. Our approach achieves similar utility under the same privacy constraint while using only 2% and 10% of the gradient computations required by state-of-the-art gradient-based approximate unlearning methods in the mini-batch and full-batch settings, respectively.
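To make the mechanism concrete, the core update behind Langevin-style unlearning is projected noisy mini-batch SGD: train with Gaussian-perturbed gradient steps, then "unlearn" a point by continuing the same noisy updates on the remaining data for far fewer steps than a full retrain. The sketch below is illustrative only; the ridge objective, step size, noise scale, and projection radius are assumptions for demonstration, not the paper's calibrated parameters.

```python
import numpy as np

def noisy_projected_sgd(X, y, w0, lr=0.1, sigma=0.05, radius=5.0,
                        batch_size=8, steps=200, rng=None):
    """Projected noisy mini-batch SGD on a strongly convex (ridge) objective.

    Each step: w <- Proj_{||w|| <= radius}(w - lr * grad_batch + Gaussian noise).
    Hyperparameters here are illustrative, not the paper's exact settings.
    """
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    n = len(y)
    for _ in range(steps):
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        Xb, yb = X[idx], y[idx]
        # Mini-batch gradient of 0.5*mean((Xw - y)^2) + 0.05*||w||^2
        grad = Xb.T @ (Xb @ w - yb) / len(idx) + 0.1 * w
        # Langevin-style noisy step
        w = w - lr * grad + sigma * np.sqrt(2 * lr) * rng.standard_normal(w.shape)
        # Projection onto the L2 ball of the given radius
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm
    return w

# Learn on the full dataset, then unlearn one point by fine-tuning
# on the remaining data from the learned parameters.
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(64)

w_learned = noisy_projected_sgd(X, y, np.zeros(3), rng=np.random.default_rng(2))
# Unlearning: drop the first data point and continue noisy SGD for
# far fewer steps than retraining from scratch would take.
w_unlearned = noisy_projected_sgd(X[1:], y[1:], w_learned, steps=50,
                                  rng=np.random.default_rng(3))
```

The privacy argument (not shown here) is that the injected Gaussian noise makes the fine-tuned parameters statistically close to those of a model retrained without the deleted point, which is what allows the large complexity savings reported in the abstract.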