Efficient Giant Graph Unlearning via Push-Pull Tuning.

Parallel and Distributed Processing with Applications (2023)

Abstract
The advent of deep learning applications for data collection has raised privacy concerns, particularly regarding the potential exposure of data through vulnerabilities such as membership inference attacks. In response, several machine unlearning techniques have been proposed that can effectively eliminate specific data from a trained model. However, existing methods concentrate primarily on Euclidean data, leaving non-Euclidean data structures, such as graphs, largely unexplored. Our preliminary experiments reveal that directly applying these techniques to graph data yields suboptimal results, especially on large graphs. In this work, we address the unlearning challenge for graph data trained with graph neural networks (GNNs). We propose a simple yet effective approach termed push-tuning, which strategically manipulates the loss values associated with the data to be unlearned; this manipulation redirects the model's predictions and facilitates the unlearning process. We further propose a pull-tuning method to recover accuracy on the remaining data, which is degraded by the unlearning process. To jointly consider data removal and accuracy recovery, we additionally introduce an alternation method. To assess the effectiveness of our proposed method, we conduct comprehensive experiments on five popular benchmark datasets. The results demonstrate that our approach effectively unlearns the target nodes while preserving the model's accuracy on the remaining nodes.
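The abstract does not specify how the loss values are manipulated or how the alternation is scheduled; the sketch below shows one plausible reading, in which the push phase performs gradient ascent on the nodes to be forgotten and the pull phase fine-tunes on the retained nodes, with the two alternated for a few rounds. All function names, the stand-in linear model, and the hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of alternating push/pull tuning for node unlearning.
# A plain linear classifier stands in for a trained GNN encoder; in practice
# the model would take the graph structure (e.g., edge_index) as input too.
import torch
import torch.nn.functional as F

def push_pull_unlearn(model, x, y, forget_idx, retain_idx,
                      rounds=5, push_steps=1, pull_steps=3, lr=1e-3):
    """Alternate a push phase (raise the loss on nodes to forget) with a
    pull phase (restore accuracy on the remaining nodes)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # Push: gradient ascent on the forget nodes redirects their predictions.
        for _ in range(push_steps):
            opt.zero_grad()
            loss = -F.cross_entropy(model(x[forget_idx]), y[forget_idx])
            loss.backward()
            opt.step()
        # Pull: standard fine-tuning on the retained nodes recovers accuracy
        # that the push phase may have degraded.
        for _ in range(pull_steps):
            opt.zero_grad()
            loss = F.cross_entropy(model(x[retain_idx]), y[retain_idx])
            loss.backward()
            opt.step()
    return model

# Toy usage: forget the first 10 nodes, retain the rest.
model = torch.nn.Linear(16, 4)
x, y = torch.randn(100, 16), torch.randint(0, 4, (100,))
push_pull_unlearn(model, x, y, torch.arange(0, 10), torch.arange(10, 100))
```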
Keywords
Security, graph neural networks, machine unlearning