Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks.
arXiv: Machine Learning (2018)
Abstract
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models. We measure the privacy leakage by leveraging the final model parameters as well as the parameter updates during the training and fine-tuning processes. We design the attacks in the stand-alone and federated settings, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We design and evaluate our novel white-box membership inference attacks against deep learning algorithms to measure their training data membership leakage. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is widely used to train deep neural networks. We show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants in a federated learning setting can run active membership inference attacks against other participants, even when the global model achieves high prediction accuracy.
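The white-box signal the abstract alludes to — per-example gradients of the loss with respect to the model parameters, which SGD drives toward zero on training members but not on unseen points — can be illustrated with a toy sketch. This is a hypothetical logistic-regression stand-in, not the paper's deep-learning attack; all data, sizes, and names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): few training "members" relative to the
# dimension, so the target model overfits them.
d = 20
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

X_mem, y_mem = make_data(40)    # members: used for training
X_non, y_non = make_data(200)   # non-members: held out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Train a logistic-regression "target model" on the members by
# gradient descent on the cross-entropy loss.
w = np.zeros(d)
for _ in range(2000):
    grad = X_mem.T @ (sigmoid(X_mem @ w) - y_mem) / len(y_mem)
    w -= 0.5 * grad

# White-box membership feature: the norm of each example's gradient
# of the loss w.r.t. the parameters. For cross-entropy, that gradient
# is (prediction - label) * x, so members with near-zero residual
# leave a much smaller gradient footprint than unseen points.
def grad_norms(X, y):
    residual = sigmoid(X @ w) - y
    return np.linalg.norm(residual[:, None] * X, axis=1)

mem_score = grad_norms(X_mem, y_mem).mean()
non_score = grad_norms(X_non, y_non).mean()
```

A threshold on this gradient-norm feature then separates members from non-members; the paper's actual attack feeds the layer-wise gradients of a deep network into a learned attack model instead of thresholding a single scalar.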