Privacy-enhancing machine learning framework with private aggregation of teacher ensembles

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS (2022)

Abstract
Private aggregation of teacher ensembles (PATE), a general machine learning framework based on knowledge distillation, provides a privacy guarantee for training data sets. However, the framework poses several security risks. First, PATE focuses mainly on the privacy of the teachers' training data and does not protect the privacy of the students' data. Second, PATE relies heavily on a trusted aggregator to count the teachers' votes, and it is not convincing to assume that a third party would never leak those votes during the knowledge transfer process. To address these issues, we improve the original PATE framework and present a new one that combines secret sharing with Intel Software Guard Extensions in a novel way. In the proposed framework, teacher models are trained locally and then uploaded to and stored in two computing servers in the form of secret shares. In the knowledge transfer phase, the two computing servers receive shares of private inputs from students before collaboratively performing secure predictions, so neither teachers nor students expose sensitive information. During the aggregation process, we propose an effective masking technique suited to this setting that keeps the prediction results private and prevents the votes from being leaked to the aggregation server. In addition, we optimize the aggregation mechanism by adding noise perturbations adaptively based on the posterior entropy of the prediction results. Finally, we evaluate the performance of the new framework on multiple data sets and experimentally demonstrate that it allows highly efficient, accurate, and secure predictions.
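Two building blocks mentioned in the abstract can be illustrated concretely: splitting a teacher vote histogram into additive secret shares held by two non-colluding servers, and scaling the aggregation noise with the posterior entropy of the votes. The sketch below is illustrative only and is not the paper's actual protocol; the ring size, the entropy-to-scale rule, and all function names are assumptions for demonstration.

```python
import numpy as np

MOD = 2**32  # ring for additive secret sharing (assumed size)

def share_votes(votes, rng):
    """Split an integer vote histogram into two additive shares mod 2^32.
    Each server alone sees a uniformly random vector."""
    r = rng.integers(0, MOD, size=votes.shape, dtype=np.uint64)
    share0 = r % MOD
    share1 = (votes.astype(np.uint64) - r) % MOD  # wraps mod 2^64, then reduced
    return share0, share1

def reconstruct(share0, share1):
    """Recombine the two shares to recover the original histogram."""
    return ((share0 + share1) % MOD).astype(np.int64)

def posterior_entropy(votes):
    """Shannon entropy (nats) of the normalized vote histogram."""
    p = votes / votes.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def adaptive_noisy_argmax(votes, base_scale, rng):
    """Laplace-noised argmax with entropy-dependent scale (illustrative rule:
    higher entropy, i.e. less teacher consensus, gets a larger noise scale)."""
    h = posterior_entropy(votes)
    h_max = np.log(len(votes))           # entropy of the uniform distribution
    scale = base_scale * (1.0 + h / h_max)
    noisy = votes + rng.laplace(0.0, scale, size=votes.shape)
    return int(np.argmax(noisy))
```

A confident histogram such as `[30, 5, 5]` has low posterior entropy, so under this assumed rule it receives noise close to `base_scale`, while a near-uniform histogram receives up to twice that scale before the argmax is released.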
Keywords
Intel Software Guard Extensions, knowledge distillation, machine learning, privacy preservation, secret sharing