Scalable Privacy-Preserving Distributed Learning

Proc. Priv. Enhancing Technol. (2021)

Abstract
In this paper, we address the problem of privacy-preserving distributed learning and evaluation of machine-learning models by analyzing it within the widespread MapReduce abstraction, which we extend with privacy constraints. Following this abstraction, we instantiate SPINDLE (Scalable Privacy-preservINg Distributed LEarning), an operational distributed system that supports the privacy-preserving training and evaluation of generalized linear models on distributed datasets. SPINDLE enables the efficient execution of distributed gradient descent while ensuring data and model confidentiality, as long as at least one of the data providers is honest-but-curious. The trained model is then used for oblivious predictions on confidential data. SPINDLE efficiently performs demanding training tasks that require a high number of iterations on large input data with thousands of features, distributed among hundreds of data providers. It relies on a multiparty homomorphic encryption scheme to execute high-depth computations on encrypted data without significant overhead, and it further leverages its distributed construction and the packing capabilities of the cryptographic scheme to parallelize the computations at multiple levels. In our evaluation, SPINDLE trains a logistic-regression model on a dataset of one million samples with 32 features, distributed among 160 data providers, in less than 176 seconds, yielding accuracy similar to that of non-secure, centralized models.
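As a rough plaintext illustration of the extended MapReduce abstraction described above, the sketch below runs distributed gradient descent for logistic regression over horizontally partitioned data: each provider computes a local gradient (MAP), and the gradients are aggregated into a global model update (COMBINE/REDUCE). The function names and the plain-numpy arithmetic are illustrative assumptions; in SPINDLE itself, these steps operate on ciphertexts under the multiparty homomorphic encryption scheme.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def local_gradient(X, y, w):
        # MAP: one data provider computes the logistic-regression
        # gradient on its own partition (in SPINDLE this would be
        # done on encrypted data).
        return X.T @ (sigmoid(X @ w) - y) / len(y)

    def train(partitions, n_features, lr=0.1, iters=100):
        w = np.zeros(n_features)
        for _ in range(iters):
            # COMBINE/REDUCE: aggregate the providers' local
            # gradients into a single global update.
            grads = [local_gradient(X, y, w) for X, y in partitions]
            w -= lr * np.mean(grads, axis=0)
        return w

    # Toy usage: 4 "data providers", 32 features, synthetic labels.
    rng = np.random.default_rng(0)
    true_w = rng.normal(size=32)
    partitions = []
    for _ in range(4):
        X = rng.normal(size=(250, 32))
        y = (X @ true_w > 0).astype(float)
        partitions.append((X, y))
    w = train(partitions, n_features=32)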
Keywords
learning, privacy-preserving