Exploring Strategies for Privacy-Preserving Machine Learning in Distributed Environments

Suresh Dodda, Anoop Kumar, Navin Kamuni, Madan Mohan Tito Ayyalasomayajula

Crossref (2024)

Abstract
Machine Learning (ML) with distributed privacy preservation is growing in significance, as it enables multi-party learning without actual data sharing. This is especially helpful for organizations that want to collaborate but cannot share data because of ethical, regulatory, or budgetary constraints. To address these issues, this study examines three privacy-preserving algorithms: regularized logistic regression with Differential Privacy (DP), stochastic gradient descent (SGD) with differentially private updates, and a distributed Lasso that shares gradients among data centers. Through these algorithms, the study highlights the trade-off between error rate and privacy. Both DP algorithms scale their sensitivity with the amount of data to improve error rates on large datasets, underscoring the importance of training-data volume for model performance. Results demonstrate that with SGD, the error rate can be further reduced by applying random projections in advance.
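The abstract does not spell out the update rule, but SGD with differentially private updates typically follows the standard DP-SGD recipe: clip each per-example gradient to bound sensitivity, then add Gaussian noise before the parameter update. A minimal NumPy sketch of that recipe for logistic regression is below; the function name and all hyperparameters (`clip`, `noise_mult`, etc.) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_logistic(X, y, epochs=5, lr=0.1, clip=1.0, noise_mult=0.1, seed=0):
    """Illustrative DP-SGD for logistic regression (labels in {-1, +1}).

    Standard recipe: clip each per-example gradient to L2 norm `clip`
    (bounding sensitivity), then add Gaussian noise with standard
    deviation `noise_mult * clip` before the update. Hyperparameters
    are assumptions for demonstration, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w)
            g = -y[i] * X[i] / (1.0 + np.exp(margin))   # logistic-loss gradient
            g = g / max(1.0, np.linalg.norm(g) / clip)  # clip to bound sensitivity
            g = g + rng.normal(0.0, noise_mult * clip, d)  # Gaussian mechanism
            w -= lr * g
    return w

# Toy usage on a small linearly separable set
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = dp_sgd_logistic(X, y)
```

The clipping step is what makes the per-update sensitivity data-dependent in the sense the abstract describes: with more training examples, each individual gradient contributes less to the final model, so the same noise level costs less accuracy.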