Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds

arXiv (2020)

Cited by 9 | Views 96
Abstract
We study differentially private (DP) algorithms for stochastic non-convex optimization. In this problem, the goal is to minimize the population loss over a p-dimensional space given n i.i.d. samples drawn from a distribution. We improve upon the population gradient bound of √(p)/√(n) from prior work and obtain a sharper rate of p^(1/4)/√(n). We obtain this rate by providing the first analyses on a collection of private gradient-based methods, including the adaptive algorithms DP RMSProp and DP Adam. Our proof technique leverages the connection between differential privacy and adaptive data analysis to bound the gradient estimation error at every iterate, which circumvents the worse generalization bound from the standard uniform convergence argument. Finally, we evaluate the proposed algorithms on two popular deep learning tasks and demonstrate the empirical advantages of DP adaptive gradient methods over standard DP SGD.
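The abstract names DP RMSProp and DP Adam but does not spell out the update rule; the following is a minimal sketch, assuming the usual DP-SGD-style recipe of per-example gradient clipping plus Gaussian noise feeding an Adam-style step. The function name dp_adam_step and all hyperparameter values (clip_norm, noise_multiplier, lr, beta1, beta2, eps) are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of one DP Adam iteration: clip each per-example gradient,
# add Gaussian noise calibrated to the clipping norm, then apply an
# Adam-style update driven by the privatized gradient.
import numpy as np

def dp_adam_step(params, per_example_grads, m, v, t,
                 clip_norm=1.0, noise_multiplier=1.0,
                 lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                 rng=np.random.default_rng(0)):
    """One DP Adam iteration on flat parameter vectors.

    per_example_grads: array of shape (batch_size, p), one gradient per sample.
    t: iteration counter starting at 1 (used for bias correction).
    Returns updated (params, m, v).
    """
    batch_size = per_example_grads.shape[0]

    # Clip each per-example gradient to L2 norm <= clip_norm (bounds sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum, add Gaussian noise scaled to the clipping norm, and average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / batch_size

    # Standard Adam moment estimates and bias-corrected step on the noisy gradient.
    m = beta1 * m + (1 - beta1) * noisy_grad
    v = beta2 * v + (1 - beta2) * noisy_grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

Replacing the moment updates with a plain step along noisy_grad recovers DP SGD, the baseline against which the paper reports its empirical comparisons.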
Keywords
adaptive algorithms, stochastic, optimization, non-convex