Provable Generalization of SGD-Trained Neural Networks of Any Width in the Presence of Adversarial Label Noise

International Conference on Machine Learning, Vol. 139 (2021)

Cited by 19 | Views: 337
Abstract
We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic gradient descent (SGD) following an arbitrary initialization. We prove that SGD produces neural networks that have classification accuracy competitive with that of the best halfspace over the distribution for a broad class of distributions that includes log-concave isotropic and hard margin distributions. Equivalently, such networks can generalize when the data distribution is linearly separable but corrupted with adversarial label noise, despite the capacity to overfit. To the best of our knowledge, this is the first work to show that overparameterized neural networks trained by SGD can generalize when the data is corrupted with adversarial label noise.
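To make the setting concrete, the following is a minimal sketch (not the authors' code) of the scenario the abstract describes: a one-hidden-layer leaky ReLU network of large width trained by SGD on data that is linearly separable but has a fraction of labels flipped, with test accuracy measured against the clean halfspace labels. The input dimension, width, learning rate, noise rate, and the use of random label flips in place of a true worst-case (adversarial) corruption are all illustrative assumptions.

```python
# Illustrative sketch of the paper's setting; all hyperparameters are assumptions.
import torch

torch.manual_seed(0)

d, width, n, noise_rate = 20, 1000, 2000, 0.1  # assumed sizes and noise level

# Linearly separable data: clean labels given by the sign of <w*, x>.
w_star = torch.randn(d)
X = torch.randn(n, d)
y = torch.sign(X @ w_star)

# Label corruption modeled here as flipping a random fraction of labels
# (the paper's noise model is adversarial; this is only a stand-in).
flip = torch.rand(n) < noise_rate
y[flip] = -y[flip]

# One-hidden-layer leaky ReLU network of (arbitrary) width with default init.
model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.LeakyReLU(0.1),
    torch.nn.Linear(width, 1),
)

opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.SoftMarginLoss()  # logistic-type loss on +/-1 labels

# Online SGD: one sample per step drawn from the corrupted training set.
for step in range(2000):
    idx = torch.randint(0, n, (1,))
    opt.zero_grad()
    loss = loss_fn(model(X[idx]).squeeze(1), y[idx])
    loss.backward()
    opt.step()

# Evaluate against the uncorrupted halfspace labels on fresh data.
X_test = torch.randn(5000, d)
y_test = torch.sign(X_test @ w_star)
with torch.no_grad():
    acc = (torch.sign(model(X_test).squeeze(1)) == y_test).float().mean()
print(f"test accuracy vs. clean halfspace labels: {acc:.3f}")
```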
Keywords
adversarial label noise, provable generalization, neural networks, SGD-trained