Agnostic Learning of Halfspaces with Gradient Descent via Soft Margins

International Conference on Machine Learning (ICML), Vol. 139, 2021

Abstract
We analyze the properties of gradient descent on convex surrogates for the zero-one loss for the agnostic learning of halfspaces. We show that when a quantity we refer to as the soft margin is well-behaved (a condition satisfied by log-concave isotropic distributions, among others), minimizers of convex surrogates for the zero-one loss are approximate minimizers for the zero-one loss itself. As standard convex optimization arguments lead to efficient guarantees for minimizing convex surrogates of the zero-one loss, our methods allow for the first positive guarantees for the classification error of halfspaces learned by gradient descent using the binary cross-entropy or hinge loss in the presence of agnostic label noise.
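As a concrete illustration of the pipeline the abstract describes, below is a minimal NumPy sketch: plain gradient descent on the binary cross-entropy (logistic) surrogate, followed by evaluation of the zero-one error of the learned halfspace. The function names, step size, iteration count, and the synthetic noisy-Gaussian data are all illustrative assumptions, not the paper's algorithm parameters or experimental setup.

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid; a simple (not numerically hardened) version.
    return 1.0 / (1.0 + np.exp(-z))

def gd_bce_halfspace(X, y, lr=0.1, steps=2000):
    """Gradient descent on the binary cross-entropy surrogate
    (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)), labels y in {-1, +1}.
    lr and steps are illustrative choices, not values from the paper."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)  # y_i <w, x_i>
        # Gradient of the surrogate: -(1/n) sum_i sigmoid(-margin_i) * y_i * x_i
        grad = -(X * (sigmoid(-margins) * y)[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def zero_one_error(w, X, y):
    """Classification (zero-one) error of the halfspace sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

# Toy run: isotropic Gaussian marginal (a log-concave isotropic
# distribution), with 10% of labels flipped to mimic agnostic label noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
w_star = np.ones(5) / np.sqrt(5)
y = np.sign(X @ w_star)
y[rng.random(1000) < 0.1] *= -1
w_hat = gd_bce_halfspace(X, y)
print(f"zero-one error: {zero_one_error(w_hat, X, y):.3f}")
```

The surrogate is convex in w, so gradient descent finds an approximate surrogate minimizer; the paper's soft-margin condition is what connects that surrogate minimizer to a small zero-one loss.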