Efficiently Learning Adversarially Robust Halfspaces with Noise

ICML (2020)

Abstract
We study the problem of learning adversarially robust halfspaces in the distribution-independent setting. In the realizable setting, we provide necessary and sufficient conditions on the adversarial perturbation sets under which halfspaces are efficiently robustly learnable. In the presence of random label noise, we give a simple computationally efficient algorithm for this problem with respect to any $\ell_p$-perturbation.
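For intuition, robustness of a halfspace to $\ell_p$-perturbations reduces to a margin condition: a labeled example $(x, y)$ is robustly classified by weight vector $w$ under perturbations of radius $\nu$ if and only if $y\langle w, x\rangle \ge \nu \|w\|_q$, where $q$ is the dual exponent of $p$. The sketch below illustrates this standard check; it is a minimal illustrative example (function name and parameters are assumptions), not the algorithm proposed in the paper.

```python
import numpy as np

def robustly_correct(w, x, y, nu, p):
    """Check whether the halfspace sign(<w, x>) classifies (x, y) correctly
    under every ell_p perturbation of x with radius nu.

    Uses the duality fact that the worst-case perturbation lowers the
    margin y * <w, x> by exactly nu * ||w||_q, where 1/p + 1/q = 1.
    """
    # Dual exponent q (p = inf -> q = 1, p = 1 -> q = inf).
    if p == np.inf:
        q = 1.0
    elif p == 1:
        q = np.inf
    else:
        q = p / (p - 1.0)
    margin = y * np.dot(w, x)
    return margin >= nu * np.linalg.norm(w, ord=q)

# Example: an ell_inf adversary of radius 0.1.
w = np.array([0.6, 0.8])
x = np.array([1.0, 1.0])
print(robustly_correct(w, x, y=1, nu=0.1, p=np.inf))  # True: margin 1.4 >= 0.1 * ||w||_1 = 0.14
```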
Keywords
adversarially robust halfspaces, learning, noise