Redundant features can hurt robustness to distribution shift

International Conference on Machine Learning (2020)

Abstract
In this work, we borrow tools from the field of adversarial robustness and propose a new framework that relates dataset features to the distance of samples to the decision boundary. Using this framework we identify the subspace of features used by CNNs to classify large-scale vision benchmarks, and reveal some intriguing aspects of their robustness to distribution shift. Specifically, by manipulating the frequency content of CIFAR-10 we show that the existence of redundant features in a dataset can harm the networks' robustness to distribution shifts. We demonstrate that completely erasing the redundant information from the training set can efficiently solve this problem. This paper is a short version of (Ortiz-Jimenez et al., 2020).
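As a rough illustration of what "manipulating the frequency content" of a dataset can look like, the sketch below low-pass filters an image with a 2-D DCT, keeping only the lowest-frequency coefficients per channel. This is a minimal, hypothetical example of spectral manipulation; the `low_pass_filter` helper and the `cutoff` parameter are assumptions for illustration, not the authors' actual procedure.

```python
# Hypothetical sketch: erase high-frequency content from a CIFAR-10-sized image
# via a 2-D DCT low-pass filter. Illustrative only, not the paper's method.
import numpy as np
from scipy.fft import dctn, idctn

def low_pass_filter(image: np.ndarray, cutoff: int) -> np.ndarray:
    """Keep only the lowest `cutoff` x `cutoff` DCT coefficients per channel."""
    filtered = np.zeros_like(image, dtype=np.float64)
    for c in range(image.shape[-1]):                 # iterate over RGB channels
        coeffs = dctn(image[..., c], norm="ortho")   # 2-D DCT of one channel
        mask = np.zeros_like(coeffs)
        mask[:cutoff, :cutoff] = 1.0                 # retain low frequencies only
        filtered[..., c] = idctn(coeffs * mask, norm="ortho")
    return filtered

# Example: filter a random 32x32 RGB image (the CIFAR-10 image size).
img = np.random.rand(32, 32, 3)
img_low = low_pass_filter(img, cutoff=8)
```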