How to handle noisy labels for robust learning from uncertainty

Neural Networks (2021)

Cited by 5 | Views 10
Abstract
In practice, deep neural networks (DNNs) are often trained on datasets that contain large amounts of noisy labels. Because DNNs have enough capacity to fit arbitrary labels, training them robustly under label noise is known to be difficult: noisy labels degrade performance through the memorization effect, i.e., over-fitting to corrupted examples. Earlier state-of-the-art methods used the small-loss trick to address this robust-training problem efficiently. In this paper, the relationship between predictive uncertainty and clean labels is analyzed. We present a novel training method, called "Uncertain Aware Co-Training (UACT)", that uses not only the small-loss trick but also labels that are likely to be clean, selected according to uncertainty. Our robust learning technique (UACT) prevents DNNs from over-fitting even under extremely noisy labels. By making better use of the uncertainty obtained from the network itself, we achieve good generalization performance. We compare the proposed method with current state-of-the-art algorithms on noisy versions of MNIST, CIFAR-10, CIFAR-100, T-ImageNet, and News to demonstrate its effectiveness.
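The combination of the small-loss trick with uncertainty-based selection can be sketched as below. This is a minimal illustration, not the paper's exact algorithm: it assumes per-sample losses and softmax outputs are already computed, uses predictive entropy as the uncertainty measure, and uses hypothetical thresholds (`keep_ratio`, `entropy_max`).

```python
import math

def predictive_entropy(probs):
    """Entropy of one softmax output; higher entropy means the
    network is less certain about this sample."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_clean_candidates(losses, prob_outputs, keep_ratio=0.5, entropy_max=0.5):
    """Illustrative 'small-loss + low-uncertainty' selection (an assumption,
    not the authors' published rule): restrict to samples whose predictive
    entropy is below entropy_max, then keep the keep_ratio fraction of the
    dataset with the smallest loss among them."""
    low_uncertainty = [i for i, p in enumerate(prob_outputs)
                       if predictive_entropy(p) < entropy_max]
    low_uncertainty.sort(key=lambda i: losses[i])  # small loss first
    n_keep = max(1, int(keep_ratio * len(losses)))
    return low_uncertainty[:n_keep]

# Samples 1 and 3 have high loss or high entropy and are filtered out.
losses = [0.1, 2.0, 0.05, 1.5]
probs = [[0.9, 0.1], [0.5, 0.5], [0.95, 0.05], [0.6, 0.4]]
print(select_clean_candidates(losses, probs))  # → [2, 0]
```

In a co-training setup such as the one the abstract describes, two networks would each apply a selection rule of this kind and feed the resulting candidate-clean subset to the other network's update step.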
Keywords
Deep network, Noisy labels, Robust learning, Uncertain aware joint training