Training Noise-Robust Deep Neural Networks Via Meta-Learning

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Cited by 77 | Views 189
Abstract
Label noise may significantly degrade the performance of Deep Neural Networks (DNNs). To train noise-robust DNNs, loss correction (LC) approaches have been introduced. LC approaches assume the noisy labels are corrupted from clean (ground-truth) labels by an unknown noise transition matrix T. The backbone DNN and T can be trained separately, where T is approximated from prior knowledge; for example, T can be constructed by stacking the maximum or mean predictions of the samples from each class. In this work, we propose a new loss correction approach, named Meta Loss Correction (MLC), which learns T directly from data via the meta-learning framework. MLC is model-agnostic and learns T from data rather than heuristically approximating it using prior knowledge. Extensive evaluations are conducted on computer vision (MNIST, CIFAR-10, CIFAR-100, Clothing1M) and natural language processing (Twitter) datasets. The experimental results show that MLC achieves very competitive performance against state-of-the-art approaches.
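To make the loss-correction idea the abstract builds on concrete, here is a minimal NumPy sketch of the standard forward correction: the model's clean-class predictions are passed through the noise transition matrix T before the log-loss. This illustrates the generic LC setup only, not the authors' MLC method; the function name `forward_corrected_ce` and the toy inputs are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward_corrected_ce(logits, noisy_labels, T):
    """Forward loss correction: evaluate the log-loss on the
    implied noisy-label distribution q = p @ T, where
    T[i, j] = P(noisy label = j | clean label = i)."""
    p = softmax(logits)               # predicted clean-class probabilities
    q = p @ T                         # implied noisy-label probabilities
    n = logits.shape[0]
    return -np.log(q[np.arange(n), noisy_labels] + 1e-12).mean()

# With the identity matrix as T (no assumed noise), the correction
# reduces to ordinary cross-entropy on the given labels.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
loss = forward_corrected_ce(logits, labels, np.eye(3))
```

In an LC pipeline, T is fixed (estimated heuristically); MLC's contribution, per the abstract, is to learn T itself from data through meta-learning instead.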
Keywords
label noise, noise-robust DNNs, loss correction approach, LC approaches, noisy labels, clean labels, MLC, meta-learning framework, training noise-robust deep neural networks, prior knowledge, computer vision, meta loss correction, unknown noise transition matrix