Cross Quality Distillation
CoRR (2016)
Abstract
We propose a technique for training recognition models when high-quality data is available at training time but not at testing time. Our approach, called Cross Quality Distillation (CQD), first trains a model on the high-quality data and encourages a second model, trained on the low-quality data, to generalize in the same way as the first. The technique is fairly general and only requires the ability to generate low-quality data from the high-quality data. We apply it to learn models for recognizing low-resolution images using labeled high-resolution images, non-localized objects using labeled localized objects, edge images using labeled color images, etc. Experiments on various fine-grained recognition datasets demonstrate that the technique leads to large improvements in recognition accuracy on the low-quality data. We also establish connections between CQD and other areas of machine learning, such as domain adaptation, model compression, and learning using privileged information, and show that the technique is general and can be applied to other settings. Finally, we present further insights into why the technique works through visualizations and by establishing its relationship to curriculum learning.
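To make the training recipe concrete, below is a minimal sketch of one distillation step as the abstract describes it: a teacher trained on high-quality inputs guides a student that sees only degraded inputs. This assumes a PyTorch setting; the names degrade, cqd_loss, and train_step are hypothetical, and the softened-logit matching term follows standard knowledge distillation (Hinton et al., 2015), which the paper builds on.

    import torch
    import torch.nn.functional as F

    def cqd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Hard-label cross-entropy on the low-quality input, plus a
        # distillation term matching the student's softened predictions
        # to the teacher's softened predictions on the high-quality input.
        ce = F.cross_entropy(student_logits, labels)
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradients are comparable across temperatures
        return alpha * ce + (1 - alpha) * kd

    def train_step(student, teacher, degrade, x_hq, labels, optimizer):
        # degrade() generates the low-quality view from the high-quality
        # image (e.g., downsampling, cropping out localization, edge maps).
        teacher.eval()
        with torch.no_grad():
            t_logits = teacher(x_hq)           # teacher sees high quality
        s_logits = student(degrade(x_hq))      # student sees low quality
        loss = cqd_loss(s_logits, t_logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

At test time only the student is used, so no high-quality data is required once training is complete.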