Low-Resolution Face Recognition in the Wild with Mixed-Domain Distillation

2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM), 2019

Abstract
Low-resolution face recognition in the wild is still an open problem. In this paper, we propose to address this problem via a novel learning approach called Mixed-Domain Distillation (MDD). The approach applies a teacher-student framework to mix and distill knowledge from four different domain datasets: private high-resolution, public high-resolution, public low-resolution web, and target low-resolution wild face datasets. In this way, high-resolution knowledge from a well-trained complex teacher model is first adapted to public high-resolution faces and then transferred to a simple student model. The student model is designed to identify low-resolution faces and can perform face recognition in the wild effectively and efficiently. Experimental results show that our proposed model outperforms several existing models for low-resolution face recognition in the wild.
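The core mechanism behind MDD is teacher-student knowledge distillation applied across resolutions: a high-capacity teacher produces soft targets from high-resolution faces, and a compact student is trained to match them on low-resolution inputs. The sketch below illustrates this general idea in PyTorch; the toy models, the temperature, the loss weighting, and the synthetic downsampling are illustrative assumptions, not the paper's exact mixed-domain pipeline.

    # Minimal cross-resolution knowledge-distillation sketch (assumed setup,
    # not the authors' MDD implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCNN(nn.Module):
        """Toy face classifier used as a stand-in teacher/student."""
        def __init__(self, num_ids=10, width=16):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, width, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(width, num_ids)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        """Soft-label KL term (teacher knowledge) plus hard-label cross-entropy."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # The teacher sees high-resolution crops; the student sees low-resolution views.
    teacher, student = TinyCNN(), TinyCNN(width=8)
    teacher.eval()
    optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

    hr_faces = torch.randn(4, 3, 112, 112)              # dummy high-resolution batch
    lr_faces = F.interpolate(hr_faces, size=(16, 16))   # synthetic low-resolution views
    labels = torch.randint(0, 10, (4,))

    optimizer.zero_grad()
    with torch.no_grad():
        t_logits = teacher(hr_faces)
    s_logits = student(lr_faces)
    loss = distillation_loss(s_logits, t_logits, labels)
    loss.backward()
    optimizer.step()

In the full MDD setting described in the abstract, the teacher is first adapted from private to public high-resolution faces before distillation, so actual training would combine such an adaptation stage with a distillation step like the one above.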
Keywords
Low-resolution face recognition in the wild, transfer learning, knowledge distillation