Seeing Beyond Appearance - Mapping Real Images into Geometrical Domains for Unsupervised CAD-Based Recognition
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Abstract
While convolutional neural networks are dominating the field of computer vision, one usually does not have access to the large amount of domain-relevant data needed for their training. Therefore, it has become common practice to use available synthetic samples alongside domain adaptation schemes to prepare algorithms for the target domain. Tackling this problem from a different angle, we introduce a pipeline to map unseen target samples into the synthetic domain used to train task-specific methods. By denoising the data and retaining only the features these recognition algorithms are familiar with, our solution greatly improves their performance. As this mapping is easier to learn than the opposite one (i.e., generating realistic features to augment the source samples), we demonstrate how our whole solution can be trained purely on augmented synthetic data and still perform better than methods trained with domain-relevant information (e.g., real images or realistic textures for the 3D models). Applying our approach to object recognition from texture-less CAD data, we present a custom generative network which fully utilizes the purely geometrical information to learn robust features and to achieve a more refined mapping for unseen color images.
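The pipeline described above can be summarized in two stages: (1) map a real input image into the geometric domain the recognizer was trained on, stripping away appearance features the synthetic-only model has never seen, and (2) run a task-specific recognizer trained purely on synthetic renders in that shared domain. The sketch below is a highly simplified, NumPy-only illustration of this idea, not the paper's method: the learned generative mapping is stood in for by a gradient-magnitude "geometry map", and the recognizer by nearest-template matching. All function names (`map_to_geometric_domain`, `recognize`) are hypothetical.

```python
import numpy as np

def map_to_geometric_domain(image):
    """Hypothetical stand-in for the learned real-to-synthetic mapping:
    collapse a color image to a normalized gradient-magnitude map,
    discarding texture and color so only geometry-like cues remain."""
    gray = image.mean(axis=-1)              # drop color channels
    gy, gx = np.gradient(gray)              # spatial derivatives
    edges = np.hypot(gx, gy)                # gradient magnitude
    return edges / (edges.max() + 1e-8)     # normalize to [0, 1]

def recognize(geom_map, synthetic_templates):
    """Stand-in for a recognizer trained purely on synthetic data:
    pick the synthetic template closest to the mapped input."""
    scores = {name: -np.abs(geom_map - tmpl).mean()
              for name, tmpl in synthetic_templates.items()}
    return max(scores, key=scores.get)

# "Synthetic" training data: clean renders of two texture-less shapes.
size = 32
yy, xx = np.mgrid[:size, :size]
square = ((yy > 8) & (yy < 24) & (xx > 8) & (xx < 24)).astype(float)
disk = (((yy - 16) ** 2 + (xx - 16) ** 2) < 64).astype(float)
templates = {
    "square": map_to_geometric_domain(np.stack([square] * 3, axis=-1)),
    "disk": map_to_geometric_domain(np.stack([disk] * 3, axis=-1)),
}

# A noisy "real" observation of the square, mapped into the same domain.
rng = np.random.default_rng(0)
real = np.stack([square] * 3, axis=-1) + 0.1 * rng.random((size, size, 3))
label = recognize(map_to_geometric_domain(real), templates)
```

In the paper, the mapping is a learned generative network and the recognizer a CNN; the key point illustrated here is only the direction of the mapping (real into synthetic, rather than augmenting synthetic data toward realism).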
Keywords
unseen color images,refined mapping,robust features,purely geometrical information,custom generative network,texture-less CAD data,realistic textures,domain-relevant information,augmented synthetic data,source samples,realistic features,task-specific methods,synthetic domain,map unseen target samples,different angle,target domain,domain adaptation schemes,available synthetic samples,domain-relevant data,computer vision,convolutional neural networks,unsupervised CAD-based recognition,geometrical domains