Towards Learning Affine-Invariant Representations via Data-Efficient CNNs
2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
Abstract
In this paper we propose integrating a priori knowledge into both the design and training of convolutional neural networks (CNNs) to learn object representations that are invariant to affine transformations (i.e., translation, scale, and rotation). Accordingly, we propose a novel multi-scale maxout CNN and train it end-to-end with a novel rotation-invariant regularizer. This regularizer encourages the weights in each 2D spatial filter to approximate circular patterns. In this way, we handle affine transformations during training using convolution, multi-scale maxout, and circular filters. Empirically, we demonstrate that such knowledge can significantly improve the data efficiency, generalization, and robustness of learned models. For instance, on the Traffic Sign data set, trained with only 10 images per class, our method achieves 84.15% test accuracy, outperforming the state-of-the-art by 29.80%.
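The abstract does not give the regularizer's exact formula, but the idea — pushing each 2D filter toward a circular (rotation-symmetric) pattern — can be sketched as a penalty on how much weights at the same radius from the filter center differ. The ring-binning and within-ring variance below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def circular_pattern_penalty(w):
    """Penalty encouraging a square 2D filter w to approximate a circular
    pattern: weights at (roughly) the same radius from the filter center
    should be equal, so the filter response is insensitive to rotation.

    Illustrative sketch only: pixels are binned into integer-radius rings
    and the within-ring variance (sum of squared deviations) is summed.
    """
    k = w.shape[0]                       # assume a square k x k filter
    c = (k - 1) / 2.0                    # filter center
    ys, xs = np.mgrid[0:k, 0:k]
    r = np.sqrt((ys - c) ** 2 + (xs - c) ** 2)
    rings = np.round(r).astype(int)      # bin pixels by rounded radius
    penalty = 0.0
    for ring in np.unique(rings):
        vals = w[rings == ring]
        penalty += np.sum((vals - vals.mean()) ** 2)
    return penalty
```

A filter whose value depends only on the distance to the center incurs (near-)zero penalty, while an arbitrary filter is penalized; added to the training loss with a weight, such a term would steer filters toward circular patterns.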
Keywords
affine-invariant representations, convolutional neural networks, object representations, affine transformations, multi-scale maxout CNN, rotation-invariant regularizer, 2D spatial filter, circular patterns, traffic sign data set