Self-Reinforcing Unsupervised Matching

IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)

Abstract
Remarkable gains in deep learning usually benefit from large-scale supervised data. Ensuring intra-class modality diversity in the training set is critical to the generalization capability of cutting-edge deep models, but it burdens humans with heavy manual labor for data collection and annotation. In addition, rare or unexpected modalities may be new to the current model, degrading its performance under such emerging modalities. Inspired by achievements in speech recognition, psychology, and behavioral science, we present a practical solution, self-reinforcing unsupervised matching (SUM), which annotates images exhibiting a 2D structure-preserving property in an emerging modality via cross-modality matching. Specifically, we propose a dynamic programming algorithm, dynamic position warping (DPW), to reveal the underlying element-level correspondence between two matrix-form data instances in an order-preserving fashion, and we devise a local feature adapter (LoFA) to enable cross-modality similarity measurement. On this basis, we develop a two-tier self-reinforcing learning mechanism, operating at both the feature level and the image level, to optimize the LoFA. The proposed SUM framework requires no supervision in the emerging modality and only one template in the seen modality, providing a promising route toward incremental and continual learning. Extensive experimental evaluation on two proposed challenging one-template visual matching tasks demonstrates its efficiency and superiority.
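The abstract states only that DPW is a dynamic programming algorithm that recovers an order-preserving element correspondence between two matrix-form inputs. As a rough intuition for that style of matching, the sketch below implements a generic DTW-like order-preserving alignment over 1D sequences of feature vectors; the function name, the cosine-distance cost, and the recurrence are illustrative assumptions, not the paper's actual 2D DPW formulation.

```python
import numpy as np

def order_preserving_alignment(A, B):
    """Align two sequences of feature vectors with a DTW-style
    dynamic program so that matched indices never cross.

    Illustrative stand-in only: the paper's DPW operates on
    matrix-form (2D) data, and its exact recurrence is not
    given in the abstract.
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    n, m = len(A), len(B)
    # Pairwise cosine distances between elements of A and B (assumed cost).
    norm_a = np.linalg.norm(A, axis=1, keepdims=True) + 1e-8
    norm_b = np.linalg.norm(B, axis=1, keepdims=True) + 1e-8
    cost = 1.0 - (A / norm_a) @ (B / norm_b).T
    # D[i, j] = minimal cumulative cost of aligning A[:i] with B[:j].
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j - 1],  # match and advance in both sequences
                D[i - 1, j],      # advance in A only
                D[i, j - 1],      # advance in B only
            )
    # Backtrack to recover the element correspondence path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Toy usage: align 5 template features with 7 query features.
rng = np.random.default_rng(0)
template = rng.normal(size=(5, 16))
query = rng.normal(size=(7, 16))
total_cost, correspondence = order_preserving_alignment(template, query)
```

Because every backtracking move only decreases the indices, the recovered correspondence is monotone in both sequences, which is the order-preserving property the abstract attributes to DPW.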
Keywords
Algorithms, Humans