Large Scale Near-Duplicate Image Retrieval Via Patch Embedding

2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2019

Abstract
Large scale near-duplicate image retrieval (NDIR) relies on the Bag-of-Words methodology, which quantizes local features into visual words. However, directly matching these visual words typically produces many mismatches due to quantization errors. To enhance the discriminability of the matching process, existing methods usually exploit hand-crafted contextual information, which performs poorly in complicated real-world scenarios. In contrast, in this paper we propose a trainable lightweight embedding network to extract local binary features. The network takes image patches as inputs and generates binary codes that can be stored efficiently in the inverted index file and used to discard mismatches immediately during retrieval. We improve the discriminability of the codes by carefully composing the training patches for network optimization, combining proper inter-class (non-duplicate) patch selection with rich intra-class (near-duplicate) patch generation. We evaluate our approach on the open NDIR dataset, INRIA CopyDays, and the experimental results show that our method performs favorably against state-of-the-art algorithms. Furthermore, with a relatively short code length, our approach achieves higher query speed and lower storage occupation.
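The retrieval mechanism the abstract describes (visual-word lookup in an inverted index, with per-feature binary codes used to reject quantization mismatches) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation; the class and parameter names (`InvertedIndex`, `threshold`) and the integer encoding of the binary codes are assumptions for illustration.

```python
# Sketch: inverted-index retrieval with binary-code verification.
# Features quantized to the same visual word only count as a match
# when their binary codes are within a Hamming-distance threshold,
# which filters out quantization mismatches early.

from collections import defaultdict


def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")


class InvertedIndex:
    def __init__(self, threshold: int = 8):
        self.threshold = threshold          # max Hamming distance to accept a match
        self.postings = defaultdict(list)   # visual word -> [(image_id, code), ...]

    def add(self, image_id, features):
        """Index an image; features is an iterable of (visual_word, binary_code)."""
        for word, code in features:
            self.postings[word].append((image_id, code))

    def query(self, features):
        """Vote for database images whose codes survive the Hamming check."""
        scores = defaultdict(int)
        for word, code in features:
            for image_id, db_code in self.postings[word]:
                if hamming(code, db_code) <= self.threshold:
                    scores[image_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])
```

Because each posting stores only a short code alongside the image id, the check is a single XOR plus popcount per candidate, which is what makes the mismatch rejection cheap enough to run inside the inverted-file scan.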
Keywords
NDIR, Bag of Words, Feature Match, Neural Networks, Patch Embedding