BagFormer: Better Cross-Modal Retrieval via bag-wise interaction

Haowen Hou, Xiaopeng Yan, Yigeng Zhang, Fengzong Lian, Zhanhui Kang

arXiv (2022)

Abstract
In the field of cross-modal retrieval, single-encoder models tend to outperform dual-encoder models but suffer from high latency and low throughput. In this paper, we present a dual-encoder model called BagFormer that uses a cross-modal interaction mechanism to improve recall without sacrificing latency or throughput. BagFormer achieves this through bag-wise interactions, which transform text to a more appropriate granularity and incorporate entity knowledge into the model. Our experiments demonstrate that BagFormer achieves results comparable to state-of-the-art single-encoder models on cross-modal retrieval tasks, while offering efficient training and inference with 20.72x lower latency and 25.74x higher throughput.
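The abstract does not spell out the scoring details, but a bag-wise interaction can be sketched as late interaction at bag granularity: each text bag is matched against its most similar image feature, and the per-bag maxima are summed (a MaxSim-style score, as in ColBERT, applied to bags rather than tokens). The sketch below assumes precomputed bag embeddings and image patch features; the function name `bagwise_score` and the dimensions are illustrative, not from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def bagwise_score(text_bags, image_feats):
    """Late-interaction score at bag granularity (a MaxSim-style sketch):
    each text bag takes the similarity of its best-matching image feature,
    and the per-bag maxima are summed into one retrieval score."""
    text_bags = l2_normalize(text_bags)
    image_feats = l2_normalize(image_feats)
    sim = text_bags @ image_feats.T          # (n_bags, n_patches) cosine sims
    return float(sim.max(axis=1).sum())      # best match per bag, then sum

# Toy example with random embeddings (4 text bags, 16 image patches, dim 64).
rng = np.random.default_rng(0)
text_bags = rng.normal(size=(4, 64))
image_feats = rng.normal(size=(16, 64))
score = bagwise_score(text_bags, image_feats)
```

Because the interaction happens only on small pooled bag embeddings rather than full token-level cross-attention, both modalities can still be encoded independently, which is what preserves the dual-encoder latency and throughput advantages the abstract reports.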
Keywords
BagFormer, retrieval, cross-modal, bag-wise