Efficient Inverted Indexes for Approximate Retrieval over Learned Sparse Representations
arXiv (2024)
Abstract
Learned sparse representations form an attractive class of contextual
embeddings for text retrieval: they are effective models of relevance and
are interpretable by design. Despite their apparent compatibility with
inverted indexes, however, retrieval over sparse embeddings remains
challenging, due to the distributional differences between learned
embeddings and term frequency-based lexical models of relevance such as BM25.
Recognizing this challenge, a great deal of research has gone into, among other
things, designing retrieval algorithms tailored to the properties of learned
sparse representations, including approximate retrieval systems. In fact, this
task featured prominently in the latest BigANN Challenge at NeurIPS 2023, where
approximate algorithms were evaluated on a large benchmark dataset by
throughput and recall. In this work, we propose a novel organization of the
inverted index that enables fast yet effective approximate retrieval over
learned sparse embeddings. Our approach organizes inverted lists into
geometrically-cohesive blocks, each equipped with a summary vector. During
query processing, we quickly determine if a block must be evaluated using the
summaries. As we show experimentally, single-threaded query processing using
our method, Seismic, reaches sub-millisecond per-query latency on various
sparse embeddings of the MS MARCO dataset while maintaining high recall. Our
results indicate that Seismic is one to two orders of magnitude faster than
state-of-the-art inverted index-based solutions and further outperforms the
winning (graph-based) submissions to the BigANN Challenge by a significant
margin.
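
To make the block-and-summary idea concrete, here is a minimal Python sketch of an inverted index whose posting lists are split into fixed-size blocks, each carrying a coordinate-wise-max summary vector used to skip blocks at query time. It is an illustration of the general technique only, not the authors' implementation: names such as `build_index`, `search`, and `BLOCK_SIZE` are hypothetical, blocks here are formed by simple impact-ordered chunking rather than the geometric clustering the paper describes, and the summaries are kept exact rather than pruned or quantized.

```python
import heapq
from collections import defaultdict

BLOCK_SIZE = 64  # hypothetical block size; a real system would tune this


def build_index(docs):
    """docs: list of sparse vectors, each a dict {term_id: weight}.

    Returns {term_id: [(summary, chunk), ...]} where each chunk is a list
    of (doc_id, weight) postings and summary is the coordinate-wise max of
    the member documents' full vectors.
    """
    postings = defaultdict(list)
    for doc_id, vec in enumerate(docs):
        for term, w in vec.items():
            postings[term].append((doc_id, w))

    index = {}
    for term, plist in postings.items():
        # Sort by impact so high-weight postings cluster together; the
        # paper instead groups geometrically-cohesive documents.
        plist.sort(key=lambda e: -e[1])
        blocks = []
        for i in range(0, len(plist), BLOCK_SIZE):
            chunk = plist[i:i + BLOCK_SIZE]
            summary = {}
            for doc_id, _ in chunk:
                for t, w in docs[doc_id].items():
                    summary[t] = max(summary.get(t, 0.0), w)
            blocks.append((summary, chunk))
        index[term] = blocks
    return index


def search(index, docs, query, k=10):
    """Approximate top-k by inner product over nonnegative sparse vectors.

    For each block, <query, summary> upper-bounds every member's score
    (since summary >= each member coordinate-wise and weights are >= 0),
    so a block is evaluated only if that bound can beat the current top-k.
    """
    heap = []   # min-heap of (score, doc_id)
    seen = set()
    for term, _ in sorted(query.items(), key=lambda e: -e[1]):
        for summary, chunk in index.get(term, []):
            bound = sum(w * summary.get(t, 0.0) for t, w in query.items())
            if len(heap) == k and bound <= heap[0][0]:
                continue  # no member of this block can enter the top-k
            for doc_id, _ in chunk:
                if doc_id in seen:
                    continue
                seen.add(doc_id)
                score = sum(w * docs[doc_id].get(t, 0.0)
                            for t, w in query.items())
                if len(heap) < k:
                    heapq.heappush(heap, (score, doc_id))
                elif score > heap[0][0]:
                    heapq.heapreplace(heap, (score, doc_id))
    return sorted(heap, reverse=True)
```

The key design point this sketch captures is that the summary trades a cheap, slightly loose upper bound for the ability to discard whole blocks without touching their postings; the paper reports that, combined with geometric clustering of blocks and compressed summaries, this skipping is what drives Seismic's order-of-magnitude latency gains.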