
Stochastic RAG: End-to-End Retrieval-Augmented Generation Through Expected Utility Maximization

SIGIR 2024

University of Massachusetts Amherst | Google

Abstract
This paper introduces Stochastic RAG, a novel approach for end-to-end optimization of retrieval-augmented generation (RAG) models that relaxes the simplifying assumptions of marginalization and document independence made in most prior work. Stochastic RAG casts retrieval in RAG as a stochastic sampling-without-replacement process. Through this formulation, we employ straight-through Gumbel-top-k, which provides a differentiable approximation of sampling without replacement and enables effective end-to-end optimization of RAG. We conduct extensive experiments on seven diverse datasets covering a wide range of tasks, from open-domain question answering and fact verification to slot filling for relation extraction and dialogue systems. By applying this optimization method to a recent and effective RAG model, we advance the state of the art on six out of seven datasets.
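The Gumbel-top-k trick mentioned in the abstract can be sketched as follows. This is a minimal illustrative implementation of the forward sampling step only: perturbing each retrieval score with i.i.d. Gumbel(0, 1) noise and taking the top-k perturbed scores yields a sample of k documents without replacement, with inclusion probabilities governed by the softmax of the scores. The function name and scores are illustrative; the paper's straight-through estimator additionally passes gradients through a relaxed (softmax-based) version in the backward pass, which is not shown here.

```python
import numpy as np

def gumbel_top_k(scores, k, rng=None):
    """Sample k indices without replacement via the Gumbel-top-k trick:
    add i.i.d. Gumbel(0, 1) noise to each score and keep the indices
    of the k largest perturbed scores."""
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(size=len(scores))))
    perturbed = np.asarray(scores, dtype=float) + gumbel
    # argsort descending, keep the top-k indices
    return np.argsort(-perturbed)[:k]

# Hypothetical retrieval scores for 6 candidate documents
scores = [2.0, 0.5, 1.2, 3.1, 0.1, 1.8]
sampled = gumbel_top_k(scores, k=3, rng=np.random.default_rng(0))
```

Each call returns a different size-k subset; over many draws, higher-scored documents are included more often, which is what makes the retrieval step stochastic rather than a fixed top-k cutoff.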
Key words
Retrieval augmentation, retrieval-enhanced machine learning, ranking optimization
Chat Paper

Highlights: This paper proposes Stochastic RAG, a novel method for end-to-end optimization of retrieval-augmented generation (RAG) models. It relaxes the simplifying assumptions of marginalization and document independence made in most prior work, casts retrieval in RAG as a stochastic sampling-without-replacement process, and employs Gumbel-top-k for effective end-to-end optimization.

Method: Stochastic RAG uses Gumbel-top-k to provide a differentiable approximation of sampling without replacement, thereby enabling effective end-to-end optimization of RAG.

Experiments: Extensive experiments were conducted on seven diverse datasets covering tasks including open-domain question answering, fact verification, slot filling for relation extraction, and dialogue systems. Applying this optimization method to a recent and effective RAG model yields state-of-the-art results on six of the seven datasets.