
Bridging the Preference Gap Between Retrievers and LLMs

Annual Meeting of the Association for Computational Linguistics (2024)

Abstract
Large Language Models (LLMs) have demonstrated superior results across a wide range of tasks, while retrieval has long been established as an effective means of obtaining task-relevant information for humans. Retrieval-augmented Generation (RAG) is known for its effectiveness in knowledge-intensive tasks, locating relevant information and placing it within the context window of the LLM. However, the relationship between retrievers and LLMs is still under-investigated. Most existing work treats the retriever and the LLM as independent components, leaving a gap between retrieving human-friendly information and assembling an LLM-friendly context. In this work, we examine a novel bridge model, validate the ranking and selection assumptions of retrievers in the context of RAG, and propose a training framework that chains together supervised and reinforcement learning to learn a bridge model. Empirical results demonstrate the effectiveness of our method in both question-answering and personalized generation tasks.
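To make the abstract's framing concrete, below is a minimal, self-contained sketch of where a bridge component sits in a RAG pipeline: between the retriever's output and the assembly of the LLM's context. All names here (Passage, BridgeModel, assemble_context) and the heuristic scoring are illustrative assumptions, not the paper's implementation; in the paper the bridge model is learned via chained supervised and reinforcement learning rather than a fixed heuristic.

```python
# Illustrative sketch (not the paper's method): a "bridge" step that re-ranks
# and selects retrieved passages before they are placed in the LLM's context.

from dataclasses import dataclass
from typing import List


@dataclass
class Passage:
    text: str
    retriever_score: float  # similarity score from the upstream retriever


class BridgeModel:
    """Toy stand-in for a learned bridge model.

    In the paper's framing this component would be trained (supervised
    learning followed by reinforcement learning); here it is a fixed
    heuristic so the sketch stays self-contained and runnable.
    """

    def score(self, query: str, passage: Passage) -> float:
        # Hypothetical scoring: add lexical overlap with the query on top
        # of the retriever's own score. A learned model would replace this.
        query_terms = set(query.lower().split())
        passage_terms = set(passage.text.lower().split())
        return passage.retriever_score + len(query_terms & passage_terms)

    def select(self, query: str, passages: List[Passage], k: int) -> List[Passage]:
        # Re-rank with the bridge score and keep the top-k passages.
        ranked = sorted(passages, key=lambda p: self.score(query, p), reverse=True)
        return ranked[:k]


def assemble_context(query: str, passages: List[Passage]) -> str:
    # Concatenate the selected passages into a prompt for the LLM.
    context = "\n".join(f"[{i + 1}] {p.text}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


if __name__ == "__main__":
    retrieved = [
        Passage("The Eiffel Tower is in Paris.", 0.72),
        Passage("Paris is the capital of France.", 0.68),
        Passage("The Louvre houses the Mona Lisa.", 0.65),
    ]
    bridge = BridgeModel()
    chosen = bridge.select("Where is the Eiffel Tower?", retrieved, k=2)
    print(assemble_context("Where is the Eiffel Tower?", chosen))
```

The design point the sketch illustrates is the gap named in the abstract: the retriever optimizes for human-friendly relevance, while the bridge step decides which passages, and in what order, make an LLM-friendly context.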