RAGCache: Efficient Knowledge Caching for Retrieval-Augmented Generation
arXiv (2024)
Abstract
Retrieval-Augmented Generation (RAG) has shown significant improvements in
various natural language processing tasks by integrating the strengths of large
language models (LLMs) and external knowledge databases. However, RAG
introduces long sequence generation and leads to high computation and memory
costs. We propose Thoth, a novel multilevel dynamic caching system tailored for
RAG. Our analysis benchmarks current RAG systems, pinpointing the performance
bottleneck (i.e., long sequence due to knowledge injection) and optimization
opportunities (i.e., caching knowledge's intermediate states). Based on these
insights, we design Thoth, which organizes the intermediate states of retrieved
knowledge in a knowledge tree and caches them in the GPU and host memory
hierarchy. Thoth proposes a replacement policy that is aware of LLM inference
characteristics and RAG retrieval patterns. It also dynamically overlaps the
retrieval and inference steps to minimize the end-to-end latency. We implement
Thoth and evaluate it on vLLM, a state-of-the-art LLM inference system, and
Faiss, a state-of-the-art vector database. The experimental results show that
Thoth reduces the time to first token (TTFT) by up to 4x and improves the
throughput by up to 2.1x compared to vLLM integrated with Faiss.
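The core idea, caching the intermediate (key-value) states of retrieved documents in a knowledge tree so that requests sharing a document prefix reuse cached states, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`KnowledgeTree`, `lookup`, `insert`) are hypothetical, sizes are in abstract units, and the least-frequently-hit eviction is a toy stand-in for the inference- and retrieval-aware replacement policy the abstract describes.

```python
# Hedged sketch of a prefix-tree cache over retrieved-document KV states.
# All names and the eviction heuristic are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Node:
    doc_id: str                                     # retrieved document chunk
    size: int                                       # cached KV size (abstract units)
    hits: int = 0                                   # access count for eviction
    children: Dict[str, "Node"] = field(default_factory=dict)


class KnowledgeTree:
    """Caches intermediate states keyed by the ordered document prefix,
    so requests that retrieve the same leading documents reuse them."""

    def __init__(self, capacity: int):
        self.root = Node(doc_id="<root>", size=0)
        self.capacity = capacity
        self.used = 0

    def lookup(self, docs: List[str]) -> int:
        """Return how many leading documents already have cached states."""
        node, matched = self.root, 0
        for d in docs:
            if d not in node.children:
                break
            node = node.children[d]
            node.hits += 1
            matched += 1
        return matched

    def insert(self, docs: List[str], sizes: List[int]) -> None:
        """Cache states along the document path, evicting cold leaves as needed."""
        node = self.root
        for d, s in zip(docs, sizes):
            if d not in node.children:
                while self.used + s > self.capacity:
                    if not self._evict():
                        return              # nothing evictable; stop caching deeper
                node.children[d] = Node(doc_id=d, size=s, hits=1)
                self.used += s
            node = node.children[d]

    def _evict(self) -> bool:
        """Evict the least-frequently-hit leaf (toy replacement policy)."""
        leaves: List[Tuple[Node, str]] = []
        stack = [self.root]
        while stack:
            n = stack.pop()
            for key, child in n.children.items():
                if child.children:
                    stack.append(child)
                else:
                    leaves.append((n, key))
        if not leaves:
            return False
        parent, key = min(leaves, key=lambda t: t[0].children[t[1]].hits)
        self.used -= parent.children[key].size
        del parent.children[key]
        return True
```

A request retrieving documents `["d1", "d2", "d3"]` would call `lookup` to find the longest cached prefix, skip recomputing attention states for those documents, and then `insert` the newly computed states for the rest, mirroring the hit/miss flow the paper's multilevel cache implies.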