LoongServe: Efficiently Serving Long-context Large Language Models with Elastic Sequence Parallelism
arXiv (2024)
Abstract
The context window of large language models (LLMs) is rapidly increasing,
leading to a huge variance in resource usage between different requests as well
as between different phases of the same request. Restricted by static
parallelism strategies, existing LLM serving systems cannot efficiently utilize
the underlying resources to serve variable-length requests in different phases.
To address this problem, we propose a new parallelism paradigm, elastic
sequence parallelism (ESP), to elastically adapt to the variance between
different requests and phases. Based on ESP, we design and build LoongServe, an
LLM serving system that (1) improves computation efficiency by elastically
adjusting the degree of parallelism in real-time, (2) improves communication
efficiency by reducing key-value cache migration overhead and overlapping
partial decoding communication with computation, and (3) improves GPU memory
efficiency by reducing key-value cache fragmentation across instances. Our
evaluation under diverse real-world datasets shows that LoongServe improves the
maximum throughput by up to 3.85× compared to chunked prefill and up to
5.81× compared to prefill-decoding disaggregation.
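To make the core idea of elastic sequence parallelism concrete, the following is a minimal, hypothetical sketch of phase-aware degree-of-parallelism (DoP) selection: a long prefill phase is spread across more instances, while the lighter decoding phase uses fewer. The function names, thresholds, and instance counts are illustrative assumptions, not LoongServe's actual scheduling algorithm.

```python
# Hypothetical sketch of phase-aware DoP selection under elastic sequence
# parallelism (ESP). Thresholds and names are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class Request:
    prompt_len: int   # tokens in the prompt (drives prefill cost)
    phase: str        # "prefill" or "decode"


def choose_dop(req: Request, total_instances: int) -> int:
    """Pick how many instances to shard this request's sequence across."""
    if req.phase == "prefill":
        # Longer prompts benefit from spreading attention computation over
        # more instances; 8192 tokens per instance is an illustrative budget.
        return max(1, min(total_instances, req.prompt_len // 8192 + 1))
    # Decoding emits one token per step, so a small DoP frees instances
    # for other requests instead of over-parallelizing a cheap phase.
    return 1


if __name__ == "__main__":
    pool = 8  # total serving instances (assumption)
    for req in (Request(200_000, "prefill"),
                Request(200_000, "decode"),
                Request(4_000, "prefill")):
        print(f"{req.phase:8s} len={req.prompt_len:7d} -> DoP {choose_dop(req, pool)}")
```

In this toy setting, a 200K-token prefill would be sharded across all eight instances, while its decoding phase would shrink back to a single instance, which is the kind of per-phase elasticity the abstract describes.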