SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models
arXiv (2023)
Abstract
Mixture-of-Experts (MoE) has emerged as a favorable architecture in the era
of large models due to its inherent advantage, i.e., enlarging model capacity
without incurring notable computational overhead. Yet, the realization of such
benefits often results in ineffective GPU memory utilization, as large portions
of the model parameters remain dormant during inference. Moreover, the memory
demands of large models consistently outpace the memory capacity of
contemporary GPUs. Addressing this, we introduce SiDA-MoE
(Sparsity-inspired Data-Aware), an
efficient inference approach tailored for large MoE models. SiDA-MoE
judiciously exploits both the system's main memory, which is now abundant and
readily scalable, and GPU memory by capitalizing on the inherent sparsity of
expert activation in MoE models. By adopting a data-aware perspective, SiDA-MoE
achieves enhanced model efficiency with a negligible performance drop.
Specifically, SiDA-MoE attains a remarkable speedup in MoE inference, with up to
a 3.93× throughput increase, up to 72% latency reduction, and up to
80% GPU memory savings, at the cost of a performance drop as low as 1%. This work paves
the way for scalable and efficient deployment of large MoE models, even with
constrained resources. Code is available at:
https://github.com/timlee0212/SiDA-MoE.
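
For intuition only, below is a minimal PyTorch sketch of the kind of sparsity-driven expert offloading the abstract describes. It is not the authors' implementation (that lives in the linked repository), and the class and parameter names (OffloadedMoELayer, num_experts, d_model, d_ff) are hypothetical: dormant experts stay in the system's main memory, and only the experts a batch actually routes to are copied to GPU memory before the MoE layer executes.

```python
# Hypothetical sketch of sparsity-driven expert offloading, the idea SiDA-MoE
# builds on; NOT the authors' implementation (see the repository above).
# Dormant experts live in host (main) memory; only the experts activated by
# the current batch are moved to GPU memory.
import torch
import torch.nn as nn


class OffloadedMoELayer(nn.Module):
    def __init__(self, num_experts: int, d_model: int, d_ff: int):
        super().__init__()
        # All expert FFNs start in CPU memory; they are moved on demand.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(d_model, num_experts)

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); the tiny router follows the tokens' device.
        logits = self.router.to(x.device)(x)
        top1 = logits.argmax(dim=-1)           # top-1 routing decision per token
        out = torch.zeros_like(x)
        # Only experts actually hit by this batch ever occupy GPU memory.
        for expert_id in top1.unique().tolist():
            expert = self.experts[expert_id].to(x.device)
            mask = top1 == expert_id
            out[mask] = expert(x[mask])
            self.experts[expert_id].to("cpu")  # evict the expert after use
        return out


# Example: 8 tokens routed across 4 experts; only the hit experts are loaded.
layer = OffloadedMoELayer(num_experts=4, d_model=16, d_ff=32)
tokens = torch.randn(8, 16)
output = layer(tokens)
print(output.shape)  # torch.Size([8, 16])
```

Note that this sketch reacts only after routing; the data-aware angle in SiDA-MoE implies anticipating which experts a batch will activate so that host-to-GPU transfers can be prepared ahead of the layer's execution, which the sketch above omits.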