SCAR: Scheduling Multi-Model AI Workloads on Heterogeneous Multi-Chiplet Module Accelerators
arXiv (2024)
Abstract
Emerging multi-model workloads that include heavy models, such as recent large language
models, have significantly increased the compute and memory demands on hardware. To
address these growing demands, designing a scalable hardware architecture has
become a key problem. Among recent solutions, the 2.5D silicon-interposer
multi-chip module (MCM) AI accelerator has been actively explored as a
promising scalable approach because of its low engineering cost and
composability. However, previous MCM accelerators are based on homogeneous
architectures with a fixed dataflow, and their limited workload adaptivity
causes major challenges under highly heterogeneous multi-model workloads.
Therefore, in this work, we explore the opportunity offered by
heterogeneous-dataflow MCM AI accelerators. We identify that scheduling
multi-model workloads on a heterogeneous-dataflow MCM AI accelerator is an
important and challenging problem because of its significance and scale: the
scheduling space reaches O(10^18) even for a single model on a 6x6 chiplet
array. We develop a set of heuristics to navigate this huge scheduling space
and codify them into a scheduler that incorporates advanced techniques such as
inter-chiplet pipelining. Our evaluation on ten multi-model workload scenarios
for datacenter multitenancy and AR/VR use cases demonstrates the efficacy of
our approach, achieving on average a 35.3% improvement over homogeneous
baselines in multi-model application settings.
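To see why the scheduling space reaches such an astronomical scale, consider a hypothetical back-of-envelope model (the paper's exact formulation is not reproduced here): if each layer of a model is independently assigned to one of the chiplets in a 6x6 array, and each chiplet can be configured with one of a few dataflow styles, the number of candidate schedules grows exponentially in model depth. The dataflow count and layer count below are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical sizing of the scheduling space. Assumed parameters:
num_chiplets = 6 * 6  # 6x6 chiplet array, as stated in the abstract
dataflows = 3         # assumed number of dataflow choices per chiplet
layers = 10           # assumed model depth (layers to be placed)

# Each layer independently picks a (chiplet, dataflow) pair,
# so the space is (num_chiplets * dataflows) ** layers.
space = (num_chiplets * dataflows) ** layers
print(f"~1e{int(math.log10(space))} candidate schedules")
```

Even under these modest assumptions the space already exceeds 10^18, which is why exhaustive search is infeasible and heuristic pruning (as the scheduler described here performs) becomes necessary.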