Enforcing Last-Level Cache Partitioning through Memory Virtual Channels

2019 28th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2019

Abstract
Ensuring fairness or providing isolation between multiple workloads with different characteristics colocated on a single shared-memory system is a challenge. Recent multicore processors offer last-level cache (LLC) hardware partitioning as hardware support for isolation, with the partitioning typically specified by the user. While more LLC capacity usually yields higher performance, in real-machine experiments we identify cases in which a workload allocated more LLC capacity performs worse, a phenomenon we refer to as MiW (more is worse). Through controlled experiments, we find that the other workload, given less LLC capacity, incurs more frequent LLC misses. It thereby stresses the main-memory system shared by both workloads and degrades the performance of the former workload even though LLC partitioning is in place (a balloon effect). To resolve this problem, we propose virtualizing the datapath of the main-memory controllers and dedicating memory virtual channels (mVCs) to each group of applications formed for LLC partitioning. mVCs can further fine-tune the performance of each group by differentiating buffer sizes among the channels. They can also reduce total system cost by allowing latency-critical and throughput-oriented workloads to run together on shared machines whose performance criteria could otherwise be met only on dedicated machines. Experiments on a simulated chip multiprocessor show that our proposal effectively eliminates the MiW phenomenon, providing additional opportunities for workload consolidation in a datacenter. Our case study demonstrates a potential 21.8% reduction in machine count with mVCs in a consolidation scenario that would otherwise violate a service-level objective (SLO).
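The abstract only sketches the mechanism. As a rough illustration of the intuition behind per-group memory virtual channels (not the paper's actual design or simulator), the toy Python model below contrasts a single shared memory-controller request buffer with per-group dedicated buffer entries. All parameters, names, and rates here (BUF_SLOTS, the group labels, arrival and drain rates) are invented assumptions for illustration only.

```python
import collections
import random

# Hypothetical toy model: a memory controller with BUF_SLOTS
# request-buffer entries shared by two workload groups.
BUF_SLOTS = 32

def run(dedicated_slots=None, cycles=10_000):
    """Simulate request arrivals and return per-group counts of rejected
    (stalled) requests. dedicated_slots maps group -> reserved entries;
    None models a single shared buffer (no mVCs)."""
    buf = collections.deque()                      # in-flight requests (group ids)
    stalls = {"lat_critical": 0, "throughput": 0}
    for _ in range(cycles):
        if buf:                                    # one request drains per cycle
            buf.popleft()
        # The throughput-oriented group misses far more often (the source of
        # the "balloon effect"); the latency-critical group issues sparse requests.
        arrivals = ["throughput"] * 3
        if random.random() < 0.2:
            arrivals.append("lat_critical")
        for g in arrivals:
            if dedicated_slots is None:
                ok = len(buf) < BUF_SLOTS          # shared buffer: first come, first served
            else:
                ok = sum(1 for r in buf if r == g) < dedicated_slots[g]
            if ok:
                buf.append(g)
            else:
                stalls[g] += 1                     # request rejected, core stalls
    return stalls

random.seed(0)
print("shared buffer  :", run())
print("per-group mVCs :", run({"lat_critical": 8, "throughput": 24}))
```

In the shared-buffer case the miss-heavy group keeps the buffer full, so the latency-critical group's requests are rejected regardless of how much LLC it was allocated; capping each group's occupancy removes that interference, which is the intuition behind dedicating an mVC (and its buffer budget) to each LLC-partition group.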
Keywords
Memory Virtual Channel, LLC Partitioning, Fairness, More is Worse