Scalable Memory Reclamation for Multi-Core, Real-Time Systems

2018 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2018

Abstract
A core challenge in making the best use of an increasing number of cores in real-time systems is efficient and predictable resource sharing. Traditional mechanisms for mutual exclusion, such as locks, limit parallelism by serializing resource access. Reader-writer locks relax mutual exclusion and enable selective parallelism for a subset of accesses, but can suffer from increased implementation overheads. In all such implementations, the costs of cache coherency alone can be prohibitive as the number of cores grows. This paper investigates techniques such as Read-Copy Update (RCU) that enable truly parallel access to data structures. These techniques optimize data-structure read-paths and can completely avoid stores to shared structures, thus avoiding cache-coherency overheads. We show that existing implementations of preemptive RCU are not designed to provide real-time latencies and require a potentially unbounded amount of dynamically allocated memory. We therefore introduce two new implementations that are both predictable and efficient, together with a matching analysis that establishes bounds on memory consumption. We additionally provide a schedulability analysis that demonstrates the effectiveness of scalable read-side operations, achieving consistently higher schedulability than existing techniques. We further apply the analysis to provide admission control for a soft real-time application, achieving both higher throughput than existing approaches (up to 40% higher) and lower 99th-percentile read-path latencies (4x lower than existing techniques).
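To make the read-path property concrete, the following is a minimal C11 sketch of a quiescence-based read-side critical section in the general RCU style the abstract refers to. It is not the paper's implementation: the names quiescence_enter, quiescence_exit, and wait_for_readers, as well as the per-core flag layout, are assumptions made for illustration. The point it demonstrates is that a lookup writes only to a core-local flag, never to the shared list, so reads avoid cache-line contention on the data structure itself.

/* Minimal illustrative sketch of a quiescence-based RCU read path (C11).
 * NOT the paper's implementation; names and the per-core flag scheme are
 * assumptions for illustration only. */
#include <stdatomic.h>
#include <stddef.h>

#define NCORES 8

struct node { int key; int value; struct node *_Atomic next; };

/* One flag per core: set while that core is inside a read-side section. */
static _Atomic int reader_active[NCORES];

static void quiescence_enter(int core) {
    /* seq_cst keeps this store ordered before subsequent structure reads. */
    atomic_store_explicit(&reader_active[core], 1, memory_order_seq_cst);
}

static void quiescence_exit(int core) {
    atomic_store_explicit(&reader_active[core], 0, memory_order_release);
}

/* Read path: traverse the shared list without storing to shared memory. */
int lookup(struct node *_Atomic *head, int core, int key, int *out) {
    int found = 0;
    quiescence_enter(core);
    for (struct node *n = atomic_load_explicit(head, memory_order_seq_cst);
         n != NULL;
         n = atomic_load_explicit(&n->next, memory_order_acquire)) {
        if (n->key == key) { *out = n->value; found = 1; break; }
    }
    quiescence_exit(core);
    return found;
}

/* A writer may reclaim a removed node only after every core has been
 * observed outside a read-side section, i.e. all pre-existing readers
 * have quiesced. A real-time implementation would bound this wait; the
 * sketch simply spins. */
void wait_for_readers(void) {
    for (int c = 0; c < NCORES; c++)
        while (atomic_load_explicit(&reader_active[c], memory_order_acquire))
            ;
}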
Keywords
real time, parallelism, predictability, quiescence, scalable memory reclamation