A Parallel Page Cache: IOPS and Caching for Multicore Systems

HotStorage '12: Proceedings of the 4th USENIX Conference on Hot Topics in Storage and File Systems (2012)

Abstract
We present a set-associative page cache for scalable parallelism of IOPS in multicore systems. The design eliminates lock contention and hardware cache misses by partitioning the global cache into many independent page sets, each requiring a small amount of metadata that fits in a few processor cache lines. We extend this design with message passing among processors in a non-uniform memory architecture (NUMA). We evaluate the set-associative cache on 12-core processors and a 48-core NUMA machine to show that it realizes the scalable IOPS of direct I/O (no caching) and matches the cache hit rates of Linux's page cache. Set-associative caching maintains IOPS at scale, in contrast to Linux, for which IOPS crash beyond eight parallel threads.
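The core idea is a hash-partitioned, set-associative structure: a block number maps to one small page set, each set carries its own lock and only enough metadata to fit in a few processor cache lines, so lookups that land on different sets never contend. The C sketch below illustrates that layout under simple assumptions (fixed associativity, round-robin eviction, I/O performed while holding the set lock); the names (`page_cache`, `pc_set`, `pc_lookup`, `read_block`) are hypothetical and not taken from the paper's implementation.

/*
 * Minimal sketch of a set-associative page cache with per-set locking.
 * Assumptions: power-of-two set count, fixed associativity, round-robin
 * eviction; a real design would refine the hash and eviction policy and
 * would likely drop the lock during I/O.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define SET_WAYS  8          /* pages per set: set metadata stays within a few cache lines */
#define PAGE_SIZE 4096

struct pc_entry {
    uint64_t block;          /* block number backing this page, UINT64_MAX if empty */
    void    *data;           /* cached page contents */
};

struct pc_set {
    pthread_spinlock_t lock; /* contention is confined to one set, not the whole cache */
    struct pc_entry way[SET_WAYS];
    unsigned next_victim;    /* round-robin eviction cursor */
};

struct page_cache {
    size_t nsets;            /* power of two; many sets => little contention */
    struct pc_set *sets;
};

static struct page_cache *pc_create(size_t nsets)
{
    struct page_cache *pc = malloc(sizeof(*pc));
    pc->nsets = nsets;
    pc->sets = calloc(nsets, sizeof(struct pc_set));
    for (size_t i = 0; i < nsets; i++) {
        pthread_spin_init(&pc->sets[i].lock, PTHREAD_PROCESS_PRIVATE);
        for (int w = 0; w < SET_WAYS; w++)
            pc->sets[i].way[w].block = UINT64_MAX;
    }
    return pc;
}

/* Map a block number to its set. */
static struct pc_set *pc_set_for(struct page_cache *pc, uint64_t block)
{
    return &pc->sets[block & (pc->nsets - 1)];
}

/* Return the cached page for `block`, loading it via `read_block` on a miss. */
static void *pc_lookup(struct page_cache *pc, uint64_t block,
                       void (*read_block)(uint64_t block, void *buf))
{
    struct pc_set *set = pc_set_for(pc, block);

    pthread_spin_lock(&set->lock);
    for (int w = 0; w < SET_WAYS; w++) {
        if (set->way[w].block == block) {            /* hit: stays inside one set */
            void *data = set->way[w].data;
            pthread_spin_unlock(&set->lock);
            return data;
        }
    }
    /* Miss: evict the round-robin victim within this set only. */
    struct pc_entry *victim = &set->way[set->next_victim];
    set->next_victim = (set->next_victim + 1) % SET_WAYS;
    if (victim->data == NULL)
        victim->data = malloc(PAGE_SIZE);
    victim->block = block;
    read_block(block, victim->data);
    void *data = victim->data;
    pthread_spin_unlock(&set->lock);
    return data;
}

Because a thread only ever takes the lock of the single set its block hashes to, adding threads spreads contention across sets instead of serializing on one global cache lock, which is the property the paper's evaluation measures.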
Keywords
cache hit rate, global cache, hardware cache, page cache, processor cache line, set-associative cache, set-associative page cache, IOPS crash, scalable IOPS, independent page set, multicore system, parallel page cache