TCIM: Triangle Counting Acceleration With Processing-In-MRAM Architecture

2020 57th ACM/IEEE Design Automation Conference (DAC), 2020

Abstract
Triangle counting (TC) is a fundamental problem in graph analysis with numerous applications, which has motivated many TC acceleration solutions on traditional computing platforms such as GPUs and FPGAs. However, these approaches suffer from a bandwidth bottleneck because TC calculation involves a large amount of data transfer. In this paper, we propose to overcome this challenge by designing a TC accelerator that exploits the emerging processing-in-MRAM (PIM) architecture. The key innovation of our approach is a novel method to perform TC with bitwise logic operations (such as AND) instead of traditional matrix computations. This enables efficient in-memory implementation of TC, which we demonstrate in this paper with computational Spin-Transfer Torque Magnetic RAM (STT-MRAM) arrays. Furthermore, we develop customized graph slicing and mapping techniques to speed up the computation and reduce energy consumption. We validate the proposed TC accelerator with a device-to-architecture co-simulation framework. The results show that our data mapping strategy reduces the computation by 99.99% and the memory WRITE operations by 72%. Compared with existing GPU and FPGA accelerators, our in-memory accelerator achieves speedups of 9× and 23.4×, respectively, and a 20.6× energy-efficiency improvement over the FPGA accelerator.
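The bitwise formulation described in the abstract can be illustrated with a minimal software sketch, shown below. This is only an analogue of the idea, not the paper's hardware design: each vertex's neighbor set is encoded as a bitmask, and for every edge (u, v) the two bitmasks are ANDed and the set bits are counted, giving the number of common neighbors (triangles closed through that edge). In the proposed accelerator, the AND and bit-count steps would be carried out inside the computational STT-MRAM arrays; all names here are illustrative.

    # Minimal sketch of triangle counting via bitwise AND (software analogue).
    def count_triangles(num_vertices, edges):
        """Count triangles in an undirected graph given as an edge list."""
        # Neighbor set of each vertex as a bitmask (bit j set => edge to j).
        adj = [0] * num_vertices
        for u, v in edges:
            adj[u] |= 1 << v
            adj[v] |= 1 << u

        total = 0
        for u, v in edges:
            # Common neighbors of u and v each close one triangle with edge (u, v).
            total += bin(adj[u] & adj[v]).count("1")

        # Each triangle is counted once per each of its three edges.
        return total // 3

    if __name__ == "__main__":
        # Small example: a 4-clique contains 4 triangles.
        edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
        print(count_triangles(4, edges))  # -> 4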
Keywords
Triangle Counting, Processing-In-MRAM, Architecture, Data Mapping