Lift: Exploiting Hybrid Stacked Memory for Energy-Efficient Processing of Graph Convolutional Networks.

DAC 2023

Abstract
Graph Convolutional Networks (GCNs) are powerful learning approaches for graph-structured data, but they are both compute- and memory-intensive. The emerging 3D-stacked computation-in-memory (CIM) architecture offers a promising way to process GCNs efficiently: by computing near the data, it reduces movement between computing logic and memory. However, previous works do not fully exploit the CIM architecture in either dataflow or mapping, leading to significant energy consumption.

This paper presents Lift, an energy-efficient GCN accelerator built on a 3D CIM architecture through software-hardware co-design. At the hardware level, Lift introduces a hybrid architecture to process vertices with different characteristics: near-bank processing units with a push-based dataflow handle vertices with strong reusability, while a dedicated unit reduces the massive data movement caused by high-degree vertices. At the software level, Lift adopts a hybrid mapping to further exploit data locality and fully utilize the hybrid computing resources. Experimental results show that the proposed scheme significantly reduces data movement and energy consumption compared with representative schemes.
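To make the computation concrete: a GCN layer aggregates each vertex's neighbor features and then applies a weight matrix. The sketch below illustrates the push-based style of aggregation the abstract mentions, in which each source vertex scatters its feature vector to its out-neighbors (so high-degree vertices generate many pushes). This is an illustrative reconstruction of a generic GCN layer, not the paper's actual dataflow; all names (`gcn_layer_push`, `adj_list`, etc.) are assumptions.

```python
import numpy as np

def gcn_layer_push(adj_list, H, W):
    """One GCN layer with push-based aggregation (illustrative, not Lift's design).

    adj_list: list where adj_list[src] is the list of out-neighbors of src
    H: (n, f_in) vertex feature matrix
    W: (f_in, f_out) layer weight matrix
    """
    agg = np.zeros_like(H)
    for src, neighbors in enumerate(adj_list):
        for dst in neighbors:
            # Each source "pushes" its feature to a neighbor; a
            # high-degree source triggers many such transfers.
            agg[dst] += H[src]
    # Combination phase: dense matmul with weights, then ReLU.
    return np.maximum(agg @ W, 0.0)
```

For example, on a 3-vertex graph with edges 0→1, 0→2, 1→2, 2→0 and identity features/weights, vertex 2 accumulates the features of both vertices 0 and 1.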
Keywords
3D-Stacked Memory, Computation-in-Memory, Graph Convolutional Networks, Accelerator