A Multicore GNN Training Accelerator
2023 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED)
Abstract
Graph neural networks (GNNs) are vital for analytics on real-world problems modeled as graphs. This work develops a multicore GNN training accelerator together with multicore-specific optimizations for superior performance. It uses enhanced multicore-aware dynamic caching to circumvent the cost of the irregular DRAM access patterns of graph-structured data. A novel feature vector segmentation approach maximizes on-chip data reuse, achieving high on-chip computation per memory access and reducing data access latency; a machine learning model selects the configuration for optimal performance. The work is a major advance over prior FPGA/ASIC GNN accelerators, handling significantly larger datasets (up to 8.6M vertices) across a variety of GNN models. On average, it achieves a 17× training speedup and 322× higher energy efficiency than DGL on a GPU; a 14× speedup with 268× lower energy than GPU-based GNNAdvisor; and 11× and 24× speedups over ASIC-based Rubik and FPGA-based GraphACT, respectively.
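The abstract names feature vector segmentation as the key on-chip reuse technique but gives no implementation details. Below is a minimal software sketch of the general idea, assuming a simple neighbor-sum aggregation: each vertex's feature vector is split into segments so that one segment of the whole feature matrix forms a cache-sized working set that is fully reused across all edges before the next segment is loaded. The function name `aggregate_segmented`, the `seg_width` parameter, and the NumPy formulation are illustrative assumptions, not the paper's hardware design.

```python
import numpy as np

def aggregate_segmented(features, neighbors, seg_width):
    """Neighbor-sum aggregation with feature-vector segmentation.

    Hypothetical sketch: process one feature segment for all vertices
    before touching the next segment, so the segment (the cache-sized
    working set) is reused across every edge instead of streaming full
    feature vectors per edge.
    """
    num_vertices, feat_dim = features.shape
    out = np.zeros_like(features)
    for start in range(0, feat_dim, seg_width):
        end = min(start + seg_width, feat_dim)
        seg = features[:, start:end]           # cache-resident segment
        for v, nbrs in enumerate(neighbors):   # reuse segment over all edges
            out[v, start:end] = seg[nbrs].sum(axis=0)
    return out

# Toy usage: 5 vertices, 8-dim features, segments of width 4.
feats = np.random.rand(5, 8).astype(np.float32)
adj = [np.array([1, 2]), np.array([0]), np.array([0, 3, 4]),
       np.array([2]), np.array([2])]
print(aggregate_segmented(feats, adj, seg_width=4))
```

With segmentation, the per-segment working set shrinks from `num_vertices * feat_dim` to `num_vertices * seg_width` values, which is what lets irregular neighbor accesses hit on-chip storage instead of DRAM.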