DeltaGNN: Accelerating Graph Neural Networks on Dynamic Graphs With Delta Updating

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS (2024)

Abstract
Graph neural network (GNN) accelerators have achieved prominent performance speedups on static graphs but remain inefficient on dynamic graphs. The reason is that in dynamic graphs, updating a few vertices introduces enormous redundant neighbor re-aggregation and feature re-updating. Moreover, the evolving graph structure makes graph preprocessing impractical and incurs random memory accesses that can only be determined at runtime. In this article, we propose DeltaGNN, an algorithm and accelerator co-design for GNN acceleration on dynamic graphs. On the algorithm side, we first propose a delta updating algorithm, which identifies the sensitivity of vertices and reduces the aggregation and updating operations of insensitive vertices without compromising accuracy. On the hardware side, we propose a novel sensitivity remapping cache to satisfy the dissimilar reusability of vertices under different sensitivities without any preprocessing requirement. To tackle workload imbalance, we implement feature-disperse execution to support different feature updating between sensitive and insensitive vertices. Moreover, we introduce vertex feature coalescing to reduce the number of feature vectors by exploiting the locality within vertex accesses. We then propose lightweight yet effective hardware optimizations to make our design applicable to conventional GNN accelerators. Compared to state-of-the-art GNN accelerators, DeltaGNN gains an average of 1.5x-11.8x speedup and 1.3x-8.6x energy-efficiency improvement on dynamic graphs.
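The core idea of delta updating, as described in the abstract, can be illustrated with a minimal sketch. The function names, the mean aggregator, and the L1 sensitivity threshold below are all illustrative assumptions, not details from the paper: after a batch of edge insertions, only vertices whose fresh one-hop aggregation deviates from the cached result beyond a threshold (the "sensitive" vertices) are re-aggregated, while insensitive vertices reuse their cached features.

```python
def mean_aggregate(adj, feats, v):
    """Mean of neighbor features for vertex v (one GNN aggregation step)."""
    nbrs = adj.get(v, set())
    if not nbrs:
        return feats[v][:]
    dim = len(feats[v])
    out = [0.0] * dim
    for u in nbrs:
        for i in range(dim):
            out[i] += feats[u][i]
    return [x / len(nbrs) for x in out]

def delta_update(adj, feats, cache, new_edges, threshold=0.1):
    """Insert new_edges, then re-aggregate only 'sensitive' vertices:
    those whose fresh aggregation deviates from the cached one by more
    than `threshold` (L1 distance). Insensitive vertices keep their
    cached result, skipping redundant re-aggregation.
    This one-layer sketch only re-examines the edge endpoints."""
    touched = set()
    for u, v in new_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        touched.update((u, v))
    sensitive, skipped = [], []
    for v in touched:
        fresh = mean_aggregate(adj, feats, v)
        old = cache.get(v, [0.0] * len(fresh))
        if sum(abs(a - b) for a, b in zip(fresh, old)) > threshold:
            cache[v] = fresh
            sensitive.append(v)
        else:
            skipped.append(v)  # reuse cached feature, no re-aggregation
    return sensitive, skipped
```

In this sketch, inserting an edge between two vertices whose neighborhoods are already similar leaves both vertices below the threshold, so no recomputation happens; the paper's accelerator additionally exploits this sensitivity split in its cache remapping and feature-disperse execution.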
Keywords
Heuristic algorithms, Graph neural networks, Hardware, Sensitivity, Long short-term memory, Training, Task analysis, Algorithm and accelerator co-design, dynamic graph, graph neural networks (GNNs)