Unifews: Unified Entry-Wise Sparsification for Efficient Graph Neural Network
arXiv (2024)
Abstract
Graph Neural Networks (GNNs) have shown promising performance in various
graph learning tasks, but at the cost of resource-intensive computations. The
primary overhead of GNN update stems from graph propagation and weight
transformation, both involving operations on graph-scale matrices. Previous
studies attempt to reduce the computational budget by leveraging graph-level or
network-level sparsification techniques, resulting in downsized graph or
weights. In this work, we propose Unifews, which unifies the two operations in
an entry-wise manner considering individual matrix elements, and conducts joint
edge-weight sparsification to enhance learning efficiency. The entry-wise
design of Unifews enables adaptive compression across GNN layers with
progressively increased sparsity, and is applicable to a variety of
architectural designs with on-the-fly operation simplification. Theoretically,
we establish a novel framework to characterize sparsified GNN learning in view
of a graph optimization process, and prove that Unifews effectively
approximates the learning objective with bounded error and reduced
computational load. We conduct extensive experiments to evaluate the
performance of our method in diverse settings. Unifews is advantageous in
jointly removing more than 90% of edges and weight entries with comparable or
better accuracy than baseline models. The sparsification offers remarkable
efficiency improvements including 10-20x matrix operation reduction and up to
100x acceleration in graph propagation time for the largest graph at the
billion-edge scale.
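The core idea, zeroing out individual entries of both the graph propagation matrix and the layer weight matrices before the layer update, can be illustrated with a minimal NumPy sketch. This uses a simple magnitude threshold as the pruning criterion; the paper's actual criterion is layer-adaptive, and all names and values below are illustrative, not the Unifews algorithm itself.

```python
import numpy as np

def entrywise_sparsify(M, threshold):
    """Zero out entries of M whose magnitude is below `threshold`.

    A generic entry-wise (unstructured) sparsification step: each
    matrix element is kept or dropped individually, unlike graph-level
    (whole-edge) or network-level (whole-neuron) pruning.
    """
    mask = np.abs(M) >= threshold
    return M * mask

rng = np.random.default_rng(0)
A = rng.random((4, 4))        # stand-in for a normalized propagation matrix
W = rng.random((8, 8)) - 0.5  # stand-in for a layer weight matrix
H = rng.random((4, 8))        # node feature matrix

# Jointly sparsify both operands of the layer update H' = (A H) W,
# so both graph propagation and weight transformation touch fewer entries.
A_s = entrywise_sparsify(A, threshold=0.3)
W_s = entrywise_sparsify(W, threshold=0.2)
H_next = A_s @ H @ W_s
```

In a multi-layer model, the threshold would typically grow across layers, matching the abstract's note that sparsity increases progressively with depth.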