Neural Acceleration of Graph Based Utility Functions for Sparse Matrices.

IEEE Access (2023)

Abstract
Many graph-based algorithms in high-performance computing (HPC) rely on approximate solutions because the exact algorithms are computationally expensive or inherently serial. Neural acceleration, i.e., speeding up approximated computational elements with artificial neural networks, is relatively new and has not yet been applied to graph-based HPC algorithms. In this paper, we propose a starting point for applying neural-acceleration models to graph-based HPC algorithms by combining an understanding of the connectivity computational pattern with recursive neural networks and graph neural networks. We demonstrate these techniques on the utility functions for sparse matrix ordering and fill-in (i.e., zero elements becoming nonzero during factorization) calculations. Sparse matrix ordering is commonly used to improve load balancing, increase memory reuse, and reduce the computational and memory costs of direct sparse linear solvers. These utility functions are ideal for demonstration because they comprise a number of different graph-based subproblems and therefore show the usefulness of our method across a wide range of cases. We show that we can accurately approximate both the best ordering and the nonzero count of the sparse factorization matrix while speeding up the calculation by as much as 30.3x over the traditional serial method.
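To make the accelerated utility function concrete, the sketch below illustrates the kind of graph-based computation the abstract describes: counting the fill-in produced by symbolic elimination of a sparse symmetric matrix under a given ordering, and comparing two orderings. This is not the authors' code; the random test matrix and the use of reverse Cuthill-McKee as the comparison ordering are illustrative assumptions, shown only to clarify what "fill-in calculation" means here.

```python
# Minimal sketch of a fill-in utility function (illustrative, not the paper's code).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee


def symbolic_fill_in(A: sp.spmatrix, order: np.ndarray) -> int:
    """Count fill edges created when the vertices of the adjacency graph of the
    symmetric sparse matrix A are eliminated in the given order."""
    n = A.shape[0]
    A = sp.csr_matrix(A)
    # Adjacency sets of the undirected graph of A (diagonal ignored).
    adj = [set(A.indices[A.indptr[i]:A.indptr[i + 1]]) - {i} for i in range(n)]
    eliminated = set()
    fill = 0
    for v in order:
        # Remaining neighbours of v form a clique after v is eliminated.
        nbrs = [u for u in adj[v] if u not in eliminated]
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:          # edge (u, w) is new fill
                    adj[u].add(w)
                    adj[w].add(u)
                    fill += 1
        eliminated.add(v)
    return fill


if __name__ == "__main__":
    # Small random symmetric sparse test matrix (illustrative only).
    n = 200
    B = sp.random(n, n, density=0.02, random_state=0)
    A = B + B.T + sp.identity(n)

    natural = np.arange(n)
    rcm = reverse_cuthill_mckee(sp.csr_matrix(A), symmetric_mode=True)

    print("fill-in, natural ordering:", symbolic_fill_in(A, natural))
    print("fill-in, RCM ordering:   ", symbolic_fill_in(A, rcm))
```

Evaluating such a function exactly requires a serial pass over the elimination graph, which is why the paper targets it for neural approximation; the quantity being predicted corresponds to the nonzero count of the sparse factor under a candidate ordering.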
Keywords
graph-based utility functions, sparse matrices