A Multilevel Subtree Method For Single And Batched Sparse Cholesky Factorization

PROCEEDINGS OF THE 47TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING (2018)

Abstract
Scientific computing relies heavily on matrix factorization. Cholesky factorization is typically used to solve the linear system Ax = b where A is symmetric and positive definite. A large number of applications require operating on sparse matrices. A major overhead in factorizing sparse matrices on GPUs is the cost of transferring the data from the CPU to the GPU. Additionally, the computational efficiency of factorizing small dense matrices has to be addressed.

In this paper, we develop a multilevel subtree method for Cholesky factorization of large sparse matrices on single and multiple GPUs. This approach effectively addresses two important limitations of previous methods. First, by applying the subtree method to both the lower and higher levels of the elimination tree, we improve the amount of concurrency and the computational efficiency; previous approaches only used the subtree method at the lower levels. Second, we overlap the computation of one subtree with another, thereby reducing the overhead of the data transfer from CPU to GPU. Additionally, we propose the use of batched parallelism for applications that require simultaneous factorization of multiple matrices. Effectively, the tree structure of a collection of matrices can be derived by merging the individual trees.

Our experimental results show that each of the three techniques results in a significant performance improvement. Further, the combination of the three can yield a speedup of up to 2.43× on a variety of sparse matrices.
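To make the structures mentioned in the abstract concrete, here is a minimal Python sketch, not taken from the paper: it computes the elimination tree of a symmetric sparsity pattern (the classical parent-array algorithm with path compression) and then merges the trees of a batch of matrices under a single virtual root, in the spirit of the batched technique the abstract describes. The function names and the list-based representation are illustrative assumptions.

```python
def etree(n, lower_entries):
    """Elimination-tree parent array for a symmetric n-by-n pattern.
    lower_entries: iterable of (i, j) with i > j for strictly
    lower-triangular nonzeros. Returns parent[], -1 for roots.
    (Classical algorithm with path compression; illustrative sketch.)"""
    # row_adj[i] = columns j < i where row i has a nonzero
    row_adj = [[] for _ in range(n)]
    for i, j in lower_entries:
        row_adj[i].append(j)
    parent = [-1] * n
    ancestor = [-1] * n  # path-compression shortcut pointers
    for j in range(n):
        for i in row_adj[j]:
            r = i
            # climb toward the root, compressing the path to j
            while ancestor[r] != -1 and ancestor[r] != j:
                nxt = ancestor[r]
                ancestor[r] = j
                r = nxt
            if ancestor[r] == -1:
                ancestor[r] = j
                parent[r] = j
    return parent


def merge_etrees(parents):
    """Merge elimination trees of a batch of matrices into one forest
    joined under a virtual root, so subtrees of all matrices can be
    scheduled uniformly (the merging idea sketched in the abstract)."""
    merged, offset = [], 0
    for p in parents:
        merged.extend(v + offset if v != -1 else -1 for v in p)
        offset += len(p)
    root = len(merged)  # index of the added virtual root
    merged = [root if v == -1 else v for v in merged]
    merged.append(-1)   # the virtual root itself has no parent
    return merged
```

For example, a 4-by-4 pattern with lower-triangular nonzeros at (1,0), (3,1), (3,2) yields the parent array [1, 3, 3, -1]; merging two such trees produces a single forest whose two roots both point at one virtual root, which a scheduler can then treat as one tree.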
Keywords
sparse matrices, sparse direct methods, Cholesky factorization, GPU, CUDA