
Auto‐tuning of Level 1 and Level 2 BLAS for GPUs

Concurrency and Computation: Practice and Experience (2012)

Abstract
The use of high-performance libraries for dense linear algebra operations is of great importance in many numerical scientific applications. The most common operations form the backbone of the Basic Linear Algebra Subroutines (BLAS) library. In this paper, we consider the performance and auto-tuning of level 1 and level 2 BLAS routines on graphical processing units. As examples, we develop single-precision Compute Unified Device Architecture kernels for three of the most popular operations: the Euclidean norm (SNRM2), the matrix–vector multiplication (SGEMV), and the triangular solution (STRSV). The target hardware is the most recent Nvidia (Santa Clara, CA, USA) Tesla 20-series (Fermi architecture), which is designed from the ground up for high-performance computing. We show that it is essentially a matter of fully utilizing the fine-grained parallelism of the many-core graphical processing unit to achieve high performance for level 1 and level 2 BLAS operations. We show that auto-tuning can be successfully employed to kernels for these operations so that they perform well for all input sizes. Copyright © 2012 John Wiley & Sons, Ltd.
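The abstract makes two claims: level 1 and level 2 BLAS kernels on a GPU depend on exposing fine-grained parallelism, and auto-tuning over launch parameters keeps them fast across input sizes. The sketch below illustrates that idea for an SNRM2-style Euclidean norm: a grid-stride reduction kernel with a compile-time block size, plus a toy host loop that times a few candidate block sizes and keeps the fastest. The kernel name, the candidate sizes, and the timing harness are assumptions made here for illustration; this is not the authors' code or their search strategy.

```cuda
// Illustrative sketch only (not the paper's code): a tunable single-precision
// Euclidean-norm (SNRM2-style) reduction kernel and a toy auto-tuning loop
// over the thread-block size.
#include <cstdio>
#include <cmath>
#include <vector>
#include <initializer_list>
#include <cuda_runtime.h>

template <int BLOCK>
__global__ void snrm2_partial(const float* __restrict__ x, int n, float* partial)
{
    __shared__ float sdata[BLOCK];
    float sum = 0.0f;
    // Grid-stride loop: each thread accumulates squares of many elements,
    // keeping all multiprocessors busy regardless of the vector length.
    for (int i = blockIdx.x * BLOCK + threadIdx.x; i < n; i += gridDim.x * BLOCK) {
        float v = x[i];
        sum += v * v;
    }
    sdata[threadIdx.x] = sum;
    __syncthreads();
    // Shared-memory tree reduction to one partial sum per block.
    for (int s = BLOCK / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) sdata[threadIdx.x] += sdata[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) partial[blockIdx.x] = sdata[0];
}

// Launch helper: picks the template instantiation matching the requested block size.
static void launch(int block, int grid, const float* x, int n, float* partial)
{
    switch (block) {
        case 64:  snrm2_partial<64><<<grid, 64>>>(x, n, partial);   break;
        case 128: snrm2_partial<128><<<grid, 128>>>(x, n, partial); break;
        case 256: snrm2_partial<256><<<grid, 256>>>(x, n, partial); break;
    }
}

int main()
{
    const int n = 1 << 22;
    const int grid = 128;                  // fixed number of blocks for the sketch
    std::vector<float> h(n, 1.0f);         // ||x|| should be sqrt(n)

    float *d_x, *d_partial;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_partial, grid * sizeof(float));
    cudaMemcpy(d_x, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // Toy tuning sweep: time each candidate block size and keep the fastest.
    int best_block = 0;
    float best_ms = 1e30f;
    for (int block : {64, 128, 256}) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        launch(block, grid, d_x, n, d_partial);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        if (ms < best_ms) { best_ms = ms; best_block = block; }
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }

    // Rerun with the winning configuration and finish the reduction on the host.
    std::vector<float> partial(grid);
    launch(best_block, grid, d_x, n, d_partial);
    cudaMemcpy(partial.data(), d_partial, grid * sizeof(float), cudaMemcpyDeviceToHost);
    double sum = 0.0;
    for (float p : partial) sum += p;
    printf("best block size: %d (%.3f ms), ||x|| = %.1f (expected %.1f)\n",
           best_block, best_ms, sqrt(sum), sqrt((double)n));

    cudaFree(d_x);
    cudaFree(d_partial);
    return 0;
}
```

A real auto-tuner would sweep a larger space (grid size, work per thread, memory-access pattern) and record the best configuration per input size, but the structure, parameterized kernels plus an off-line timing sweep, conveys the idea.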
Keywords
GPU, BLAS, dense linear algebra, parallel algorithms