Accelerating Pythonic coupled cluster implementations: a comparison between CPUs and GPUs

arXiv (Cornell University), 2023

Abstract
We scrutinize how to accelerate the bottleneck operations of Pythonic coupled cluster implementations on an NVIDIA Tesla V100S PCIe 32GB (rev 1a) Graphics Processing Unit (GPU). We interface with the NVIDIA Compute Unified Device Architecture (CUDA) API via CuPy, an open-source Python library designed as a NumPy drop-in replacement for GPUs. The implementation uses the Cholesky linear algebra domain and is carried out in PyBEST, the Pythonic Black-box Electronic Structure Tool -- a fully-fledged modern electronic structure software package. Due to the limited Video Memory (VRAM), the GPU calculations must be performed batch-wise. We present timing results for selected contractions involving large tensors. The CuPy implementation yields a factor-of-10 speed-up compared to calculations on 36 CPUs. Furthermore, we benchmark several Pythonic routines for time and memory requirements to identify the optimal choice among the available tensor contraction operations. Finally, we compare example CCSD and pCCD-LCCSD calculations performed solely on CPUs to their CPU-GPU hybrid implementation. Our results indicate a significant speed-up (up to a factor of 16 for the bottleneck operations) when offloading specific contractions to the GPU using CuPy.
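To illustrate the general idea of batch-wise GPU offloading described above, the sketch below contracts a large tensor pair with cupy.einsum one slice at a time, falling back to NumPy when no GPU is available. This is a minimal, hypothetical example: the function name, tensor shapes, contraction string, and batching scheme are assumptions for illustration and are not taken from the PyBEST source code.

```python
# Hypothetical sketch of batch-wise GPU offloading with CuPy.
# Names and the contraction pattern are illustrative only, not PyBEST's API.
import numpy as np

try:
    import cupy as cp
except ImportError:
    cp = None  # no GPU/CuPy available; fall back to CPU-only execution


def contract_batched(t2, eri, batch_size=32):
    """Contract t2[a,b,i,j] with eri[a,b,c,d] -> out[c,d,i,j], batching over 'a'.

    Batching bounds how much data resides in GPU memory (VRAM) at any time,
    mirroring the batch-wise strategy mentioned in the abstract.
    """
    if cp is None:
        # Pure NumPy path on the CPU.
        return np.einsum("abij,abcd->cdij", t2, eri)

    nv = t2.shape[0]
    out = np.zeros((eri.shape[2], eri.shape[3], t2.shape[2], t2.shape[3]))
    for start in range(0, nv, batch_size):
        stop = min(start + batch_size, nv)
        # Move only one slice of each tensor to the GPU at a time.
        t2_gpu = cp.asarray(t2[start:stop])
        eri_gpu = cp.asarray(eri[start:stop])
        # cupy.einsum performs the contraction on the device; the partial
        # result is copied back and accumulated on the host.
        out += cp.asnumpy(cp.einsum("abij,abcd->cdij", t2_gpu, eri_gpu))
    return out


if __name__ == "__main__":
    nv, no = 64, 16
    t2 = np.random.rand(nv, nv, no, no)
    eri = np.random.rand(nv, nv, nv, nv)
    result = contract_batched(t2, eri, batch_size=16)
    print(result.shape)  # (64, 64, 16, 16)
```

In a real coupled cluster code the batch size would be chosen from the available VRAM and the tensor dimensions, trading the number of host-device transfers against peak device memory use.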
Keywords
coupled cluster implementations, Pythonic, CPUs