Transparent CPU-GPU collaboration for data-parallel kernels on heterogeneous systems

PACT 2013

Abstract
Heterogeneous computing on CPUs and GPUs has traditionally assigned fixed roles to each device: the GPU handles data-parallel work by exploiting its massive number of cores, while the CPU handles non-data-parallel work, such as sequential code and data transfer management. Unfortunately, this work distribution can be a poor solution, as it underutilizes the CPU, has difficulty generalizing beyond a single CPU-GPU combination, and may waste a large fraction of time transferring data. Further, CPUs are performance-competitive with GPUs on many workloads, so simply partitioning work based on fixed roles may be a poor choice. In this paper, we present the Single Kernel Multiple Devices (SKMD) system, a framework that transparently orchestrates collaborative execution of a single data-parallel kernel across multiple asymmetric CPUs and GPUs. The programmer is responsible for developing a single data-parallel kernel in OpenCL, while the system automatically partitions the workload across an arbitrary set of devices, generates kernels to execute the partial workloads, and efficiently merges the partial outputs together. The goal is to improve performance by maximally utilizing all available resources to execute the kernel. SKMD handles the difficult challenges of exposed data transfer costs and the performance variations GPUs exhibit with respect to input size. On real hardware, SKMD achieves an average speedup of 29% on a system with one multicore CPU and two asymmetric GPUs, compared to a fastest-device execution strategy, for a set of popular OpenCL kernels.
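The partitioning and merging steps described above are easiest to see in a small sketch. The C program below is illustrative only: the names (device_profile_t, partition_workgroups) and the throughput numbers are assumptions, not SKMD's API or measurements. It splits a one-dimensional range of work-groups across devices in proportion to an assumed per-device throughput estimate, giving each device a contiguous offset/count pair so the partial outputs can later be merged with simple copies. The real system additionally models exposed data transfer costs and the GPUs' sensitivity to input size, which this sketch omits.

```c
/* Hypothetical sketch of contiguous work-group partitioning across devices.
 * The type and function names are illustrative, not taken from SKMD. */
#include <stdio.h>

typedef struct {
    const char *name;     /* device label, e.g. "CPU", "GPU0" */
    double groups_per_ms; /* assumed kernel throughput in work-groups per ms */
} device_profile_t;

/* Split total_groups work-groups proportionally to each device's rate,
 * assigning each device a contiguous [offset, offset+count) range so that
 * partial outputs can be merged with straightforward copies. */
static void partition_workgroups(const device_profile_t *dev, int ndev,
                                 size_t total_groups,
                                 size_t *offset, size_t *count)
{
    double total_rate = 0.0;
    for (int i = 0; i < ndev; i++)
        total_rate += dev[i].groups_per_ms;

    size_t assigned = 0;
    for (int i = 0; i < ndev; i++) {
        size_t share = (i == ndev - 1)
            ? total_groups - assigned   /* last device takes the remainder */
            : (size_t)(total_groups * dev[i].groups_per_ms / total_rate);
        offset[i] = assigned;
        count[i]  = share;
        assigned += share;
    }
}

int main(void)
{
    /* Throughput numbers are purely illustrative. */
    device_profile_t devices[] = {
        { "CPU",  40.0 },
        { "GPU0", 90.0 },
        { "GPU1", 60.0 },
    };
    size_t offset[3], count[3];

    partition_workgroups(devices, 3, 4096, offset, count);

    for (int i = 0; i < 3; i++)
        printf("%s: work-groups [%zu, %zu)\n",
               devices[i].name, offset[i], offset[i] + count[i]);
    return 0;
}
```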
Keywords
heterogeneous computing, heterogeneous systems, CPU-GPU collaboration, data-parallel kernels, single kernel multiple devices (SKMD) system, OpenCL, GPGPU, graphics processing units, multicore CPU, asymmetric GPUs, collaborative execution, workload partitioning, data transfer costs, data transfer management, performance variation, performance improvement, multiprocessing systems, parallel processing