Using the Launcher for Executing High Throughput Workloads

Big Data Research (2017)

Abstract
For many scientific disciplines, the transition to advanced cyberinfrastructure comes not from a desire to use the most powerful resources available, but because the current operational model is no longer sufficient to meet computational needs. Many researchers begin their computations on a desktop or local workstation, only to discover that simulating their problem, analyzing their instrument data, or scoring the multitude of entities they are studying would require far more time than they have available. Launcher is a simple utility that enables the execution of high throughput computing workloads on managed HPC systems quickly and with as little effort as possible on the part of the user. Basic usage of Launcher is straightforward, but it also provides several more advanced capabilities, including use of Intel Xeon Phi coprocessor cards and task binding support for multi-/many-core architectures. We step through the process of setting up a basic Launcher job, including creating a job file, setting appropriate environment variables, and using scheduler integration. We also describe how to enable use of the Intel Xeon Phi coprocessor cards, take advantage of Launcher's task binding system, and execute many parallel (OpenMP/MPI) applications at once.
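The basic workflow the abstract describes can be sketched as follows. This is a minimal example, not taken from the paper itself: the job file and environment variable names (`LAUNCHER_JOB_FILE`, `LAUNCHER_WORKDIR`, `LAUNCHER_DIR`, `paramrun`) follow the TACC Launcher documentation, and the SLURM resource values and the `score` executable are placeholders for illustration.

```shell
# --- jobfile: one independent shell command per line ---
# Launcher distributes these lines across the allocated cores.
#   ./score input_000.dat > out_000.txt
#   ./score input_001.dat > out_001.txt
#   ...

# --- launcher.slurm: batch script with scheduler integration ---
#!/bin/bash
#SBATCH -J ht-scoring        # job name (placeholder)
#SBATCH -N 2                 # nodes (placeholder)
#SBATCH -n 96                # total tasks (placeholder)
#SBATCH -t 01:00:00          # wall time

module load launcher         # site-specific; check your system's modules

export LAUNCHER_WORKDIR=$PWD       # directory containing inputs and jobfile
export LAUNCHER_JOB_FILE=jobfile   # the task list shown above

# paramrun reads the job file and runs one line per available task slot
$LAUNCHER_DIR/paramrun
```

Submitting with `sbatch launcher.slurm` then executes every line of the job file concurrently across the allocation, which is the high throughput pattern the paper targets.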
Keywords
High throughput computing, Distributed computing, Parallel computing