Parallelizable adjoint stencil computations using transposed forward-mode algorithmic differentiation.

Optimization Methods & Software (2018)

Abstract
Algorithmic differentiation (AD) is a tool for generating discrete adjoint solvers, which efficiently compute gradients of functions with many inputs, for example for use in gradient-based optimization. AD is often applied to large computations such as stencil operators, which are an important part of most structured-mesh PDE solvers. Stencil computations are often parallelized, for example by using OpenMP, and optimized by using techniques such as cache-blocking and tiling to fully utilize multicore CPUs, many-core accelerators, and GPUs. Differentiating these codes with conventional reverse-mode AD results in adjoint codes that cannot be expressed as stencil operations and may not be easily parallelizable. They thus leave most of the compute power of modern architectures unused. We present a method that combines forward-mode AD and loop transformation to generate adjoint solvers that use the same memory access pattern as the original computation from which they are derived and can benefit from the same optimization techniques. The effectiveness of this method is demonstrated by generating a scalable adjoint CFD solver for multicore CPUs and Xeon Phi accelerators.
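To illustrate the contrast the abstract describes, the following is a minimal sketch for a 1D 3-point stencil: conventional reverse-mode AD yields a scatter-style adjoint whose iterations conflict, whereas the transposed (gather-style) adjoint keeps the primal loop's access pattern and parallelism. The constant coefficients a, b, c, the problem size N, and the OpenMP pragmas are illustrative assumptions, not code from the paper, whose method targets full structured-mesh CFD kernels with general coefficients.

```c
/* Sketch: adjoint of a 1D 3-point stencil with constant coefficients.
 * Assumed example, not the authors' code. */
#include <stdio.h>

#define N 16

int main(void) {
    double a = 0.25, b = 0.5, c = 0.25;   /* assumed stencil coefficients */
    double x[N], y[N], yb[N], xb[N];

    for (int i = 0; i < N; ++i) {
        x[i]  = (double)i;
        y[i]  = 0.0;
        yb[i] = 1.0;                      /* adjoint seed for interior outputs */
        xb[i] = 0.0;
    }
    yb[0] = yb[N-1] = 0.0;                /* primal never writes y[0], y[N-1] */

    /* Primal stencil: iterations are independent, parallel over i. */
    #pragma omp parallel for
    for (int i = 1; i < N-1; ++i)
        y[i] = a*x[i-1] + b*x[i] + c*x[i+1];

    /* Conventional reverse-mode adjoint (for comparison): a scatter with
     * "+=" into neighbouring entries of xb, so iterations conflict and the
     * loop cannot be parallelized without atomics or reductions:
     *
     *   for (int i = 1; i < N-1; ++i) {
     *       xb[i-1] += a*yb[i];
     *       xb[i]   += b*yb[i];
     *       xb[i+1] += c*yb[i];
     *   }
     */

    /* Transposed adjoint xb = J^T * yb written as a gather: each iteration
     * writes only xb[i], so the loop has the primal's memory access pattern
     * and can reuse its OpenMP parallelization and tiling. */
    #pragma omp parallel for
    for (int i = 1; i < N-1; ++i)
        xb[i] = c*yb[i-1] + b*yb[i] + a*yb[i+1];

    /* Boundary adjoints, handled separately in this sketch. */
    xb[0]   = a*yb[1];
    xb[N-1] = c*yb[N-2];

    printf("y[N/2]  = %g\n", y[N/2]);
    printf("xb[N/2] = %g\n", xb[N/2]);   /* equals a+b+c = 1 with this seed */
    return 0;
}
```

The essential point of the sketch is that the transposed form turns the adjoint's scattered accumulations into per-iteration gathers, so the adjoint loop is again a stencil and inherits the primal's parallelization and cache-blocking opportunities.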
Keywords
algorithmic differentiation, reverse mode, discrete adjoints, shared-memory parallelism, OpenMP