Using Symmetry to Schedule Classical Matrix Multiplication.

arXiv: Distributed, Parallel, and Cluster Computing (2015)

Cited 23 | Viewed 25
Abstract
Presented with a new machine with a specific interconnect topology, algorithm designers use intuition about the symmetry of the algorithm to design time- and communication-efficient schedules that map the algorithm onto the machine. Is there a systematic procedure for designing such schedules? We present a new technique for designing schedules for algorithms with no non-trivial dependencies, focusing on the classical matrix multiplication algorithm. We model the symmetry of an algorithm with instruction set $X$ as the action of the group formed by compositions of bijections from the set $X$ to itself. We model the machine as the action of the group $N \times \Delta$, where $N$ and $\Delta$ represent the interconnect topology and time increments respectively, on the set $P \times T$ of processors iterated over time steps. We model schedules as symmetry-preserving equivariant maps between the set $X$, equipped with a subgroup of its symmetry, and the set $P \times T$, equipped with the symmetry $N \times \Delta$. Such equivariant maps are the solutions of a set of algebraic equations involving group homomorphisms. We associate time and communication costs with the solutions to these equations. We solve these equations for the classical matrix multiplication algorithm and show that equivariant maps correspond to time- and communication-efficient schedules for many topologies. We recover well-known variants including Cannon's algorithm and the communication-avoiding 2.5D algorithm for toroidal interconnects, systolic computation for planar hexagonal VLSI arrays, recursive algorithms for fat-trees, the cache-oblivious algorithm for the ideal cache model, and the space-bounded schedule for the parallel memory hierarchy model. This suggests that the design of a schedule for a new class of machines can be motivated by solutions to algebraic equations.
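To make the equivariance condition concrete, the following is a minimal sketch (not taken from the paper's artifact) of the Cannon-style schedule for classical matrix multiplication on an n x n torus, written as a map sigma : X -> P x T and checked numerically against a translation-group action. The names sigma, rho, act_X, and act_PT are illustrative assumptions, not the paper's notation.

```python
# Sketch: Cannon-style schedule as an equivariant map, assuming the
# instruction set X = Z_n^3 indexes the updates C[i,j] += A[i,k]*B[k,j]
# and the machine is an n x n torus stepped over n time steps.
from itertools import product

n = 4  # torus side length (illustrative)

def sigma(i, j, k):
    """Schedule: instruction (i,j,k) runs on processor (i,j) at time (k-i-j) mod n."""
    return (i, j), (k - i - j) % n

def act_X(g, x):
    """Action of the translation group Z_n^3 on the instruction set X."""
    (a, b, c), (i, j, k) = g, x
    return ((i + a) % n, (j + b) % n, (k + c) % n)

def rho(g):
    """Homomorphism Z_n^3 -> (Z_n x Z_n) x Z_n pairing torus shifts with time shifts."""
    a, b, c = g
    return (a % n, b % n), (c - a - b) % n

def act_PT(h, pt):
    """Action of the machine group N x Delta on processors-over-time P x T."""
    ((da, db), dt), ((p, q), t) = h, pt
    return ((p + da) % n, (q + db) % n), (t + dt) % n

# Equivariance: sigma(g . x) == rho(g) . sigma(x) for every g and x.
for g in product(range(n), repeat=3):
    for x in product(range(n), repeat=3):
        assert sigma(*act_X(g, x)) == act_PT(rho(g), sigma(*x))

# sigma is also a bijection onto P x T, so each processor performs exactly
# one update per time step, as in Cannon's schedule on the torus.
assert len({sigma(*x) for x in product(range(n), repeat=3)}) == n ** 3
print("Schedule is equivariant and assigns one update per (processor, step).")
```

Under these assumptions, the homomorphism rho is what turns a symmetry of the iteration space into a matching torus shift plus time shift, which is the algebraic condition the abstract describes solving for.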