A Unified Programmable Edge Matrix Processor for Deep Neural Networks and Matrix Algebra.

ACM Transactions on Embedded Computing Systems (2022)

Abstract
Matrix Algebra and Deep Neural Networks represent foundational classes of computational algorithms across multiple emerging applications such as Augmented Reality and Virtual Reality, autonomous navigation (cars, drones, robots), data science, and various artificial-intelligence-driven solutions. An accelerator-based architecture can provide performance and energy efficiency by supporting fixed functions through customized data paths. However, constrained Edge systems, which must efficiently support multiple applications and diverse matrix operations, cannot afford numerous custom accelerators. In this article, we present MxCore, a unified architecture comprising tightly coupled vector and programmable cores that share data through highly optimized interconnects, along with a configurable hardware scheduler managing their co-execution. We submit MxCore as a generalized approach to the flexible acceleration of multiple Matrix Algebra and Deep-learning applications across a range of sparsity levels. Unified compute resources improve overall resource utilization and performance per unit area. Aggressive and novel microarchitecture techniques, together with block-level sparsity support, optimize compute and data reuse to minimize bandwidth and power requirements, enabling ultra-low-latency applications for low-power and cost-sensitive Edge deployments. MxCore requires a small silicon footprint of 0.2068 mm² in a modern 7-nm process at 1 GHz, achieves 0.15 FP32 and 0.62 INT8 TMAC/mm², and dissipates only 11.66 μW of leakage power. At iso-technology and iso-frequency, MxCore provides energy-efficiency improvements of 651.4×, 159.9×, 104.8×, and 124.2× over a 128-core Nvidia Maxwell GPU for dense General Matrix Multiply, sparse Deep Neural Network, Cholesky decomposition, and triangular matrix solve, respectively.
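For readers unfamiliar with the kernel classes the abstract benchmarks, the sketch below gives textbook pure-Python reference formulations of three of them: dense GEMM, Cholesky decomposition, and (lower-)triangular solve. These are illustrative definitions of the mathematical operations only, not MxCore's implementation or data path.

```python
import math

def gemm(A, B):
    """Dense general matrix multiply: C = A @ B (reference formulation)."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def cholesky(A):
    """Lower-triangular L with L @ L^T = A, for symmetric positive-definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def forward_solve(L, b):
    """Triangular solve: x such that L @ x = b, L lower-triangular."""
    n = len(L)
    x = [0.0] * n
    for i in range(n):
        x[i] = (b[i] - sum(L[i][k] * x[k] for k in range(i))) / L[i][i]
    return x
```

In production these map to the BLAS/LAPACK routines GEMM, POTRF, and TRSM; the paper's contribution is executing such kernels (plus sparse DNN layers) on one unified programmable edge core rather than separate fixed-function accelerators.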
Keywords
Deep neural network learning, algorithm-hardware co-design, ASIC, hardware acceleration, matrix factorization, matrix solve, convolution neural network, edge computing