Architecting Effectual Computation for Machine Learning Accelerators

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2020)

Abstract
Inference efficiency is the predominant design consideration for modern machine learning accelerators. The ability to execute multiply-and-accumulate (MAC) operations significantly impacts the throughput and energy consumption during inference. However, the MAC operation suffers from significant ineffectual computations that severely undermine the inference efficiency and must be appropriately handled by the accelerator. The ineffectual computations are manifested in two ways: first, zero values as the input operands of the multiplier waste time and energy but contribute nothing to the model inference; second, zero bits in nonzero values occupy a large portion of the multiplication time but are useless to the final result. In this article, we propose an ineffectual-free yet cost-effective computing architecture, called split-and-accumulate (SAC), with two essential-bit detection mechanisms to address these intractable problems in tandem. It replaces the conventional MAC operation in the accelerator by manipulating only the essential bits in the parameters (weights) to accomplish the partial sum computation. It also eliminates multiplications without any accuracy loss and supports a wide range of precision configurations. Based on SAC, we propose an accelerator family called Tetris and demonstrate its application in accelerating state-of-the-art deep learning models. Tetris includes two implementations designed for either high performance (i.e., cloud applications) or low power consumption (i.e., edge devices), respectively, contingent on its built-in essential-bit detection mechanism. We evaluate our design with the Vivado HLS platform and achieve up to $6.96\times$ performance enhancement and up to $55.1\times$ energy efficiency improvement over conventional accelerator designs.
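To make the split-and-accumulate idea concrete, the sketch below (an illustrative approximation, not the paper's actual Tetris/SAC hardware or its essential-bit detection circuits) computes a dot product by accumulating shifted activations only at the essential (nonzero) bit positions of each non-negative integer weight, so zero operands and zero bits are never processed and no multiplier is used. The function names and the restriction to unsigned weights are assumptions for illustration only.

```python
# Conceptual sketch of split-and-accumulate (SAC) style computation.
# Assumes non-negative integer weights; the real architecture handles
# signed values and multiple precision configurations in hardware.

def essential_bits(w: int) -> list[int]:
    """Return the positions of the nonzero (essential) bits of a weight."""
    positions = []
    bit = 0
    while w:
        if w & 1:
            positions.append(bit)
        w >>= 1
        bit += 1
    return positions

def sac_dot_product(activations: list[int], weights: list[int]) -> int:
    """Dot product via shift-and-add over essential weight bits only."""
    acc = 0
    for a, w in zip(activations, weights):
        if a == 0 or w == 0:            # skip ineffectual zero operands entirely
            continue
        for pos in essential_bits(w):   # one shifted add per essential bit
            acc += a << pos
    return acc

# Matches the conventional MAC result: sum(a * w) over all pairs.
assert sac_dot_product([3, 0, 5], [6, 7, 0]) == 3 * 6
```

Under this view, the cost of each weight scales with its number of essential bits rather than its full bit width, which is the source of the throughput and energy gains the abstract reports.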
Keywords
Computational modeling, Throughput, Adders, Machine learning, Acceleration, Kernel, Computational efficiency