LRADNN: High-throughput and energy-efficient Deep Neural Network accelerator using Low Rank Approximation.

ASP-DAC (2016)

Abstract
In this work, we propose LRADNN, an energy-efficient hardware accelerator for Deep Neural Networks (DNNs) based on Low Rank Approximation. Under this scheme, inactive neurons in each layer of the DNN are dynamically identified and the corresponding computations are bypassed, so both the memory accesses and the arithmetic operations associated with these inactive neurons are saved. Compared to architectures using the direct feed-forward algorithm, LRADNN therefore achieves higher throughput and lower energy consumption with negligible prediction accuracy loss (within 0.1%). We implement and synthesize the proposed accelerator in TSMC 65nm technology. The experimental results show a 31% to 53% energy reduction together with a 22% to 43% throughput increase.
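The core idea — use a cheap low-rank prediction pass to flag inactive (zero-output) neurons, then run the exact computation only for the predicted-active ones — can be sketched in software. This is an illustrative NumPy sketch of that bypass scheme, not the paper's hardware datapath; all sizes, the rank, and the SVD-based factorization are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes and rank (not taken from the paper).
n_in, n_out, rank = 256, 128, 16

W = rng.standard_normal((n_out, n_in))  # full weight matrix
b = rng.standard_normal(n_out)          # bias
x = rng.standard_normal(n_in)           # input activations

# Low-rank approximation W ≈ U @ V (here obtained via truncated SVD).
U_svd, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_svd[:, :rank] * s[:rank]          # (n_out, rank)
V = Vt[:rank, :]                        # (rank, n_in)

# Cheap prediction pass: estimate pre-activations with the low-rank
# factors and flag neurons whose ReLU output is predicted to be zero.
approx = U @ (V @ x) + b
active = approx > 0

# Exact pass restricted to predicted-active neurons: the memory accesses
# and multiply-accumulates for the inactive rows of W are bypassed.
y = np.zeros(n_out)
y[active] = W[active] @ x + b[active]
```

The prediction pass costs roughly rank × (n_in + n_out) multiplies instead of n_out × n_in, which is where the throughput and energy savings come from; mispredictions are what bound the small accuracy loss the paper reports.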
Keywords
approximation theory,neural nets,power aware computing,LRADNN,TSMC 65nm technology,arithmetic operations,energy-efficient deep neural network accelerator,energy-efficient hardware accelerator,high-throughput deep neural network accelerator,inactive neurons,low rank approximation,memory accesses