Efficient Non-Linear Adder for Stochastic Computing with Approximate Spatial-Temporal Sorting Network.

DAC 2023

Abstract
End-to-end stochastic computing (SC) enables fault-tolerant and area-efficient neural acceleration by conducting non-linear addition, including accumulation and activation functions, in SC bitstreams. However, existing non-linear adder designs suffer from a high hardware cost, accounting for a major portion of the datapath power and area, and may also have limited computation accuracy and flexibility. In this paper, we propose an accurate yet efficient non-linear adder design. We analyze the redundancy in existing designs and propose a parameterized approximate non-linear adder design space. By systematic design space exploration, we develop non-linear adders that are significantly more efficient than existing designs with negligible computation error. We further propose a spatial-temporal architecture to improve the design flexibility and efficiency for a wide range of network sizes. To support state-of-the-art networks, e.g., ResNet18, we demonstrate that our design can reduce the datapath area by 2.16x compared with the baseline designs. Our design can also reduce the area-delay product (ADP) of the non-linear adder by 4.13x and 23.29x for large and small convolution layers in ResNet18, respectively.
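The non-linear adder in the abstract is built on a sorting network operating directly on SC bitstreams. As a hedged illustration of the underlying primitive only, and not the paper's proposed approximate spatial-temporal design, the sketch below assumes unipolar encoding with maximally correlated bitstreams (all generated from one shared random sequence), in which case a compare-and-swap stage reduces to one AND gate (min) and one OR gate (max); the `sort4` network and the bitstream length are illustrative choices.

```python
# Minimal sketch of stochastic-computing (SC) sorting-network primitives.
# NOT the paper's approximate spatial-temporal adder; it only illustrates that,
# for unipolar bitstreams driven by a shared random source (maximally
# correlated), bitwise AND yields min and bitwise OR yields max, so each
# compare-and-swap comparator costs one AND gate and one OR gate.

import random

def encode(value, rand_seq):
    """Unipolar SC encoding: bit i is 1 iff rand_seq[i] < value (value in [0, 1]).
    All inputs share the same random sequence, so their streams are correlated."""
    return [1 if r < value else 0 for r in rand_seq]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

def compare_swap(a, b):
    """SC compare-and-swap on correlated streams: AND ~ min, OR ~ max."""
    lo = [x & y for x, y in zip(a, b)]
    hi = [x | y for x, y in zip(a, b)]
    return lo, hi

def sort4(streams):
    """A standard 5-comparator sorting network for 4 correlated SC streams."""
    s = list(streams)
    for i, j in [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]:
        s[i], s[j] = compare_swap(s[i], s[j])
    return s

if __name__ == "__main__":
    random.seed(0)
    n_bits = 4096  # bitstream length: accuracy vs. latency trade-off
    rand_seq = [random.random() for _ in range(n_bits)]
    values = [0.8, 0.2, 0.55, 0.35]
    streams = [encode(v, rand_seq) for v in values]
    print([round(decode(s), 3) for s in sort4(streams)])
    # Prints values close to the sorted inputs: [0.2, 0.35, 0.55, 0.8]
```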
Keywords
activation functions,baseline designs,computation accuracy,convolution layers,datapath area,datapath power,design flexibility,end-to-end stochastic computing,negligible computation error,nonlinear addition,parameterized approximate nonlinear adder design space,spatial-temporal architecture,spatial-temporal sorting network,systematic design space exploration