Fixed-Sign Binary Neural Network: An Efficient Design of Neural Network for Internet-of-Things Devices

IEEE Access (2020)

Abstract
High computational requirements and rigorous memory costs are significant issues that limit the deployability of Convolutional Neural Networks in the resource-constrained environments typically found in edge devices of the Internet-of-Things (IoT). To address this problem, binary and ternary networks have been proposed, which constrain the weights to reduce computational and memory costs. However, owing to the binary or ternary values, backward propagation during training is not as efficient as in full-precision networks, which makes such models hard to train on edge devices. In this paper, we take a different route and propose the Fixed-Sign Binary Neural Network (FSB), which decomposes the convolution kernel into a sign and a scaling factor, as in prior research, but trains only the scaling factors instead of both. By doing so, our FSB keeps the sign out of backward propagation and makes models easy to deploy and train on IoT devices. Meanwhile, the convolution-acceleration architecture we design for FSB reduces the computing burden while achieving the same performance. Thanks to the efficiency of FSB, even though we randomly initialize the sign and fix it to be untrainable, FSB still achieves remarkable performance.
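The core idea described in the abstract (a fixed, randomly initialized sign tensor combined with trainable scaling factors, so that backpropagation never touches the binary signs) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the per-output-channel granularity of the scaling factor and the manual SGD step are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FSB-style kernel: a random +1/-1 sign pattern that is
# fixed at initialization and never updated, plus a trainable scale.
out_ch, in_ch, k = 4, 3, 3
sign = rng.choice([-1.0, 1.0], size=(out_ch, in_ch, k, k))  # fixed
scale = np.ones((out_ch, 1, 1, 1))                          # trainable

def effective_weight(sign, scale):
    # The convolution uses sign * scale; only `scale` is a parameter,
    # so the backward pass never involves the binary sign tensor.
    return sign * scale

w = effective_weight(sign, scale)

# Chain rule: given an upstream gradient dL/dw, the gradient for the
# per-channel scale is dL/dscale = sum_over_kernel(dL/dw * sign).
grad_w = rng.normal(size=w.shape)   # stand-in upstream gradient
grad_scale = (grad_w * sign).sum(axis=(1, 2, 3), keepdims=True)
scale -= 0.1 * grad_scale           # SGD step on the scale only
```

Because the sign tensor is excluded from the parameter set, the backward pass reduces to a single reduction per output channel, which is what makes on-device training cheap in this scheme.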
Keywords
Convolution, Kernel, Neural networks, Acceleration, Quantization (signal), Backpropagation, Training, Convolutional neural network (CNN), Internet-of-Things (IoT), Model compression, Resource-constrained environment