Image-Classifier Deep Convolutional Neural Network Training by 9-bit Dedicated Hardware to Realize Validation Accuracy and Energy Efficiency Superior to the Half Precision Floating Point Format

2018 IEEE International Symposium on Circuits and Systems (ISCAS)

Abstract
We propose a 9-bit floating point format for training image-classifier deep convolutional neural networks. The format has a 5-bit exponent, a 3-bit mantissa with a hidden most significant bit (MSB), and a sign bit. It reduces not only the transistor count of the multiplier in the multiply-accumulate (MAC) unit, but also the data traffic for forward propagation, backward propagation, and weight updates; both reductions enable power-efficient training. To maintain validation accuracy, the accumulator is implemented with an internal longer-bit-length floating point format while the multiplier accepts the 9-bit format. We evaluated the format by training AlexNet and ResNet-50 on the ILSVRC 2012 data set. The 9-bit-trained AlexNet and ResNet-50 achieved validation accuracy superior to 16-bit floating point training by 1.2% and 0.5%, respectively. The transistor count of the 9-bit MAC unit is estimated to be 84% lower than that of its 32-bit counterpart.
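Because the proposed format shares its 5-bit exponent width with IEEE half precision, a float32 value can be mapped to it by taking the float16 bit pattern and rounding away the lowest 7 of its 10 stored mantissa bits, leaving 3. The minimal sketch below illustrates this; it is not the authors' implementation, the function name quantize_fp9 and the round-to-nearest-even policy are assumptions, and overflow to infinity and subnormals are not handled.

```python
import numpy as np

def quantize_fp9(x):
    """Sketch: round float32 values to a 9-bit float with 1 sign bit,
    5 exponent bits, and 3 stored mantissa bits (hidden MSB).
    The exponent width matches IEEE float16, so we reuse the float16
    bit pattern and round away its lowest 7 mantissa bits.
    Overflow to infinity and subnormals are ignored here."""
    h = np.asarray(x, dtype=np.float16).view(np.uint16)
    drop = 7                                 # float16 stores 10 mantissa bits; keep 3
    lsb = (h >> drop) & np.uint16(1)         # lowest kept bit, breaks ties to even
    h = h + (np.uint16((1 << (drop - 1)) - 1) + lsb)  # add rounding offset
    h = h & np.uint16(~((1 << drop) - 1) & 0xFFFF)    # clear the 7 discarded bits
    return h.view(np.float16).astype(np.float32)

# Example: 0.1 is not representable and rounds to 0.1015625 (1.625 * 2**-4).
print(quantize_fp9([0.1, 1.0, 3.14159]))  # -> [0.1015625 1. 3.25]
```

With only 8 representable mantissa values per binade, rounding to this grid is what makes the 9-bit multiplier cheap, while the longer-bit-length internal accumulator absorbs the rounding error across the MAC reduction.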
Keywords
Deep convolutional neural network (DCNN), AlexNet, ResNet, training, floating point arithmetic, accuracy