
Implementing Binarized Neural Network Processor on FPGA-Based Platform

2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)

Abstract
Binarized neural networks (BNNs) have 1-bit weights and activations, which makes them well suited to FPGAs. However, BNNs suffer accuracy loss compared with conventional neural networks, and shortcut connections are introduced to address this performance degradation. This work proposes a BNN processor that supports shortcut connections. To evaluate the processor, we implement the system on a Xilinx Kintex UltraScale FPGA. Our experiments show that the proposed processor achieves state-of-the-art energy efficiency.
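The abstract mentions two ideas worth making concrete: 1-bit weights and activations (which reduce multiply-accumulates to XNOR and popcount operations, the reason BNNs map well to FPGAs) and a shortcut connection around a binarized layer. The sketch below is a minimal software illustration of both, not the paper's processor; the layer sizes, the sign-based binarization, and the identity shortcut are assumptions for demonstration.

```python
# Minimal sketch of BNN arithmetic with a shortcut connection (illustrative only).
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via sign(); zero maps to +1 (assumed convention)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_matmul(a_bin, w_bin):
    """Dot products of {-1,+1} vectors using only XNOR and popcount.

    Encoding {-1,+1} as bits {0,1}, each product term is +1 on a bit match and
    -1 on a mismatch, so dot = 2*popcount(XNOR(a, w)) - n. This bitwise form is
    what makes BNN layers cheap in FPGA logic.
    """
    n = a_bin.shape[-1]
    a_bits = (a_bin > 0).astype(np.uint8)                 # {-1,+1} -> {0,1}
    w_bits = (w_bin > 0).astype(np.uint8)
    xnor = 1 - (a_bits[:, None, :] ^ w_bits[None, :, :])  # elementwise XNOR
    popcnt = xnor.sum(axis=-1)                            # popcount per output
    return 2 * popcnt - n                                 # integer dot product

def bnn_block_with_shortcut(x_real, w_bin):
    """One binarized layer with an identity shortcut around it (assumed structure)."""
    x_bin = binarize(x_real)                  # 1-bit activations
    y = xnor_popcount_matmul(x_bin, w_bin)    # 1-bit weights, integer accumulation
    return y.astype(np.float32) + x_real      # shortcut: add the block input back

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((2, 8)).astype(np.float32)  # batch of 2, 8 features
    w = binarize(rng.standard_normal((8, 8)))            # binarized 8x8 weight matrix
    print(bnn_block_with_shortcut(x, w))
```

The shortcut adds the higher-precision block input to the integer layer output before the next binarization, which is the usual way such connections recover accuracy lost to 1-bit quantization.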
Key words
Deep Learning, Binarized Neural Network, Neural Network Processor, FPGA Accelerator