Performance Trade-offs in Weight Quantization for Memory-Efficient Inference

2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2019

Key words
Quantized neural networks, stochastic rounding, low-precision, energy efficient, floating-point precision