Adaptive Input Normalization for Quantized Neural Networks

Jan Schmidt, Petr Fiser, Miroslav Skrbek

IEEE Workshop on Design and Diagnostics of Electronic Circuits and Systems (2024)

Abstract
Neural networks with quantized activation functions cannot adapt the quantization at the input of their first layer. Preprocessing is therefore required to match the range of the input data to the quantization range. Such preprocessing usually consists of an activation-wise linear transformation whose parameters are derived from the properties of the training set. We propose including this linear transform in the training process instead. Using the Jet Stream Classification task and an evaluation architecture of three quantized dense layers, we show that the learned transform improves accuracy, requires the same resources as standard preprocessing, plays a role in network pruning, and is reasonably stable with respect to initialization.
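The idea can be illustrated with a minimal NumPy sketch. Here a per-feature (activation-wise) affine transform `y = a * x + b` maps raw inputs into the range of a uniform quantizer; the names `quantize`, `a`, and `b`, the 3-bit quantizer on [0, 1], and the min-max initialization from training-set statistics are all illustrative assumptions, not the paper's implementation. The paper's contribution is to treat `a` and `b` as trainable parameters updated jointly with the network weights (e.g. via a straight-through gradient estimator), rather than fixing them from the training set as done below.

```python
import numpy as np

def quantize(x, bits=3, lo=0.0, hi=1.0):
    """Illustrative uniform quantizer: clip to [lo, hi], round to 2**bits levels."""
    levels = 2 ** bits - 1
    xc = np.clip(x, lo, hi)
    return np.round((xc - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

rng = np.random.default_rng(0)
# Raw inputs whose range does not match the quantizer's [0, 1] range.
x = rng.normal(loc=5.0, scale=2.0, size=(1000, 4))

# Standard preprocessing: fix a, b from training-set statistics (min-max).
# The paper instead learns a, b during training alongside the weights.
a = 1.0 / (x.max(axis=0) - x.min(axis=0))   # per-feature scale
b = -x.min(axis=0) * a                       # per-feature shift

q = quantize(a * x + b)   # quantized first-layer input
```

In the learned variant, `a` and `b` would simply be extra parameters of the first layer, so the hardware cost at inference time is the same as for statistics-based preprocessing: one multiply and one add per input activation.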