Rethinking ReLU to Train Better CNNs
2018 24th International Conference on Pattern Recognition (ICPR), 2018
Abstract
Most convolutional neural networks share the same characteristic: each convolutional layer is followed by a nonlinear activation layer, where the Rectified Linear Unit (ReLU) is the most widely used. In this paper, we argue that this design, with its equal ratio between the two layer types, may not be the best choice, since it can result in poor generalization ability. We therefore investigate a more suitable way of using ReLU to explore better network architectures. Specifically, we propose a proportional module that keeps the ratio between the numbers of convolutions and ReLUs at N:M (N > M). The proportional module can be applied to almost all networks, at no extra computational cost, to improve performance. Comprehensive experimental results show that the proposed method achieves better performance on different benchmarks with different network architectures, verifying the superiority of our work.
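The N:M idea above can be sketched in plain NumPy. This is an illustrative toy, not the authors' code: the "convolutions" are simplified to 1x1 channel-mixing matrix multiplies, and all names and the 2:1 ratio are assumptions for demonstration. The point is only the placement of ReLU: the standard block applies it after every convolution (1:1), while the proportional block applies it once per N convolutions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, w):
    # simplified 1x1 "convolution": channel mixing via matrix multiply
    return x @ w

def standard_block(x, weights):
    # conventional design: one ReLU after every conv (ratio 1:1)
    for w in weights:
        x = relu(conv1x1(x, w))
    return x

def proportional_block(x, weights, n_per_relu=2):
    # proportional module (sketch): one ReLU per `n_per_relu` convs,
    # i.e. a conv:ReLU ratio of N:M with N > M (here 2:1)
    for i, w in enumerate(weights, start=1):
        x = conv1x1(x, w)
        if i % n_per_relu == 0:
            x = relu(x)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # batch of 4 feature vectors, 8 channels
weights = [rng.standard_normal((8, 8)) for _ in range(4)]

y_std = standard_block(x, weights)       # 4 convs, 4 ReLUs
y_prop = proportional_block(x, weights)  # 4 convs, 2 ReLUs (2:1)
print(y_std.shape, y_prop.shape)
```

Both blocks use exactly the same weights and the same number of multiply-accumulates; only the number of (essentially free) ReLU applications differs, which is why the paper can claim no extra computational cost.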
Keywords
convolution, ReLU, proportional module, extra computational cost, comprehensive experimental results, convolutional neural networks, convolutional layer, nonlinear activation layer, equal ratio, poor generalization ability, suitable method, rectified linear unit, CNN, network architectures