Rethinking ReLU to Train Better CNNs

ICPR, pp. 603-608, 2018.


Abstract:

Most convolutional neural networks share the same characteristic: each convolutional layer is followed by a nonlinear activation layer, where the Rectified Linear Unit (ReLU) is the most widely used. In this paper, we argue that the designed structure with an equal ratio between these two layers may not be the best choice, since it could re…
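To illustrate the conv/ReLU pairing the abstract questions, here is a minimal PyTorch-style sketch contrasting the conventional 1:1 design with a block that uses fewer ReLUs than convolutions. The layer sizes and the 2:1 alternative are illustrative assumptions only, not the paper's exact proposed module.

```python
import torch.nn as nn

# Conventional design: every convolution is followed by a ReLU (1:1 ratio).
standard_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

# Illustrative alternative: fewer ReLUs than convolutions (here 2:1),
# reflecting the paper's argument that the equal ratio need not be optimal.
# This is a hypothetical sketch, not the authors' actual module.
unequal_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
```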
