Where To Apply Dropout In Recurrent Neural Networks For Handwriting Recognition?

ICDAR '15: Proceedings of the 13th International Conference on Document Analysis and Recognition, 2015

Abstract
The dropout technique is a data-driven regularization method for neural networks. It consists of randomly setting some activations of a given hidden layer to zero during training. Repeating the procedure for each training example is equivalent to sampling a network from an exponential number of architectures that share weights. The goal of dropout is to prevent feature detectors from relying on each other. Dropout has been successfully applied to deep MLPs and to convolutional neural networks for various speech recognition and computer vision tasks. We recently proposed a way to use dropout in MDLSTM-RNNs for handwritten word and line recognition. In this paper, we show that further improvement can be achieved by implementing dropout differently, more specifically by applying it at better positions relative to the LSTM units.
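Since the contribution hinges on where dropout sits relative to the LSTM units, the sketch below contrasts two simple placements: on the inputs feeding an LSTM versus on its outputs. This is a minimal one-dimensional PyTorch illustration of the general mechanism, not the authors' MDLSTM implementation; the class name, dimensions, and dropout rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMWithDropout(nn.Module):
    """Toy sequence classifier showing two dropout placements.

    `position` selects where activations are zeroed relative to the
    recurrent unit: on its inputs ("before") or its outputs ("after").
    """

    def __init__(self, n_in=32, n_hidden=64, n_classes=80,
                 p=0.5, position="after"):
        super().__init__()
        assert position in ("before", "after")
        self.position = position
        self.dropout = nn.Dropout(p=p)   # zeroes activations at train time only
        self.lstm = nn.LSTM(n_in, n_hidden, batch_first=True)
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x):                # x: (batch, time, n_in)
        if self.position == "before":
            x = self.dropout(x)          # dropout applied before the LSTM
        out, _ = self.lstm(x)
        if self.position == "after":
            out = self.dropout(out)      # dropout applied after the LSTM
        return self.classifier(out)      # per-frame class scores

model = LSTMWithDropout(position="after").train()
frames = torch.randn(4, 100, 32)         # hypothetical feature sequences
scores = model(frames)                   # shape: (4, 100, 80)
```

Note that neither placement here touches the hidden-to-hidden recurrence itself; the paper's experiments compare such positions around the multidimensional LSTM layers of a handwriting recognizer.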