<inline-formula> <tex-math notation="LaTeX">$\beta$ </tex-math> </inline-formula>-Dropout: A Unified Dropout

IEEE Access (2019)

Abstract
Dropout is an effective regularization method for deep learning tasks. Several variants of dropout based on sampling from different distributions have been proposed individually and have shown good generalization performance on various learning tasks. Among these variants, the canonical Bernoulli dropout is a discrete method, while uniform dropout and Gaussian dropout are continuous methods. When facing a new learning task, one must decide which method is more suitable, which is unnatural and inconvenient. In this paper, we turn this selection problem into a parameter tuning problem by proposing a general form of dropout, β-dropout, that unifies discrete and continuous dropout. We show that by adjusting the shape parameter β, β-dropout can yield Bernoulli dropout, uniform dropout, and approximate Gaussian dropout. Furthermore, it provides a continuum of regularization strengths, which paves the way for self-adaptive dropout regularization. As a first attempt, we propose a self-adaptive β-dropout, in which the parameter β is tuned automatically following a pre-designed strategy. We test β-dropout extensively on the MNIST, CIFAR-10, SVHN, NORB, and ILSVRC-12 datasets. The results show that β-dropout enables finer control of regularization strength and therefore achieves better performance.
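The abstract describes a single shape parameter β that interpolates between discrete (Bernoulli-like) and continuous (uniform or Gaussian-like) dropout masks. A minimal sketch of this idea, assuming the mask is drawn from a symmetric Beta(β, β) distribution (the function name and the exact parameterization here are illustrative, not the paper's definition): as β → 0 the mass concentrates at {0, 1} and the mask approaches Bernoulli dropout with keep rate 0.5; β = 1 gives Uniform(0, 1), i.e. uniform dropout; large β concentrates around 0.5, resembling a (truncated) Gaussian multiplicative noise.

```python
import numpy as np

def beta_dropout(x, beta, rng=None):
    """Hypothetical beta-dropout sketch: multiply activations by a
    mask drawn from Beta(beta, beta).

    Regimes of the shape parameter (illustrative, per the abstract):
      beta -> 0   : mask ~ two point masses at 0 and 1 (≈ Bernoulli dropout)
      beta == 1   : mask ~ Uniform(0, 1)               (uniform dropout)
      beta -> inf : mask concentrates near 0.5         (≈ Gaussian-like noise)

    The Beta(beta, beta) mean is 0.5, so we rescale by 2 to keep the
    expected output equal to x (inverted-dropout convention).
    """
    rng = np.random.default_rng(rng)
    mask = rng.beta(beta, beta, size=np.shape(x))
    return x * mask * 2.0
```

Under this sketch, β acts as a continuous regularization knob, which is what makes a self-adaptive schedule (gradually adjusting β during training) possible without switching between distinct dropout implementations.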
Keywords
Regularization, dropout, deep learning, Gaussian dropout, Bernoulli dropout