Optimization of ReLU Neural Networks using Quotient Stochastic Gradient Descent

arXiv: Machine Learning (2018)

Abstract
It is well known that neural networks with rectified linear hidden units (ReLU) as activation functions are positively scale invariant, which results in severe redundancy in their weight space (i.e., many ReLU networks with different weights are actually equivalent). In this paper, we formally characterize this redundancy/equivalence using the language of the \emph{quotient space} and discuss its negative impact on the optimization of ReLU neural networks. Specifically, we show that all equivalent ReLU networks correspond to the same vector in the quotient space, and each such vector can be characterized by the so-called skeleton paths in the ReLU networks. With this, we prove that the dimensionality of the quotient space is $\#\text{weights} - \#\text{hidden nodes}$, indicating that the redundancy of the weight space is huge. We therefore propose to optimize ReLU neural networks directly in the quotient space instead of the original weight space. We represent the loss function in the quotient space and design a new stochastic gradient descent algorithm to iteratively learn the model, which we call \emph{Quotient Stochastic Gradient Descent} (abbreviated as Quotient SGD). We also develop efficient tricks to ensure that the implementation of Quotient SGD requires almost no extra computation compared to standard SGD. Experiments on benchmark datasets show that our proposed Quotient SGD significantly improves the accuracy of the learned model.
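
As a minimal illustration of the positive scale invariance described in the abstract (a sketch, not code from the paper), the snippet below builds a one-hidden-layer ReLU network in NumPy, rescales the incoming weights of one hidden unit by a constant c > 0 and its outgoing weight by 1/c, and checks that the network output is unchanged; the final line evaluates the abstract's dimension count, #weights - #(hidden nodes), for this toy architecture. The function name net and the layer sizes are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 3                       # input dimension, number of hidden units (toy sizes)
W1 = rng.standard_normal((h, d))  # input -> hidden weights
w2 = rng.standard_normal(h)       # hidden -> output weights

def net(x, W1, w2):
    # One-hidden-layer ReLU network with a linear output unit.
    return w2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(d)

# Rescale hidden unit 0: incoming weights by c, outgoing weight by 1/c (c > 0).
# Since ReLU(c*z) = c*ReLU(z) for c > 0, the function is unchanged,
# so distinct weight vectors represent the same network.
c = 2.5
W1_scaled, w2_scaled = W1.copy(), w2.copy()
W1_scaled[0] *= c
w2_scaled[0] /= c

assert np.allclose(net(x, W1, w2), net(x, W1_scaled, w2_scaled))

# The abstract's dimension count for this architecture (ignoring biases):
# #weights = h*d + h, #hidden nodes = h, so the quotient space has dimension h*d.
print("weights:", h * d + h, "hidden nodes:", h, "quotient dim:", h * d)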
Keywords
ReLU neural networks, neural networks, optimization, quotient