Volume-preserving Neural Networks

2021 International Joint Conference on Neural Networks (IJCNN), 2021

Cited by 4 | Views: 2
Abstract
We propose a novel approach to addressing the vanishing (and exploding) gradient problem in deep neural networks. In our architecture, every layer except the output layer is a composition of rotation, permutation, diagonal, and activation sublayers, each of which is volume-preserving; the standard weight matrix is thus replaced by a product of rotation, permutation, and diagonal matrices. We also introduce a coupled activation function that preserves volume in the activation portion of each layer. This control on the volume forces the gradient (on average) to maintain equilibrium, neither exploding nor vanishing. We demonstrate the architecture by applying our volume-preserving neural network model to two standard datasets.
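The abstract does not spell out how each sublayer is parameterized, but the construction can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch layer in which every sublayer has a Jacobian determinant of magnitude one: a rotation obtained as the matrix exponential of a skew-symmetric matrix, a fixed permutation, a diagonal matrix whose log-entries are centered to sum to zero, and an additive coupling (NICE-style) standing in for the paper's coupled activation, whose exact form the abstract does not give. The class name and parameter choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a volume-preserving layer; not the paper's code.
import torch
import torch.nn as nn

class VPLayer(nn.Module):
    """One layer built from volume-preserving sublayers:
    rotation (det = 1), permutation (|det| = 1),
    constrained diagonal (det = 1), and a coupled
    activation with unit Jacobian determinant."""

    def __init__(self, dim: int):
        super().__init__()
        assert dim % 2 == 0, "the coupling below splits features in half"
        # Rotation: exp of a skew-symmetric matrix is orthogonal with det = 1.
        self.skew_params = nn.Parameter(0.01 * torch.randn(dim, dim))
        # Fixed random permutation (a volume-preserving shuffle of features).
        self.register_buffer("perm", torch.randperm(dim))
        # Diagonal: log-entries are centered to sum to zero,
        # so the diagonal matrix has determinant exp(0) = 1.
        self.log_diag = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 1) Rotation sublayer.
        skew = self.skew_params - self.skew_params.T
        x = x @ torch.matrix_exp(skew).T
        # 2) Permutation sublayer.
        x = x[:, self.perm]
        # 3) Volume-preserving diagonal sublayer.
        x = x * torch.exp(self.log_diag - self.log_diag.mean())
        # 4) Coupled activation: additive coupling has a triangular
        #    Jacobian with unit diagonal, hence det = 1. The paper's
        #    coupled activation may differ in detail.
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1 + torch.tanh(x2), x2], dim=-1)

# Quick check: the layer's Jacobian determinant should have magnitude ~1.
layer = VPLayer(4)
x0 = torch.randn(1, 4)
jac = torch.autograd.functional.jacobian(lambda z: layer(z), x0)
print(torch.det(jac.reshape(4, 4)).abs())  # ~= 1.0
```

Under this parameterization, volume preservation holds exactly by construction rather than through a penalty term, which is what allows the gradient norm to stay balanced across layers as the abstract describes.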
Keywords
neural networks, volume-preserving