Low-Dimensional Dynamics of Encoding and Learning in Recurrent Neural Networks

Canadian Conference on AI (2020)

Abstract
In this paper, we use dimensionality reduction techniques to study how a recurrent neural network (RNN) processes and encodes information in the context of a classification task, and we explain our findings using tools from dynamical systems theory. We observe that internal representations develop task-relevant structure as soon as significant information is provided as input, and that this structure persists for some time even if the dynamics are left to drift. However, the structure is only interpretable by the final classifying layer at the fixed time step for which the network was trained. We find that, over the course of training, the recurrent weight matrix is modified so that the dynamical system associated with the network's neural activations evolves toward a non-trivial attractor, reminiscent of neural oscillations in the brain. Our findings suggest that RNNs change their internal dynamics throughout training so that information is stored in low-dimensional cycles, rather than in high-dimensional clusters.
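The analysis pipeline described in the abstract, recording hidden states and applying dimensionality reduction to look for low-dimensional structure, can be illustrated with a short sketch. The Python snippet below is not the authors' code: it uses a randomly initialized vanilla RNN in place of a trained network, PCA as one possible dimensionality-reduction choice, and hypothetical sizes throughout.

```python
# Minimal sketch: project RNN hidden states onto their top principal
# components to inspect low-dimensional structure. A vanilla RNN with
# random weights stands in for a trained network; to reproduce the
# paper's analysis one would substitute trained weights.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input dim, hidden dim, sequence length, batch.
n_in, n_hid, T, batch = 10, 128, 50, 200

# Input and recurrent weights (randomly initialized here).
W_xh = rng.normal(0, 1.0 / np.sqrt(n_in), (n_hid, n_in))
W_hh = rng.normal(0, 1.0 / np.sqrt(n_hid), (n_hid, n_hid))

# Run the RNN forward and record hidden states at every time step.
x = rng.normal(size=(T, batch, n_in))
h = np.zeros((batch, n_hid))
states = []
for t in range(T):
    h = np.tanh(x[t] @ W_xh.T + h @ W_hh.T)
    states.append(h)
H = np.concatenate(states, axis=0)          # shape (T * batch, n_hid)

# PCA via SVD on mean-centered hidden states.
H_centered = H - H.mean(axis=0)
_, s, Vt = np.linalg.svd(H_centered, full_matrices=False)
explained = s**2 / s.sum() ** 0 / np.sum(s**2) if False else s**2 / np.sum(s**2)
print("variance explained by top 3 PCs:", explained[:3].sum())

# Trajectories in the top-2 PC plane; a trained network whose dynamics
# settle into a cycle would trace closed loops in this projection.
proj = H_centered @ Vt[:2].T                # shape (T * batch, 2)
```

For a trained network, plotting `proj` per time step would reveal whether the activations collapse onto the low-dimensional cycles the paper reports, rather than into separated high-dimensional clusters.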
Keywords
recurrent neural networks, dynamics, learning, encoding, low-dimensional