A novel method for speed training acceleration of recurrent neural networks

Information Sciences (2021)

Cited 17 | Viewed 30
Abstract
Although recurrent neural networks (RNNs) solve many difficult problems effectively, their computational complexity significantly increases training time. The primary obstacle to applying RNNs is therefore shortening the time needed to train and operate a network. An effective solution to this problem is parallel processing. The paper presents a particular approach for the Jordan network; however, the idea is applicable to other RNN structures. This type of network is characterized by natural parallelism, and the paper exploits this feature to significantly accelerate the learning process. High-performance learning is achieved using a novel parallel three-dimensional architecture, and the presented solutions can be implemented in digital hardware.
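For context on the architecture the abstract refers to: a Jordan network feeds the network's previous *output* back as context input to the hidden layer (unlike an Elman network, which feeds back the hidden state). The sketch below is a minimal NumPy forward pass illustrating that recurrence; all names, dimensions, and initializations are hypothetical and not taken from the paper, and the paper's parallel three-dimensional hardware architecture is not modeled here. The matrix-vector products at each step are the naturally parallel operations the abstract alludes to.

```python
import numpy as np

def jordan_forward(x_seq, W_in, W_ctx, W_out, b_h, b_o):
    """Forward pass of a Jordan network: the previous output y_prev
    is fed back as context into the hidden layer at each time step."""
    y_prev = np.zeros(W_out.shape[0])   # context starts at zero
    outputs = []
    for x in x_seq:
        # hidden activation mixes the current input with the previous output;
        # these matrix-vector products are the naturally parallel operations
        h = np.tanh(W_in @ x + W_ctx @ y_prev + b_h)
        y = np.tanh(W_out @ h + b_o)    # network output at this step
        outputs.append(y)
        y_prev = y                      # feed output back as context
    return np.array(outputs)

# Hypothetical sizes for illustration: 3 inputs, 5 hidden units,
# 2 outputs, a sequence of 4 time steps.
rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 3, 5, 2, 4
x_seq = rng.standard_normal((T, n_in))
W_in  = rng.standard_normal((n_hid, n_in)) * 0.1
W_ctx = rng.standard_normal((n_hid, n_out)) * 0.1
W_out = rng.standard_normal((n_out, n_hid)) * 0.1
b_h, b_o = np.zeros(n_hid), np.zeros(n_out)

ys = jordan_forward(x_seq, W_in, W_ctx, W_out, b_h, b_o)
print(ys.shape)  # (4, 2): one output vector per time step
```

Because each step's hidden and output activations are independent matrix-vector products, they map directly onto parallel hardware, which is the property the paper's acceleration scheme builds on.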
Keywords
Recurrent neural networks, Parallel architectures, Supervised learning