A Fast Learning Algorithm for the Multi-layer Neural Network.

ICAISC (1), 2022

Abstract
In this paper, a computational improvement for scaled Givens rotation-based training algorithms is presented. Using scaled rotations speeds up the algorithm significantly because it eliminates the square-root computation. In the classic variant, scaled rotations employ so-called scale factors $\chi$. It turns out that these scale factors can be omitted during the computation, which improves the overall performance even further. The paper gives a mathematical explanation of how to apply the proposed improvement to the scaled variants of the training algorithms. The last section of the paper contains several benchmarks demonstrating that the proposed method is superior to the classic approach.
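As background for the abstract, the sketch below illustrates a classic square-root-free ("fast"/scaled) Givens update in the style of Golub and Van Loan, in which diagonal scale factors are carried along instead of computing any square root; these factors play the role of the $\chi$ mentioned above (up to convention). It is a minimal illustrative sketch only, not the paper's proposed variant that drops the scale factors entirely, and all names are illustrative.

```python
# Minimal sketch of a square-root-free (fast/scaled) Givens update.
# Background illustration only -- NOT the paper's proposed algorithm.
import numpy as np

def fast_givens(x1, x2, d1, d2):
    """Return (alpha, beta, type, d1', d2') for a 2x2 transformation M with
    M^T [x1, x2]^T = [*, 0]^T and M^T diag(d1, d2) M = diag(d1', d2'),
    computed without any square root."""
    if x2 == 0.0:
        return 0.0, 0.0, 2, d1, d2                 # nothing to annihilate
    alpha = -x1 / x2
    beta = -alpha * d2 / d1
    gamma = -alpha * beta
    if gamma <= 1.0:                               # "type 1" transformation
        return alpha, beta, 1, (1.0 + gamma) * d2, (1.0 + gamma) * d1
    alpha, beta, gamma = 1.0 / alpha, 1.0 / beta, 1.0 / gamma
    return alpha, beta, 2, (1.0 + gamma) * d1, (1.0 + gamma) * d2

def apply_fast_givens(A, i, k, alpha, beta, ftype):
    """Apply M^T to rows i and k of A in place, zeroing the pivot in row k."""
    ri, rk = A[i].copy(), A[k].copy()
    if ftype == 1:
        A[i], A[k] = beta * ri + rk, ri + alpha * rk
    else:
        A[i], A[k] = ri + beta * rk, alpha * ri + rk

# Tiny check: zero A[1, 0] without any sqrt; rescaling by d**-0.5 only at the
# very end recovers the ordinary Givens result up to row signs.
A = np.array([[3.0, 1.0], [4.0, 2.0]])
d = np.array([1.0, 1.0])                           # scale factors, start at 1
alpha, beta, t, d[0], d[1] = fast_givens(A[0, 0], A[1, 0], d[0], d[1])
apply_fast_givens(A, 0, 1, alpha, beta, t)
print(A)                                           # A[1, 0] == 0.0
print(np.diag(d ** -0.5) @ A)                      # plain Givens rows, up to sign
```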
Keywords
fast learning algorithm, neural network, multi-layer