A LPSO-SGD Algorithm for the Optimization of Convolutional Neural Network

2019 IEEE Congress on Evolutionary Computation (CEC)

Abstract
In recent years, Convolutional Neural Networks (CNNs) have performed very well in many complex tasks. When training a CNN, the Stochastic Gradient Descent (SGD) algorithm is widely used to optimize the loss function. However, SGD has disadvantages that need to be addressed, such as being prone to falling into local optima and suffering from vanishing gradients. In this paper, we propose a new hybrid algorithm that aims to tackle the disadvantages mentioned above by combining the advantages of Lclose Particle Swarm Optimization (LPSO) and the SGD algorithm. Particle Swarm Optimization (PSO) is a global optimization algorithm, but it does not perform well when optimizing the loss function of a neural network because of the network's high-dimensional weight parameters and the unbounded search space. To combine the excellent global search capability of LPSO with the rapid convergence of SGD, we design the LPSO-SGD algorithm. In the experimental part, we construct the LeNet-5 deep CNN to classify the MNIST dataset, and the experimental results demonstrate that the proposed algorithm performs better than the standard SGD algorithm.
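The abstract does not specify the paper's exact LPSO update rule or the schedule for combining it with SGD, so the following is only a minimal sketch of the general pattern it describes: use swarm search to explore the weight space globally, then refine the best candidate with gradient steps. The sketch substitutes a standard global-best PSO for LPSO, full gradient descent on a toy multimodal loss (Rastrigin) for minibatch SGD on a CNN loss, and all function names and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy multimodal loss (Rastrigin), standing in for a CNN loss surface
# with many local minima; its gradient is available in closed form.
def loss(w):
    return np.sum(w**2) + 10 * np.sum(1 - np.cos(2 * np.pi * w))

def grad(w):
    return 2 * w + 20 * np.pi * np.sin(2 * np.pi * w)

def pso_then_sgd(dim=5, n_particles=30, pso_iters=100, sgd_iters=500,
                 inertia=0.7, c1=1.5, c2=1.5, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)

    # --- Swarm phase: global exploration of the weight space ---
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # per-particle bests
    pbest_f = np.array([loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()        # swarm-wide best

    for _ in range(pso_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    # --- Gradient phase: rapid local refinement from the swarm's best ---
    w = gbest.copy()
    for _ in range(sgd_iters):
        w -= lr * grad(w)
    return w, loss(w)

w, f = pso_then_sgd()
print(f"final loss after swarm seeding + gradient refinement: {f:.4f}")
```

The division of labor mirrors the abstract's motivation: the swarm phase is what gives the hybrid a chance to escape poor basins that plain SGD would get stuck in, while the gradient phase supplies the fast local convergence that PSO alone lacks in high-dimensional weight spaces.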
Keywords
Convolutional Neural Networks, SGD algorithm, LPSO, LeNet-5