An enhanced learning algorithm with a particle filter-based gradient descent optimizer method

NEURAL COMPUTING & APPLICATIONS (2020)

Cited 13
Abstract
This work integrates a particle filter with a gradient descent optimizer to reduce the loss at each iteration, yielding a particle filter-based gradient descent (PF-GD) optimizer that can locate the global minimum with excellent performance. Four test functions are used to verify the PF-GD method. Additionally, the Modified National Institute of Standards and Technology (MNIST) database is used to test the PF-GD method within a logistic regression learning algorithm. The results on the four test functions show that the PF-GD method performs much better than the conventional gradient descent optimizer, although it has some parameters that must be set before modeling. The results on the MNIST dataset demonstrate that the cross-entropy of the PF-GD method decreases to a smaller value than that of the conventional gradient descent optimizer, resulting in higher accuracy for the PF-GD method. The PF-GD method achieves the best training accuracy, 97.00%, and a test-set accuracy of 90.37%, higher than the 90.08% obtained with the conventional gradient descent optimizer.
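The abstract does not spell out how the particle filter and gradient descent are combined, but the general idea of a particle filter-based optimizer can be sketched as follows: maintain a population of candidate solutions (particles), move each one by a noisy gradient step, weight particles by their loss, and resample so that low-loss particles survive. The sketch below is an assumption-based illustration on a 1-D multimodal test function (a Rastrigin-style function, standing in for the paper's four test functions); the function names `pf_gd`, `f`, and `grad_f`, and all hyperparameter values, are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multimodal test function with its global minimum at x = 0
# (1-D Rastrigin-style function; many local minima near the integers).
def f(x):
    return x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def pf_gd(n_particles=50, n_iters=200, lr=0.002, noise=0.5):
    # Initialize particles uniformly over the search interval.
    x = rng.uniform(-5.0, 5.0, n_particles)
    for _ in range(n_iters):
        # Each particle takes a gradient descent step plus exploration noise.
        x = x - lr * grad_f(x) + rng.normal(0.0, noise, n_particles)
        # Weight particles by loss: lower loss -> larger weight.
        w = np.exp(-f(x))
        w /= w.sum()
        # Resample particles in proportion to their weights, so that
        # particles in deep basins are duplicated and poor ones die out.
        x = rng.choice(x, size=n_particles, p=w)
        # Anneal the exploration noise so the swarm can settle.
        noise *= 0.97
    # Return the best surviving particle.
    return x[np.argmin(f(x))]

x_best = pf_gd()
```

Unlike plain gradient descent, which gets trapped in whichever local basin it starts in, the resampling step lets the population concentrate in the globally best basin, which matches the abstract's claim that PF-GD can determine the global minimum.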
Keywords
Gradient descent, Optimizer, Particle filter, Neural network, Deep learning