Trend-Smooth: Accelerate Asynchronous SGD by Smoothing Parameters Using Parameter Trends.

IEEE Access (2019)

Abstract
Stochastic gradient descent (SGD) is the fundamental sequential method for training large-scale machine learning models. To accelerate training, researchers have proposed the asynchronous stochastic gradient descent (A-SGD) method. However, because parameters are updated with stale information, A-SGD converges more slowly than SGD over the same number of iterations. Moreover, A-SGD often converges to a higher loss value and yields lower model accuracy. In this paper, we propose a novel algorithm called Trend-Smooth, which can be adapted to the asynchronous parallel environment to overcome these problems. Specifically, Trend-Smooth uses the parameter trend observed during training to shrink the learning rate in dimensions where the gradient direction is opposite to the parameter trend. Experiments on the MNIST and CIFAR-10 datasets confirm that Trend-Smooth accelerates convergence in asynchronous training. The test accuracy that Trend-Smooth achieves is higher than that of other asynchronous parallel baseline methods and very close to that of SGD. Moreover, Trend-Smooth can also be combined with adaptive learning rate methods (such as Momentum, RMSProp, and Adam) in the asynchronous parallel environment to improve their performance.
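
The abstract describes the mechanism only at a high level. Below is a minimal NumPy sketch of one plausible reading, assuming the parameter trend is an exponential moving average of past parameter values and that "shrinking" means scaling the per-dimension learning rate by a fixed factor wherever the (possibly stale) gradient pushes against that trend. All names and constants (trend_smooth_update, beta, shrink) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trend_smooth_update(params, grad, trend, lr=0.01, beta=0.9, shrink=0.1):
    """One asynchronous-style SGD step with trend-based learning-rate smoothing.

    params : current model parameters (np.ndarray)
    grad   : stale gradient received from a worker (np.ndarray)
    trend  : running estimate of the parameter trend (np.ndarray)
    lr     : base learning rate
    beta   : smoothing factor for the parameter trend (assumed EMA form)
    shrink : factor applied to dimensions whose gradient opposes the trend
    """
    # Direction the parameters have been moving recently.
    trend_direction = params - trend

    # SGD moves along -grad, so a dimension fights the trend when
    # -grad and the trend direction have opposite signs.
    opposes_trend = (-grad) * trend_direction < 0

    # Per-dimension learning rate: shrink where the stale gradient
    # points against the parameter trend, keep the base rate elsewhere.
    per_dim_lr = np.where(opposes_trend, lr * shrink, lr)

    new_params = params - per_dim_lr * grad

    # Update the trend estimate with the new parameters.
    new_trend = beta * trend + (1 - beta) * new_params
    return new_params, new_trend
```

In an asynchronous setting this kind of update would presumably run on the parameter server, applying each worker's stale gradient as it arrives while the trend estimate is maintained centrally; the same per-dimension scaling could in principle be layered on top of Momentum, RMSProp, or Adam updates, as the abstract suggests.
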
Keywords
Training, Market research, Acceleration, Convergence, Servers, Stochastic processes, Machine learning, Parameter trend, asynchronous SGD, accelerate training