AdaLip: An Adaptive Learning Rate Method per Layer for Stochastic Optimization

Neural Processing Letters (2023)

Abstract
Various works have been published on the optimization of neural networks that emphasize the significance of the learning rate. In this study, we analyze the need for a different treatment of each layer and how this affects training. We propose a novel optimization technique, called AdaLip, that utilizes an estimation of the Lipschitz constant of the gradients to construct an adaptive learning rate per layer, which can work on top of existing optimizers such as SGD or Adam. A detailed experimental framework was used to demonstrate the usefulness of the optimizer on three benchmark datasets. The experiments showed that AdaLip not only improves training performance and convergence speed, but also makes the training process more robust to the selection of the initial global learning rate.
Keywords
Neural networks, Online learning, Stochastic optimization, Adaptive learning rate, Lipschitz constant
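
To make the idea in the abstract concrete, below is a minimal PyTorch sketch of a per-layer learning rate driven by a finite-difference estimate of the gradient's Lipschitz constant. The class name PerLayerLipschitzSGD, the hyperparameters, and the 1/L scaling with an arbitrary cap are illustrative assumptions; this is not the AdaLip update rule from the paper, which also operates on top of optimizers such as Adam.

```python
import torch


class PerLayerLipschitzSGD(torch.optim.Optimizer):
    """Illustrative sketch of a per-layer, Lipschitz-scaled SGD step.

    Each parameter tensor is treated as one "layer". The local Lipschitz
    constant of that layer's gradient is estimated by a finite difference,
    L ~ ||g_t - g_{t-1}|| / ||w_t - w_{t-1}||, and the layer's step size is
    taken proportional to lr / L. This is an assumption-laden stand-in for
    AdaLip, not the paper's exact update rule.
    """

    def __init__(self, params, lr=0.1, eps=1e-12):
        defaults = dict(lr=lr, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            lr, eps = group["lr"], group["eps"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                state = self.state[p]
                if "prev_w" in state:
                    dw = (p - state["prev_w"]).norm()
                    dg = (g - state["prev_g"]).norm()
                    # Finite-difference estimate of the layer-wise Lipschitz constant.
                    lipschitz = (dg / (dw + eps)).item()
                    # 1/L scaling; the cap is an arbitrary safeguard for this
                    # sketch so the step does not explode when L is near zero.
                    step_size = min(lr / (lipschitz + eps), 10.0 * lr)
                else:
                    # No history yet: fall back to the plain global learning rate.
                    step_size = lr
                state["prev_w"] = p.detach().clone()
                state["prev_g"] = g.detach().clone()
                p.add_(g, alpha=-step_size)
        return loss
```

Usage would mirror any torch.optim optimizer, e.g. opt = PerLayerLipschitzSGD(model.parameters(), lr=0.1) followed by the usual loss.backward() and opt.step() loop; the robustness to the initial global learning rate reported in the paper comes from the per-layer rescaling, which here is only roughly approximated.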