Large Learning Rates Improve Generalization: But How Large Are We Talking About?
CoRR (2023)
Abstract
Inspired by recent research that recommends starting neural network training
with large learning rates (LRs) to achieve the best generalization, we explore
this hypothesis in detail. Our study clarifies the initial LR ranges that
provide optimal results for subsequent training with a small LR or weight
averaging. We find that these ranges are in fact significantly narrower than
generally assumed. We conduct our main experiments in a simplified setup that
allows precise control of the learning rate hyperparameter and validate our key
findings in a more practical setting.
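To make the training protocol under study concrete, the following is a minimal PyTorch sketch of the two-phase recipe the abstract refers to: an initial phase with a large learning rate, followed either by continued training with a small learning rate or by weight averaging. The model, data, epoch counts, and the specific LR values are illustrative assumptions, not the paper's actual experimental setup.

# Minimal sketch (assumed setup, not the authors' code) of large-LR
# pretraining followed by small-LR training and/or weight averaging.
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel

model = nn.Linear(10, 2)  # placeholder model
data = [(torch.randn(32, 10), torch.randint(0, 2, (32,)))
        for _ in range(8)]  # placeholder mini-batches
loss_fn = nn.CrossEntropyLoss()

# Phase 1: train with a large initial LR (value is an assumption).
optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
for epoch in range(5):
    for x, y in data:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

# Phase 2a: switch to a small LR (value is an assumption) ...
for group in optimizer.param_groups:
    group["lr"] = 0.01

# ... and/or phase 2b: keep a running average of the weights (as in SWA);
# the averaged model is what gets evaluated at the end.
averaged = AveragedModel(model)
for epoch in range(5):
    for x, y in data:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    averaged.update_parameters(model)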