Principled Architecture-aware Scaling of Hyperparameters
ICLR 2024
Abstract
Training a high-quality deep neural network requires choosing suitable
hyperparameters, which is a non-trivial and expensive process. Current works
try to automatically optimize or design principles for hyperparameters so
that they generalize to diverse unseen scenarios. However, most designs or
optimization methods are agnostic to the choice of network structures, and thus
largely ignore the impact of neural architectures on hyperparameters. In this
work, we precisely characterize the dependence of initializations and maximal
learning rates on the network architecture, which includes the network depth,
width, convolutional kernel size, and connectivity patterns. By requiring that
every parameter is maximally updated, with the same mean squared change in
pre-activations, we can generalize our initialization and learning rates across
MLPs (multi-layer perceptrons) and CNNs (convolutional neural networks) with
sophisticated graph topologies. We verify our principles with comprehensive
experiments. More importantly, our strategy further sheds light on advancing
current benchmarks for architecture design. A fair comparison of AutoML
algorithms requires accurate network rankings. However, we demonstrate that
network rankings can easily change when the networks in a benchmark are trained
better with our architecture-aware learning rates and initialization.
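
The abstract does not state the derived scaling rules themselves, but the idea of architecture-aware initialization and learning rates can be illustrated with a µP-style width scaling. Below is a minimal PyTorch sketch assuming a 1/fan_in initialization variance and a 1/fan_in per-layer learning-rate factor; these exponents are illustrative placeholders, not the paper's derived formulas, which also account for depth, kernel size, and connectivity.

```python
import torch
import torch.nn as nn

def build_mlp(widths):
    """Plain ReLU MLP; widths = [in_dim, hidden_1, ..., out_dim]."""
    layers = []
    for fan_in, fan_out in zip(widths[:-1], widths[1:]):
        linear = nn.Linear(fan_in, fan_out)
        # Fan-in-scaled Gaussian init keeps the variance of
        # pre-activations O(1) as the width grows (assumed rule).
        nn.init.normal_(linear.weight, std=fan_in ** -0.5)
        nn.init.zeros_(linear.bias)
        layers += [linear, nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # no ReLU after the output layer

def per_layer_sgd(model, base_lr=0.1):
    # One parameter group per linear layer, with the learning rate
    # divided by fan-in so each layer's pre-activations change by a
    # comparable amount per step (assumed 1/width rule, not the
    # paper's exact architecture-dependent exponents).
    groups = [
        {"params": m.parameters(), "lr": base_lr / m.in_features}
        for m in model.modules() if isinstance(m, nn.Linear)
    ]
    return torch.optim.SGD(groups, lr=base_lr)

model = build_mlp([128, 1024, 1024, 10])
optimizer = per_layer_sgd(model)
```

Under this kind of parametrization, a learning rate tuned on a narrow network can be reused on a wider one, since each layer's update size stays comparable across widths; the paper extends this reasoning to depth, kernel size, and graph topology.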
Keywords
Hyperparameter Transfer, Neural Network Architecture, Neural Network Initialization, Learning Rate