Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
CoRR (2024)
Abstract
The typical training of neural networks using large stepsize gradient descent
(GD) under the logistic loss often involves two distinct phases, where the
empirical risk oscillates in the first phase but decreases monotonically in the
second phase. We investigate this phenomenon in two-layer networks that satisfy
a near-homogeneity condition. We show that the second phase begins once the
empirical risk falls below a threshold that depends on the stepsize.
Additionally, we show that the normalized margin grows nearly monotonically in
the second phase, demonstrating an implicit bias of GD in training
non-homogeneous predictors. If the dataset is linearly separable and the
derivative of the activation function is bounded away from zero, we show that
the average empirical risk decreases, which implies that the first phase must
end within finitely many steps. Finally, we demonstrate that, with a suitably large
stepsize, GD that undergoes this phase transition is more efficient than GD
that monotonically decreases the risk. Our analysis applies to networks of any
width, beyond the well-known neural tangent kernel and mean-field regimes.
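The setting can be illustrated with a minimal numerical sketch. The snippet below is an illustration only, not the paper's experimental setup: it trains the first layer of a small two-layer network with a near-homogeneous activation on synthetic linearly separable data using full-batch GD with a deliberately large (hypothetical) stepsize, and prints the empirical logistic risk together with a normalized-margin proxy so the two phases can be inspected. All widths, stepsizes, and data parameters are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of large-stepsize GD on a two-layer network with the logistic
# loss. All sizes, the stepsize, and the data generation below are
# illustrative assumptions, not the paper's experimental configuration.
rng = np.random.default_rng(0)

n, d, m = 64, 5, 32                        # samples, input dim, network width
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)                    # linearly separable labels

# Activation: a "leaky" softplus. It is non-homogeneous but near-homogeneous
# (|phi(z) - z*phi'(z)| is bounded), and its derivative is bounded away from 0.
def act(z):
    return 0.1 * z + np.logaddexp(0.0, z)

def act_prime(z):
    return 0.1 + 1.0 / (1.0 + np.exp(-z))

# Two-layer predictor f(x) = a^T act(W x); only the first layer is trained here.
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def margins(W):
    return y * (act(X @ W.T) @ a)

def risk(W):
    # empirical logistic risk: (1/n) * sum_i log(1 + exp(-y_i f(x_i)))
    return np.mean(np.logaddexp(0.0, -margins(W)))

def grad(W):
    pre = X @ W.T                                   # (n, m) pre-activations
    coef = -y / (1.0 + np.exp(y * (act(pre) @ a)))  # d loss / d f at each sample
    return ((coef[:, None] * act_prime(pre) * a[None, :]).T @ X) / n

eta = 8.0    # deliberately large stepsize (hypothetical value)
for t in range(2001):
    W -= eta * grad(W)
    if t % 200 == 0:
        # normalized-margin proxy: min_i y_i f(x_i) / ||W||_F
        print(f"step {t:5d}  risk {risk(W):.4f}  "
              f"normalized margin {margins(W).min() / np.linalg.norm(W):.4f}")
```

With a small stepsize the printed risk would decrease monotonically from the start; with a sufficiently large one, the paper's analysis predicts an initial oscillatory phase followed by monotone decrease once the risk falls below a stepsize-dependent threshold, together with nearly monotone growth of the normalized margin.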