Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes
CoRR (2024)
Abstract
We study the generalization of two-layer ReLU neural networks in a univariate
nonparametric regression problem with noisy labels. This is a problem where
kernels (e.g., NTK) are provably sub-optimal and benign overfitting does
not happen, thus disqualifying existing theory for interpolating (0-loss,
globally optimal) solutions. We present a new theory of generalization for local
minima that gradient descent with a constant learning rate can stably
converge to. We show that gradient descent with a fixed learning rate η
can only find local minima that represent smooth functions with a certain
weighted first-order total variation bounded by 1/η - 1/2 +
Õ(σ + √MSE), where σ is the label noise
level, MSE is short for mean squared error against the ground truth,
and Õ(·) hides a logarithmic factor. Under mild assumptions,
we also prove a nearly-optimal MSE bound of Õ(n^{-4/5}) within
the strict interior of the support of the n data points. Our theoretical
results are validated by extensive simulations demonstrating that training with a
large learning rate induces sparse linear spline fits. To the best of our knowledge,
we are the first to obtain a generalization bound via minima stability in the
non-interpolation case and the first to show that ReLU NNs without regularization
can achieve near-optimal rates in nonparametric regression.
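
As a rough illustration of the claimed mechanism, the following NumPy sketch (not the authors' code) trains a two-layer univariate ReLU network with full-batch gradient descent at a small versus a large fixed step size on noisy data, then compares the first-order total variation (total change of slope) of the learned functions. The width, step sizes, iteration counts, and noise level are illustrative assumptions, and the plain (unweighted) total variation computed here is only a proxy for the weighted first-order total variation appearing in the paper's bound.

```python
# Illustrative sketch: two-layer univariate ReLU net trained with full-batch GD
# at a small vs. a large fixed step size; compare first-order total variation.
# Hyperparameters below are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic univariate regression data with noisy labels.
n, sigma = 200, 0.3                        # sample size and label-noise level
x = np.sort(rng.uniform(-1.0, 1.0, n))
f_star = np.sin(np.pi * x)                 # ground-truth regression function
y = f_star + sigma * rng.normal(size=n)

def init_params(width=100, seed=1):
    r = np.random.default_rng(seed)
    w = r.normal(size=width)               # input weights
    b = r.uniform(-1.0, 1.0, size=width)   # biases (break points in [-1, 1])
    a = 0.01 * r.normal(size=width)        # small output weights
    return w, b, a

def forward(params, pts):
    """Two-layer ReLU network: f(t) = sum_k a_k * relu(w_k * t + b_k)."""
    w, b, a = params
    return np.maximum(np.outer(pts, w) + b, 0.0) @ a

def gd(params, x, y, lr, steps):
    """Full-batch gradient descent on the mean squared training loss."""
    w, b, a = [p.copy() for p in params]
    m = len(x)
    for _ in range(steps):
        pre = np.outer(x, w) + b
        act = np.maximum(pre, 0.0)
        mask = (pre > 0.0).astype(float)
        res = act @ a - y                  # residuals on the training labels
        grad_a = act.T @ res / m
        grad_w = a * ((mask * res[:, None]).T @ x) / m
        grad_b = a * (mask * res[:, None]).sum(axis=0) / m
        a -= lr * grad_a
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b, a

def first_order_tv(params, grid):
    """Approximate total variation of f' (sum of slope changes) on a fine grid."""
    vals = forward(params, grid)
    slopes = np.diff(vals) / np.diff(grid)
    return np.abs(np.diff(slopes)).sum()

grid = np.linspace(-1.0, 1.0, 2001)
# Small vs. large fixed step size; if the large-step run diverges for another
# seed or width, reduce lr slightly.
for lr, steps in [(0.02, 30_000), (0.1, 6_000)]:
    params = gd(init_params(), x, y, lr=lr, steps=steps)
    mse = np.mean((forward(params, x) - f_star) ** 2)
    print(f"lr={lr:<5} steps={steps:<6} MSE vs truth={mse:.4f} "
          f"first-order TV={first_order_tv(params, grid):.2f}")
```

Per the theory, the large-step-size run should settle at a flatter minimum representing a sparser linear-spline fit (smaller total variation of slope), while the small-step-size run is freer to fit the label noise with many kinks; the exact numbers depend on the illustrative hyperparameter choices above.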