When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?

COLT 2021

Abstract
We establish conditions under which gradient descent applied to fixed-width deep networks drives the logistic loss to zero, and prove bounds on the rate of convergence. Our analysis applies to smoothed approximations of the ReLU, such as Swish and the Huberized ReLU, proposed in previous applied work. We provide two sufficient conditions for convergence. The first is simply a bound on the loss at initialization. The second is a data separation condition used in prior analyses.
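For context, a minimal sketch of the standard definitions behind the quantities named above (these are the commonly used forms; the paper's exact Huberized-ReLU parameterization is not reproduced here): the logistic loss on a margin $z$ and the Swish activation are

$$\ell(z) = \log\!\left(1 + e^{-z}\right), \qquad \mathrm{swish}(x) = x\,\sigma(x) = \frac{x}{1 + e^{-x}},$$

where $\sigma$ denotes the logistic sigmoid. Since $\ell(z) \to 0$ only as the margin $z \to \infty$, driving the logistic loss to zero entails correctly classifying every training point with growing margin, which is the sense of interpolation in the title.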
Keywords
smoothed ReLU activations, gradient descent, deep networks, logistic loss