Learning Distinct Features Helps, Provably

Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Part II (2023)

Abstract
We study the diversity of the features learned by a two-layer neural network trained with the least squares loss. We measure diversity by the average L2 distance between the hidden-layer features and theoretically investigate how learning non-redundant, distinct features affects the performance of the network. To do so, we derive novel generalization bounds for such networks that depend on feature diversity and are based on Rademacher complexity. Our analysis proves that more distinct features at the hidden-layer units lead to better generalization. We also show how to extend our results to deeper networks and different losses.
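The abstract does not spell out the diversity measure beyond "the average L2 distance between the hidden-layer features", so the following is a minimal sketch that assumes diversity means the average pairwise L2 distance between the hidden units' activation vectors over a sample. The two-layer ReLU network, the function names (hidden_features, feature_diversity), and all dimensions are illustrative assumptions, not the authors' implementation.

import torch

def hidden_features(W, b, X):
    # Hidden-layer activations of a two-layer network: relu(X W^T + b).
    # W: (n_units, d_in), b: (n_units,), X: (n_samples, d_in).
    # ReLU is an assumed activation; the paper's exact choice may differ.
    return torch.relu(X @ W.T + b)

def feature_diversity(H):
    # Average pairwise L2 distance between hidden units' feature vectors,
    # where unit j's "feature" is its activation pattern over the sample,
    # i.e. column j of H (H has shape n_samples x n_units).
    F = H.T                               # (n_units, n_samples)
    D = torch.cdist(F, F, p=2)            # pairwise L2 distances between units
    m = F.shape[0]
    return D.sum() / (m * (m - 1))        # mean over distinct unit pairs

# Illustrative usage with random weights and data.
torch.manual_seed(0)
X = torch.randn(128, 20)                  # 128 samples, 20 input features
W = torch.randn(50, 20)                   # 50 hidden units
b = torch.zeros(50)
H = hidden_features(W, b, X)
print(float(feature_diversity(H)))

Under this reading, a larger value indicates less redundancy among the hidden units, which is the quantity the paper's generalization bounds relate to improved performance.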
Keywords
Neural Networks, Generalization Theory, Feature Diversity