
How Two-Layer Neural Networks Learn, One (Giant) Step at a Time

ICRA 2025

Cited: 13 | Views: 25
Keywords
Feature learning, Gradient descent, SGD, Learning theory, Two-layer neural networks, Random features
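The title and keywords point to feature learning in a two-layer network driven by a single large ("giant") gradient step on the first layer. The sketch below is a hypothetical, minimal illustration of that setting, not the paper's code: all names, dimensions, the teacher function, and the learning-rate scaling are assumptions made for the example.

```python
# Hypothetical sketch of a two-layer network taking one large ("giant")
# full-batch gradient step on its first layer, then refitting the second
# layer on the updated features. Purely illustrative; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 50, 200, 4000            # input dim, hidden width, sample size (assumptions)
eta = np.sqrt(n)                   # "giant" learning rate scaling with n (assumption)

# Teacher: a simple single-index target, chosen only for illustration.
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = np.tanh(X @ w_star)

# Two-layer student: f(x) = a^T tanh(W x / sqrt(d)).
W = rng.standard_normal((p, d))
a = rng.standard_normal(p) / np.sqrt(p)

def forward(X, W, a):
    return np.tanh(X @ W.T / np.sqrt(d)) @ a

# One full-batch gradient step on the first-layer weights W (squared loss).
pre = X @ W.T / np.sqrt(d)                       # (n, p) preactivations
err = forward(X, W, a) - y                       # (n,) residuals
grad_W = ((err[:, None] * (1 - np.tanh(pre) ** 2) * a).T @ X) / (n * np.sqrt(d))
W = W - eta * grad_W

# Refit the second layer by ridge regression on the post-step features.
Phi = np.tanh(X @ W.T / np.sqrt(d))
a = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(p), Phi.T @ y)
print("train MSE after one giant step:", np.mean((Phi @ a - y) ** 2))
```

Refitting only the second layer after the step isolates how much the single large update improved the first-layer features relative to their random (random-features) initialization.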