Tight Hardness Results for Training Depth-2 ReLU Networks

ITCS, 2021.


Abstract:

We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (that may include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem…
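The model class in the abstract — a weighted sum of ReLUs, possibly with negative coefficients, evaluated against the square loss on a training set — can be sketched as follows. This is an illustrative sketch of the objective, not code from the paper; all names and dimensions are assumptions.

```python
import numpy as np

def relu(z):
    # ReLU activation, applied elementwise.
    return np.maximum(z, 0.0)

def depth2_relu(x, A, b, w):
    # Depth-2 network: a weighted sum of k ReLU units,
    # sum_i w_i * relu(a_i . x + b_i); the weights w may be negative.
    return w @ relu(A @ x + b)

def square_loss(A, b, w, X, y):
    # Average square loss of the network over a training set (X, y) --
    # the quantity the training problem asks to minimize.
    preds = np.array([depth2_relu(x, A, b, w) for x in X])
    return np.mean((preds - y) ** 2)
```

The hardness results concern finding parameters (A, b, w) minimizing `square_loss`; evaluating the loss itself, as above, is straightforward.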
