From Boltzmann Machines to Neural Networks and Back Again

NeurIPS 2020.

Highlight:
We show that an assumption related to ferromagneticity, while still allowing some amount of negative correlation in the Restricted Boltzmann Machine, lets us learn the induced feedforward network faster than would be possible without distributional assumptions.

Abstract:

Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well-known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models. Our results are based on new...
