From Boltzmann Machines to Neural Networks and Back Again
NeurIPS 2020.
We show that an assumption related to ferromagneticity, but permitting some amount of negative correlation in the Restricted Boltzmann Machine, allows us to learn the induced feedforward network faster than would be possible without distributional assumptions.
Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models. Our results are based on new…
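As background for the "induced feedforward network" mentioned in the summary: marginalizing out the hidden layer of a binary RBM turns each hidden unit into a softplus nonlinearity applied to the visible units, so the log of the unnormalized marginal is a one-hidden-layer feedforward network. A minimal sketch, with illustrative weights (not from the paper), verified against brute-force summation over hidden configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                              # visible and hidden layer sizes (illustrative)
W = rng.normal(scale=0.5, size=(m, n))   # hidden-visible couplings
b = rng.normal(scale=0.1, size=n)        # visible biases
c = rng.normal(scale=0.1, size=m)        # hidden biases

def softplus(x):
    return np.log1p(np.exp(x))

def log_unnormalized_marginal(v):
    """log sum_h exp(b.v + c.h + h^T W v): a feedforward net with softplus units."""
    return b @ v + softplus(c + W @ v).sum()

def log_brute_force(v):
    """Same quantity by explicitly summing over all 2^m hidden configurations."""
    total = 0.0
    for k in range(2 ** m):
        h = np.array([(k >> j) & 1 for j in range(m)], dtype=float)
        total += np.exp(b @ v + c @ h + h @ W @ v)
    return np.log(total)

v = np.array([1.0, 0.0, 1.0, 1.0])
assert np.isclose(log_unnormalized_marginal(v), log_brute_force(v))
```

Each softplus term comes from summing one hidden unit over {0, 1}, since sum over h_j of exp(h_j (c_j + W_j . v)) equals 1 + exp(c_j + W_j . v).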