We present three novel defense strategies to combat adversarial machine learning attacks: the Denoising Autoencoder, dimensionality reduction using the learned hidden layer of a fully-connected autoencoder neural network, and a cascade of the Denoising Autoencoder followed by the...
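As an illustration only (not the authors' implementation), a denoising-autoencoder defense can be sketched as a small network trained to map noise-corrupted inputs back to their clean versions; the denoised output would then be handed to the downstream classifier. All sizes, rates, and the toy data below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank "clean" data so a narrow bottleneck can reconstruct it.
Z = rng.normal(0.0, 1.0, (64, 3))
M = rng.normal(0.0, 0.5, (3, 8))
X = Z @ M  # clean inputs, shape (64, 8)

def train_dae(X, hidden=4, noise=0.3, lr=0.1, epochs=300):
    """Train a one-hidden-layer denoising autoencoder with plain SGD."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden))
    W2 = rng.normal(0.0, 0.1, (hidden, d))
    for _ in range(epochs):
        Xn = X + rng.normal(0.0, noise, X.shape)  # corrupt the inputs
        H = np.tanh(Xn @ W1)                       # encode
        R = H @ W2                                 # decode
        err = R - X                                # target is the CLEAN input
        gW2 = H.T @ err / n                        # gradient wrt decoder
        gH = (err @ W2.T) * (1.0 - H ** 2)         # backprop through tanh
        gW1 = Xn.T @ gH / n                        # gradient wrt encoder
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def denoise(X, W1, W2):
    return np.tanh(X @ W1) @ W2

W1, W2 = train_dae(X)
mse = np.mean((denoise(X, W1, W2) - X) ** 2)
baseline = np.mean(X ** 2)  # error of an untrained net that outputs ~zeros
```

In a cascade defense of the kind the excerpt names, `denoise` would run first and a second component would consume its output.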
Additional points can be projected onto the map with a differentiable function yielded by the algorithm. This makes the projection of new points efficient and allows us to combine EncoderMap with enhanced sampling schemes that require biasing potentials defined ...
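The key property is that once the low-dimensional map is learned, new points are placed on it by evaluating a fixed differentiable function rather than re-running the embedding. As a hypothetical stand-in sketch, a linear PCA projection plays the role of EncoderMap's learned encoder (which in the actual method is a neural network):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (100, 5))  # made-up training data

# "Learn" a 2-D map: here, the top two principal directions.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:2]

def project(points):
    """Differentiable (here, linear) projection of new points onto the map."""
    return (points - mean) @ components.T

new_points = rng.normal(0.0, 1.0, (10, 5))
coords = project(new_points)  # shape (10, 2)
```

Because `project` is differentiable, a biasing potential defined on the map coordinates can be pulled back to forces on the original variables via the chain rule.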
We argue that most existing graph embedding algorithms are unregularized methods that ignore the data distributions of the latent representation and therefore suffer from inferior embeddings on real-world graph data.
Experimental evaluations show the relevance of our method, as the Multi-Level Variational Autoencoder learns a semantically meaningful disentangled representation, generalises to unseen groups, and enables control over the latent representation.
Experimental evaluations show that our method outperforms several prior methods, as well as flat reinforcement learning methods, in tasks that require reasoning over long horizons, handling sparse rewards, and performing multi-step compound skills.
We have shown that the high performance is not due to any conditioning applied to the representations, but rather to the selection of a particular normalization that leads to more separable clusters in Euclidean space.
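As a toy illustration of this claim (not the paper's experiment), consider representations whose clusters differ in direction but overlap in magnitude: L2-normalization removes the magnitude variation, so the clusters become far more separable in Euclidean space. All data and the separation score below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two clusters pointing along different axes, with heavily overlapping norms.
scales = rng.uniform(0.1, 5.0, (200, 1))
a = scales[:100] * (np.array([1.0, 0.0]) + rng.normal(0, 0.05, (100, 2)))
b = scales[100:] * (np.array([0.0, 1.0]) + rng.normal(0, 0.05, (100, 2)))

def separation(p, q):
    """Gap between cluster means relative to within-cluster spread."""
    gap = np.linalg.norm(p.mean(axis=0) - q.mean(axis=0))
    spread = (np.linalg.norm(p - p.mean(axis=0), axis=1).mean()
              + np.linalg.norm(q - q.mean(axis=0), axis=1).mean())
    return gap / spread

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

raw = separation(a, b)                               # overlapping magnitudes
normed = separation(l2_normalize(a), l2_normalize(b))  # direction only
```

After normalization every point collapses onto the unit circle, so only the directional difference between the clusters remains and the separation score rises sharply.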
A novel strategy has been proposed to design the structure of the sparse-autoencoder-based deep neural network, whereby the depth and number of hidden neurons of the deep sparse autoencoder can be selected systematically to extract features from the input signals.
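The excerpt does not spell out the selection rule, so the following is a purely hypothetical example of what a "regular" sizing scheme could look like: halve the hidden-layer width at each depth until a target bottleneck is reached, which fixes both the depth and the per-layer neuron counts at once.

```python
def layer_sizes(input_dim, bottleneck=8):
    """Hypothetical sizing rule: halve the width per layer down to a bottleneck."""
    sizes = [input_dim]
    while sizes[-1] // 2 >= bottleneck:
        sizes.append(sizes[-1] // 2)
    return sizes

print(layer_sizes(128))  # [128, 64, 32, 16, 8]
```

A stacked sparse autoencoder would then be built with one encoder layer per consecutive pair of sizes.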
This paper proposes a new deep learning framework that combines a sparse autoencoder with the Taguchi method, an organized approach to parameter optimization in a reasonable amount of time.