An autoencoder is an unsupervised learning technique in which a three-layer neural network is trained with the same data at its input and output layers; it is a special case of backpropagation. Because learning proceeds via backpropagation, training becomes a nonlinear optimization problem. The activation functions of the hidden and output layers can be chosen freely. When the target data are real-valued and unbounded, the output layer's activation function is usually taken to be the identity map (i.e., it changes nothing).
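The setup above can be sketched directly in code. Below is a minimal NumPy sketch of such a three-layer autoencoder; the layer sizes, learning rate, and iteration count are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

# A minimal three-layer autoencoder trained by backpropagation, matching the
# description above: the same data serve as input and target, the hidden
# layer uses a sigmoid, and the output layer uses the identity map because
# the targets are unbounded real values. All sizes, the learning rate, and
# the iteration count are illustrative assumptions.

rng = np.random.default_rng(0)

n_samples, n_in, n_hidden = 200, 8, 3   # compress 8-D inputs to a 3-D code
X = rng.normal(size=(n_samples, n_in))  # toy real-valued data

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.1, []
for _ in range(500):
    # forward pass: encode, then decode with an identity output activation
    H = sigmoid(X @ W1 + b1)            # hidden code
    Y = H @ W2 + b2                     # reconstruction
    err = Y - X
    losses.append(float(np.mean(err ** 2)))

    # backward pass: gradients of the mean squared reconstruction error
    dY = 2.0 * err / err.size
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dH = (dY @ W2.T) * H * (1.0 - H)    # sigmoid derivative
    dW1, db1 = X.T @ dH, dH.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the bottleneck (3 units) is narrower than the input (8 units), minimizing the reconstruction error forces the network to learn a compressed code with no labels involved, which is why the method counts as unsupervised.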
Pattern Recognition Letters, (2019)
The key contribution of this research is developing a novel formulation for the Class Specific Mean Autoencoder and utilizing it for adulthood classification
Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 357-368, (2019)
This paper proposes an expanded variational auto-encoder recommendation framework based on multiple condition labels
Rajeev Sahay, Rehana Mahfuz, Aly El Gamal
2019 53rd Annual Conference on Information Sciences and Systems (CISS), (2019)
We present three novel defense strategies to combat adversarial machine learning attacks: The Denoising Autoencoder, dimensionality reduction using the learned hidden layer of a fully-connected autoencoder neural network, and a cascade of the Denoising Autoencoder followed by the...
Tobias Lemke, Christine Peter
Journal of chemical theory and computation, (2019)
Additional points can be projected to the map with a differentiable function that the algorithm yields. This allows for efficient projection of additional points to the map and allows us to combine EncoderMap with enhanced sampling schemes that require biasing potentials defined ...
ICML, (2018): 2323-2332
In this paper we present a junction tree variational autoencoder for generating molecular graphs
IJCAI, pp. 2609-2615, (2018)
We argue that most existing graph embedding algorithms are unregularized methods that ignore the data distributions of the latent representation and suffer from inferior embedding in real-world graph data
ICLR, (2018)
In this paper we propose a new method to tackle the challenge of addressing both syntax and semantic constraints in generative models for structured data
Diane Bouchacourt, Ryota Tomioka, Sebastian Nowozin
National Conference on Artificial Intelligence, (2018)
Experimental evaluations show the relevance of our method, as the Multi-Level Variational Autoencoder learns a semantically meaningful disentangled representation, generalises to unseen groups and enables control on the latent representation
John D. Co-Reyes, Yuxuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine
ICML, (2018): 1008-1017
Experimental evaluations show that our method outperforms several prior methods and flat reinforcement learning methods in tasks that require reasoning over long horizons, handling sparse rewards, and performing multi-step compound skills
AISTATS, pp. 661-669, (2018)
We present the symmetric variational autoencoder, a novel framework which can match the joint distribution of data and latent code using the symmetric Kullback-Leibler divergence
Çaglar Aytekin, Xingyang Ni, Francesco Cricri, Emre Aksu
IJCNN, (2018)
We have shown that the high performance is not due to any conditioning applied to the representations, but due to the selection of a particular normalization that leads to more separable clusters in Euclidean space
IEEE Transactions on Industrial Informatics, no. 7 (2018): 3235-3243
Deep learning has been introduced for feature representation in soft sensor applications
Neurocomputing, (2018): 1-10
A novel strategy has been proposed to design the structure of the sparse-autoencoder-based deep neural network, by which the depth and number of hidden neurons of the deep sparse autoencoder can be systematically selected to extract the features of input signals
Inf. Sci., (2018): 27-38
Extensive experimental results demonstrated that the proposed low-rank constrained autoencoder outperformed state-of-the-art subspace clustering methods on three types of image datasets
National Conference on Artificial Intelligence, (2018)
The WAE consists of an encoding layer to decompose the input image into two half-resolution channels and a decoding layer to synthesize the original image from the two decomposed channels
Gökcen Eraslan, Lukas M. Simon, Maria Mircea, Nikola S. Mueller, Fabian J. Theis
bioRxiv, 300681, (2018)
We show that the deep count autoencoder network is highly scalable to datasets with up to millions of cells
IEEE Transactions on Smart Grid, no. 2 (2018): 594-604
This paper presents a framework for overvoltage identification based on feature extraction by a sparse autoencoder and classification by a softmax classifier
Pin Wu, Yang Yang, Xiaoqiang Li
Future Internet, no. 6 (2018)
We show that conventional image steganography methods mostly do not offer good payload capacity
SSCI, pp. 389-396, (2018)
A minimum-variance-embedded multi-layer architecture is presented in this paper for one-class classification
Ahmad M. Karim, Mehmet Serdar Guzel, Mehmet R. Tolun, Hilal Kaya, Fatih V. Celebi
Mathematical Problems in Engineering, (2018): 1-13
This paper proposes a new deep learning framework that combines the sparse autoencoder with the Taguchi method, an organized approach for parameter optimization in a reasonable amount of time
Keywords
Learning (Artificial Intelligence), Feature Extraction, Neural Networks, Autoencoder, Dimensionality Reduction, Image Reconstruction, Neural Nets, Deep Learning, Image Classification, Neural Network
Authors
Sicheng Zhao (2 papers)
Shirui Pan (2 papers)
Wenge Rong (2 papers)
Chunyuan Li (2 papers)
Angshul Majumdar (2 papers)
Xiaobing Han (2 papers)
Yunchen Pu (2 papers)
Guodong Long (2 papers)
Xugang Lu (2 papers)
Lawrence Carin (2 papers)