Discrete Memory Addressing Variational Autoencoder For Visual Concept Learning

2020 International Joint Conference on Neural Networks (IJCNN), 2020

Abstract
A substantial aspect of general intelligence is the ability to distill basic building blocks from various high-level concepts. Artificial vision systems with such a hierarchical property can not only perform accurate reasoning about complex observations, but also learn useful low-level knowledge shared across scenes. To achieve this goal, we propose a discrete memory addressing VAE (DM-VAE) for explicitly memorizing and reasoning about shared primitives in images. A time-persistent memory module stores the learned abstract knowledge and interacts with the generative model. The model decides what to attend to at each step and constructs the primitive library automatically as learning progresses, in a fully unsupervised setting. During inference, the model attempts to interpret a new observation as a combination of previously learned elements. We further derive a proper variational lower bound that can be optimized efficiently. We conduct visual comprehension experiments on images and demonstrate that our model is able to search for, identify, and memorize semantically meaningful primitive concepts.
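
The abstract does not spell out the bound, so the following is only a minimal sketch of the kind of variational lower bound a VAE with a discrete memory address typically admits; the notation (image $x$, discrete address $a$, addressed memory slot $m_a$, continuous latent $z$) is our own assumption, and the bound actually derived in the paper may differ:

\[
\log p(x) \;\ge\; \mathbb{E}_{q(a \mid x)\, q(z \mid x, m_a)}
\left[ \log \frac{p(x \mid z, m_a)\, p(z \mid m_a)\, p(a)}{q(z \mid x, m_a)\, q(a \mid x)} \right]
\]

In bounds of this form, the expectation over the discrete address $a$ can be handled by exact enumeration over memory slots or by standard discrete gradient estimators (score-function or Gumbel-softmax style), which is one common route to efficient optimization.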
Keywords
deep generative model, hierarchical Bayesian model, concept learning, deep model with memory