End-to-End Image Reconstruction of Image from Human Functional Magnetic Resonance Imaging Based on the "Language" of Visual Cortex.

ICCAI(2020)

Abstract
In recent years, the development of deep learning has deepened the integration between neuroscience and computer vision. In computer vision, deep learning has made it possible both to generate images from text and to extract semantic understanding from images. Here, text refers to human language, and for a computer to understand it, the text typically has to be encoded. The human visual system likewise produces "descriptions" of visual stimuli, that is, a "language" generated by the brain itself. Reconstruction of visual information, the process of recovering visual stimuli from the brain's internal representation, is the most difficult task in visual decoding, and existing research on visual mechanisms still leaves the "language" of the human brain hard to understand. Inspired by text-to-image generation, we treated voxel responses as the brain's "language" and built an end-to-end visual decoding model that reconstructs visual stimuli from a small number of samples. We simply retrained a generative adversarial network (GAN) originally designed for text-to-image generation on 1,200 training samples (natural image stimuli and their corresponding voxel responses), treating the voxel responses as the brain's semantic information and feeding them to the GAN as prior information. The results showed that the trained decoding model could successfully reconstruct the natural images. They also suggest that reconstructing visual stimuli from the "brain language" is feasible, that an end-to-end model is more likely to learn a direct mapping between brain activity and visual perception, and that combining neuroscience and computer vision holds great potential.
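The abstract describes conditioning a text-to-image GAN on voxel responses instead of text embeddings. The sketch below illustrates that idea in PyTorch under stated assumptions: the voxel dimensionality, layer widths, image resolution, and class names (VoxelConditionedGenerator, VoxelConditionedDiscriminator) are hypothetical and not the authors' actual architecture.

```python
# Minimal sketch: a conditional GAN where the fMRI voxel pattern plays the
# role of the text embedding in a text-to-image GAN. All sizes are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

VOXEL_DIM = 4466   # assumed number of visual-cortex voxels per sample
NOISE_DIM = 100    # standard GAN noise vector
IMG_SIZE = 64      # assumed reconstruction resolution (grayscale 64x64)


class VoxelConditionedGenerator(nn.Module):
    """Maps (noise, voxel response) -> reconstructed image."""
    def __init__(self):
        super().__init__()
        # Compress the high-dimensional voxel pattern into a compact
        # "semantic" embedding, analogous to a sentence embedding.
        self.voxel_encoder = nn.Sequential(
            nn.Linear(VOXEL_DIM, 256), nn.LeakyReLU(0.2),
        )
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + 256, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )

    def forward(self, noise, voxels):
        cond = self.voxel_encoder(voxels)
        x = torch.cat([noise, cond], dim=1)
        return self.net(x).view(-1, 1, IMG_SIZE, IMG_SIZE)


class VoxelConditionedDiscriminator(nn.Module):
    """Scores whether an image is real and matches the voxel response."""
    def __init__(self):
        super().__init__()
        self.voxel_encoder = nn.Sequential(
            nn.Linear(VOXEL_DIM, 256), nn.LeakyReLU(0.2),
        )
        self.net = nn.Sequential(
            nn.Linear(IMG_SIZE * IMG_SIZE + 256, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, images, voxels):
        cond = self.voxel_encoder(voxels)
        x = torch.cat([images.flatten(1), cond], dim=1)
        return self.net(x)


if __name__ == "__main__":
    G = VoxelConditionedGenerator()
    D = VoxelConditionedDiscriminator()
    voxels = torch.randn(8, VOXEL_DIM)   # stand-in fMRI responses
    noise = torch.randn(8, NOISE_DIM)
    fake = G(noise, voxels)              # (8, 1, 64, 64) images
    score = D(fake, voxels)              # (8, 1) real/fake scores
    print(fake.shape, score.shape)
```

Training would follow the usual conditional GAN recipe (alternating generator and discriminator updates on the 1,200 stimulus-response pairs); the paper's specific losses and backbone are not reproduced here.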