A Novel Image Captioning Method Based on Generative Adversarial Networks
Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series, Lecture Notes in Computer Science (2019)
Abstract
Although RNN-based image captioning methods have made great progress in recent years, they often lack variability and overlook minor details in the image. In this paper, a novel image captioning method based on Generative Adversarial Networks (GANs) is proposed, which improves the naturalness and diversity of image descriptions. In this method, a matcher is added to the generator to capture image features that do not appear in the reference descriptions; the generator then produces descriptions conditioned on the image, while a discriminator assesses how well a description fits the visual content. Notably, training a sequence generator in this adversarial setting is nontrivial, since sampling discrete words is non-differentiable. Experiments on MSCOCO and Flickr30k show that the method performed competitively against real people in a user study and outperformed other methods on various tasks.
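To make the generator-discriminator interaction concrete, the following is a highly simplified, stub-level sketch of the adversarial captioning loop. All names here (`generator`, `discriminator`, the toy vocabulary, and the integer "image feature") are hypothetical stand-ins: the paper's actual generator is LSTM-based and conditions on CNN image features, and its discriminator is a learned model, not a word-overlap heuristic.

```python
import random

# Toy vocabulary; in the real method the generator emits words from a
# learned LSTM language model conditioned on image features.
VOCAB = ["a", "dog", "cat", "runs", "sleeps", "on", "grass"]

def generator(image_feature, rng, length=4):
    # Stub generator: samples a caption; seeding on the image feature
    # stands in for conditioning the LSTM on visual input.
    rng.seed(image_feature)
    return [rng.choice(VOCAB) for _ in range(length)]

def discriminator(image_feature, caption):
    # Stub discriminator: scores how well a caption fits the image.
    # Here "fit" is word overlap with a reference set; the paper instead
    # learns this score adversarially.
    reference = {"dog", "runs", "grass"} if image_feature == 1 else {"cat", "sleeps"}
    return sum(w in reference for w in caption) / len(caption)

rng = random.Random()
caption = generator(1, rng)
score = discriminator(1, caption)
print(caption, round(score, 2))
```

Because the sampled words are discrete, the discriminator's score cannot be backpropagated through the sampling step directly; GAN-based captioning methods typically treat the score as a reward and update the generator with a policy-gradient estimator, which is one reason training the sequence generator is nontrivial.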
Key words
LSTM, GAN, Generator, Discriminator, Matcher