Learning to Write with Cooperative Discriminators
Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2018. arXiv:1805.06087.
Abstract:
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive…
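The abstract points to overall perplexity as the objective RNN language models optimize. As a minimal illustration (not the paper's method), perplexity is the exponentiated average negative log-likelihood the model assigns to each token; the function name and toy numbers below are purely illustrative:

```python
import math

def perplexity(token_log_probs):
    """Perplexity of a sequence, given the natural-log probability
    a language model assigned to each token."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# A uniform model over a 4-word vocabulary assigns log(1/4) to every
# token, so its perplexity equals the vocabulary size (~4.0).
uniform_logps = [math.log(0.25)] * 10
print(perplexity(uniform_logps))
```

Because the metric only rewards assigning high probability to observed tokens, a model can score well while still producing the generic, repetitive text the abstract describes.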