Learning to Write with Cooperative Discriminators

Ari Holtzman
Jan Buys
Antoine Bosselut
David Golub

Meeting of the Association for Computational Linguistics (ACL), 2018. arXiv: abs/1805.06087.

Abstract:

Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive…
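
For reference, the objective the abstract refers to is the standard autoregressive language-modeling criterion: the model is trained to minimize the negative log-likelihood of the text, whose exponentiated per-token average is the perplexity. A minimal sketch of this standard definition (general background, not taken from the paper itself):

\[
\mathrm{PPL}(w_{1:N}) = \exp\!\left(-\frac{1}{N}\sum_{t=1}^{N}\log p_\theta\bigl(w_t \mid w_{<t}\bigr)\right)
\]

Minimizing this quantity rewards locally probable continuations, which is the sense in which the abstract argues that perplexity alone is too weak an objective to capture higher-level communicative goals.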
