Understanding Neural Abstractive Summarization Models via Uncertainty

EMNLP 2020, pp. 6275-6281, 2020.

Abstract:

An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior. In this work, we analyze summarization decoders in both blackbox and whitebox ways by studying the entropy, or uncertainty, of the model's token-level predictions…
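The uncertainty measure the abstract refers to is the Shannon entropy of the decoder's next-token distribution at each decoding step. As a minimal sketch (not the authors' code; the distributions below are hypothetical), entropy can be computed directly from the predicted probabilities, and a flat distribution yields higher entropy than a peaked one:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution:
    H(p) = -sum_i p_i * log(p_i), skipping zero-probability entries."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical toy distributions over a 4-token vocabulary:
confident = [0.97, 0.01, 0.01, 0.01]  # peaked: model is certain of the next token
uncertain = [0.25, 0.25, 0.25, 0.25]  # uniform: model is maximally uncertain

# A peaked distribution has lower entropy than a uniform one.
print(token_entropy(confident) < token_entropy(uncertain))  # → True
```

In practice, `probs` would be the softmax output of the summarization decoder at a given timestep; tracking this quantity across timesteps is what enables the blackbox analysis of when the model generates confidently versus uncertainly.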
