Understanding Neural Abstractive Summarization Models via Uncertainty
EMNLP 2020, pp. 6275–6281.
Abstract:
An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior. In this work, we analyze summarization decoders in both blackbox and whitebox ways by studying the entropy, or uncertainty, of the model's token-level predictions.
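The analysis centers on a single quantity: the Shannon entropy of the decoder's next-token distribution at each generation step. The sketch below (plain PyTorch, not the authors' released code; model and variable names are illustrative) shows how such per-token entropies can be computed from decoder logits:

import torch
import torch.nn.functional as F

def token_entropies(logits: torch.Tensor) -> torch.Tensor:
    # Shannon entropy (in nats) of each token-level prediction.
    # logits: (seq_len, vocab_size) decoder logits, one row per step.
    # Returns a (seq_len,) tensor with H(p) = -sum_v p_v * log p_v.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)

# Hypothetical usage with a Hugging Face seq2seq summarizer:
#   out = model(input_ids=src_ids, decoder_input_ids=tgt_ids)
#   H = token_entropies(out.logits[0])  # one entropy per decoding step

By definition, low entropy marks steps where the context makes the next token nearly deterministic, while high entropy marks more open-ended generation decisions.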