Detecting Hallucinated Content in Conditional Neural Sequence Generation


Abstract:

Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinating additional content not supported by the input, which can undermine trust in the model. To better assess the faithfulness of machine outputs, we propose a new task: predicting whether each token in the output sequence is hallucinated (i.e., not supported by the input).
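
The abstract frames hallucination detection as token-level binary prediction over the model output. Below is a minimal sketch of that task format, using a toy string-overlap baseline as a stand-in for a learned detector; all data, names, and the detector itself are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of token-level hallucination detection:
# given a source input and a model output, predict a binary label
# for each output token (1 = hallucinated, 0 = supported by the input).

source = "The cat sat on the mat."
output_tokens = ["The", "black", "cat", "sat", "on", "the", "mat", "."]

# Hypothetical reference labels: "black" is not supported by the source.
reference_labels = [0, 1, 0, 0, 0, 0, 0, 0]


def naive_detector(source: str, tokens: list) -> list:
    """Toy baseline: flag tokens that never appear in the source.

    A real detector would model context (e.g., with a pretrained language
    model, as the abstract suggests) rather than rely on string overlap.
    """
    source_vocab = {w.lower().strip(".,") for w in source.split()}
    labels = []
    for token in tokens:
        core = token.lower().strip(".,")
        # Treat punctuation-only tokens as faithful; otherwise flag tokens
        # that do not occur anywhere in the source.
        labels.append(0 if (not core or core in source_vocab) else 1)
    return labels


predicted = naive_detector(source, output_tokens)
print(list(zip(output_tokens, predicted)))

# Simple token-level accuracy against the reference labels.
correct = sum(p == r for p, r in zip(predicted, reference_labels))
print(f"Token accuracy: {correct / len(reference_labels):.2f}")
```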
