Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation
arXiv (2023)
Abstract
Knowledge-to-text generators often struggle to faithfully describe the input
facts: they may produce hallucinations that contradict the input, or describe
facts not present in the input. To reduce hallucinations, we propose a
decoding-only method, TWEAK (Think While Effectively Articulating Knowledge),
which can be integrated with any generator without retraining. At each decoding
step, TWEAK treats the sequence generated so far, together with its possible
future sequences, as hypotheses, and ranks each generation candidate by how
well its hypotheses are supported by the input facts, as judged by a Hypothesis
Verification Model (HVM). We first demonstrate the effectiveness of TWEAK by
using a Natural Language Inference (NLI) model as the HVM and report improved
faithfulness with minimal impact on quality. We then replace the NLI model with
a task-specific HVM trained on a first-of-its-kind dataset, FATE (Fact-Aligned
Textual Entailment), which pairs input facts with their original and perturbed
descriptions. We test TWEAK with two generators; averaged over the two models,
the best TWEAK variants improve faithfulness (FactKB) by 2.24/7.17 points on
in-/out-of-distribution evaluations, respectively, at the cost of only a
0.14/0.32-point decline in quality (BERTScore).
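The abstract's core mechanism, reranking decoding candidates by how strongly an HVM judges them to be supported by the input facts, can be sketched as follows. This is a minimal illustration, not the paper's implementation: roberta-large-mnli stands in for the HVM (mirroring the paper's first NLI-based experiment), while the names entailment_score and rerank_candidates, the linearized-triple fact format, and the interpolation weight alpha are all hypothetical choices made for this sketch.

```python
# Sketch of TWEAK-style decoding-time reranking with an off-the-shelf NLI
# model as the HVM. Assumptions (not from the paper): the score-combination
# weight alpha, the fact linearization, and all function/variable names.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NLI_MODEL = "roberta-large-mnli"  # assumed stand-in for the HVM
tokenizer = AutoTokenizer.from_pretrained(NLI_MODEL)
hvm = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL)
hvm.eval()

def entailment_score(facts: str, hypothesis: str) -> float:
    """P(entailment) of a candidate hypothesis given the linearized facts."""
    inputs = tokenizer(facts, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = hvm(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli labels: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

def rerank_candidates(facts: str, candidates: list[str],
                      lm_scores: list[float],
                      alpha: float = 0.5) -> list[tuple[str, float]]:
    """Rank beam candidates by interpolating LM score with HVM support.

    Each candidate (the sequence generated so far, possibly extended with a
    lookahead to approximate its future) is treated as a hypothesis and
    scored against the input facts; alpha is an assumed mixing weight.
    """
    scored = []
    for cand, lm in zip(candidates, lm_scores):
        faith = entailment_score(facts, cand)
        scored.append((cand, alpha * lm + (1 - alpha) * faith))
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy usage: input facts linearized as "subject | relation | object".
facts = "Alan Turing | field | computer science"
candidates = [
    "Alan Turing worked in computer science.",
    "Alan Turing was a famous chemist.",
]
print(rerank_candidates(facts, candidates, lm_scores=[0.90, 0.95]))
```

In this toy run the unfaithful second candidate has the higher LM score, but its low entailment probability pulls its combined score below the faithful one, which is the trade-off the abstract's FactKB/BERTScore numbers quantify.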