Enhancing Pre-Trained Generative Language Models with Question Attended Span Extraction on Machine Reading Comprehension
CoRR (2024)
Abstract
Machine Reading Comprehension (MRC) poses a significant challenge in the
field of Natural Language Processing (NLP). While mainstream MRC methods
predominantly leverage extractive strategies using encoder-only models such as
BERT, generative approaches face the issue of out-of-control generation: a
critical problem in which the generated answers are often incorrect, irrelevant,
or unfaithful to the source text. To address these limitations in generative
models for MRC, we introduce the Question-Attended Span Extraction (QASE)
module. Integrated during the fine-tuning phase of pre-trained generative
language models (PLMs), QASE significantly enhances their performance, allowing
them to surpass the extractive capabilities of advanced Large Language Models
(LLMs) such as GPT-4. Notably, these gains in performance do not come with an
increase in computational demands. The efficacy of the QASE module has been
rigorously tested across various datasets, consistently achieving or even
surpassing state-of-the-art (SOTA) results.
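The abstract does not specify the internals of the QASE module, but the idea of a question-attended span-extraction head fine-tuned jointly with a generative PLM can be illustrated. The following is a minimal PyTorch sketch, assuming a cross-attention design in which context token representations attend to the question before start/end span prediction; all names (`QASEHead`, `joint_loss`, the loss weighting `alpha`) and architectural choices are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: the real QASE architecture may differ.
import torch
import torch.nn as nn


class QASEHead(nn.Module):
    """Hypothetical question-attended span-extraction head.

    Context hidden states (from the PLM) attend to question hidden
    states via cross-attention, then each context token receives
    answer-span start/end logits.
    """

    def __init__(self, hidden_size: int = 768, num_heads: int = 8):
        super().__init__()
        # Cross-attention: context tokens are the queries,
        # question tokens supply keys and values.
        self.cross_attn = nn.MultiheadAttention(
            hidden_size, num_heads, batch_first=True
        )
        # Two logits per context token: span start and span end.
        self.span_classifier = nn.Linear(hidden_size, 2)

    def forward(self, context_states, question_states):
        # context_states:  (batch, ctx_len, hidden)
        # question_states: (batch, q_len, hidden)
        attended, _ = self.cross_attn(
            context_states, question_states, question_states
        )
        logits = self.span_classifier(attended)           # (batch, ctx_len, 2)
        start_logits, end_logits = logits.unbind(dim=-1)  # each (batch, ctx_len)
        return start_logits, end_logits


def joint_loss(lm_loss, start_logits, end_logits, start_pos, end_pos, alpha=0.5):
    """Joint fine-tuning objective: generative LM loss plus an auxiliary
    span-extraction loss. The weighting alpha is an assumption."""
    ce = nn.CrossEntropyLoss()
    span_loss = ce(start_logits, start_pos) + ce(end_logits, end_pos)
    return lm_loss + alpha * span_loss


# Smoke test with random tensors standing in for PLM hidden states.
head = QASEHead()
ctx = torch.randn(2, 50, 768)   # context passage representations
q = torch.randn(2, 12, 768)     # question representations
start_logits, end_logits = head(ctx, q)
assert start_logits.shape == end_logits.shape == (2, 50)
```

Because the span head only adds a small cross-attention layer and a linear classifier on top of representations the PLM already computes, a design of this kind is consistent with the abstract's claim that the performance gains come without a meaningful increase in computational demands.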