
Timeline-based Sentence Decomposition with In-Context Learning for Temporal Fact Extraction

Annual Meeting of the Association for Computational Linguistics (2024)

Abstract
Fact extraction is pivotal for constructing knowledge graphs. Recently, the increasing demand for temporal facts in downstream tasks has led to the emergence of the task of temporal fact extraction. In this paper, we specifically address the extraction of temporal facts from natural language text. Previous studies fail to handle the challenge of establishing time-to-fact correspondences in complex sentences. To overcome this hurdle, we propose a timeline-based sentence decomposition strategy using large language models (LLMs) with in-context learning, ensuring a fine-grained understanding of the timeline associated with various facts. In addition, we evaluate the performance of LLMs for direct temporal fact extraction and get unsatisfactory results. To this end, we introduce TSDRE, a method that incorporates the decomposition capabilities of LLMs into the traditional fine-tuning of smaller pre-trained language models (PLMs). To support the evaluation, we construct ComplexTRED, a complex temporal fact extraction dataset. Our experiments show that TSDRE achieves state-of-the-art results on both the HyperRED-Temporal and ComplexTRED datasets.

【Key Points】: This paper proposes a timeline-based sentence decomposition strategy that uses large language models with in-context learning to extract temporal facts from natural language text at a fine-grained level, addressing the challenge of establishing time-to-fact correspondences in complex sentences. It introduces the novel TSDRE method and constructs the ComplexTRED dataset; experiments show state-of-the-art results on the HyperRED-Temporal and ComplexTRED datasets.

【Method】: The proposed method is a timeline-based sentence decomposition strategy, implemented with large language models (LLMs) using in-context learning.

【Experiments】: We evaluated LLMs on direct temporal fact extraction and found the results unsatisfactory. We therefore introduced TSDRE, which incorporates the decomposition capability of LLMs into the traditional fine-tuning of smaller pre-trained language models (PLMs). To support the evaluation, we constructed ComplexTRED, a complex temporal fact extraction dataset. Experiments on the HyperRED-Temporal and ComplexTRED datasets show that TSDRE achieves state-of-the-art results.
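To illustrate the in-context-learning step described above, here is a minimal sketch of how a few-shot prompt for timeline-based sentence decomposition might be assembled. The instruction wording, the example sentence, and the `build_decomposition_prompt` helper are illustrative assumptions, not the prompt actually used by TSDRE.

```python
# Hypothetical sketch: assemble a few-shot in-context-learning prompt that
# asks an LLM to split a complex sentence into simple clauses, one per time
# span on its timeline. The prompt format is an assumption for illustration.

FEW_SHOT_EXAMPLES = [
    {
        "sentence": ("Obama served as a U.S. senator from 2005 to 2008 "
                     "and as president from 2009 to 2017."),
        "decomposition": [
            "Obama served as a U.S. senator from 2005 to 2008.",
            "Obama served as president from 2009 to 2017.",
        ],
    },
]

def build_decomposition_prompt(sentence: str) -> str:
    """Build the few-shot prompt: an instruction, worked examples, then
    the target sentence awaiting decomposition."""
    parts = ["Decompose the sentence into simple sentences, "
             "one for each time span on its timeline.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Sentence: {ex['sentence']}")
        for sub in ex["decomposition"]:
            parts.append(f"- {sub}")
        parts.append("")  # blank line between examples
    parts.append(f"Sentence: {sentence}")
    return "\n".join(parts)
```

The resulting string would be sent to an LLM; its decomposed sentences could then be passed to a fine-tuned smaller PLM for the actual temporal fact extraction, mirroring the division of labor the summary describes.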