RADM-DRE: Retrieval Augmentation for Document-Level Relation Extraction with Diffusion Model

2023 International Conference on Asian Language Processing (IALP)

Abstract
Existing data augmentation methods attempt to use more raw samples or incorporate external knowledge to enhance the model, under the assumption that an explicit data pool for retrieval is accessible in both the training and testing stages. We argue that data generated from the distribution of the raw data, beyond the raw data itself, can provide more informative augmentation and can relax the strong assumption that the original raw data must be accessible at test time. To address this issue, we propose a novel framework that, for the first time, introduces a diffusion model for this task. The diffusion model serves as a data generator, producing diverse samples directly from the learned data distribution. However, raw text is discrete and therefore hard to generate with a diffusion model directly. Thus, we model the original data in a transformed continuous embedding space and conduct retrieval from that data distribution. We then concatenate the retrieval results with the original features for augmentation. Experimental results on the public datasets DocRED, CDR, and GDA demonstrate promising performance.
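The abstract describes learning a diffusion model over a continuous embedding space and concatenating sampled embeddings with the original features. Below is a minimal sketch of that idea in PyTorch; the class name, embedding dimension, linear noise schedule, and MLP denoiser are illustrative assumptions, since the paper's actual encoder, generator architecture, and retrieval step are not specified here.

```python
import torch
import torch.nn as nn


class GaussianDiffusionAugmenter(nn.Module):
    """Minimal sketch: learn a denoising diffusion model over a continuous
    embedding space and use samples drawn from it as augmentation features.
    All dimensions and the denoiser architecture are illustrative assumptions,
    not the paper's actual configuration."""

    def __init__(self, dim=128, steps=50):
        super().__init__()
        self.dim = dim
        self.steps = steps
        # Linear beta schedule (a common DDPM default).
        betas = torch.linspace(1e-4, 0.02, steps)
        alphas = 1.0 - betas
        self.register_buffer("betas", betas)
        self.register_buffer("alpha_bars", torch.cumprod(alphas, dim=0))
        # Small MLP that predicts the noise added at step t.
        self.denoiser = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim)
        )

    def training_loss(self, x0):
        """Standard denoising objective on clean embeddings x0."""
        t = torch.randint(0, self.steps, (x0.size(0),), device=x0.device)
        a_bar = self.alpha_bars[t].unsqueeze(-1)
        noise = torch.randn_like(x0)
        xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
        t_feat = (t.float() / self.steps).unsqueeze(-1)
        pred = self.denoiser(torch.cat([xt, t_feat], dim=-1))
        return nn.functional.mse_loss(pred, noise)

    @torch.no_grad()
    def sample(self, n, device="cpu"):
        """Reverse process: draw n new embeddings from the learned distribution."""
        x = torch.randn(n, self.dim, device=device)
        for t in reversed(range(self.steps)):
            t_feat = torch.full((n, 1), t / self.steps, device=device)
            eps = self.denoiser(torch.cat([x, t_feat], dim=-1))
            beta, a_bar = self.betas[t], self.alpha_bars[t]
            # DDPM posterior mean; add noise on all but the last step.
            x = (x - beta / (1.0 - a_bar).sqrt() * eps) / (1.0 - beta).sqrt()
            if t > 0:
                x = x + beta.sqrt() * torch.randn_like(x)
        return x


# Usage: concatenate generated embeddings with the original features,
# mirroring the augmentation step described in the abstract.
features = torch.randn(8, 128)            # stand-in for document encoder output
augmenter = GaussianDiffusionAugmenter(dim=128)
generated = augmenter.sample(n=features.size(0))
augmented = torch.cat([features, generated], dim=-1)  # shape: [8, 256]
```

Because the generator is trained on the embedding distribution rather than on a fixed data pool, augmentation at test time does not require access to the original raw data, which is the relaxation the abstract argues for.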
Keywords
Data Augmentation, Diffusion Model, Relation Extraction