A recollect-tuning method for entity and relation extraction

Expert Systems With Applications (2024)

Abstract
Fine-tuning and mask-tuning (or prompt tuning) are two approaches to building deep neural networks for entity and relation extraction. Fine-tuning-based models optimize the network with a task-relevant objective, and pre-trained language models (PLMs) serve mainly as external resources that supply word embeddings. Mask-tuning models are instead optimized with the same pre-training objective as the PLM and directly output verbalized entity-type representations, which is effective for exploiting the latent knowledge of PLMs. In this paper, we propose a recollect-tuning approach that jointly exploits the mechanisms of both fine-tuning and mask-tuning. Recollect-tuning iteratively masks tokens in a candidate entity span and classifies the span from both the masked-token representation and the entity-span representation, analogous to making a decision from incomplete information. During training, the network is optimized with a task-relevant objective that strengthens the semantic representation of each entity span, which is effective for learning noise-invariant entity features while taking full advantage of the latent knowledge in PLMs. Our method is evaluated on three public benchmarks (the ACE 2004, ACE 2005, and SciERC datasets) for the entity and relation extraction task. The results show significant improvements on both tasks, outperforming the state-of-the-art performance on ACE04, ACE05, and SciERC by +0.4%, +0.6%, and +0.5%, respectively.
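To make the masking-and-classification mechanism concrete, below is a minimal sketch of the recollect-tuning loop as described in the abstract, assuming a BERT-style PLM accessed via Hugging Face Transformers. The span indices, label count, and mean-pooled span representation are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: mask each token of a candidate entity span in turn and
# classify the span from the masked-token view plus the span-level view.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
plm = AutoModel.from_pretrained("bert-base-uncased")

NUM_ENTITY_TYPES = 8  # assumption: dataset-specific label count
hidden = plm.config.hidden_size
classifier = nn.Linear(2 * hidden, NUM_ENTITY_TYPES)  # mask view + span view

def recollect_logits(words, span_start, span_end):
    """Iteratively mask each span token, then classify the span from the
    masked-token representation concatenated with the span representation,
    averaging the per-mask decisions."""
    per_mask_logits = []
    for i in range(span_start, span_end + 1):
        masked = list(words)
        masked[i] = tokenizer.mask_token  # hide one span token at a time
        enc = tokenizer(masked, is_split_into_words=True, return_tensors="pt")
        states = plm(**enc).last_hidden_state[0]
        word_ids = enc.word_ids()  # subword position -> original word index
        mask_pos = word_ids.index(i)  # [MASK] maps to a single subword
        span_pos = [p for p, w in enumerate(word_ids)
                    if w is not None and span_start <= w <= span_end]
        mask_repr = states[mask_pos]              # masked-token view
        span_repr = states[span_pos].mean(dim=0)  # whole-span view
        per_mask_logits.append(classifier(torch.cat([mask_repr, span_repr])))
    return torch.stack(per_mask_logits).mean(dim=0)

# Example: score the span "Barack Obama" (word indices 0-1) over entity types.
words = "Barack Obama visited Berlin yesterday".split()
print(recollect_logits(words, 0, 1))
```

In training, one would presumably optimize `classifier` (and optionally the PLM) with a task-relevant cross-entropy loss over gold span labels, matching the abstract's description of strengthening each entity span's semantic representation.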
Keywords
Entity extraction, Relation extraction, Fine-tuning, Mask-tuning