RMAN: Relational multi-head attention neural network for joint extraction of entities and relations

Applied Intelligence (2021)

Abstract
The task of extracting entities and relations has evolved from pipelined extraction to joint extraction. The joint model overcomes the disadvantages of pipelined methods and strengthens the information interaction between entities and relations. However, existing joint models rarely attend to the semantic information between words, which limits their ability to resolve overlapping relations. In this paper, we propose the RMAN model for joint extraction of entities and relations, which consists of a multi-feature fusion encoder for sentence representation and a decoder for sequence annotation. We first add a multi-head attention layer after a Bi-LSTM to obtain sentence representations, and leverage the attention mechanism to capture relation-based sentence representations. Then, we perform sequence annotation on the sentence representation to obtain entity pairs. Experiments on the NYT-single, NYT-multi, and WebNLG datasets demonstrate that our model can efficiently extract overlapping triples and outperforms other baselines.
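The abstract describes an encoder-decoder pipeline: Bi-LSTM contextual encoding, a multi-head attention layer over the Bi-LSTM outputs, and per-token sequence annotation. The following PyTorch sketch illustrates that general structure only; it is not the authors' implementation, and the layer sizes, tag set, and class name RMANSketch are illustrative assumptions.

# Minimal sketch (assumed hyperparameters), not the paper's released code.
import torch
import torch.nn as nn

class RMANSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=128,
                 num_heads=4, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bi-LSTM yields contextual features of size 2 * hidden_dim per token.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Multi-head self-attention over the Bi-LSTM outputs produces the
        # attention-refined sentence representation.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                          batch_first=True)
        # Decoder: per-token tag scores for sequence annotation
        # (e.g. BIO-style tags marking entity boundaries and roles).
        self.tagger = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (B, T, emb_dim)
        h, _ = self.bilstm(x)          # (B, T, 2 * hidden_dim)
        a, _ = self.attn(h, h, h)      # self-attention over the tokens
        return self.tagger(a)          # (B, T, num_tags) tag logits

# Usage example with random token ids.
model = RMANSketch()
tokens = torch.randint(0, 10000, (2, 20))   # batch of 2 sentences, length 20
logits = model(tokens)
print(logits.shape)                          # torch.Size([2, 20, 9])

In the paper's setting, one such tagging pass is associated with each candidate relation, so that overlapping triples sharing entities can still be recovered; the sketch above shows only a single pass.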
Keywords
Joint extraction of entities and relations, Relation feature, Multi-head attention, Sequence annotation