EntRED: Benchmarking Relation Extraction with Fewer Shortcuts

arXiv (2023)

Abstract
Entity names play an important role in relation extraction (RE) and often influence model performance. As a result, the entity names in a benchmark's test set significantly influence the evaluation of RE models. In this work, we find that standard RE benchmarks contain a large portion of incorrect entity annotations, have low entity name diversity, and are prone to shortcuts from entity names to ground-truth relations. These issues leave the standard benchmarks far from reflecting real-world scenarios. Hence, we present EntRED, a challenging RE benchmark with reduced shortcuts and higher entity diversity. To build EntRED, we propose ERIC, an end-to-end entity replacement pipeline based on causal inference (CI). ERIC performs type-constrained replacements on entities to reduce the shortcuts from entity bias to ground-truth relations. ERIC applies CI in two aspects: 1) targeting the instances that need entity replacements, and 2) determining the candidate entities for replacements. We apply ERIC to TACRED to produce EntRED. EntRED evaluates whether an RE model correctly extracts relations from the text rather than relying on entity bias. Empirical results reveal that even strong RE models suffer a significant performance drop on EntRED, indicating that they memorize entity name patterns instead of reasoning from the textual context. We release ERIC's source code and the EntRED benchmark at https://github.com/wangywUST/ENTRED.
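The core operation the abstract describes is swapping an entity mention for another entity of the same type. The sketch below is a minimal illustration of such a type-constrained replacement on a TACRED-style instance; the field names (`tokens`, `subj_span`, `subj_type`), the candidate pool, and the random choice of replacement are assumptions for illustration only, not the released ERIC code, which additionally uses causal inference to select which instances to edit and which candidate entities to use.

```python
# Minimal sketch of type-constrained entity replacement (illustrative, not ERIC itself).
import random

def replace_subject(instance, candidate_pool, rng=random.Random(0)):
    """Swap the subject entity with a candidate name of the same entity type.

    `instance` is assumed to be a dict with:
      'tokens'    : list of word tokens
      'subj_span' : (start, end) token indices of the subject (end exclusive)
      'subj_type' : entity type string, e.g. 'PERSON' or 'ORGANIZATION'
    `candidate_pool` maps entity types to lists of replacement name strings.
    """
    candidates = candidate_pool.get(instance["subj_type"], [])
    if not candidates:
        return instance  # no same-type candidate available; keep the original
    new_name = rng.choice(candidates).split()
    start, end = instance["subj_span"]
    replaced = dict(instance)
    replaced["tokens"] = instance["tokens"][:start] + new_name + instance["tokens"][end:]
    replaced["subj_span"] = (start, start + len(new_name))
    return replaced

# Toy usage with a TACRED-style instance.
instance = {
    "tokens": ["Bill", "Gates", "founded", "Microsoft", "."],
    "subj_span": (0, 2),
    "subj_type": "PERSON",
}
pool = {"PERSON": ["Ada Lovelace", "Grace Hopper"]}
print(" ".join(replace_subject(instance, pool)["tokens"]))
```

Because the replacement is constrained to the same entity type, the ground-truth relation label stays valid while any memorized name-to-relation shortcut is broken, which is the effect EntRED is designed to measure.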