
iRe2f: Rethinking Effective Refinement in Language Structure Prediction via Efficient Iterative Retrospecting and Reasoning

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023 (2023)

Abstract
Refinement plays a critical role in language structure prediction, a process that deals with complex situations such as structural edge interdependencies. Since language structure prediction is usually modeled as graph parsing, typical refinement methods take an initial parsing graph as input and refine it using the language input and other relevant information. Intuitively, a refinement component, i.e., a refiner, should be lightweight and efficient, as it is only responsible for correcting faults in the initial graph. However, current refiners add a significant burden to the parsing process due to their reliance on time-consuming encoding-decoding procedures over the language input and graph. To make the refiner more practical for real-world applications, this paper proposes a lightweight but effective iterative refinement framework, \textsc{iRe$^2$f}, based on iterative retrospecting and reasoning that avoids re-encoding the graph. \textsc{iRe$^2$f} iteratively refines the parsing graph based on the interaction between the graph and the sequence, and efficiently learns a shortcut to update the sequence and graph representations in each iteration. The shortcut is calculated from the graph representation in the latest iteration. \textsc{iRe$^2$f} reduces the number of refinement parameters by $90\%$ compared to the previous smallest refiner. Experiments on a variety of language structure prediction tasks show that \textsc{iRe$^2$f} performs comparably to or better than current state-of-the-art refiners, with a significant increase in efficiency.
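The refinement loop described in the abstract (per-iteration shortcut updates to the sequence and graph representations, with no re-encoding of the graph) might be sketched as follows. This is a minimal illustrative sketch only: the function names, the linear-map shortcut, the tanh update, and the tensor shapes are assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of shortcut-based iterative refinement.
# Instead of re-encoding the graph each round, a cheap "shortcut"
# is computed from the latest graph representation and used to
# update both the sequence and graph representations.
import numpy as np

def shortcut(graph_repr, W_g):
    # Shortcut: an inexpensive linear map of the latest graph representation
    # (illustrative choice; the paper's shortcut may differ).
    return graph_repr @ W_g

def refine(seq_repr, graph_repr, W_s, W_g, n_iters=3):
    """Iteratively refine graph_repr via interaction with seq_repr."""
    for _ in range(n_iters):
        s = shortcut(graph_repr, W_g)          # derived from latest graph repr
        seq_repr = seq_repr + s                # update sequence representation
        graph_repr = np.tanh(seq_repr @ W_s)   # update graph representation
    return graph_repr

rng = np.random.default_rng(0)
n, d = 5, 8                                    # tokens, hidden size (toy values)
seq = rng.standard_normal((n, d))
graph = rng.standard_normal((n, d))
W_s = rng.standard_normal((d, d)) * 0.1
W_g = rng.standard_normal((d, d)) * 0.1
out = refine(seq, graph, W_s, W_g)
print(out.shape)  # (5, 8)
```

Because the shortcut reuses the latest graph representation instead of running a full encoder over the graph, each extra iteration costs only a few matrix multiplications, which is consistent with the efficiency claim in the abstract.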