Probing for Referential Information in Language Models

58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)

Abstract
Language models keep track of complex linguistic information about the preceding context, including, e.g., syntactic relations in a sentence. We investigate whether they also capture information beneficial for resolving pronominal anaphora in English. We analyze two state-of-the-art models with LSTM and Transformer architectures, respectively, using probe tasks on a coreference-annotated corpus. Our hypothesis is that language models will capture grammatical properties of anaphora (such as agreement between a pronoun and its antecedent), but not semantico-referential information (the fact that pronoun and antecedent refer to the same entity). Instead, we find evidence that models capture referential aspects to some extent, though they are still much better at grammar. The Transformer outperforms the LSTM in all analyses, and in particular exhibits better semantico-referential abilities.
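
The abstract does not spell out the probing setup, so the following is only a minimal illustrative sketch of a linear probe over LM hidden states, not the paper's actual method: the pairing scheme, dimensions, and random placeholder features are assumptions, and in practice the vectors would come from the LSTM or Transformer at pronoun and antecedent positions, with labels from the coreference-annotated corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: in a real probe these would be hidden states extracted
# from the language model at pronoun / candidate-antecedent positions,
# paired with coreference labels from the annotated corpus.
rng = np.random.default_rng(0)
n_pairs, hidden_dim = 1000, 650
X = rng.normal(size=(n_pairs, 2 * hidden_dim))  # [pronoun_state; candidate_state]
y = rng.integers(0, 2, size=n_pairs)            # 1 = coreferent, 0 = not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Linear probe: if a simple classifier can recover coreference from the
# representations alone, the LM plausibly encodes that information.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

With the random placeholder features above, accuracy hovers around chance; meaningful features extracted from a trained LM would be the signal the probe is meant to detect.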