Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?

EACL 2021

Abstract
Much recent attention has been devoted to analyzing the sentence representations learned by neural encoders through the paradigm of 'probing' tasks, often motivated by an interest in understanding the information a model uses to make its decisions. However, to what extent is the information encoded in a sentence representation actually used for the task on which the encoder is trained? In this work, we examine the probing paradigm through a case study in Natural Language Inference, showing that models learn to encode linguistic properties even when they are not needed for the task. We find that pre-trained word embeddings, rather than the training task itself, play a considerable role in encoding these properties, highlighting the importance of careful controls when designing probing experiments. Through a set of controlled synthetic tasks, we demonstrate that models can encode these properties considerably above chance level even when the properties are distributed as random noise, calling into question the interpretation of absolute claims based on probing tasks.
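To make the kind of control the abstract advocates concrete, below is a minimal sketch (not the authors' code) of a linear probing setup with a shuffled-label control. The embeddings, property labels, and dimensions are all hypothetical stand-ins; in a real experiment the representations would come from a frozen pre-trained encoder and the labels from an annotated linguistic property.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in "sentence embeddings"; in practice these would be the frozen
# outputs of a trained sentence encoder.
n, dim = 2000, 128
X = rng.normal(size=(n, dim))

# A hypothetical binary linguistic property that happens to be linearly
# encoded in the representations (sign of a fixed random projection).
w = rng.normal(size=dim)
y = (X @ w > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Probe: a linear classifier trained on the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy:   {probe.score(X_te, y_te):.3f}")  # well above chance

# Control: shuffle the labels to sever any link between representation and
# property. Held-out accuracy should drop to chance (~0.5); accuracy well
# above chance here would flag a confound in the probing setup itself.
y_shuf = rng.permutation(y)
Xs_tr, Xs_te, ys_tr, ys_te = train_test_split(
    X, y_shuf, test_size=0.3, random_state=0
)
control = LogisticRegression(max_iter=1000).fit(Xs_tr, ys_tr)
print(f"control accuracy: {control.score(Xs_te, ys_te):.3f}")  # near chance
```

The gap between the probe and control accuracies, rather than the probe's absolute accuracy, is what supports a claim that the property is genuinely encoded; this is one simple instance of the careful controls the paper argues for.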