Towards Better Context-aware Lexical Semantics: Adjusting Contextualized Representations through Static Anchors

Conference on Empirical Methods in Natural Language Processing (2020)

Abstract
One of the most powerful features of contextualized models is their dynamic embeddings for words in context, leading to state-of-the-art representations for context-aware lexical semantics. In this paper, we present a post-processing technique that enhances these representations by learning a transformation through static anchors. Our method requires only another pre-trained model and no labeled data. We show consistent improvement on a range of benchmark tasks that test contextual variations of meaning, both across different usages of a word and across different words as they are used in context. We demonstrate that while the original contextual representations can be improved by another embedding space from either contextualized or static models, static embeddings, which have lower computational requirements, provide the largest gains.
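The abstract does not specify the form of the learned transformation. A common unsupervised way to align one embedding space to another through paired anchor vectors is orthogonal Procrustes alignment; the sketch below illustrates that general idea only, and may differ from the paper's actual method. The function name `fit_procrustes` and the toy data are hypothetical.

```python
import numpy as np

def fit_procrustes(X, Y):
    """Fit an orthogonal map W minimizing ||X W - Y||_F.

    X: (n, d) contextual embeddings for n anchor words.
    Y: (n, d) static anchor vectors for the same words.
    """
    # Closed-form orthogonal Procrustes solution via SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy example (placeholder data, not the paper's setup):
# align 5 "contextual" vectors to 5 "static" anchor vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))   # contextual embeddings (assumed)
Y = rng.normal(size=(5, 8))   # static anchor embeddings (assumed)

W = fit_procrustes(X, Y)
adjusted = X @ W              # post-processed contextual representations
print(adjusted.shape)         # (5, 8)
```

Because the map is learned once from anchor pairs and then applied to any contextual embedding, this kind of post-processing needs no labeled data, consistent with the setup the abstract describes.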