KERMIT: Complementing Transformer Architectures with Encoders of Explicit Syntactic Interpretations
Conference on Empirical Methods in Natural Language Processing (2019)
Abstract
Syntactic parsers have dominated natural language understanding for decades. Yet, their syntactic interpretations are losing centrality in downstream tasks due to the success of large-scale textual representation learners. In this paper, we propose KERMIT (Kernel-inspired Encoder with Recursive Mechanism for Interpretable Trees) to embed symbolic syntactic parse trees into artificial neural networks and to visualize how syntax is used in inference. We experimented with KERMIT paired with two state-of-the-art transformer-based universal sentence encoders (BERT and XLNet), and we showed that KERMIT can indeed boost their performance by effectively embedding human-coded universal syntactic representations in neural networks.
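The abstract describes combining a kernel-inspired encoder of parse trees with a transformer sentence encoder. Below is a minimal, hypothetical PyTorch sketch of that general idea, assuming the parse tree has already been reduced to a sparse subtree-count vector and the BERT/XLNet sentence embedding is precomputed; all names, dimensions, and the fixed-random-projection choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KermitSketch(nn.Module):
    """Hypothetical sketch of a KERMIT-style module: a fixed random
    projection embeds symbolic subtree features into a dense vector,
    an MLP maps it to a syntactic representation, and the result is
    concatenated with a transformer sentence embedding for
    classification. Names and sizes are illustrative assumptions."""

    def __init__(self, n_subtree_features=4096, tree_dim=300,
                 transformer_dim=768, n_classes=2):
        super().__init__()
        # Fixed (untrained) random projection, echoing the
        # kernel-inspired encoding of tree structures.
        projection = torch.randn(n_subtree_features, tree_dim)
        self.register_buffer("projection", projection / tree_dim ** 0.5)
        self.tree_mlp = nn.Sequential(
            nn.Linear(tree_dim, tree_dim), nn.ReLU())
        self.classifier = nn.Linear(tree_dim + transformer_dim, n_classes)

    def forward(self, subtree_counts, transformer_cls):
        # subtree_counts: (batch, n_subtree_features) sparse subtree
        #   indicator/count vector derived from a syntactic parse tree.
        # transformer_cls: (batch, transformer_dim) sentence embedding
        #   from BERT or XLNet (assumed precomputed here).
        tree_emb = self.tree_mlp(subtree_counts @ self.projection)
        joint = torch.cat([tree_emb, transformer_cls], dim=-1)
        return self.classifier(joint)

# Usage with dummy inputs
model = KermitSketch()
counts = torch.zeros(2, 4096)
counts[0, [3, 17, 42]] = 1.0   # toy active-subtree indices, sentence 1
cls = torch.randn(2, 768)      # stand-in for BERT [CLS] vectors
logits = model(counts, cls)
print(logits.shape)            # torch.Size([2, 2])
```

In this sketch only the small MLP and the classifier are trained, while the projection that carries the symbolic parse-tree features stays fixed; that separation is one plausible way to keep the syntactic channel distinct from the learned transformer representation, in the spirit of the interpretability claim in the abstract.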