Specializing Word Embeddings (for Parsing) by Information Bottleneck
EMNLP/IJCNLP (1), pp. 2744-2754, 2019.
Pre-trained word embeddings like ELMo and BERT contain rich syntactic and semantic information, resulting in state-of-the-art performance on various tasks. We propose a very fast variational information bottleneck (VIB) method to nonlinearly compress these embeddings, keeping only the information that helps a discriminative parser.
Best Paper Award, EMNLP-IJCNLP 2019