Specializing Word Embeddings (for Parsing) by Information Bottleneck

In Proceedings of EMNLP-IJCNLP, pp. 2744–2754, 2019.

Abstract:

Pre-trained word embeddings like ELMo and BERT contain rich syntactic and semantic information, resulting in state-of-the-art performance on various tasks. We propose a very fast variational information bottleneck (VIB) method to nonlinearly compress these embeddings, keeping only the information that helps a discriminative parser. We c…
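The compression step the abstract describes can be sketched as follows — a minimal, hypothetical NumPy illustration of a Gaussian variational bottleneck encoder that maps an embedding to a smaller stochastic code and pays a KL penalty for the information it keeps. All names, weights, and dimensions here are illustrative assumptions, not the paper's actual implementation (which trains this jointly with a parser on ELMo layers):

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_compress(x, W_mu, W_logvar):
    """Stochastic encoder q(z|x): compress embedding x into a
    lower-dimensional Gaussian code z via the reparameterization trick."""
    mu = x @ W_mu
    logvar = x @ W_logvar
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    # KL( q(z|x) || N(0, I) ): an upper bound on the mutual
    # information I(X; Z), i.e. the "compression cost" of the code.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return z, kl

# Hypothetical sizes; real contextual embeddings are far larger.
d_in, d_z = 8, 2
x = rng.standard_normal(d_in)          # stand-in for one token's embedding
W_mu = 0.1 * rng.standard_normal((d_in, d_z))
W_logvar = 0.1 * rng.standard_normal((d_in, d_z))

z, kl = vib_compress(x, W_mu, W_logvar)
```

In training, the KL term would be weighted against a task loss (here, a parser's log-likelihood), so the encoder learns to discard whatever information the parser does not need.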


Best Paper of EMNLP-IJCNLP, 2019.