Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies

International Conference on Learning Representations (ICLR), 2021.

Summary:
End-to-end learning of sparse embeddings for large vocabularies with a Bayesian nonparametric interpretation that results in up to 40x smaller embedding tables.

Abstract:

Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications, including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g. natural groupings and similarities) and embed the objects independently […]
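The abstract's premise — that embedding every object independently wastes parameters when objects share structure — can be made concrete with a minimal sketch of an anchor-based sparse factorization. This is an illustration under assumed shapes (`V`, `k`, `d`, and 3 nonzeros per row are all hypothetical choices), not the paper's implementation: the full `V x d` embedding table is replaced by `k` dense anchor embeddings plus a sparse `V x k` mixing matrix, so each object's embedding is a sparse combination of a few anchors.

```python
import numpy as np

# Hypothetical sizes: vocabulary V, number of anchors k << V, embedding dim d.
V, k, d = 10_000, 50, 16
NNZ = 3  # assumed nonzeros (anchors used) per object

rng = np.random.default_rng(0)
A = rng.normal(size=(k, d))  # dense anchor embeddings (k x d)

# Sparse transformation T (V x k): each object mixes a few anchors.
T = np.zeros((V, k))
for v in range(V):
    idx = rng.choice(k, size=NNZ, replace=False)
    T[v, idx] = rng.random(NNZ)

E = T @ A  # induced embedding table (V x d)

# Storage comparison: nonzeros of T (value + index per entry) plus the
# anchor table, versus a dense V x d embedding table.
sparse_params = NNZ * V * 2 + k * d
dense_params = V * d
print(f"compression ratio ~ {dense_params / sparse_params:.2f}x")
```

With these toy sizes the sparse parameterization is already several times smaller than the dense table; the gap widens as `V` grows while `k` and the per-object sparsity stay fixed, which is the regime where the reported table-size reductions would arise.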
