Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
International Conference on Learning Representations (ICLR), 2021.
End-to-end learning of sparse embeddings for large vocabularies, with a Bayesian nonparametric interpretation, yielding embedding tables up to 40x smaller.
Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g., natural groupings and similarities) and embed the objects independently...
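For intuition, the core idea named in the title can be sketched as follows: instead of giving every vocabulary item its own dense vector, each item is represented as a sparse combination of a small set of shared anchor embeddings, so the full |V| x d table never has to be stored densely. The sketch below is illustrative only and is not the paper's released code; the class name, hyperparameters, and the ReLU-plus-L1 sparsity mechanism are assumptions made for this example.

```python
# Minimal sketch of an anchor-and-transform style embedding layer.
# Names, sizes, and the sparsity mechanism are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorTransformEmbedding(nn.Module):
    def __init__(self, vocab_size: int, num_anchors: int, dim: int):
        super().__init__()
        # Small dense table of anchor embeddings (num_anchors << vocab_size).
        self.anchors = nn.Parameter(torch.randn(num_anchors, dim) * 0.02)
        # Per-token coefficients over the anchors; the L1 penalty below pushes
        # most entries toward zero so the table can be stored sparsely.
        self.transform = nn.Parameter(torch.zeros(vocab_size, num_anchors))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Non-negative coefficients for the requested tokens.
        coeffs = F.relu(self.transform[token_ids])   # (..., num_anchors)
        # Each embedding is a sparse combination of the anchor vectors.
        return coeffs @ self.anchors                  # (..., dim)

    def l1_penalty(self) -> torch.Tensor:
        # Sparsity-inducing regularizer to add to the task loss.
        return F.relu(self.transform).sum()

# Usage sketch: embed a batch of token ids and add the sparsity penalty
# to whatever task loss the surrounding model computes.
emb = AnchorTransformEmbedding(vocab_size=100_000, num_anchors=500, dim=128)
ids = torch.randint(0, 100_000, (32, 16))
vectors = emb(ids)                                   # (32, 16, 128)
loss = vectors.pow(2).mean() + 1e-4 * emb.l1_penalty()
loss.backward()
```

The memory saving comes from storing only the small anchor table plus the nonzero coefficients after training, rather than a dense vector per vocabulary item.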