RNN Architecture Learning with Sparse Regularization
EMNLP/IJCNLP (1), pp. 1179-1184, 2019.
Abstract:
Neural models for NLP typically use large numbers of parameters to reach state-of-the-art performance, which can lead to excessive memory usage and increased runtime. We present a structure learning method for learning sparse, parameter-efficient NLP models. Our method applies group lasso to rational RNNs (Peng et al., 2018), a family of models closely related to weighted finite-state automata (WFSAs).
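The core ingredient is a group lasso penalty, i.e. a sum of L2 norms over disjoint groups of weights, which pushes entire groups to exactly zero so the corresponding units can be pruned. Below is a minimal, hedged PyTorch sketch of such a penalty applied to a standard RNN rather than the rational RNNs of the paper; the row-wise grouping, the regularization strength lam, and the placeholder loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def group_lasso(weight: torch.Tensor) -> torch.Tensor:
    """Group lasso penalty: sum of L2 norms, one group per row of the matrix.

    Driving a whole row to zero allows the corresponding hidden unit (or, in
    the paper's setting, a WFSA state) to be removed after training.
    """
    return weight.norm(p=2, dim=1).sum()

# Hypothetical usage with a vanilla RNN standing in for a rational RNN.
rnn = nn.RNN(input_size=300, hidden_size=64, batch_first=True)
x = torch.randn(8, 20, 300)           # (batch, time, embedding) toy input
out, _ = rnn(x)
task_loss = out.mean()                # placeholder for the real task loss
lam = 1e-3                            # assumed regularization strength
loss = task_loss + lam * group_lasso(rnn.weight_hh_l0)
loss.backward()
```

In this sketch the penalty is added directly to the training objective; after training, rows whose norm has collapsed to (near) zero can be dropped, shrinking the model.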