Entire Information Attentive GRU for Text Representation
ICTIR, pp. 163–166, 2018.
Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have been widely used for sequence representation. However, RNNs tend to lose variational information across time steps and struggle to capture long-term dependencies. In this paper, we propose a new neural network structure for extracting a comprehensive sequence embedding by ...