Entire Information Attentive GRU for Text Representation

ICTIR, pp. 163-166, 2018.

Abstract:

Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have been widely utilized in sequence representation. However, RNNs neglect variational information and long-term dependencies. In this paper, we propose a new neural network structure for extracting a comprehensive sequence embedding by ...
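
The full abstract is truncated, but the title and opening sentences suggest a GRU encoder whose sequence embedding attends over the hidden states of the entire input rather than relying only on the final state. Below is a minimal, hypothetical PyTorch sketch of that general idea; the layer sizes, the additive-attention scoring function, and the class name AttentiveGRUEncoder are illustrative assumptions, not the authors' architecture.

```python
# Sketch only: a GRU encoder that forms a sequence embedding by attending
# over all hidden states, so every timestep contributes to the representation.
# Dimensions, scoring function, and names are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGRUEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Additive attention: score each hidden state against a learned context vector.
        self.attn_proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Parameter(torch.randn(hidden_dim))

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        embedded = self.embedding(token_ids)              # (batch, seq_len, embed_dim)
        hidden_states, _ = self.gru(embedded)             # (batch, seq_len, hidden_dim)
        scores = torch.tanh(self.attn_proj(hidden_states)) @ self.context  # (batch, seq_len)
        weights = F.softmax(scores, dim=1).unsqueeze(-1)                    # (batch, seq_len, 1)
        # Weighted sum over all timesteps yields the sequence embedding.
        return (weights * hidden_states).sum(dim=1)       # (batch, hidden_dim)

if __name__ == "__main__":
    encoder = AttentiveGRUEncoder(vocab_size=10000)
    batch = torch.randint(0, 10000, (4, 20))  # 4 sequences of 20 tokens each
    print(encoder(batch).shape)               # torch.Size([4, 256])
```

Attending over all hidden states lets early timesteps contribute directly to the final embedding, which is the usual remedy for the long-term-dependency limitation the abstract points out.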
