Spoken language understanding using long short-term memory neural networks

SLT(2014)

Citations 390 | Views 165
Abstract
Neural network based approaches have recently produced record-setting performance in natural language understanding tasks such as word labeling. In the word labeling task, a tagger assigns a label to each word in an input sequence. In particular, simple recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been shown to significantly outperform the previous state of the art, conditional random fields (CRFs). This paper investigates long short-term memory (LSTM) neural networks, which contain input, output, and forget gates and are more advanced than simple RNNs, for the word labeling task. To explicitly model dependence among output labels, we propose a regression model on top of the un-normalized LSTM scores. We also propose applying deep LSTMs to the task. We investigated the relative importance of each gate in the LSTM by fixing the other gates to a constant and learning only the gate of interest. Experiments on the ATIS dataset validate the effectiveness of the proposed models.
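The gated cell structure the abstract refers to can be sketched as follows. This is an illustrative NumPy implementation of a standard LSTM step with input, forget, and output gates, not the authors' code; the stacked parameter layout (blocks `[i, f, o, g]`) is an assumption made for compactness.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step with input (i), forget (f), and output (o) gates.

    W (4h, d), U (4h, h), and b (4h,) hold the parameters for all three
    gates plus the cell candidate, stacked as blocks [i, f, o, g] of size h.
    This block layout is an illustrative convention, not from the paper.
    """
    h = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # pre-activations for all gates, shape (4h,)
    i = sigmoid(z[0:h])             # input gate: how much new content enters
    f = sigmoid(z[h:2*h])           # forget gate: how much old state is kept
    o = sigmoid(z[2*h:3*h])         # output gate: how much state is exposed
    g = np.tanh(z[3*h:4*h])         # cell candidate
    c = f * c_prev + i * g          # new cell state
    h_new = o * np.tanh(c)          # new hidden state (fed to the tagger)
    return h_new, c
```

For word labeling, one would run this step over the word sequence and project each `h_new` to per-label scores. The gate-ablation experiment the abstract mentions corresponds to replacing, say, `f` with a constant vector and training only the remaining gates.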
Keywords
recurrent neural networks, speech processing, output-label dependence, RNN, natural language understanding tasks, word processing, LSTM neural networks, long short-term memory, regression model, convolution, LSTM un-normalized scores, CNN, neural network based approach, word labeling task, long short-term memory neural networks, language understanding, CRF, natural language processing, recurrent neural nets, convolutional neural networks, conditional random fields, spoken language understanding, ATIS dataset