Spoken Language Understanding: Single Model

Spoken language understanding (SLU) is a core component of task-oriented dialogue systems. Its goal is to extract a semantic frame representation of the user's utterance, which is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two subtasks, intent detection and slot filling; this collection gathers papers that model each of these two tasks separately.
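To make the target representation concrete, here is what a semantic frame might look like for a flight-booking utterance (a hypothetical example; the intent and slot names are invented for illustration):

```python
# Hypothetical semantic frame for one utterance.
utterance = "book a flight from Boston to Denver tomorrow"
frame = {
    "intent": "book_flight",        # intent detection output
    "slots": {                      # slot filling output
        "from_city": "Boston",
        "to_city": "Denver",
        "depart_date": "tomorrow",
    },
}
```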
ACL, pp.1381-1393, (2020)
To compute transition scores in the few-shot setting, we propose the collapsed dependency transfer mechanism, which transfers prior knowledge of label dependencies across domains with different label sets.
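As a rough illustration of the collapsing idea, the sketch below maps concrete label bigrams such as B-city to I-city onto abstract transitions such as B to I-same, so that transition scores learned in source domains stay meaningful in a target domain with an unseen label set. This is my own simplification, not the authors' code; the abstract transition inventory here is an assumption.

```python
def collapse_transition(prev: str, curr: str) -> tuple:
    """Map a concrete label bigram, e.g. ('B-city', 'I-city'),
    to an abstract transition key such as ('B', 'I-same')."""
    p_tag, _, p_slot = prev.partition("-")
    c_tag, _, c_slot = curr.partition("-")
    if c_tag == "O":
        return (p_tag, "O")
    # Distinguish staying in the same slot type from switching to another.
    rel = "same" if p_slot and p_slot == c_slot else "other"
    return (p_tag, f"{c_tag}-{rel}")

# Transition scores are then learned per abstract key and shared across
# domains: a score for ('B', 'I-same') covers both B-city -> I-city and
# B-date -> I-date, regardless of which slot labels the domain uses.
```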
Ting-En Lin,Hua Xu
ACL, (2019)
We propose a two-stage method for unknown intent detection.
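Read loosely, the two stages are: learn discriminative utterance features on the known intents (e.g. with a margin loss), then flag low-density points as unknown at test time. Below is a sketch of the second stage using scikit-learn's LocalOutlierFactor; the margin-trained encoder and the closed-set classifier are assumed to exist elsewhere.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Stage 1 (assumed done elsewhere): train an encoder on known intents with
# a margin loss so that classes form tight, well-separated feature clusters.

def fit_unknown_detector(train_features: np.ndarray) -> LocalOutlierFactor:
    # Stage 2: fit a density-based novelty detector on known-intent features.
    lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
    lof.fit(train_features)
    return lof

def classify(lof, closed_set_classifier, features: np.ndarray) -> np.ndarray:
    # Assumes integer intent ids; -1 marks "unknown intent".
    preds = closed_set_classifier.predict(features)
    preds[lof.predict(features) == -1] = -1
    return preds
```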
ICASSP, (2019): 7305-7309
We propose a novel hierarchical decoding model for spoken language understanding from unaligned data.
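The hierarchy can be pictured as three chained decoders: acts first, slots conditioned on each act, then a value for each act-slot pair. A schematic, runnable sketch with stub decoders follows; the function names and example outputs are placeholders, not the authors' model.

```python
# Stub decoders standing in for the model's three conditioned decoders.
def decode_acts(enc):             return ["inform"]
def decode_slots(enc, act):       return ["food"]
def decode_value(enc, act, slot): return "thai"

def hierarchical_decode(enc):
    """Decode a semantic frame top-down: act -> slot -> value."""
    return [(act, slot, decode_value(enc, act, slot))
            for act in decode_acts(enc)
            for slot in decode_slots(enc, act)]

print(hierarchical_decode(None))  # [('inform', 'food', 'thai')]
```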
Kumar Shridhar, Ayushman Dash, Amit Sahu, Gustav Grund Pihlgren, Pedro Alonso, Vinaychandran Pondenkandath, Gyorgy Kovacs, Foteini Simistira,Marcus Liwicki
IJCNN, pp.1-6, (2019)
The performance is compared against results from various Natural Language Understanding (NLU) services and open-source NLU platforms on the market: Botfuel, Dialogflow, Luis, Watson, Rasa, Recast, and Snips.
Zijian Zhao, Su Zhu,Kai Yu
EMNLP-IJCNLP, pp.3635-3641, (2019)
Experimental results show that our method achieves significant improvements on the DSTC 2&3 datasets, and it is very effective for spoken language understanding domain adaptation with limited data.
EMNLP, (2018): 3090-3099
The intent detection results on two datasets are reported in Table 1, where the proposed capsule-based model INTENTCAPSNET performs consistently better than bag-of-words classifiers using TF-IDF, as well as various neural network models designed for text classification.
COLING, (2018): 1234-1245
We study the problem of data augmentation for language understanding and propose a novel data-driven framework that models relations between utterances of the same semantic frame in the training data.
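One concrete reading of "relations between utterances of the same semantic frame": utterances sharing a frame are surface variants of one another, so they can be paired as source/target examples for a sequence-to-sequence paraphrasing model. Below is a sketch of that pairing step, assuming a dataset of (utterance, frame) tuples with hashable frames; the data layout is my assumption.

```python
from collections import defaultdict
from itertools import permutations

def build_augmentation_pairs(dataset):
    """Group utterances by semantic frame, then pair variants within a
    group as (source, target) examples for a seq2seq generator."""
    by_frame = defaultdict(list)
    for utterance, frame in dataset:  # frame must be hashable, e.g. a tuple
        by_frame[frame].append(utterance)
    return [pair for group in by_frame.values() if len(group) > 1
            for pair in permutations(group, 2)]

data = [("show flights to denver", ("list_flights", "to_city")),
        ("flights to denver please", ("list_flights", "to_city"))]
pairs = build_augmentation_pairs(data)  # both orderings of the paraphrases
```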
ACL, pp.426-431, (2018)
Our model achieves state-of-the-art performance for both slot value prediction and spoken language understanding on the benchmark, even with less training data.
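The heart of a joint pointer-and-attention decoder is mixing a generate-from-vocabulary distribution with a copy-from-input distribution, so out-of-vocabulary slot values can be copied verbatim from the utterance. Here is a minimal PyTorch sketch of that mixing step, written as a generic pointer-generator update rather than the authors' exact formulation:

```python
import torch

def mix_pointer_and_vocab(p_vocab, attn_weights, src_token_ids, p_gen):
    """Final distribution = p_gen * generate-from-vocab
                          + (1 - p_gen) * copy-from-input.
    p_vocab: (batch, vocab), attn_weights/src_token_ids: (batch, src_len),
    p_gen: scalar or (batch, 1) gate in [0, 1]."""
    p_copy = torch.zeros_like(p_vocab)
    # Scatter attention mass onto the vocabulary ids of the source tokens
    # (src_token_ids must be int64).
    p_copy.scatter_add_(-1, src_token_ids, attn_weights)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```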
EMNLP, pp.633-639, (2018)
We have proposed an adversarial training method for the multi-task and multi-lingual joint modeling needed to enhance utterance intent classification.
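Adversarial joint training of this kind is commonly realized with a gradient reversal layer: a language/task discriminator trains normally, while reversed gradients push the shared encoder toward invariant features. A generic PyTorch sketch of that building block follows; whether the paper uses exactly this construction is an assumption.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    # Shared encoder features pass through here before the discriminator,
    # so the encoder learns to maximize the discriminator's loss.
    return GradReverse.apply(x, lam)
```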
AAAI, (2017): 3365-3371
We propose an alternative approach by investigating the use of deep neural networks for sequence chunking, and propose three neural models so that each chunk can be treated as a complete unit for labeling.
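Treating each chunk as a complete unit starts from recovering chunk spans from token-level IOB tags. The helper below makes that step concrete; it is an illustrative utility, not one of the paper's three models.

```python
def iob_to_chunks(tags):
    """Convert IOB tags to (start, end) spans, each a whole labeling unit."""
    chunks, start = [], None
    for i, t in enumerate(tags):
        if t.startswith("B-") or t == "O":
            if start is not None:       # close the chunk in progress
                chunks.append((start, i))
            start = i if t.startswith("B-") else None
    if start is not None:               # close a chunk ending at the sentence
        chunks.append((start, len(tags)))
    return chunks

print(iob_to_chunks(["B-city", "I-city", "O", "B-date"]))  # [(0, 2), (3, 4)]
```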
ICASSP, (2017)
To investigate the robustness of the bidirectional long short-term memory (BLSTM) architectures with the attention or focus mechanism, we conduct additional experiments on the Chinese navigation dataset described in the experimental setup.
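For sequence labeling, the input and output are aligned one-to-one, so the focus mechanism replaces soft attention over all encoder states with a hard focus on the aligned state. A schematic contrast of the two context computations, with tensor shapes and names of my own choosing:

```python
import torch

def soft_attention_context(dec_state, enc_states):
    # Standard attention: weighted sum over all encoder states.
    # dec_state: (hidden,), enc_states: (src_len, hidden)
    scores = enc_states @ dec_state           # (src_len,)
    weights = torch.softmax(scores, dim=0)
    return weights @ enc_states               # (hidden,)

def focus_context(step: int, enc_states):
    # Focus mechanism: at decoding step i, use the aligned state h_i directly.
    return enc_states[step]
```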
EMNLP, pp.2077-2083, (2016)
We proposed an encoder-labeler long short-term memory (LSTM) that can conduct slot filling conditioned on the encoded sentence-level information.
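The idea is to encode the whole sentence with one LSTM and initialize a second, label-emitting LSTM with the encoder's final state, so every tagging decision is conditioned on sentence-level information. A compact PyTorch sketch; the layer sizes are assumptions, and the paper's input reversal for the encoder is omitted for brevity.

```python
import torch.nn as nn

class EncoderLabelerLSTM(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.labeler = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_tags)

    def forward(self, token_ids):
        x = self.emb(token_ids)
        # Encode the sentence; the final (h, c) state summarizes it.
        _, sent_state = self.encoder(x)
        # Label tokens with the labeler initialized from the sentence state.
        h, _ = self.labeler(x, sent_state)
        return self.out(h)  # (batch, seq_len, num_tags)
```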
Bing Liu,Ian Lane
SIGDIAL Conference, (2016): 22-30
We describe a recurrent neural network model that jointly performs intent detection, slot filling, and language modeling.
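Joint modeling of the three tasks amounts to a shared recurrent encoder with three heads: per-token slot tags, per-token next-word prediction, and an utterance-level intent. A schematic PyTorch sketch; the head layout and sizes are my assumptions.

```python
import torch.nn as nn

class JointSLULM(nn.Module):
    def __init__(self, vocab_size, num_slots, num_intents, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.slot_head = nn.Linear(hidden, num_slots)      # per-token slot tags
        self.lm_head = nn.Linear(hidden, vocab_size)       # next-word prediction
        self.intent_head = nn.Linear(hidden, num_intents)  # utterance intent

    def forward(self, token_ids):
        h, _ = self.rnn(self.emb(token_ids))
        return (self.slot_head(h),
                self.lm_head(h),
                self.intent_head(h[:, -1]))  # intent read off the last state
```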