Spoken Language Understanding - Single Model

Spoken Language Understanding (SLU), a core component of task-oriented dialogue systems, aims to extract a semantic frame representation of the user's utterance; this information is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two tasks, intent detection and slot filling, and this collection includes papers that model each of the two tasks separately.
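As a concrete illustration of what a semantic frame looks like (a made-up example, not drawn from any of the papers below), intent detection produces one utterance-level label while slot filling assigns an IOB tag to every token:

```python
# Hypothetical semantic frame for the utterance
# "book a flight from Boston to Denver" (illustrative only).
semantic_frame = {
    "utterance": "book a flight from Boston to Denver",
    "intent": "BookFlight",          # intent detection: one label per utterance
    "slots": [                       # slot filling: one IOB tag per token
        ("book", "O"), ("a", "O"), ("flight", "O"), ("from", "O"),
        ("Boston", "B-fromloc"), ("to", "O"), ("Denver", "B-toloc"),
    ],
}
```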
To compute transition scores under the few-shot setting, we propose the collapsed dependency transfer mechanism, which transfers prior knowledge of label dependencies across domains with different label sets.
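A minimal sketch of the underlying idea, under our own assumptions rather than the paper's exact formulation: transition scores are learned over abstract tags {O, B, I} on source domains, then expanded to a target domain's concrete label set (the uniform splitting rule and all numbers here are illustrative):

```python
import numpy as np

# Collapsed transition scores learned on source domains over abstract tags.
ABSTRACT = ["O", "B", "I"]
collapsed = np.array([
    [0.8, 0.2, 0.0],   # O -> O, B, I   (illustrative numbers)
    [0.3, 0.1, 0.6],   # B -> O, B, I
    [0.4, 0.1, 0.5],   # I -> O, B, I
])

def expand_transitions(target_labels):
    """Expand the collapsed table to a target domain's full IOB label set.

    Assumption (ours): a concrete transition inherits the collapsed score of
    its abstract tag pair, split uniformly among concrete alternatives.
    """
    n = len(target_labels)
    scores = np.zeros((n, n))
    for i, src in enumerate(target_labels):
        for j, dst in enumerate(target_labels):
            a, b = src[0], dst[0]                     # abstract tag = IOB prefix
            n_concrete = sum(1 for l in target_labels if l[0] == b)
            scores[i, j] = collapsed[ABSTRACT.index(a), ABSTRACT.index(b)] / n_concrete
    return scores

# Target domain with a label set never seen during source training.
labels = ["O", "B-song", "I-song", "B-artist", "I-artist"]
trans = expand_transitions(labels)   # usable as CRF transition scores
```

The point of the expansion is that the learned dependencies (e.g. "I rarely follows O") remain valid no matter which concrete slot names the target domain uses.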
The performance is compared against the results of various natural language understanding services and open-source NLU platforms on the market: Botfuel, Dialogflow, Luis, Watson, Rasa, Recast, and Snips.
Experimental results show that our method achieves significant improvements on the DSTC 2&3 dataset and that it is very effective for spoken language understanding domain adaptation with limited data.
The intent detection results on two datasets are reported in Table 1, where the proposed capsule-based model INTENTCAPSNET consistently outperforms bag-of-words classifiers using TF-IDF as well as various neural network models designed for text classification.
We study the problem of data augmentation for language understanding and propose a novel data-driven framework that models relations between utterances of the same semantic frame in the training data.
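As a rough illustration of the framework's premise (the grouping key and the delexicalized templates below are our own simplification, not the paper's model), utterances that realize the same semantic frame can be collected and related for augmentation:

```python
from collections import defaultdict
import re

# Delexicalized training utterances; all examples are made up for illustration.
training_data = [
    ("BookFlight", "book a flight from <fromloc> to <toloc>"),
    ("BookFlight", "i need to fly from <fromloc> to <toloc>"),
    ("PlayMusic",  "play <song> by <artist>"),
]

# Group utterances by semantic frame = (intent, set of slot types).
frames = defaultdict(list)
for intent, template in training_data:
    slot_types = tuple(sorted(re.findall(r"<(\w+)>", template)))
    frames[(intent, slot_types)].append(template)

# Utterances under the same key realize the same frame; pairs of them can
# supervise a generation model that produces new surface forms.
for frame, templates in frames.items():
    print(frame, "->", templates)
```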
Our model achieves state-of-the-art performance on the benchmark for both slot value prediction and spoken language understanding, even with less training data.
We propose an alternative approach that investigates the use of deep neural networks for sequence chunking, and we present three neural models in which each chunk is treated as a complete unit for labeling.
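To make the "chunk as a complete unit" idea concrete, here is a small helper (our own illustration, not one of the paper's three models) that converts token-level IOB tags into chunk spans, each of which can then be labeled as a whole unit rather than token by token:

```python
def iob_to_chunks(tokens, tags):
    """Group IOB-tagged tokens into (chunk_text, start, end) spans."""
    chunks, start = [], None
    for i, tag in enumerate(tags + ["O"]):        # sentinel flushes the last chunk
        if start is not None and not tag.startswith("I"):
            chunks.append((" ".join(tokens[start:i]), start, i))
            start = None
        if tag.startswith("B"):
            start = i
    return chunks

tokens = ["flights", "from", "new", "york", "to", "denver"]
tags   = ["O", "O", "B", "I", "O", "B"]
print(iob_to_chunks(tokens, tags))
# [('new york', 2, 4), ('denver', 5, 6)]
```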
To investigate the robustness of the bidirectional long short-term memory (LSTM) architecture with the attention or focus mechanism, we conduct additional experiments on the Chinese navigation dataset described in the experimental setup.
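For reference, a minimal BiLSTM tagger with a simple additive self-attention layer (a generic PyTorch sketch under our own assumptions; the class name and hyperparameters are hypothetical, and this is not the paper's exact focus mechanism):

```python
import torch
import torch.nn as nn

class BiLSTMAttnTagger(nn.Module):
    """Bidirectional LSTM slot tagger with sentence-level attention context."""

    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scores each timestep
        self.out = nn.Linear(4 * hidden, num_tags)  # [token state; attended context]

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))           # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)      # attention over timesteps
        context = (weights * h).sum(dim=1, keepdim=True)  # (B, 1, 2H)
        context = context.expand(-1, h.size(1), -1)       # broadcast to every token
        return self.out(torch.cat([h, context], dim=-1))  # (B, T, num_tags)

model = BiLSTMAttnTagger(vocab_size=5000, num_tags=20)
logits = model(torch.randint(0, 5000, (2, 7)))            # batch of 2, length 7
```

Concatenating an attended sentence summary onto each token's hidden state is one common way to let per-token slot decisions see global context; the focus mechanism studied in the paper is a related but distinct design.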