Spoken Language Understanding - Joint Models

Spoken Language Understanding (SLU) is a core component of task-oriented dialogue systems. Its goal is to extract a semantic frame representation of the user's utterance, which is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two tasks: intent detection and slot filling. This collection gathers papers that model the two tasks jointly.
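As a concrete illustration of a semantic frame, here is a small, entirely hypothetical example (the utterance, intent label, and BIO slot tags are invented purely to show the structure):

    # Hypothetical SLU output: one intent label for the whole utterance
    # plus a BIO slot tag for every token.
    utterance = ["book", "a", "flight", "to", "boston", "tomorrow"]
    intent = "BookFlight"                               # intent detection: sentence-level classification
    slots = ["O", "O", "O", "O", "B-dest", "B-date"]    # slot filling: token-level sequence labeling
    assert len(slots) == len(utterance)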
Lizhi Cheng, Weijia Jia, Wenmian Yang
We propose a result-based portable framework for spoken language understanding that allows most existing single-turn SLU models to take full advantage of contextual information and handle multi-turn SLU tasks without changing their own structure.
Cited by 0
Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
We propose a co-interactive transformer that jointly models slot filling and intent detection, building a directional connection between the two tasks which makes it possible to fully exploit the mutual interaction knowledge.
Cited by 0
EMNLP 2020, pp.1932-1937, (2020)
We present a novel non-autoregressive joint model for slot filling and intent detection with a two-pass refinement mechanism, which significantly improves performance while substantially speeding up decoding.
Cited by 0
arXiv: Computation and Language, (2019): 5259-5267
To exploit the semantic hierarchy for effective modeling, we propose a capsule-based neural network model that accomplishes slot filling and intent detection via a dynamic routing-by-agreement schema (a generic routing sketch follows this entry).
Cited by 81
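Since routing-by-agreement is the heart of this entry, here is a minimal, generic sketch of the iterative routing loop in the style of Sabour et al.'s capsule networks; the shapes and the squash nonlinearity are standard assumptions for illustration, not the paper's word-slot-intent capsule hierarchy:

    import torch
    import torch.nn.functional as F

    def squash(s, dim=-1, eps=1e-8):
        # Shrinks short vectors toward zero and long vectors toward unit length.
        n2 = (s ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

    def routing_by_agreement(u_hat, num_iters=3):
        # u_hat: (batch, num_in, num_out, dim) prediction vectors from lower capsules.
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
        for _ in range(num_iters):
            c = F.softmax(b, dim=2)                        # coupling coefficients over output capsules
            s = (c.unsqueeze(-1) * u_hat).sum(dim=1)       # weighted vote of lower capsules
            v = squash(s)                                  # (batch, num_out, dim) output capsules
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)       # reward predictions that agree with v
        return v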
EMNLP/IJCNLP (1), pp.2078-2087, (2019)
We propose a joint model for spoken language understanding with Stack-Propagation to better incorporate intent information into slot filling (a minimal sketch follows this entry).
Cited by 40
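A minimal sketch of the stack-propagation idea under assumed module names and sizes (the real model uses a self-attentive encoder and token-level intent voting; this only shows the data flow): the intent decoder's predictions are embedded and fed, together with the encoder states, into the slot decoder:

    import torch
    import torch.nn as nn

    class StackPropagationSketch(nn.Module):
        def __init__(self, vocab=1000, dim=64, n_intents=10, n_slots=20):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.encoder = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
            self.intent_head = nn.Linear(2 * dim, n_intents)   # token-level intent logits
            self.intent_embed = nn.Embedding(n_intents, dim)   # embeds the predicted intent label
            self.slot_head = nn.Linear(2 * dim + dim, n_slots)

        def forward(self, tokens):
            h, _ = self.encoder(self.embed(tokens))            # (B, T, 2*dim)
            intent_logits = self.intent_head(h)
            intent_pred = intent_logits.argmax(-1)             # hard token-level decisions
            # Stack-propagation: the intent result becomes slot-decoder input.
            slot_in = torch.cat([h, self.intent_embed(intent_pred)], dim=-1)
            slot_logits = self.slot_head(slot_in)
            return intent_logits, slot_logits

    # argmax is non-differentiable, but the shared encoder still receives gradients
    # from both losses; an utterance-level intent can be read off by voting over
    # the token-level predictions.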
Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, Jinan Xu
EMNLP/IJCNLP (1), pp.1051-1060, (2019)
We propose a novel Collaborative Memory Network for jointly modeling slot filling and intent detection.
Cited by 4
North American Chapter of the Association for Computational Linguistics (NAACL), (2019)
This paper investigates an approach to multi-intent classification.
Cited by 3
Qian Chen, Zhu Zhuo, Wen Wang
arXiv: Computation and Language, (2019)
We propose a joint intent classification and slot filling model based on Bidirectional Encoder Representations from Transformers (BERT); a minimal sketch follows this entry.
Cited by 0
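A minimal sketch of the joint-BERT idea, assuming PyTorch and the Hugging Face transformers package (label counts and names are placeholders, not the authors' released code): the pooled [CLS] representation feeds an intent classifier and each token representation feeds a slot classifier:

    import torch.nn as nn
    from transformers import BertModel

    class JointBertSketch(nn.Module):
        def __init__(self, n_intents=10, n_slots=20):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.bert.config.hidden_size
            self.intent_head = nn.Linear(hidden, n_intents)
            self.slot_head = nn.Linear(hidden, n_slots)

        def forward(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            intent_logits = self.intent_head(out.pooler_output)    # [CLS] -> intent
            slot_logits = self.slot_head(out.last_hidden_state)    # per token -> slot
            return intent_logits, slot_logits

Training would typically minimize the sum of the two cross-entropy losses; aligning WordPiece sub-tokens back to word-level slot labels is omitted here.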
Zhichang Zhang, Zhenwen Zhang, Haoyuan Chen, Zhiman Zhang
IEEE Access, (2019): 168849-168858
We plan to incorporate supervised pre-training into our model to improve performance on the intent classification and slot filling tasks.
Cited by 0
SIGdial, pp.46-55, (2019)
We present a general family of joint intent classification and slot labeling neural architectures that decomposes the task into modules for analysis.
Cited by 0
Samuel Louvan, Bernardo Magnini
SIGdial, pp.85-91, (2019)
We propose to leverage two non-conversational tasks, Named Entity Recognition and Semantic Tagging, through multi-task learning to help low-resource slot filling (a shared-encoder sketch follows this entry).
Cited by 0
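A minimal hard-parameter-sharing sketch under assumed sizes and heads (the auxiliary label sets are illustrative, not the paper's exact configuration): one shared encoder feeds a separate tagging head per task, so batches from the auxiliary tasks also update the shared parameters that slot filling relies on:

    import torch.nn as nn

    class MultiTaskTaggerSketch(nn.Module):
        def __init__(self, vocab=1000, dim=64, n_slots=20, n_ner=9, n_sem=15):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.shared = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
            self.heads = nn.ModuleDict({
                "slot": nn.Linear(2 * dim, n_slots),   # target task: slot filling
                "ner": nn.Linear(2 * dim, n_ner),      # auxiliary: named entity recognition
                "sem": nn.Linear(2 * dim, n_sem),      # auxiliary: semantic tagging
            })

        def forward(self, tokens, task):
            h, _ = self.shared(self.embed(tokens))
            return self.heads[task](h)    # per-token logits for the requested task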
Haihong E, Peiqing Niu, Zhongfu Chen,Meina Song
ACL (1), pp.5467-5471, (2019)
We propose a novel SF-ID network that provides a bidirectional interrelated mechanism for the intent detection and slot filling tasks.
Cited by 0
Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, Yun-Nung Chen
NAACL-HLT, pp.753-757, (2018)
This paper focuses on learning explicit slot-intent relations by introducing a slot-gated mechanism into a state-of-the-art attention model, which allows slot filling to be conditioned on the learned intent result for better spoken language understanding (a sketch of the gate follows this entry).
Cited by 83
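A minimal sketch of the slot gate itself under assumed tensor shapes; the gate formula g = sum(v * tanh(c_slot + W * c_intent)) follows the paper's published form, but the attention model that produces the context vectors is omitted:

    import torch
    import torch.nn as nn

    class SlotGateSketch(nn.Module):
        def __init__(self, dim=64):
            super().__init__()
            self.W = nn.Linear(dim, dim, bias=False)   # projects the intent context
            self.v = nn.Parameter(torch.randn(dim))    # trainable gate vector

        def forward(self, slot_ctx, intent_ctx):
            # slot_ctx: (B, T, dim) per-token slot context vectors
            # intent_ctx: (B, dim) utterance-level intent context vector
            g = torch.tanh(slot_ctx + self.W(intent_ctx).unsqueeze(1))
            g = (self.v * g).sum(dim=-1, keepdim=True)   # one scalar gate per token
            return slot_ctx * g   # gated slot features for the slot classifier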
NAACL-HLT, pp.309-314, (2018)
The slot filling task can be formulated as a sequence labeling problem; the most popular high-performing approaches in recent work use conditional random fields and recurrent neural networks (a minimal tagger sketch follows this entry).
Cited by 46
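To make that formulation concrete, here is a minimal BiLSTM-CRF tagger sketch with assumed sizes; it relies on the third-party pytorch-crf package (torchcrf) for the CRF layer, which is one common choice rather than anything prescribed by the paper:

    import torch.nn as nn
    from torchcrf import CRF   # third-party pytorch-crf package

    class BiLstmCrfSketch(nn.Module):
        def __init__(self, vocab=1000, dim=64, n_slots=20):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
            self.emit = nn.Linear(2 * dim, n_slots)     # per-token emission scores
            self.crf = CRF(n_slots, batch_first=True)   # transition scores between tags

        def loss(self, tokens, tags):
            h, _ = self.lstm(self.embed(tokens))
            return -self.crf(self.emit(h), tags)   # negative log-likelihood

        def decode(self, tokens):
            h, _ = self.lstm(self.embed(tokens))
            return self.crf.decode(self.emit(h))   # best tag sequence per utterance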
EMNLP, pp.3824-3833, (2018)
We propose a novel self-attentive model gated with intent for spoken language understanding.
Cited by 17
SIGDIAL Conference, (2018): 376-384
We present a joint model for language understanding and dialogue state tracking that is computationally efficient by sharing feature extraction layers between LU and DST, while achieving accuracy comparable to modeling them separately across multiple tasks.
Cited by 1
Xiaodong Zhang, Houfeng Wang
IJCAI, pp.2993-2999, (2016)
Two major tasks in spoken language understanding (SLU) are intent determination (ID) and slot filling (SF). Recurrent neural networks (RNNs) have proved effective in SF, while there is no prior work using RNNs in ID. Based on the idea that the intent and semantic slots of a ...
Cited by 111
Bing Liu, Ian Lane
SIGDIAL Conference, (2016): 22-30
We describe a recurrent neural network model that jointly performs intent detection, slot filling, and language modeling.
Cited by 1