Spoken Language Understanding - Low-Resource
Spoken Language Understanding (SLU) is a core component of task-oriented dialogue systems. Its goal is to extract a semantic-frame representation of the user's query utterance, which is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two subtasks: intent detection and slot filling. This collection gathers papers on few-shot/zero-shot spoken language understanding.
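To make the semantic frame concrete, here is a minimal illustration; the utterance, intent label, and slot names below are invented for the example:

```python
# A hypothetical utterance with its semantic frame: one intent label
# plus per-token BIO slot tags, the two outputs SLU produces.
utterance = ["book", "a", "flight", "from", "boston", "to", "denver"]

semantic_frame = {
    "intent": "BookFlight",                      # intent detection output
    "slots": ["O", "O", "O", "O",                # slot filling output
              "B-from_city", "O", "B-to_city"],
}

# Recover slot values by pairing tokens with their B- tags.
values = {tag[2:]: tok
          for tok, tag in zip(utterance, semantic_frame["slots"])
          if tag.startswith("B-")}
print(values)  # {'from_city': 'boston', 'to_city': 'denver'}
```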
Peng Baolin, Zhu Chenguang, Zeng Michael, Gao Jianfeng
After finetuning on domain-specific dialogue data, the pretrained model can produce abundant utterances that significantly boost the performance of the Spoken Language Understanding model (sketched below)
Cited by 6 · Views 43
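A minimal sketch of this style of augmentation, assuming a GPT-2 checkpoint already finetuned on lines such as "intent=PlayMusic <sep> utterance"; the prompt format and intent conditioning are assumptions for illustration, not the paper's exact recipe:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # swap in the finetuned checkpoint

prompt = "intent=PlayMusic <sep>"  # condition generation on the target intent
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=32,
    do_sample=True,          # sampling yields diverse synthetic utterances
    top_p=0.9,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```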
ACL, pp.1381-1393, (2020)
To compute transition scores under the few-shot setting, we propose the collapsed dependency transfer mechanism, which transfers prior knowledge of label dependencies across domains with different label sets (sketched below)
Cited by 4 · Views 157
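A sketch of one plausible reading of the collapsed-transfer idea: transition scores are kept over abstract roles (O/B/I, same-slot vs. different-slot) and expanded into a full CRF transition matrix for any target label set; all numeric scores below are made up:

```python
import numpy as np

# Collapsed (role-level) transitions, as if learned on source domains.
collapsed = {
    ("O", "O"): 2.0, ("O", "B"): 1.0, ("O", "I"): -2.0,
    ("B", "O"): 0.8, ("B", "sI"): 1.5,   # B -> I of the SAME slot
    ("B", "dB"): 0.2, ("B", "dI"): -2.0, # B -> B/I of a DIFFERENT slot
    ("B", "sB"): -2.0,                   # no B -> B within one slot
    ("I", "O"): 0.7, ("I", "sI"): 1.2,
    ("I", "dB"): 0.1, ("I", "dI"): -2.0, ("I", "sB"): -2.0,
}

def expand(slots):
    """Build a full CRF transition matrix for a new domain's label set."""
    labels = ["O"] + [f"{p}-{s}" for s in slots for p in ("B", "I")]
    n = len(labels)
    T = np.zeros((n, n))
    for i, a in enumerate(labels):
        for j, b in enumerate(labels):
            ra = a[0]  # role of the source label: O / B / I
            if b == "O":
                T[i, j] = collapsed[(ra, "O")]
            elif a == "O":
                T[i, j] = collapsed[("O", b[0])]
            else:
                same = "s" if a[2:] == b[2:] else "d"
                T[i, j] = collapsed[(ra, same + b[0])]
    return labels, T

labels, T = expand(["from_city", "to_city"])  # unseen target slots
```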
IJCAI, pp.3845-3852, (2020)
We propose the novel task of dialogue state induction: automatically identifying dialogue state slots and values over a large set of dialogue records (sketched below)
Cited by 3 · Views 53
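A toy sketch of the induction idea under strong assumptions: candidate value phrases have already been extracted, and clustering them yields induced slots. The candidates, featurization, and cluster count are illustrative; real systems would use contextual representations rather than character n-grams:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Candidate values, e.g. spans proposed by an off-the-shelf tagger.
candidates = ["cheap", "expensive", "north", "south",
              "italian", "chinese", "moderate", "east"]

X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(candidates)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each cluster acts as an induced slot; its members are candidate values.
for cid in range(3):
    members = [c for c, l in zip(candidates, kmeans.labels_) if l == cid]
    print(f"induced slot {cid}: {members}")
```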
ACL, pp.19-25, (2020)
Our model shares its parameters across all slot types and learns to predict whether input tokens are slot entities or not; it then assigns concrete slot types to those entity tokens based on the slot type descriptions (sketched below)
Cited by 1 · Views 29
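A sketch of that two-stage scheme, with random vectors standing in for a real encoder such as BERT, so only the mechanics (binary detection, then description matching) are meaningful here:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
embed = lambda text: rng.standard_normal(dim)  # placeholder for a real encoder

tokens = ["fly", "to", "denver", "tomorrow"]
is_entity = [False, False, True, True]  # stage 1: shared binary tagger output

slot_descriptions = {
    "to_city": "destination city of the flight",
    "depart_date": "date of departure",
}
desc_vecs = {s: embed(d) for s, d in slot_descriptions.items()}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stage 2: assign each detected entity the most similar slot description.
for tok, ent in zip(tokens, is_entity):
    if ent:
        vec = embed(tok)
        slot = max(desc_vecs, key=lambda s: cosine(vec, desc_vecs[s]))
        print(tok, "->", slot)
```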
Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, Ting Liu
We explore the few-shot learning problem of multi-label intent detection (one possible recipe is sketched below)
Cited by 0 · Views 450
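One common few-shot recipe for multi-label intents, shown as a hedged sketch rather than the paper's exact method, is prototypes plus a decision threshold: every intent whose similarity to the query clears the threshold is returned:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
support = {  # a few encoded utterances per intent (stand-in vectors)
    "PlayMusic":    rng.standard_normal((3, dim)),
    "SetAlarm":     rng.standard_normal((3, dim)),
    "CheckWeather": rng.standard_normal((3, dim)),
}
prototypes = {k: v.mean(axis=0) for k, v in support.items()}

def predict(query_vec, threshold=0.5):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Multi-label: keep ALL intents above threshold, not just the argmax.
    return [k for k, p in prototypes.items() if cos(query_vec, p) > threshold]

print(predict(prototypes["PlayMusic"] + 0.1 * rng.standard_normal(dim)))
```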
AAAI, (2019)
Our work has primarily been motivated by the data issues in spoken language understanding datasets; we would like to invite researchers to explore the potential of applying generative data augmentation in other NLP tasks, such as neural machine translation and natural language in...
Cited by 14 · Views 9
AAAI, (2019)
We introduce Zero-Shot Adaptive Transfer for slot tagging, which utilizes slot descriptions to transfer reusable concepts across domains and avoids drawbacks of prior approaches such as increased training time and suboptimal concept alignments (sketched below)
Cited by 13 · Views 32
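The general shape of description-conditioned zero-shot slot tagging can be sketched as follows: a single shared tagger is run once per slot description and the per-slot BIO outputs are merged. The tagger below is a toy stub, not the paper's network:

```python
def tag_for_slot(tokens, description):
    """Stand-in for a shared network conditioned on the description."""
    # Toy rule: mark tokens that literally appear in the description.
    return ["B" if t in description.split() else "O" for t in tokens]

tokens = ["flights", "to", "denver"]
slot_descriptions = {"to_city": "arrival city denver", "airline": "airline name"}

merged = ["O"] * len(tokens)
for slot, desc in slot_descriptions.items():
    for i, tag in enumerate(tag_for_slot(tokens, desc)):
        if tag != "O" and merged[i] == "O":   # simple conflict resolution
            merged[i] = f"{tag}-{slot}"
print(merged)  # ['O', 'O', 'B-to_city']
```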
Hwa-Yeon Kim, Yoon-Hyung Roh, Young-Kil Kim
pp.97-102 (2019)
This paper focuses on data augmentation that reflects the characteristics of 'open-vocabulary' slots in order to achieve better spoken language understanding (sketched below)
Cited by 2 · Views 5
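A sketch of one standard augmentation move for open-vocabulary slots: substitute the slot value in an annotated template with sampled alternatives of varying length, keeping the BIO tags aligned; the value pool is illustrative:

```python
import random

random.seed(0)
template_tokens = ["play", "bohemian", "rhapsody"]
template_tags   = ["O", "B-song", "I-song"]
song_pool = ["yesterday", "hotel california", "hey jude"]

def augment(tokens, tags, slot="song"):
    out_toks, out_tags = [], []
    i = 0
    while i < len(tokens):
        if tags[i] == f"B-{slot}":
            new_value = random.choice(song_pool).split()
            out_toks += new_value  # open-vocabulary values vary in length
            out_tags += [f"B-{slot}"] + [f"I-{slot}"] * (len(new_value) - 1)
            while i < len(tokens) and tags[i].endswith(slot):
                i += 1  # skip the original value span
        else:
            out_toks.append(tokens[i])
            out_tags.append(tags[i])
            i += 1
    return out_toks, out_tags

print(augment(template_tokens, template_tags))
```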
EMNLP, pp.3090-3099, (2018)
Intent detection results on two datasets are reported in Table 1, where the proposed capsule-based model INTENTCAPSNET performs consistently better than bag-of-words classifiers using TF-IDF, as well as various neural network models designed for text classification (the routing mechanism is sketched below)
Cited by 90 · Views 83
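For readers unfamiliar with capsule models, the sketch below implements dynamic routing-by-agreement, the core mechanism such intent capsule networks build on; all dimensions and inputs are illustrative:

```python
import numpy as np

def squash(v, axis=-1):
    """Shrink short vectors toward zero, keep long ones near unit length."""
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1 + n2)) * v / np.sqrt(n2 + 1e-9)

def route(u_hat, iters=3):
    """u_hat: predictions from lower capsules, shape (n_lower, n_upper, d)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))             # routing logits
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum per upper capsule
        v = squash(s)                            # upper capsule outputs
        b += (u_hat * v[None]).sum(axis=-1)      # agreement update
    return v

rng = np.random.default_rng(0)
intents = route(rng.standard_normal((6, 4, 8)))  # 6 word caps -> 4 intent caps
print(np.linalg.norm(intents, axis=-1))  # capsule lengths ~ intent scores
```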
ACL, (2018)
Our experiments demonstrate that the combination clearly improves neural network performance in both the few-shot learning and the full-dataset settings
Cited by 14 · Views 42
Marco Guerini, Simone Magnolini, Vevake Balaraman, Bernardo Magnini
SIGDIAL Conference, pp.317-326, (2018)
We present a domain-portable zero-shot learning approach for entity recognition in task-oriented conversational agents, which does not assume any annotated sentences at training time
Cited by 5 · Views 15
SIGDIAL, (2018)
We present concept transfer learning for slot filling at the atomic concept level to address the problem of adaptive language understanding (illustrated below)
Cited by 0 · Views 11
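A toy illustration of the atomic-concept decomposition: each slot label is treated as a set of atomic concepts, so a slot unseen during training can still be handled when its atoms were seen; the mappings below are illustrative:

```python
# Slot labels decomposed into atomic concepts (illustrative mappings).
atomic = {
    "from_city":   {"from", "city"},
    "to_city":     {"to", "city"},
    "arrive_date": {"arrive", "date"},
}
seen_atoms = set().union(*atomic.values())

# An unseen slot composed of seen atoms needs no new parameters: the
# model predicts its atomic concepts and composes them back into a slot.
unseen = {"arrive_city": {"arrive", "city"}}
print(all(a in seen_atoms for a in unseen["arrive_city"]))  # True
```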
EMNLP, (2015)
We describe a new Spoken Language Understanding model that is designed for improved domain adaptation
Cited by 25 · Views 26