Spoken Language Understanding - Low-Resource
Spoken Language Understanding (SLU), a core component of task-oriented dialogue systems, aims to extract the semantic frame of a user utterance, which is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two tasks: intent detection and slot filling. This collection contains papers on few-shot/zero-shot spoken language understanding.
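As a small illustration of the semantic frame an SLU system produces, an utterance is mapped to one intent label plus a per-token slot sequence (the utterance, intent name, and slot labels below are made up for this example):

```python
# Illustrative semantic frame for one utterance (hypothetical labels).
utterance = ["book", "a", "flight", "to", "Boston", "tomorrow"]
intent = "BookFlight"                                    # intent detection output
slots = ["O", "O", "O", "O", "B-toloc", "B-date"]        # slot filling output (BIO)

# The slot sequence must align token-by-token with the utterance.
assert len(slots) == len(utterance)
```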
To compute transition scores in the few-shot setting, we propose the collapsed dependency transfer mechanism, which transfers prior knowledge of label dependencies across domains with different label sets.
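The core idea can be sketched as follows: instead of learning a transition table over concrete BIO labels (which differ across domains), learn a small table over abstract label roles and expand it to whatever label set the target domain uses. This is an illustrative sketch of that idea, not the authors' code; the role names, the `collapsed` table layout, and the choice to fold O-to-B transitions into the "different type" entry are assumptions made here:

```python
def transition_score(collapsed, l_from, l_to):
    """Look up a transition score for a concrete BIO label pair via a
    collapsed table shared across domains.

    `collapsed` maps (src_role, dst_role) -> score, with src_role in
    {"O", "B", "I"} and dst_role in {"O", "sB", "sI", "dB", "dI"}
    ("s" = same slot type as the source label, "d" = different).
    Concrete labels use BIO format, e.g. "B-time".
    """
    src = l_from[0]                       # source role: "O", "B", or "I"
    if l_to == "O":
        return collapsed[(src, "O")]
    dst_pos, dst_type = l_to.split("-", 1)
    src_type = l_from.split("-", 1)[1] if "-" in l_from else None
    # For src == "O" there is no source slot type, so O->B/I lands in "d".
    same = "s" if dst_type == src_type else "d"
    return collapsed[(src, same + dst_pos)]
```

Because the collapsed table has a fixed shape regardless of how many slot types a domain defines, the learned dependencies (e.g. "I tends to follow B of the same type") carry over to unseen label sets.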
Our model shares its parameters across all slot types and learns to predict whether input tokens are slot entities or not. It then detects the concrete slot type of each slot entity token based on the slot type descriptions.
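A minimal sketch of this two-stage scheme, under assumed inputs: per-token embeddings, per-token entity probabilities from the shared binary detector, and one embedding per slot description. The function name, the 0.5 threshold, and the use of cosine similarity for matching are illustrative choices, not details from the paper:

```python
import numpy as np

def tag_slots(token_vecs, entity_probs, desc_vecs, slot_names, threshold=0.5):
    """Two-stage slot tagging sketch:
    1) keep tokens whose entity probability exceeds `threshold`;
    2) assign each kept token the slot whose description embedding is
       most similar (cosine similarity) to the token embedding."""
    tags = []
    for vec, p in zip(token_vecs, entity_probs):
        if p < threshold:
            tags.append("O")              # not a slot entity token
            continue
        sims = [vec @ d / (np.linalg.norm(vec) * np.linalg.norm(d))
                for d in desc_vecs]
        tags.append(slot_names[int(np.argmax(sims))])
    return tags
```

Because stage 1 is slot-type-agnostic and stage 2 only compares against description embeddings, new slot types can be handled by supplying new descriptions without retraining the entity detector.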
Our work has primarily been motivated by data issues in spoken language understanding datasets; we would like to invite researchers to explore the potential of applying generative data augmentation to other NLP tasks, such as neural machine translation and natural language in...
We introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes slot descriptions to transfer reusable concepts across domains, avoiding drawbacks of prior approaches such as increased training time and suboptimal concept alignment.
The intent detection results on two datasets are reported in Table 1, where the proposed capsule-based model INTENTCAPSNET performs consistently better than bag-of-words classifiers using TF-IDF, as well as various neural network models designed for text classification.