Spoken Language Understanding - Low-Resource
Spoken Language Understanding (SLU), a core component of task-oriented dialogue systems, aims to extract a semantic frame representation of the user's utterance, which is then consumed by the dialogue state tracking (DST) and natural language generation (NLG) modules. SLU typically comprises two subtasks: intent detection and slot filling. This collection gathers papers on few-shot/zero-shot spoken language understanding.
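For concreteness, a semantic frame pairs one intent label with a set of slot/value pairs. A minimal sketch in Python follows; the utterance, intent name, and slot names are invented for illustration and do not come from any specific dataset.

```python
# A minimal, hypothetical semantic-frame representation for one utterance:
# the intent label is the output of intent detection, and the slot/value
# pairs are the output of slot filling (shown span-free for brevity).
semantic_frame = {
    "utterance": "book a flight from Boston to Seattle tomorrow",
    "intent": "book_flight",            # intent detection output
    "slots": {                          # slot filling output
        "from_city": "Boston",
        "to_city": "Seattle",
        "date": "tomorrow",
    },
}
print(semantic_frame["intent"], semantic_frame["slots"])
```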
After finetuning with domain-specific dialogue data, it can produce abundant utterances that significantly boost the performance of the Spoken Language Understanding model.
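A minimal sketch of the augmentation loop this describes, assuming a hypothetical `generate` callable that wraps the finetuned language model; the function names and the toy stand-in generator are illustrative, not the paper's API.

```python
import random

def augment(seed_utterances, generate, n_new=100):
    """Generative data augmentation sketch: sample seed utterances and let a
    finetuned generator produce new in-domain utterances to enlarge the SLU
    training set. `generate` is a hypothetical wrapper around the LM."""
    augmented = list(seed_utterances)
    for _ in range(n_new):
        seed = random.choice(seed_utterances)
        augmented.append(generate(seed))
    return augmented

# Toy stand-in for the finetuned generator, just to make the sketch runnable.
demo = augment(["book a flight to Boston"], lambda s: s + " tomorrow", n_new=3)
print(demo)
```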
To compute transition scores under the few-shot setting, we propose the collapsed dependency transfer mechanism, which transfers prior knowledge of label dependencies across domains with different label sets.
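A rough illustration of the idea: transition scores are learned over collapsed, label-set-agnostic categories (e.g., B followed by I of the same slot) and then expanded to the concrete label set of an unseen domain. The abstract categories and placeholder numbers below are assumptions for exposition, not the paper's learned values.

```python
import itertools

# Collapsed (label-set-agnostic) transition scores; placeholder values.
collapsed = {
    ("O", "O"): 0.5, ("O", "B"): 0.3,
    ("B", "O"): 0.2, ("B", "B-same"): 0.05, ("B", "B-diff"): 0.1, ("B", "I-same"): 0.6,
    ("I", "O"): 0.2, ("I", "B-same"): 0.05, ("I", "B-diff"): 0.1, ("I", "I-same"): 0.6,
}

def expand_transitions(slot_types, collapsed):
    """Expand collapsed transition scores to a concrete BIO label set,
    sharing one abstract score across all slot types; transitions with no
    abstract counterpart (e.g. O -> I) are treated as forbidden."""
    labels = ["O"] + [f"{p}-{s}" for s in slot_types for p in ("B", "I")]
    trans = {}
    for a, b in itertools.product(labels, repeat=2):
        pa, sa = (a.split("-", 1) + [None])[:2]
        pb, sb = (b.split("-", 1) + [None])[:2]
        if pb == "O":
            key = (pa, "O")
        elif pa == "O":
            key = ("O", pb)
        else:
            key = (pa, pb + ("-same" if sa == sb else "-diff"))
        trans[(a, b)] = collapsed.get(key, float("-inf"))
    return trans

trans = expand_transitions(["from_city", "date"], collapsed)
print(trans[("B-date", "I-date")])       # shared "B -> I-same" score: 0.6
print(trans[("B-date", "I-from_city")])  # forbidden transition: -inf
```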
We propose a novel task, dialogue state induction, which automatically identifies dialogue state slots and values over a large set of dialogue records.
Our model shares its parameters across all slot types and learns to predict whether input tokens are slot entities or not. It then detects the concrete slot type for these slot-entity tokens based on the slot type descriptions.
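The second, type-detection stage can be sketched as nearest-neighbor matching between detected entity vectors and encoded slot descriptions. The random embeddings below are placeholders for illustration; a real system would use the shared encoder's outputs.

```python
import numpy as np

def assign_slot_types(entity_vecs, desc_vecs, slot_names):
    """Type-detection sketch: match each detected slot-entity vector to the
    most similar slot-description vector (cosine similarity) and return the
    corresponding slot names."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return [slot_names[int(np.argmax([cos(e, d) for d in desc_vecs]))]
            for e in entity_vecs]

rng = np.random.default_rng(0)
desc_vecs = rng.normal(size=(3, 16))  # stand-ins for encoded slot descriptions
entity = desc_vecs[1] + 0.01 * rng.normal(size=16)  # a detected slot-entity token
print(assign_slot_types([entity], desc_vecs, ["city", "date", "airline"]))  # ['date']
```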
Our work has primarily been motivated by the data issues in spoken language understanding datasets; we would like to invite researchers to explore the potential of applying generative data augmentation to other NLP tasks, such as neural machine translation and natural language in...
We introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes slot descriptions to transfer reusable concepts across domains, avoiding drawbacks of prior approaches such as increased training time and suboptimal concept alignment.
This paper focuses on data augmentation that reflects the characteristics of 'open-vocabulary' slots in order to achieve better spoken language understanding.
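One common way to realize this, sketched below under assumed inputs: substitute the placeholder for an open-vocabulary slot in a delexicalized template with diverse surface values. The template format, placeholder syntax, and value list are illustrative assumptions, not the paper's method in detail.

```python
import random

def augment_open_vocab(template, slot, values, k=3):
    """Data-augmentation sketch for an 'open-vocabulary' slot: fill the slot
    placeholder in a delexicalized template with sampled surface values so
    the tagger sees diverse, realistic slot fillers."""
    return [template.replace(f"<{slot}>", random.choice(values)) for _ in range(k)]

song_titles = ["shape of you", "yesterday", "bohemian rhapsody"]
print(augment_open_vocab("play <song> for me", "song", song_titles))
```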
The intent detection results on two datasets are reported in Table 1, where the proposed capsule-based model INTENTCAPSNET consistently outperforms bag-of-words classifiers using TF-IDF, as well as various neural network models designed for text classification.
Our experiments demonstrate that the combination clearly improves neural network performance in both the few-shot learning and full-dataset settings.
We present a domain-portable zero-shot learning approach for entity recognition in task-oriented conversational agents, which does not assume any annotated sentences at training time.