A Joint Learning Framework With BERT For Spoken Language Understanding

IEEE Access (2019)

Citations: 56 | Views: 244
Abstract
Intent classification and slot filling are two essential tasks for spoken language understanding. Recently, joint learning has been shown to be effective for the two tasks. However, most joint learning methods share parameters only at the surface level rather than the semantic level, and they suffer from small-scale human-labeled training data, resulting in poor generalization, especially for rare words. In this paper, we propose a novel multi-task learning model based on an encoder-decoder framework, which jointly trains the intent classification and slot filling tasks. For the encoder, we encode the input sequence into context representations using Bidirectional Encoder Representations from Transformers (BERT). For the decoder, we implement a two-stage decoding process. In the first stage, an intent classification decoder detects the user's intent. In the second stage, we feed the intent contextual information into the slot filling decoder to predict the semantic concept tag of each word. We conduct experiments on three popular benchmark datasets: ATIS, Snips, and the Facebook multilingual task-oriented dataset. The experimental results show that our proposed model outperforms existing approaches and achieves new state-of-the-art results on all three datasets.
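The two-stage architecture described above can be sketched in code. The following is a minimal illustration, assuming PyTorch and the HuggingFace transformers library; the class name JointBertSLU, the additive fusion of the intent embedding into the token representations, and all dimensions are illustrative assumptions, not the authors' exact "intent-augmented mechanism".

    # Minimal sketch of a joint BERT model for intent classification
    # and slot filling, with a two-stage decoding process: the intent
    # is predicted first, then used as context for slot tagging.
    # Fusion by addition is an assumption for illustration only.
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class JointBertSLU(nn.Module):
        def __init__(self, num_intents: int, num_slots: int,
                     bert_name: str = "bert-base-uncased"):
            super().__init__()
            self.bert = BertModel.from_pretrained(bert_name)
            hidden = self.bert.config.hidden_size
            # Stage 1: intent classification from the [CLS] representation.
            self.intent_head = nn.Linear(hidden, num_intents)
            # Embedding table mapping a predicted intent to a context vector.
            self.intent_embed = nn.Embedding(num_intents, hidden)
            # Stage 2: token-level slot tagging conditioned on the intent.
            self.slot_head = nn.Linear(hidden, num_slots)

        def forward(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask)
            token_repr = out.last_hidden_state      # (batch, seq, hidden)
            cls_repr = token_repr[:, 0]             # [CLS] token
            intent_logits = self.intent_head(cls_repr)
            # Use the predicted intent as contextual information
            # for the slot filling decoder.
            intent_ids = intent_logits.argmax(dim=-1)
            intent_ctx = self.intent_embed(intent_ids).unsqueeze(1)
            slot_logits = self.slot_head(token_repr + intent_ctx)
            return intent_logits, slot_logits

In joint training, the total loss would typically be the sum of a sentence-level cross-entropy on the intent logits and a token-level cross-entropy on the slot logits, so that both tasks update the shared BERT encoder.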
Keywords
Spoken language understanding, intent classification and slot filling, joint learning, intent-augmented mechanism, pre-trained language model