Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020)

Abstract
Goal-oriented dialogue systems are now widely adopted in industry, where the practical aspects of deploying them become of key importance. In particular, such systems are expected to fit into a rapid prototyping cycle for new products and domains. For data-driven dialogue systems, especially those based on deep learning, this amounts to maintaining production-level performance given only a few 'seed' dialogue examples, a property normally referred to as data efficiency. With extremely data-dependent deep learning methods, the most promising way to achieve practical data efficiency is transfer learning, i.e., leveraging a larger, well-represented data source to train a base model and then fine-tuning it on the available in-domain data. In this paper, we present a hybrid generative-retrieval model that can be trained using transfer learning. Using GPT-2 as the base model and fine-tuning it on the multi-domain MetaLWOz dataset, we obtain a robust dialogue model able to perform both response generation and ranking. Combining the two, it outperforms several competitive generative-only and retrieval-only baselines, as measured by language modeling quality on MetaLWOz and by goal-oriented metrics (intent/slot F1 scores) on the MultiWOZ corpus.
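The retrieval half of the hybrid model ranks candidate responses by how likely they are under the fine-tuned language model. The minimal sketch below illustrates that ranking idea only; the corpus, function names, and the unigram model standing in for GPT-2 are all illustrative assumptions, not the paper's implementation.

```python
import math
from collections import Counter

# Toy stand-in for a fine-tuned GPT-2: a unigram LM estimated from a few
# hypothetical "seed" in-domain dialogues (the paper uses MetaLWOz).
seed_corpus = [
    "what time does the restaurant open",
    "the restaurant opens at noon",
    "can you book a table for two",
    "i booked a table for two at noon",
]

counts = Counter(tok for utt in seed_corpus for tok in utt.split())
total = sum(counts.values())
vocab = len(counts)

def lm_logprob(utterance: str) -> float:
    """Average per-token log-probability under the toy unigram LM
    (add-one smoothed). In the paper, this scoring role is played by
    the fine-tuned GPT-2 model."""
    toks = utterance.split()
    logp = sum(math.log((counts[t] + 1) / (total + vocab + 1)) for t in toks)
    return logp / max(len(toks), 1)

def rank_candidates(candidates):
    """Retrieval side of the hybrid model: rank stored responses so the
    most in-domain-plausible one comes first."""
    return sorted(candidates, key=lm_logprob, reverse=True)

candidates = [
    "the restaurant opens at noon",
    "colorless green ideas sleep furiously",
]
print(rank_candidates(candidates)[0])  # the in-domain response ranks first
```

In the full hybrid model, the same language model that scores retrieved candidates can also generate a response from scratch, which is what lets one fine-tuned base model serve both roles.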
Keywords
Dialogue systems, deep learning, domain adaptation, data efficiency, transfer learning