Generalized Conditioned Dialogue Generation Based on Pre-trained Language Model

arXiv (2020)

Citations: 4 | Views: 548
Abstract
We investigate the general problem of conditioned dialogue, in which a condition label, such as a persona, is given as input to designate the type of the target response. A major challenge for conditioned dialogue generation is the lack of substantial dialogue data labeled with conditions. We therefore propose to complement the labeled dialogue data with labeled non-dialogue text data, and to fine-tune BERT on both. Our fine-tuning approach uses BERT as both encoder and decoder via different input representations and self-attention masks that distinguish the source side from the target side. On the target (generation) side, we use a new attention routing mechanism to choose between generating a generic word or a condition-related word at each position. Our model is instantiated for persona- and topic-related dialogue. Experimental results in both cases show that our approach produces significantly better responses than state-of-the-art baselines.
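The encoder/decoder sharing described above hinges on the self-attention mask: source tokens attend bidirectionally among themselves, while target tokens attend to the full source plus only the already-generated target prefix. A minimal sketch of such a mask (in the style of UniLM-like unified LMs; the function name and NumPy formulation are illustrative, not the authors' code):

```python
import numpy as np

def seq2seq_attention_mask(src_len, tgt_len):
    """Build an (L, L) self-attention mask, L = src_len + tgt_len.
    Entry [i, j] = 1 means position i may attend to position j, 0 means blocked.
    Source rows see the whole source; target rows see the whole source
    plus a causal (lower-triangular) view of the target."""
    total = src_len + tgt_len
    mask = np.zeros((total, total), dtype=np.int64)
    # Source block: full bidirectional attention over source tokens.
    mask[:src_len, :src_len] = 1
    # Target rows: attend to the entire source...
    mask[src_len:, :src_len] = 1
    # ...and causally within the target (no peeking at future tokens).
    mask[src_len:, src_len:] = np.tril(np.ones((tgt_len, tgt_len), dtype=np.int64))
    return mask

m = seq2seq_attention_mask(src_len=3, tgt_len=2)
```

With `src_len=3, tgt_len=2`, source rows never see target positions (`m[0, 3] == 0`), while the second target token sees both the source and the first target token (`m[4, 3] == 1`).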
Keywords
conditioned dialogue generation,language model,pre-trained