Learning to predict the adequacy of answers in chat-oriented human-agent dialogs

TENCON IEEE Region 10 Conference Proceedings (2017)

Cited by 4 | Views 36
Abstract
Conversational agents are gaining a lot of attention from research labs and enterprises given the recent advances in AI and the huge amount of available human-to-human dialog texts (such as movie transcriptions or conversations in social media). However, most of these dialog datasets do not contain any kind of annotation in terms of adequacy of the answers, which makes it difficult for a conversational agent to choose suitable answers: answers that satisfy not just the current interaction with the user but also take into account previous interactions, the semantics of the turn, and, in general, the pragmatics of the discourse. In this paper, we present our collaborative efforts on creating a dataset of annotated dialogs and exploratory results on specifying a set of features to train a classifier that can be used to predict the validity, acceptability, or invalidity of dialog turns. The classifier is trained on a set of crowd-sourced, annotated dialogs between users and different chatbot engines.
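The three-way adequacy prediction described above (valid / acceptable / invalid turns) can be sketched as a standard supervised text-classification pipeline. The snippet below is a minimal illustration only: the toy context-answer pairs, labels, and lexical features are assumptions for demonstration, not the paper's actual dataset or feature set, which relies on richer semantic and pragmatic features over the dialog history.

```python
# Hypothetical sketch of a 3-class adequacy classifier for dialog turns.
# The data, the "context ||| answer" encoding, and the TF-IDF features
# are illustrative assumptions, not the paper's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy crowd-annotated (context ||| answer) pairs with adequacy labels.
turns = [
    "how are you ||| i am fine, thanks for asking",
    "how are you ||| the capital of france is paris",
    "what is your name ||| my name is alice",
    "what is your name ||| i like turtles",
    "tell me a joke ||| why did the chicken cross the road",
    "tell me a joke ||| error null pointer",
]
labels = ["valid", "invalid", "valid", "acceptable", "valid", "invalid"]

# Simple word/bigram features; the paper's classifier would instead use
# features capturing the semantics of the turn and discourse pragmatics.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(turns, labels)

# Predict the adequacy class of an unseen turn.
prediction = model.predict(["how are you ||| i am doing well"])[0]
print(prediction)
```

In practice the interesting part is the feature design: lexical overlap alone cannot distinguish a fluent but off-topic answer from a valid one, which is why the paper emphasizes features over previous interactions and discourse-level pragmatics.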
Keywords
Chatbots, automatic evaluation, dataset