The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue

HRI 2008

Abstract
Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with an increasing amount of recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully-elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context; however, other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures. When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
Keywords
referring expressions, multimodal reference generation, physical context, dialogue context, contextual information, task-based human-robot dialogue, human-robot interaction, human partner, conversational partner, mutual task, shared environment, natural-language generation, natural language processing, pragmatics, robot kinematics, humanoid robot