Incremental Acquisition And Reuse Of Multimodal Affective Behaviors In A Conversational Agent

HAI'18: PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION (2018)

Cited by 6 | Viewed 40

Abstract
To feel novel and engaging over time, it is critical for an autonomous agent to have a large corpus of potential responses. As the size and multi-domain nature of the corpus grows, however, traditional hand-authoring of dialogue content is no longer practical. While crowdsourcing can help to overcome the problem of scale, a diverse set of authors contributing independently to an agent's language can also introduce inconsistencies in expressed behavior. In terms of affect or mood, for example, incremental authoring can result in an agent who reacts calmly one moment but impatiently the next, with no clear reason for the transition. In contrast, affect in natural conversation develops over time based on both the agent's personality and contextual triggers. To better achieve this dynamic, an autonomous agent needs to (a) have content and behavior available for different desired affective states and (b) be able to predict what affective state will be perceived by a person for a given behavior. In this proof-of-concept paper, we explore a way to elicit and evaluate affective behavior using crowdsourcing. We show that untrained crowd workers are able to author content for a broad variety of target affective states when given semi-situated narratives as prompts. We also demonstrate that it is possible to strategically combine multimodal affective behavior and voice content from the authored pieces using a predictive model of how the expressed behavior will be perceived.
Keywords
Affective behavior, multimodal behavior generation, crowdsourcing