Multi-modal Multi-emotion Emotional Support Conversation.

Advanced Data Mining and Applications: 19th International Conference, ADMA 2023, Shenyang, China, August 21–23, 2023, Proceedings, Part I (2023)

Abstract
This paper proposes a new task, Multi-modal Multi-emotion Emotional Support Conversation (MMESC), which has great value in applications such as counseling, daily chatting, and companionship for the elderly. The task aims to fully perceive a user's emotional state from multiple modalities and to generate appropriate responses that provide comfort and improve the user's feelings. Previous work focuses mainly on textual conversation, but a single modality cannot accurately reflect a user's emotions, for example when a user says "fine" while actually feeling disgust. To address this problem, we propose this new multi-modal task and develop a method called FEAT for it. FEAT integrates fine-grained emotional knowledge from multiple modalities: it first recognizes the user's mental state with an emotion-aware transformer, and then generates supportive responses using a hybrid method with multiple comfort strategies. To evaluate our method, we construct a large-scale dataset named MMESConv, which is almost twice as large as existing single-modal datasets. The dataset covers three modalities (text, audio, and video) with fine-grained emotion annotations and strategy labels. Extensive experiments on this dataset demonstrate the advantages of our proposed framework.
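The abstract does not detail FEAT's architecture, but the emotion-recognition step it describes (fusing text, audio, and video signals with an emotion-aware transformer) can be illustrated with a minimal sketch. The sketch below assumes pre-extracted per-modality feature sequences; the class name MultiModalEmotionEncoder, the feature dimensions, and the mean-pooling readout are hypothetical illustrative choices, not the paper's actual implementation.

```python
# Minimal sketch of transformer-based multi-modal emotion recognition.
# Assumptions (not from the paper): pre-extracted per-modality features,
# a shared encoder with learned modality embeddings, mean-pooled readout.
import torch
import torch.nn as nn

class MultiModalEmotionEncoder(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, video_dim=512,
                 d_model=256, num_emotions=7):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.video_proj = nn.Linear(video_dim, d_model)
        # Learned embeddings tell the encoder which modality a token came from.
        self.modality_emb = nn.Embedding(3, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, num_emotions)

    def forward(self, text_feat, audio_feat, video_feat):
        # Each input: (batch, seq_len_m, dim_m) of pre-extracted features.
        tokens = []
        for idx, (feat, proj) in enumerate([(text_feat, self.text_proj),
                                            (audio_feat, self.audio_proj),
                                            (video_feat, self.video_proj)]):
            mod_ids = torch.full(feat.shape[:2], idx, dtype=torch.long,
                                 device=feat.device)
            tokens.append(proj(feat) + self.modality_emb(mod_ids))
        # Concatenating the sequences lets self-attention mix modalities.
        fused = self.encoder(torch.cat(tokens, dim=1))
        # Mean-pool over all tokens, then predict fine-grained emotion logits.
        return self.classifier(fused.mean(dim=1))

# Usage with random stand-in features for a batch of two utterances:
enc = MultiModalEmotionEncoder()
logits = enc(torch.randn(2, 10, 768),   # text token features
             torch.randn(2, 20, 128),   # audio frame features
             torch.randn(2, 15, 512))   # video frame features
print(logits.shape)  # torch.Size([2, 7])
```

The predicted emotion distribution would then condition the response generator's choice among comfort strategies; that generation step is described in the abstract only at a high level and is not sketched here.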
Keywords
conversation, multi-modal, multi-emotion