Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions

ICMI '23 Companion: Companion Publication of the 25th International Conference on Multimodal Interaction (2023)

Abstract
High-stress interactions are those in which decisions must be made, communicated, and agreed upon in a short amount of time to avoid dire consequences. Such interactions produce a variety of multimodal signals indicating participants' cognitive and emotional states, which can vary with factors such as the difficulty of the interaction. Using these behavioral cues, a multimodal deep neural network (with audio, video, and text modalities) was developed to predict the performance of users in these interactions. An ablation study was conducted to compare the impact of the different modalities. Our best model predicts user performance with 73% accuracy on a 3-class classification task.
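The abstract does not specify the network's fusion strategy. A common design for such audio/video/text models is late fusion: each modality is encoded separately, the embeddings are concatenated, and a classifier head produces the 3-class prediction. The following is a minimal, self-contained sketch of that pattern; the encoders, dimensions, and class labels are illustrative assumptions, not the authors' architecture.

```python
import math
import random

random.seed(0)

NUM_CLASSES = 3  # e.g. low / medium / high performance (assumed labels)

def encode(frames):
    """Toy per-modality 'encoder': mean-pool a sequence of frame vectors."""
    dim = len(frames[0])
    return [sum(f[i] for f in frames) / len(frames) for i in range(dim)]

def fuse(*embeddings):
    """Late fusion: concatenate the per-modality embeddings."""
    fused = []
    for e in embeddings:
        fused.extend(e)
    return fused

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(fused, weights, bias):
    """Linear classifier head over the fused vector."""
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

# Random stand-ins for frame-level features of each modality.
audio = [[random.random() for _ in range(4)] for _ in range(10)]
video = [[random.random() for _ in range(4)] for _ in range(10)]
text = [[random.random() for _ in range(4)] for _ in range(5)]

fused = fuse(encode(audio), encode(video), encode(text))
weights = [[random.random() for _ in range(len(fused))]
           for _ in range(NUM_CLASSES)]
bias = [0.0] * NUM_CLASSES

probs = predict(fused, weights, bias)  # class probabilities summing to 1
```

In a real model, the mean-pooling encoders would be replaced by learned networks (e.g. recurrent or transformer encoders per modality), and the weights would be trained rather than random; the fusion-and-classify structure is what the ablation study would vary by dropping modalities from `fuse`.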