Speaker Personality Recognition With Multimodal Explicit Many2many Interactions

2020 IEEE International Conference on Multimedia and Expo (ICME)(2020)

Abstract
Recently, speaker personality analysis has become an increasingly popular research task in human-computer interaction. Previous studies of user personality trait recognition typically focus on leveraging static information, i.e., tweets, images, and social relationships on social platforms and websites. In this paper, by contrast, we utilize three kinds of dynamic speaking information, i.e., textual, visual, and acoustic temporal sequences, to allow a computer to interpret human personality traits from a face-to-face monologue. Specifically, we propose an explicit many2many (many-to-many) interactive approach to help AI efficiently recognize speaker personality traits. On the one hand, we encode the long feature sequence of human speaking for each modality with a bidirectional LSTM network. On the other hand, we design an explicit many2many attention mechanism to capture the interactions across multiple modalities over multiple interactive pairs. Empirical evaluation on 12 personality traits demonstrates the effectiveness of our proposed approach to multimodal speaker personality recognition.
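The many-to-many idea described above can be illustrated with a minimal sketch: each modality's temporal sequence attends to every other modality, and the attended contexts are combined into a joint representation. This is an illustrative NumPy toy, not the authors' implementation; the BiLSTM encoders are stood in for by random feature sequences, and the pooling and fusion choices are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_seq, key_seq):
    """Attend from one modality's sequence to another's.

    query_seq: (Tq, d), key_seq: (Tk, d) -> attended context of shape (Tq, d).
    """
    scores = query_seq @ key_seq.T / np.sqrt(query_seq.shape[1])
    return softmax(scores, axis=-1) @ key_seq

rng = np.random.default_rng(0)
d = 8  # hidden size (assumed for illustration)
# Stand-ins for per-modality BiLSTM output sequences of different lengths
modalities = {
    "text":   rng.standard_normal((5, d)),   # one vector per token
    "visual": rng.standard_normal((7, d)),   # one vector per video frame
    "audio":  rng.standard_normal((6, d)),   # one vector per acoustic frame
}

# Many-to-many: every ordered modality pair forms an attention interaction
fused = {}
for q_name, q in modalities.items():
    attended = [cross_modal_attention(q, k)
                for k_name, k in modalities.items() if k_name != q_name]
    # Concatenate the original sequence with both attended contexts,
    # then mean-pool over time to a fixed-size vector (an assumed choice)
    fused[q_name] = np.concatenate([q] + attended, axis=1).mean(axis=0)

# Joint multimodal representation fed to a trait classifier (not shown)
rep = np.concatenate(list(fused.values()))
print(rep.shape)  # (72,) = 3 modalities * 3 concatenated streams * d
```

With three modalities there are six ordered interactive pairs (text→visual, visual→text, etc.), which is the "many2many" structure: no pair is privileged, and each modality both queries and is queried.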
Keywords
Speaker personality, many2many attention, multimodal