Multimodal Emotion Recognition Dataset in the Wild (MERDWild)

Facundo Martínez, Ana Aguilera, Diego Mellado

2023 IEEE CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON) (2023)

Abstract

Multimodal emotion recognition involves identifying human emotions in specific situations using artificial intelligence across multiple modalities. MERDWild, a multimodal emotion recognition dataset, addresses the challenge of unifying, cleaning, and transforming three datasets collected in uncontrolled environments, with the aim of integrating and standardizing a database that spans three modalities: facial images, audio, and text. A methodology is presented that combines information from these modalities, drawing on the "in-the-wild" datasets AFEW, AffWild2, and MELD. MERDWild consists of 15,873 audio samples, 905,281 facial images, and 15,321 sentences, all of usable quality. The project describes the entire process of data cleaning, transformation, normalization, and quality control, resulting in a unified structure for recognizing seven emotions.
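The abstract's unification step can be illustrated with a minimal sketch of what a unified multimodal record and a per-dataset label mapping might look like. The specific label names, mappings, and record fields below are assumptions for illustration (the paper only states that seven emotions are covered); they are not taken from the paper itself.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed unified seven-emotion label set; this particular list follows the
# convention commonly used by AFEW, AffWild2, and MELD, not the paper's text.
EMOTIONS = ("anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise")

# Hypothetical mapping from dataset-specific label names to the unified set.
LABEL_MAP = {
    "MELD":     {"joy": "happiness", "sadness": "sadness", "anger": "anger",
                 "disgust": "disgust", "fear": "fear", "surprise": "surprise",
                 "neutral": "neutral"},
    "AFEW":     {"happy": "happiness", "sad": "sadness", "angry": "anger",
                 "disgust": "disgust", "fear": "fear", "surprise": "surprise",
                 "neutral": "neutral"},
    "AffWild2": {"happiness": "happiness", "sadness": "sadness", "anger": "anger",
                 "disgust": "disgust", "fear": "fear", "surprise": "surprise",
                 "neutral": "neutral"},
}

@dataclass
class MultimodalSample:
    """One record in a unified structure; any modality may be absent."""
    source: str                          # originating dataset name
    emotion: str                         # one of the seven unified labels
    face_image_path: Optional[str] = None
    audio_path: Optional[str] = None
    text: Optional[str] = None

def unify_label(source: str, raw_label: str) -> str:
    """Map a dataset-specific emotion label onto the unified seven-class set."""
    label = LABEL_MAP[source][raw_label.lower()]
    if label not in EMOTIONS:
        raise ValueError(f"unexpected label {label!r} from {source}")
    return label

# Example: a MELD utterance labelled "joy" becomes a "happiness" sample.
sample = MultimodalSample(source="MELD",
                          emotion=unify_label("MELD", "joy"),
                          text="That's great news!")
```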
Keywords

Emotion Recognition, in-the-wild dataset, multimodal sources