A Unified Approach to Facial Affect Analysis: the MAE-Face Visual Representation.

CVPR Workshops (2023)

Abstract
Facial affect analysis is essential for understanding human expressions and behaviors, encompassing action unit (AU) detection, expression (EXPR) recognition, and valence-arousal (VA) estimation. The CVPR 2023 Competition on Affective Behavior Analysis in-the-wild (ABAW) provides the high-quality, large-scale Aff-Wild2 dataset for studying these widely used emotion representations. In this paper, we employ MAE-Face as a unified approach to building robust visual representations for facial affect analysis. We propose multiple techniques to improve its fine-tuning performance on various downstream tasks, incorporating a two-pass pre-training process and a two-pass fine-tuning process. Our approach exhibits strong results on numerous datasets, highlighting its versatility. Moreover, the proposed model acts as a fundamental component of our final framework in the ABAW5 competition, where our submission achieves outstanding results, ranking first in the AU and EXPR tracks and second in the VA track.
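To illustrate the shared-representation idea the abstract describes, the sketch below shows a single visual backbone feeding three lightweight task heads for AU detection, EXPR recognition, and VA estimation. It is a minimal, hypothetical example: the class names (FacialAffectModel, TinyViTBackbone), backbone size, and head dimensions are assumptions, and the stub encoder only stands in for the MAE-pre-trained ViT; the actual MAE-Face architecture, two-pass pre-training, and two-pass fine-tuning details are given in the paper itself.

```python
# Illustrative sketch: one shared backbone, three facial-affect task heads.
# The backbone is a stand-in for an MAE-pre-trained ViT; all names and sizes
# here are assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn

class TinyViTBackbone(nn.Module):
    """Stand-in for an MAE-pre-trained ViT encoder (patch embed + transformer)."""
    def __init__(self, img_size=224, patch=16, dim=384, depth=4, heads=6):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        x = self.encoder(x + self.pos_embed)
        return x.mean(dim=1)  # globally pooled token features

class FacialAffectModel(nn.Module):
    """One shared visual representation, three lightweight task heads."""
    def __init__(self, dim=384, num_aus=12, num_exprs=8):
        super().__init__()
        self.backbone = TinyViTBackbone(dim=dim)
        self.au_head = nn.Linear(dim, num_aus)      # multi-label AU logits
        self.expr_head = nn.Linear(dim, num_exprs)  # single-label EXPR logits
        self.va_head = nn.Linear(dim, 2)            # valence and arousal

    def forward(self, images):
        feat = self.backbone(images)
        return {
            "au": self.au_head(feat),              # pair with BCEWithLogitsLoss
            "expr": self.expr_head(feat),          # pair with CrossEntropyLoss
            "va": torch.tanh(self.va_head(feat)),  # regression in [-1, 1]
        }

if __name__ == "__main__":
    model = FacialAffectModel()
    out = model(torch.randn(2, 3, 224, 224))
    print({k: v.shape for k, v in out.items()})
```

In this reading, fine-tuning the shared backbone per task (or jointly) is what makes the representation "unified": the same pre-trained features serve classification-style tasks (AU, EXPR) and regression (VA) with only the heads changing.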
Keywords
ABAW5 competition, action unit detection, CVPR 2023 Competition, emotion representations, expression recognition, facial affect analysis, fine-tuning performance, fine-tuning process, human expressions, MAE-Face visual representation, robust visual representations, two-pass pre-training process, unified approach, valence-arousal estimation