Fusing active orientation models and mid-term audio features for automatic depression estimation
PETRA (2016)
Abstract
In this paper, we predict a person's depression level on the BDI-II scale using facial and voice features. Active orientation models (AOMs) and several voice features were extracted from the video and audio modalities. Long-term and mid-term features were computed and fused in the feature space. Videos from the Depression Recognition Sub-Challenge of the 2014 Audio-Visual Emotion Challenge and Workshop (AVEC 2014) were used, and support vector regression models were trained to predict the depression level. We demonstrate that fusing AOM-based features with audio features outperforms either modality alone. The regression results indicate the robustness of the proposed technique under different settings, as well as an RMSE improvement over the AVEC 2014 video baseline.
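The pipeline summarized above (per-clip feature extraction, feature-space fusion by concatenation, then support vector regression onto BDI-II scores) can be sketched as follows. This is an illustrative sketch only: the feature dimensions and data are synthetic placeholders, not the actual AOM or mid-term audio descriptors from AVEC 2014, and the SVR hyperparameters are scikit-learn defaults rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_clips = 50
video_feats = rng.normal(size=(n_clips, 20))   # stand-in for AOM-based video features
audio_feats = rng.normal(size=(n_clips, 10))   # stand-in for mid-term audio features
bdi_scores = rng.uniform(0, 63, size=n_clips)  # BDI-II scores lie in the range 0-63

# Feature-space (early) fusion: concatenate both modalities per clip
fused = np.hstack([video_feats, audio_feats])

# Support vector regression from fused features to the depression score
model = SVR(kernel="rbf")
model.fit(fused, bdi_scores)

pred = model.predict(fused)
rmse = np.sqrt(mean_squared_error(bdi_scores, pred))
print(f"RMSE: {rmse:.2f}")
```

In a real evaluation the RMSE would of course be computed on held-out test clips, as in the AVEC 2014 sub-challenge protocol, rather than on the training data.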