Multi-modal Biomarker Extraction Framework for Therapy Monitoring of Social Anxiety and Depression Using Audio and Video

Tobias Weise, Paula Andrea Perez-Toro, Andrea Deitermann, Bettina Hoffmann, Kubilay Can Demir, Theresa Straetz, Elmar Noeth, Andreas Maier, Thomas Kallert, Seung Hee Yang

MACHINE LEARNING FOR MULTIMODAL HEALTHCARE DATA, ML4MHD 2023 (2024)

Abstract
This paper introduces a framework for extracting features relevant to monitoring the speech-therapy progress of individuals suffering from social anxiety or depression. It operates multi-modally (decision fusion), incorporating audio and video recordings of a patient and the corresponding interviewer at two separate assessment sessions. The data is provided by an ongoing project in a day-hospital and outpatient setting in Germany, which investigates whether an established speech-therapy group program for adolescents, implemented in a stationary and semi-stationary setting, can be successfully carried out via telemedicine. The features proposed in this multi-modal approach could form the basis for interpretation and analysis by medical experts and therapists, complementing data acquired through questionnaires. The extracted audio features focus on prosody (intonation, stress, rhythm, and timing), as well as predictions from a deep neural network model inspired by the Pleasure, Arousal, Dominance (PAD) emotional model space. The video features are based on a pipeline designed to visualize the interaction between the patient and the interviewer in terms of Facial Emotion Recognition (FER), utilizing the mini-Xception network architecture.
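The prosodic descriptors mentioned in the abstract (intonation, stress, and timing) can be illustrated with a minimal, hypothetical sketch. This is not the authors' pipeline: the autocorrelation-based F0 estimator, the fixed energy threshold for voicing, and the pause-ratio proxy are all simplifications chosen for self-containedness, and real systems would use a dedicated pitch tracker and voice activity detection.

```python
# Illustrative prosody feature sketch (NOT the paper's implementation).
import numpy as np

def estimate_f0_autocorr(frame, sr, fmin=75.0, fmax=500.0):
    """Crude per-frame F0 estimate via autocorrelation peak picking."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), min(int(sr / fmin), len(ac) - 1)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def prosody_features(signal, sr, frame_len=1024, hop=512):
    """Summarize intonation (F0 stats), stress (energy), timing (pauses)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    energies = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    voiced = energies > 0.1 * energies.max()   # naive voicing decision
    f0s = np.array([estimate_f0_autocorr(f, sr)
                    for f, v in zip(frames, voiced) if v])
    return {
        "f0_mean": float(f0s.mean()),          # intonation level
        "f0_std": float(f0s.std()),            # intonation variability
        "energy_mean": float(energies.mean()), # stress proxy
        "pause_ratio": float(1.0 - voiced.mean()),  # timing proxy
    }

# Synthetic stand-in for speech: a 200 Hz tone followed by silence.
sr = 16000
t = np.arange(sr) / sr
audio = np.concatenate([np.sin(2 * np.pi * 200 * t), np.zeros(sr // 2)])
feats = prosody_features(audio, sr)
```

On the synthetic input, `f0_mean` lands near 200 Hz and `pause_ratio` reflects the appended silence; on real recordings these statistics would be computed per speaker and compared across the two assessment sessions.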
Keywords
multi-modal,biomarkers,prosody,emotion recognition,depression,social anxiety,telemedicine