Multimodal Autoencoder: A Deep Learning Approach To Filling In Missing Sensor Data And Enabling Better Mood Prediction

Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), 2017

Abstract
To forecast mood in real-world situations, affective computing systems need to collect and learn from multimodal data gathered over weeks or months of daily use. Such systems are likely to encounter frequent data loss, e.g. when a phone loses location access or when a sensor is recharging. Lost data can handicap classifiers trained with all modalities present. This paper describes a new technique for handling missing multimodal data using a specialized denoising autoencoder: the Multimodal Autoencoder (MMAE). Empirical results from over 200 participants and 5500 days of data demonstrate that the MMAE predicts the feature values of multiple missing modalities more accurately than reconstruction methods such as principal component analysis (PCA). We discuss several practical benefits of the MMAE's encoding and show that it provides robust mood prediction even when up to three quarters of the data sources are lost.
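The abstract does not include code; the sketch below is a minimal illustration of the general idea it describes, a denoising autoencoder trained to reconstruct whole missing modality blocks, written in PyTorch. The modality dimensions, layer sizes, drop probability, and training loop are assumptions for illustration, not the authors' actual MMAE configuration.

```python
# Illustrative multimodal denoising autoencoder (not the paper's exact architecture).
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    def __init__(self, modality_dims, hidden_dim=64, latent_dim=32):
        super().__init__()
        input_dim = sum(modality_dims)
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def drop_modalities(x, modality_dims, p_drop=0.5):
    """Zero out entire modality blocks to simulate missing sensors."""
    x = x.clone()
    start = 0
    for dim in modality_dims:
        keep = (torch.rand(x.size(0), 1) >= p_drop).float()
        x[:, start:start + dim] *= keep
        start += dim
    return x

# Toy training loop: reconstruct the full feature vector from corrupted input.
modality_dims = [10, 8, 6, 4]                 # hypothetical per-modality feature counts
model = MultimodalAutoencoder(modality_dims)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

data = torch.randn(256, sum(modality_dims))   # placeholder data, not the study's dataset
for epoch in range(5):
    corrupted = drop_modalities(data, modality_dims)
    loss = loss_fn(model(corrupted), data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time, the encoder's output for a partially missing input can serve both as a reconstruction source and as a robust feature representation for a downstream mood classifier, which is the practical benefit the abstract highlights.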
Keywords
Multimodal autoencoder, sensor data, affective computing systems, location access, missing multimodal data, specialized denoising autoencoder, MMAE, multiple missing modalities, robust mood prediction, data sources, multimodal data handling, deep learning