A Multimodal Deep Learning Network For Group Activity Recognition

2018 International Joint Conference on Neural Networks (IJCNN)

Cited by 9 | Viewed 16
Abstract
Several studies have focused on the recognition of single-person activities, while the classification of group activities remains under-investigated. In this paper, we present an approach for classifying the activity performed by a group of people during daily tasks at work. We address the problem hierarchically: we first examine individual actions, reconstructed from data coming from wearable and ambient sensors, and then observe whether common temporal/spatial dynamics exist at the level of the group activity. We deploy a Multimodal Deep Learning Network, where the term multimodal does not refer to processing the different input modalities separately, but to the possibility of extracting activity-related features for each group member and then merging them through shared layers. We evaluated the proposed approach in a laboratory environment where employees are monitored during their normal activities. The experimental results demonstrate the effectiveness of the proposed model with respect to an SVM benchmark.
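The abstract does not specify implementation details, so the following is only a minimal sketch of the described idea (one activity-feature branch per group member, merged through shared layers for group-activity classification), assuming a PyTorch implementation; all dimensions, layer widths, and names are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn

class GroupActivityNet(nn.Module):
    """Sketch: one feature branch per group member, merged via shared layers."""

    def __init__(self, num_members=3, sensor_dim=12, hidden_dim=64, num_classes=5):
        super().__init__()
        # One encoder per member: maps that member's wearable/ambient sensor
        # features to an activity-related embedding (sizes are hypothetical).
        self.member_encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(sensor_dim, hidden_dim), nn.ReLU())
            for _ in range(num_members)
        )
        # Shared levels: merge the per-member embeddings and classify
        # the group activity.
        self.shared = nn.Sequential(
            nn.Linear(num_members * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, member_inputs):
        # member_inputs: list of (batch, sensor_dim) tensors, one per member.
        feats = [enc(x) for enc, x in zip(self.member_encoders, member_inputs)]
        return self.shared(torch.cat(feats, dim=1))

# Usage with random data for a group of three members.
net = GroupActivityNet()
batch = [torch.randn(8, 12) for _ in range(3)]
logits = net(batch)  # (8, num_classes) group-activity scores
```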
Keywords
ambient sensors,group member,group activity recognition,wearable sensors,temporal-spatial dynamics,multimodal deep learning network,human activity recognition,classification,modalities,activity-related features extraction,laboratory environment,SVM,employees