Attend and Discriminate: Beyond the State-of-the-Art for Human Activity Recognition Using Wearable Sensors

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2021)

Abstract
Wearables are fundamental to improving our understanding of human activities, especially for an increasing number of healthcare applications from rehabilitation to fine-grained gait analysis. Although our collective know-how to solve Human Activity Recognition (HAR) problems with wearables has progressed immensely with end-to-end deep learning paradigms, several fundamental opportunities remain overlooked. We rigorously explore these new opportunities to learn enriched and highly discriminating activity representations. We propose: i) learning to exploit the latent relationships between multi-channel sensor modalities and specific activities; ii) investigating the effectiveness of data-agnostic augmentation for multi-modal sensor data streams to regularize deep HAR models; and iii) incorporating a classification loss criterion that encourages minimal intra-class representation differences whilst maximising inter-class differences to achieve more discriminative features. Our contributions achieve new state-of-the-art performance on four diverse activity recognition benchmarks by large margins, with up to 6% relative improvement. We validate our design concepts through extensive experiments, including activity misalignment measures, ablation studies, and insights shared through both quantitative and qualitative studies. The code base and trained network parameters are open-sourced on GitHub at https://github.com/AdelaideAuto-IDLab/Attend-And-Discriminate to support further research.
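The authors' released code is linked above. As a rough illustration of contribution iii), the following is a minimal PyTorch sketch of a center-loss style criterion that penalises intra-class spread while cross-entropy drives inter-class separation. The class name CenterLoss, the weighting factor lambda_c, and its value are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class CenterLoss(nn.Module):
    """Center-loss sketch: pulls feature embeddings toward learnable
    per-class centers, reducing intra-class variance. Typically used
    jointly with cross-entropy, which separates the classes."""

    def __init__(self, num_classes: int, feat_dim: int, lambda_c: float = 0.003):
        super().__init__()
        # One learnable center per activity class (assumed initialisation).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_c = lambda_c

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Select the center of each sample's ground-truth class: (B, feat_dim).
        batch_centers = self.centers[labels]
        # Mean squared distance between embeddings and their class centers.
        return self.lambda_c * ((features - batch_centers) ** 2).sum(dim=1).mean()


# Hypothetical joint objective during training:
# loss = nn.CrossEntropyLoss()(logits, labels) + center_loss(features, labels)
```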
Keywords
activity recognition,attention,center-loss,cross-channel interaction encoder,data augmentation,deep learning,time-series data,wearable sensors