Activity-Based Person Identification Using Multimodal Wearable Sensor Data

IEEE Internet of Things Journal (2023)

Abstract
Wearable devices equipped with a variety of sensors facilitate the measurement of physiological and behavioral characteristics. Activity-based person identification is an emerging and fast-evolving technology in the security and access control fields. Wearables such as smartphones, the Apple Watch, and Google Glass can continuously sense and collect activity-related information about their users, and activity patterns can be extracted to differentiate people. Although a wide range of human activities has been studied, only a few (gait and keystrokes) have been used for person identification. In this article, we perform person identification on two public benchmark data sets (UCI-HAR and WISDM2019), which were collected across several activities using multimodal sensors (accelerometer and gyroscope) embedded in wearable devices (smartphone and smartwatch). We implemented eight classifiers: a multivariate squeeze-and-excitation network (MSENet), a time-series transformer (TST), a temporal convolutional network (TCN), CNN-LSTM, ConvLSTM, XGBoost, a decision tree, and $k$-nearest neighbor. The proposed MSENet can model the relationships between different sensor modalities. It achieved the best person identification accuracies across activities, 91.31% and 97.79% on UCI-HAR and WISDM2019, respectively. We also investigated the effects of sensor modality, human activity, feature fusion, and the window size used for sensor signal segmentation. Compared to related work, our approach achieves state-of-the-art performance.
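The abstract mentions segmenting sensor signals with a sliding window before classification. The paper's exact preprocessing is not reproduced here, but a minimal numpy sketch of the standard scheme follows; the 128-sample window with 50% overlap matches the UCI-HAR convention (2.56 s at 50 Hz), while the function name and toy data are illustrative assumptions.

```python
import numpy as np

def segment_windows(signal, window_size=128, overlap=0.5):
    """Split a (timesteps, channels) sensor stream into fixed-length,
    overlapping windows, as is common before feeding HAR classifiers.
    window_size=128 with 50% overlap follows the UCI-HAR setup."""
    step = int(window_size * (1 - overlap))
    n = (signal.shape[0] - window_size) // step + 1
    return np.stack([signal[i * step : i * step + window_size]
                     for i in range(n)])

# Toy stream: 1000 timesteps of 6 channels
# (3-axis accelerometer + 3-axis gyroscope)
stream = np.zeros((1000, 6))
windows = segment_windows(stream)
print(windows.shape)  # (14, 128, 6)
```

Larger windows capture more of an activity cycle per example but yield fewer training samples, which is why the abstract treats window size as a factor worth studying.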
Keywords
Biometrics,feature fusion,machine learning,multimodal sensor,person identification
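The MSENet described in the abstract uses squeeze-and-excitation attention to model relationships between sensor channels. The paper's full architecture is not given here, but the core SE operation can be sketched in numpy under stated assumptions: the bottleneck weights `w1`, `w2` and the 6-channel window are hypothetical placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_reweight(x, w1, w2):
    """Squeeze-and-excitation style channel attention:
    squeeze each channel to a scalar via global average pooling,
    pass the result through a small two-layer bottleneck, and
    rescale every channel by its sigmoid gate."""
    z = x.mean(axis=0)                       # squeeze: (channels,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))  # excite: gates in (0, 1)
    return x * s                             # reweight each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 6))   # one window: 128 timesteps, 6 channels
w1 = rng.standard_normal((3, 6))    # bottleneck reduces 6 -> 3 channels
w2 = rng.standard_normal((6, 3))
y = se_reweight(x, w1, w2)
print(y.shape)  # (128, 6)
```

Because the gates are computed jointly from all channels, each sensor stream is amplified or suppressed based on the others, which is one simple way to "model the relationship between different sensor data" as the abstract claims for MSENet.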