Detection of Texting While Walking in Occluded Scenarios Using Variational Autoencoder.

2024 IEEE/SICE International Symposium on System Integration (SII)

Abstract
Texting while walking is a common behavior exhibited by pedestrians. While several studies have explored the detection of texting while walking, the influence of occlusions has been neglected. In this paper, we propose an image-based method that utilizes a pre-trained Variational Autoencoder. The proposed method takes a sequence of 2D coordinates of a pedestrian's upper-body key points as input, encodes the data into a 2D latent space, and uses the encoded representation to distinguish text walkers from normal pedestrians. The proposed architecture enables the model to extract meaningful features from occluded data. Results of an ablation test and a comparison with a previous method show that the proposed architecture identifies text walkers even under heavy occlusion, outperforming that previous method.
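The abstract gives no implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: a pre-trained VAE that encodes a sequence of upper-body keypoint coordinates into a 2D latent space, followed by a small classifier on the latent representation. The layer sizes, the sequence length T, the number of keypoints K, and all names below are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of a keypoint-sequence VAE with a 2-D latent space.
# T (sequence length) and K (number of upper-body keypoints) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, K = 30, 8               # assumed sequence length and keypoint count
IN_DIM = T * K * 2         # flattened (x, y) coordinates for one sequence

class KeypointVAE(nn.Module):
    def __init__(self, in_dim=IN_DIM, hidden=256, latent=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)       # mean of the latent Gaussian
        self.logvar = nn.Linear(hidden, latent)   # log-variance of the latent Gaussian
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Standard VAE objective: reconstruction loss + Kullback-Leibler divergence.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# After pre-training the VAE, a lightweight fully-connected classifier could
# separate text walkers from normal pedestrians in the 2-D latent space.
classifier = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

if __name__ == "__main__":
    vae = KeypointVAE()
    x = torch.randn(4, IN_DIM)      # dummy batch of keypoint sequences
    recon, mu, logvar = vae(x)
    loss = vae_loss(recon, x, mu, logvar)
    logits = classifier(mu)          # classify using the latent mean
    print(loss.item(), logits.shape)
```

One plausible reading of the approach is that pre-training the VAE on (possibly occluded) keypoint sequences forces the 2D latent space to capture pose dynamics robustly, so a simple classifier on top suffices; the sketch reflects that division of labor.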
Keywords
Walking, Variational Autoencoder, Occlusion Scenarios, Upper Body, Latent Space, Coordinates Of Points, Image-based Methods, Sequence Data, Classification Accuracy, F1 Score, Training Phase, Point Cloud, Bounding Box, Kullback-Leibler, Fully-connected Layer, Data Frame, Mobile Robot, Latent Representation, Reconstruction Loss, Latent Vector, Occlusion Pattern, Collision Risk, Image Inpainting, Body Pose, RGB Camera, Increase In Classification Accuracy, Input Data Sequence, Trial Data, Activation Function