Encoding Kinematic and Temporal Gait Data in an Appearance-Based Feature for the Automatic Classification of Autism Spectrum Disorder

IEEE Access (2023)

Abstract
In appearance-based gait analysis studies, the Gait Energy Image (GEI) has been shown to be an effective tool for human identification and gait pathology detection. In addition, model-based studies have found kinematic and spatio-temporal features to be useful for gait recognition and Autism Spectrum Disorder (ASD) classification. Adapting the GEI to focus on the strong ASD features would improve the early screening of ASD by allowing the use of powerful appearance-based classifiers such as Convolutional Neural Networks (CNN). This paper introduces an enhanced GEI that averages the images of a video sequence into a single image but retains only a person's joint positions instead of the full-body silhouettes. Depth is encoded into the binary images before they are averaged using colour mapping, a technique used in the Chrono-Gait Image. The resulting Joint Energy Image (JEI) therefore embeds both the temporal and depth information of the joints into a 2D image. The JEI was preprocessed using Principal Component Analysis before being applied to a Multi-Layer Perceptron and a Random Forest classifier. The JEI was also applied directly to a CNN, and accuracy improved when a Test Time Augmentation (TTA) measure was used. The CNN achieved a TTA accuracy of 95.56% when trained on a primary dataset of 100 subjects (50 with ASD and 50 typically developed), and 80% TTA accuracy on a secondary dataset of 20 subjects (10 with ASD and 10 typically developed) across multiple tests.
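
To make the construction concrete, the following is a minimal Python sketch of how a Joint Energy Image could be assembled from tracked joints, based only on the description above: each frame's joint positions are drawn as small disks whose colour encodes depth, and the coloured frames are averaged over the sequence, just as a GEI averages binary silhouettes. The image size, joint radius, the simple blue-to-red colour map, and all function names here are illustrative assumptions rather than details taken from the paper.

import numpy as np

def depth_to_rgb(depths):
    """Map normalised depth in [0, 1] to an RGB colour (blue = near, red = far).
    A simple linear colour map standing in for the Chrono-Gait-style encoding."""
    d = np.clip(np.asarray(depths, dtype=np.float32), 0.0, 1.0)
    return np.stack([d, 1.0 - np.abs(2.0 * d - 1.0), 1.0 - d], axis=-1)

def render_joint_frame(joints, depths, height=128, width=88, radius=3):
    """Draw each joint as a small coloured disk on a black background.
    joints: (J, 2) array of (row, col) pixel coordinates for J joints.
    depths: (J,) array of depth values normalised to [0, 1]."""
    frame = np.zeros((height, width, 3), dtype=np.float32)
    colours = depth_to_rgb(depths)
    rows, cols = np.mgrid[0:height, 0:width]
    for (r, c), colour in zip(joints, colours):
        mask = (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
        frame[mask] = colour
    return frame

def joint_energy_image(joints_per_frame, depths_per_frame, **render_kwargs):
    """Average the per-frame joint renderings over the gait sequence, giving
    one 2D image that embeds both temporal and depth information."""
    frames = [render_joint_frame(j, d, **render_kwargs)
              for j, d in zip(joints_per_frame, depths_per_frame)]
    return np.mean(frames, axis=0)

# Toy usage: a synthetic 30-frame sequence with 20 tracked joints per frame.
rng = np.random.default_rng(0)
seq_joints = np.stack([rng.integers(10, 118, size=(30, 20)),   # rows, within height 128
                       rng.integers(10, 78, size=(30, 20))],   # cols, within width 88
                      axis=-1)
seq_depths = rng.random((30, 20))                              # normalised joint depths
jei = joint_energy_image(seq_joints, seq_depths)               # shape (128, 88, 3)

As described above, the resulting JEI images could then be flattened and reduced with Principal Component Analysis before classification with a Multi-Layer Perceptron or Random Forest, or fed directly to a CNN with Test Time Augmentation applied at inference.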
Keywords
Kinematics, Convolutional neural networks, Feature extraction, Variable speed drives, Spatiotemporal phenomena, Three-dimensional displays, Autism, Autism spectrum disorder, gait analysis, neural networks, video analysis