Electromyography Based Gesture Decoding Employing Few-Shot Learning, Transfer Learning, and Training From Scratch.

IEEE Access (2023)

Abstract
Over the last decade, several machine learning (ML)-based, data-driven approaches have been used for electromyography (EMG)-based control of prosthetic hands. However, the performance of EMG-based frameworks can be affected by: i) the onset of fatigue due to long data collection sessions, ii) musculoskeletal differences between individuals, and iii) sensor position drift between different sessions with the same user. To evaluate these aspects, in this work we compare the performance of EMG-based hand gesture decoding models developed using three approaches, allowing future work on EMG-based human-machine interfaces to make more informed ML design decisions. First, we trained a Transformer-based architecture, the Temporal Multi-Channel Vision Transformer (TMC-ViT), from scratch. Second, we used a pre-trained TMC-ViT model fine-tuned on the target data (a transfer learning approach). Third, we developed a Prototypical Network (a few-shot learning approach). The models are trained in both subject-specific and subject-generic settings for eight subjects and validated using 10-fold cross-validation. This study shows that training a deep learning decoding model from scratch in a subject-specific manner leads to higher decoding accuracies when a larger dataset is available. For smaller datasets, subject-generic models, or inter-session models, the few-shot learning approach produces more robust results with better performance and is better suited to applications where long data collection sessions are not possible or where the interface is intended for multiple users. Our findings show that the few-shot learning approach can outperform training a model from scratch in several scenarios.
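The abstract's third approach relies on the standard Prototypical Network idea: embed support examples, average them into per-class prototypes, and classify query windows by distance to the nearest prototype. The sketch below illustrates that episodic objective in PyTorch; the toy 1-D CNN embedder, window size (200 samples), channel count (8), and shot counts are illustrative assumptions and do not reproduce the paper's TMC-ViT backbone or its training protocol.

```python
# Minimal Prototypical Network sketch for EMG gesture episodes (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmgEmbedder(nn.Module):
    """Toy 1-D CNN mapping an EMG window (channels x samples) to an embedding."""

    def __init__(self, n_channels: int = 8, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time dimension
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def prototypical_logits(support: torch.Tensor, support_labels: torch.Tensor,
                        query: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Average support embeddings into class prototypes, then return negative
    squared Euclidean distances from each query embedding to each prototype."""
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                             # (n_classes, embed_dim)
    return -torch.cdist(query, prototypes) ** 2    # (n_query, n_classes)


# Example episode: 8 gestures, 5 support and 5 query windows per gesture,
# 8-channel EMG windows of 200 samples (assumed values, random data).
embedder = EmgEmbedder()
support_x = torch.randn(8 * 5, 8, 200)
query_x = torch.randn(8 * 5, 8, 200)
support_y = torch.arange(8).repeat_interleave(5)
query_y = torch.arange(8).repeat_interleave(5)

logits = prototypical_logits(embedder(support_x), support_y,
                             embedder(query_x), n_classes=8)
loss = F.cross_entropy(logits, query_y)   # episodic training objective
```

Because only the prototypes change when a new subject or session provides a handful of labeled windows, this style of model can adapt without retraining the embedder, which is consistent with the small-dataset and multi-user scenarios highlighted in the abstract.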
Keywords
Electromyography, gesture decoding, deep learning, few-shot learning, transfer learning