Human Hand Motion Prediction in Disassembly Operations

Volume 5: 27th Design for Manufacturing and the Life Cycle Conference (DFMLC), 2022

Abstract The remanufacturing workforce can benefit from the capabilities of robotic technology: robots can alleviate the labor-intensive nature of disassembly operations and help with handling toxic and hazardous materials. However, operator safety is an important aspect of human-robot collaboration in disassembly operations. This study focuses on predicting human hand motion to provide advance information to disassembly robots collaborating with humans. A prediction framework is proposed that consists of two deep learning models: convolutional long short-term memory (ConvLSTM) and You Only Look Once (YOLO). ConvLSTM forecasts the next-frame image from a sequence of disassembly-process images, and the YOLO model then identifies the human hand object on the image predicted by ConvLSTM. Disassembly images collected from four desktop computers are used to train the ConvLSTM and YOLO models. The results reveal that the combined ConvLSTM-YOLO framework performs well in predicting human hand motion and locating the hand object. The outcomes highlight the need for deep learning models capable of recognizing human motion across different product designs, as the remanufacturing workforce often has to deal with a wide range of products spanning different brands, models, and conditions.
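The two-stage framework described in the abstract can be sketched as a simple pipeline: forecast the next frame, then run detection on the forecast. The function names below are hypothetical, and the trained ConvLSTM and YOLO models are replaced by placeholders (a persistence baseline and a dummy detector) purely to illustrate how the stages chain together.

```python
import numpy as np

def predict_next_frame(frames: np.ndarray) -> np.ndarray:
    """Stand-in for the ConvLSTM forecaster: given a sequence of
    grayscale frames with shape (T, H, W), return a predicted next
    frame with shape (H, W). A persistence baseline (repeat the last
    frame) is used here in place of the trained ConvLSTM."""
    return frames[-1].copy()

def detect_hand(frame: np.ndarray):
    """Stand-in for the YOLO hand detector: return a bounding box
    (x, y, w, h) and a confidence score for the hand object in the
    given frame. A fixed dummy box stands in for a trained YOLO model."""
    h, w = frame.shape
    return (w // 4, h // 4, w // 2, h // 2), 0.9

def predict_hand_motion(frames: np.ndarray):
    """Chain the two stages: forecast the next frame with the
    ConvLSTM stand-in, then locate the hand on the forecast."""
    next_frame = predict_next_frame(frames)
    box, score = detect_hand(next_frame)
    return next_frame, box, score

# Example: a 10-frame sequence of 64x64 disassembly images
video = np.random.rand(10, 64, 64)
frame, box, score = predict_hand_motion(video)
```

In practice the placeholders would be replaced by the trained models (e.g. a ConvLSTM network producing the forecast and a YOLO network returning hand bounding boxes), but the data flow between the two stages remains as shown.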