Learning-Based Visual-Strain Fusion for Eye-in-Hand Continuum Robot Pose Estimation and Control

IEEE TRANSACTIONS ON ROBOTICS (2023)

Abstract
Image processing has significantly extended the practical value of the eye-in-hand camera, enabling and promoting its use for quantitative measurement. However, fully vision-based pose estimation methods can struggle when visual features are deficient. In this article, we fuse visual information with sparse strain data collected from a single-core fiber inscribed with fiber Bragg gratings (FBGs) to facilitate continuum robot pose estimation. An improved extreme learning machine (ELM) algorithm with selective training-data updates establishes and refines the FBG-empowered (F-emp) pose estimator online. Integrating F-emp pose estimation improves sensing robustness by reducing the number of visual-tracking losses under moving visual obstacles and varying lighting; in particular, it resolves pose-estimation failures under full occlusion of the tracked features or in complete darkness. Using the fused pose feedback, a hybrid controller incorporating kinematic and data-driven algorithms is proposed to achieve fast convergence with high accuracy. The online-learning error compensator improves target-tracking performance with a 52.3%-90.1% error reduction compared with constant-curvature model-based control, without requiring fine model-parameter tuning or prior data acquisition.
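The abstract's central learning component is an ELM whose output weights are refined online from selectively admitted training samples. The paper's exact feature set and selection rule are not reproduced here, so the sketch below is a minimal, generic online-sequential ELM (OS-ELM) with an assumed error-threshold rule standing in for the "selective training data updates"; the class and parameter names (SelectiveOSELM, err_thresh) are illustrative, not the authors' implementation.

```python
# Minimal sketch of an online-sequential extreme learning machine (OS-ELM)
# with a selective-update rule, in the spirit of the abstract's
# "improved extreme learning machine algorithm with selective training
# data updates". The threshold criterion below is an assumption.
import numpy as np

class SelectiveOSELM:
    def __init__(self, n_in, n_hidden, n_out, err_thresh=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1.0, 1.0, (n_in, n_hidden))  # fixed random input weights
        self.b = rng.uniform(-1.0, 1.0, n_hidden)          # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))            # learned output weights
        self.P = None                                      # inverse correlation matrix
        self.err_thresh = err_thresh

    def _hidden(self, X):
        return np.tanh(np.atleast_2d(X) @ self.W + self.b)  # hidden activations

    def fit_initial(self, X0, T0):
        """Batch-initialize beta from an initial chunk by regularized least squares."""
        H0 = self._hidden(X0)
        self.P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(H0.shape[1]))
        self.beta = self.P @ H0.T @ T0

    def update(self, x, t):
        """Recursive least-squares update of beta, applied only when the
        current prediction error is large enough to be informative."""
        h = self._hidden(x)
        err = np.atleast_2d(t) - h @ self.beta
        if np.linalg.norm(err) < self.err_thresh:
            return False                    # skip redundant sample (selective update)
        Ph = self.P @ h.T                   # Sherman-Morrison style rank-1 RLS update
        gain = Ph / (1.0 + h @ Ph)
        self.P -= gain @ Ph.T
        self.beta += gain @ err
        return True

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

In use, fit_initial would be called on a short calibration batch of (strain, pose) pairs; update(x, t) would then be invoked whenever camera-derived pose labels are available and informative, while predict supplies the FBG-only pose estimate during feature occlusion or darkness.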
Keywords
Robot sensing systems,Robots,Sensors,Cameras,Pose estimation,Robot vision systems,Robot kinematics,Camera pose estimation,fiber Bragg grating (FBG),hybrid control,online learning,visual-strain fusion