Inverse-Reinforcement-Learning-Based Robotic Ultrasound Active Compliance Control in Uncertain Environments

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS (2024)

Cited by 4 | Viewed 6 times
Abstract
Robotic ultrasound systems (RUSs) have gained increasing attention because they can automate repetitive procedures and relieve operators' workloads. However, the complexity and uncertainty of the human surface pose a challenge for stable scanning control. This article proposes a general active compliance control strategy based on inverse reinforcement learning (IRL) to perform adaptable scanning for uncertain and unstructured environments. We analyze the manual scanning process pattern and propose a velocity-and-force-related control strategy to achieve variable force control and handle unpredictable deformation. Then, a hybrid policy optimization framework is proposed to improve transferability. In this framework, a reinforcement learning policy with a predefined reward is built to establish the relationship between contact force and posture. Furthermore, the policy is re-optimized using IRL and generated demonstrations for IRL training. The policy is trained on simple standard phantoms and further evaluated for stability and transferability in unseen and complex environments. Quantitative results show that the difference between the proposed method and the three-dimensional (3-D) reconstructed model in terms of posture is (2.3° ± 1.3°, 1.9° ± 1.2°) in continuous scans. Overall, our method provides a solution for improving the usability of RUSs in real-world environments.
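The abstract's IRL re-optimization step is not detailed here, so the following is a minimal, hypothetical sketch of a generic feature-matching IRL weight update: the reward is assumed linear in per-step features (e.g., contact-force error and probe-posture error), and the weights are nudged toward the expert demonstrations' feature expectations and away from the current policy's. All function names, feature choices, and constants are illustrative assumptions, not the paper's method.

```python
import numpy as np

def feature_expectations(trajectories):
    """Average feature vector over a set of trajectories.

    Each trajectory is an (steps, n_features) array of per-step features,
    e.g. [contact_force_error, posture_angle_error] (assumed features)."""
    return np.mean([traj.mean(axis=0) for traj in trajectories], axis=0)

def irl_weight_update(w, demo_trajs, policy_trajs, lr=0.1):
    """One projection-style update of linear reward weights.

    Moves w toward the expert feature expectations and away from the
    current policy's, then normalizes to keep the weights bounded."""
    mu_expert = feature_expectations(demo_trajs)
    mu_policy = feature_expectations(policy_trajs)
    w = w + lr * (mu_expert - mu_policy)
    return w / (np.linalg.norm(w) + 1e-8)

# Toy usage with synthetic 2-D features (purely illustrative data):
# "expert" scans have smaller force/posture errors than policy rollouts.
rng = np.random.default_rng(0)
demos = [rng.normal([0.2, 0.1], 0.05, size=(50, 2)) for _ in range(5)]
rollouts = [rng.normal([0.6, 0.5], 0.05, size=(50, 2)) for _ in range(5)]
w = irl_weight_update(np.zeros(2), demos, rollouts)
```

Because the expert features are smaller on both dimensions, the update drives both weights negative, i.e., the learned reward penalizes large force and posture errors, which matches the variable-force-control goal described in the abstract.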
Keywords
Force, Ultrasonic imaging, Probes, Task analysis, Surface impedance, Force control, Deformation, Active compliance control, inverse reinforcement learning (IRL), robotic ultrasound system (RUS), variable force control