Leveraging Deep Reinforcement Learning for Reaching Robotic Tasks

2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)

Citations 33 | Views 36
Abstract
This work leverages Deep Reinforcement Learning (DRL) to make robotic control robust to changes in the robot manipulator or the environment, and to perform reaching, collision avoidance, and grasping without explicit prior knowledge of the arm structure and kinematics, and without careful hand-eye calibration, relying solely on visual (retinal) input. We learn a manipulation policy that takes the first steps toward generalizing to changes in the environment and can scale and adapt to new manipulators. Experiments are aimed at (a) comparing different DCNN network architectures, (b) assessing reward prediction for two radically different manipulators, and (c) performing a sensitivity analysis that compares a classical visual-servoing formulation of the reaching task with the proposed DRL method.
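The abstract does not specify the reward used for the reaching task, but reaching policies are commonly trained against a dense distance-based reward with a success bonus. A minimal sketch, assuming a Euclidean-distance reward and a hypothetical `success_radius` parameter (both illustrative, not taken from the paper):

```python
import numpy as np

def reaching_reward(ee_pos, target_pos, success_radius=0.05):
    """Dense reaching reward: negative Euclidean distance from the
    end-effector to the target, plus a bonus inside success_radius.
    This is a generic sketch, not the reward used in the paper."""
    dist = float(np.linalg.norm(np.asarray(ee_pos) - np.asarray(target_pos)))
    bonus = 1.0 if dist < success_radius else 0.0
    return -dist + bonus

# Closer end-effector positions receive strictly higher reward.
r_far = reaching_reward([0.50, 0.0, 0.2], [0.0, 0.0, 0.2])
r_near = reaching_reward([0.04, 0.0, 0.2], [0.0, 0.0, 0.2])
```

A dense shaping term like this keeps the gradient signal informative far from the target, while the bonus sharpens the optimum at the goal; the paper's actual formulation (learned from pixels via a DCNN) would replace the known positions with predicted values.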
Keywords
reaching task, sensitivity analysis, reward prediction, manipulation policy, kinematics, human arm structure, grasping, collision avoidance, robot manipulator, robotic control, robotic tasks, DRL, deep reinforcement learning