Policy Transfer via Kinematic Domain Randomization and Adaptation

2021 IEEE International Conference on Robotics and Automation (ICRA 2021)

Citations: 21 | Views: 119
Abstract
Transferring reinforcement learning policies trained in physics simulation to real hardware remains a challenge, known as the "sim-to-real" gap. Domain randomization is a simple yet effective technique for addressing dynamics discrepancies between source and target domains, but its success generally depends on heuristics and trial-and-error. In this work we investigate the impact of randomized parameter selection on policy transferability across different types of domain discrepancies. Contrary to the common practice of carefully measuring kinematic parameters while randomizing dynamic parameters, we found that virtually randomizing kinematic parameters (e.g., link lengths) during training in simulation generally outperforms dynamic randomization. Based on this finding, we introduce a new domain adaptation algorithm that utilizes variation in simulated kinematic parameters. Our algorithm, Multi-Policy Bayesian Optimization, trains an ensemble of universal policies conditioned on virtual kinematic parameters and efficiently adapts to the target environment using a limited number of target-domain rollouts. We showcase our findings on a simulated quadruped robot in five different target environments covering different aspects of domain discrepancies.
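
The abstract describes the method only at a high level; the Python sketch below illustrates the two pieces it names: kinematic domain randomization during training and rollout-efficient adaptation on the target robot. All names here (KINEMATIC_RANGES, env_factory, target_rollout_return, the parameter bounds) are illustrative assumptions, not from the paper, and a single-policy Gaussian-process optimizer from scikit-optimize stands in for the paper's multi-policy Bayesian optimization over an ensemble.

```python
# Illustrative sketch only; names and ranges are assumptions, not the authors' code.
import numpy as np

# Example ranges for the virtual kinematic parameters (link lengths, in metres);
# the actual ranges used in the paper are not given in the abstract.
KINEMATIC_RANGES = {
    "hip_link_length":  (0.16, 0.24),
    "knee_link_length": (0.16, 0.24),
}

def sample_kinematic_params(rng):
    """Draw one set of virtual kinematic parameters."""
    return np.array([rng.uniform(lo, hi) for lo, hi in KINEMATIC_RANGES.values()])

def train_with_kinematic_randomization(policy, env_factory, rng, episodes=1000):
    """Kinematic domain randomization: rebuild the simulated robot with new
    virtual link lengths every episode and condition the policy on them.
    The RL update itself (e.g. PPO) is omitted."""
    for _ in range(episodes):
        kin = sample_kinematic_params(rng)
        env = env_factory(kinematic_params=kin)   # simulator built with the sampled kinematics
        obs, done = env.reset(), False
        while not done:
            action = policy(obs, kin)             # universal policy sees the virtual kinematics
            obs, reward, done, _ = env.step(action)
            # ... store the transition and run the policy-gradient update here ...
    return policy

def adapt_on_target(policy, target_rollout_return, n_rollouts=20):
    """Adaptation: search for the kinematic conditioning vector that yields the
    highest return on the target robot within a small rollout budget. The paper
    runs Bayesian optimization over an ensemble of universal policies; a single
    policy with scikit-optimize's gp_minimize is used here as a simplified stand-in."""
    from skopt import gp_minimize
    bounds = list(KINEMATIC_RANGES.values())
    result = gp_minimize(
        lambda kin: -target_rollout_return(policy, np.array(kin)),  # BO minimizes, so negate return
        bounds,
        n_calls=n_rollouts,
    )
    return np.array(result.x)  # virtual kinematics that transfer best to the target domain
```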
Keywords
reinforcement learning policies, physics simulation, sim-to-real gap, dynamics discrepancies, target domains, randomized parameter selection, policy transferability, domain discrepancies, dynamic parameters, dynamic randomization, domain adaptation algorithm, kinematic parameter variation, multi-policy Bayesian optimization, universal policies, virtual kinematic parameters, target domain rollouts, simulated quadruped robot, policy transfer, kinematic domain randomization, target environments