Ag2Manip: Learning Novel Manipulation Skills with Agent-Agnostic Visual and Action Representations
arXiv (2024)
Abstract
Autonomous robotic systems capable of learning novel manipulation tasks are
poised to transform industries from manufacturing to service automation.
However, modern methods (e.g., VIP and R3M) still face significant hurdles,
notably the domain gap among robotic embodiments and the sparsity of successful
task executions within specific action spaces, resulting in misaligned and
ambiguous task representations. We introduce Ag2Manip (Agent-Agnostic
representations for Manipulation), a framework aimed at surmounting these
challenges through two key innovations: a novel agent-agnostic visual
representation derived from human manipulation videos, with the specifics of
embodiments obscured to enhance generalizability; and an agent-agnostic action
representation abstracting a robot's kinematics to a universal agent proxy,
emphasizing crucial interactions between end-effector and object. Ag2Manip's
empirical validation across simulated benchmarks like FrankaKitchen, ManiSkill,
and PartManip shows a 325% increase in performance, achieved without
domain-specific demonstrations. Ablation studies underline the essential
contributions of the visual and action representations to this success.
Extending our evaluations to the real world, Ag2Manip significantly improves
imitation learning success rates from 50% to 77.5%, demonstrating its
effectiveness and generalizability across both simulated and physical
environments.
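
The abstract is high-level, so the sketch below is only a conceptual illustration of the two ideas it names: an action expressed for a universal end-effector proxy rather than a specific robot's joints, and a reward computed from an embodiment-masked visual embedding. It is not the authors' implementation; all names here (AgentAgnosticAction, to_robot_command, agent_agnostic_reward, embed, embodiment_mask, ik_solver) are hypothetical placeholders.

```python
# Conceptual sketch, not the Ag2Manip codebase. Shows the shape of the two
# agent-agnostic representations described in the abstract.
import numpy as np
from dataclasses import dataclass


@dataclass
class AgentAgnosticAction:
    """Action for a universal agent proxy: end-effector pose plus a grasp flag,
    with no reference to any particular robot's joints or kinematics."""
    ee_position: np.ndarray   # (3,) desired end-effector position in the world frame
    ee_rotation: np.ndarray   # (3, 3) desired end-effector orientation
    grasp: bool               # whether the proxy is holding the object


def to_robot_command(action: AgentAgnosticAction, ik_solver):
    """Map the agent-agnostic action onto a concrete embodiment.
    `ik_solver` is a placeholder for whatever inverse-kinematics routine the
    target robot provides; only this adapter is embodiment-specific."""
    joint_targets = ik_solver(action.ee_position, action.ee_rotation)
    return {"joints": joint_targets, "gripper": 1.0 if action.grasp else 0.0}


def agent_agnostic_reward(frame, goal_frame, embodiment_mask, embed):
    """Reward from a visual representation trained with the manipulator masked
    out. `embed` stands in for a learned encoder (hypothetical here); the
    computation masks the embodiment, embeds both images, and rewards
    similarity to the goal embedding."""
    masked = frame * (1 - embodiment_mask[..., None])   # hide the agent pixels
    z, z_goal = embed(masked), embed(goal_frame)
    return -float(np.linalg.norm(z - z_goal))           # closer to goal = higher reward
```

Under this framing, only `to_robot_command` touches a specific robot; the visual reward and the action representation themselves carry no embodiment details, which is the property the paper's agent-agnostic design targets.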