Continual Skill Learning with Vision-Language Model for Robotic Manipulation
ICRA 2024 (2024)
Abstract
The advent of Large Language Models (LLMs) has significantly enhanced the capability of robots to perform tasks based on human instructions. However, a persistent challenge remains: enabling robots to autonomously learn and improve from their past experiences. Addressing this, our paper introduces a systematic approach that assists robots in acquiring new skills through their interaction history. At the heart of our methodology is the development of a meta-task framework, which conceptualizes all tasks as sequences of meta-tasks within a hierarchical skill library that categorizes tasks based on their complexity. To ensure the applicability of acquired skills across different settings, we incorporate a scene understanding module that maintains skill consistency across diverse environments. Moreover, our system is designed to allow human operators to effortlessly invoke these newly acquired skills through direct instructions. We have conducted extensive testing of our system in both simulated and real-world environments to validate its effectiveness and versatility.
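The abstract describes tasks being stored as sequences of meta-tasks inside a hierarchical, complexity-organized skill library. The snippet below is a minimal illustrative sketch of such a structure, not the authors' implementation; all class and method names (MetaTask, Skill, SkillLibrary, lookup) are assumptions introduced here for clarity.

```python
"""Minimal sketch (assumed, not from the paper) of a hierarchical skill
library where each task is an ordered sequence of meta-tasks and skills
are grouped by a simple complexity measure."""

from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class MetaTask:
    """An atomic, reusable action primitive (e.g. "pick", "place")."""
    name: str
    description: str = ""


@dataclass
class Skill:
    """A learned task, represented as an ordered sequence of meta-tasks."""
    name: str
    meta_tasks: List[MetaTask]

    @property
    def complexity(self) -> int:
        # Sequence length used here as a stand-in for task complexity.
        return len(self.meta_tasks)


class SkillLibrary:
    """Stores skills in levels indexed by complexity, so that new
    instructions can be matched against previously acquired skills."""

    def __init__(self) -> None:
        self._levels: Dict[int, List[Skill]] = {}

    def add(self, skill: Skill) -> None:
        self._levels.setdefault(skill.complexity, []).append(skill)

    def lookup(self, name: str) -> Optional[Skill]:
        for skills in self._levels.values():
            for skill in skills:
                if skill.name == name:
                    return skill
        return None


if __name__ == "__main__":
    library = SkillLibrary()
    library.add(Skill("pick_and_place", [MetaTask("pick"), MetaTask("place")]))
    found = library.lookup("pick_and_place")
    print(found.name, "complexity:", found.complexity)
```

In this sketch the complexity level is derived from the length of the meta-task sequence; the paper's actual categorization criterion and the interface used by the scene understanding and instruction-invocation modules are not specified in the abstract.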
Keywords
Manipulation Planning, AI-Enabled Robotics, Intelligent and Flexible Manufacturing