
Task-prompt Generalized World Model in Multi-environment Offline Reinforcement Learning

ECAI 2023 (2023)

Abstract
Offline reinforcement learning (RL) circumvents costly interactions with the environment by utilising historical trajectories. Incorporating a world model into this method can substantially enhance transfer performance across tasks without expensive training from scratch. However, due to the complexity arising from different types of generalisation, previous works have focused almost exclusively on single-environment tasks. In this study, we introduce a multi-environment offline RL setting to investigate whether a generalised world model can be learned from large, diverse datasets and serve as a good surrogate for policy learning in different tasks. Inspired by the success of multi-task prompt methods, we propose the Task-prompt Generalised World Model (TGW) framework, which demonstrates notable performance in this setting. TGW comprises three modules: a task-state prompter, a generalised dynamics module, and a reward module. We implement the generalised dynamics module as a transformer-based recurrent state-space model (TransRSSM) and employ prompts to provide task-specific instructions, enabling TGW to address the internal stochasticity of the generalised world model. On the MuJoCo control benchmarks, TGW significantly outperforms previous offline RL algorithms in the multi-environment setting.
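The abstract describes a prompt-conditioned recurrent state-space model but gives no implementation details. Below is a minimal, hypothetical sketch of the core idea as described: a single latent-dynamics step that attends over a task prompt token, the previous deterministic and stochastic states, and the action, then predicts the next deterministic state and a Gaussian prior over the next stochastic latent. All names, dimensions, and the single-head attention layer are assumptions for illustration, not the authors' actual TransRSSM.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # token/latent dimension (assumed for the sketch)

def linear(d_in, d_out):
    # Random weight matrix standing in for a trained linear layer.
    return rng.normal(0.0, 0.1, (d_in, d_out))

class PromptRSSMStep:
    """Toy prompt-conditioned latent dynamics step (hypothetical).

    Sketches the idea of a transformer-based RSSM prior: the task prompt
    is injected as an extra token so the same dynamics model can be
    steered toward task-specific transition predictions.
    """
    def __init__(self, d=D):
        self.Wq, self.Wk, self.Wv = linear(d, d), linear(d, d), linear(d, d)
        self.W_det = linear(d, d)        # maps context -> next deterministic state
        self.W_prior = linear(d, 2 * d)  # maps context -> (mean, log-std) of prior z

    def attend(self, tokens):
        # Single-head scaled dot-product self-attention over the token set.
        q, k, v = tokens @ self.Wq, tokens @ self.Wk, tokens @ self.Wv
        scores = q @ k.T / np.sqrt(k.shape[-1])
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)
        return w @ v

    def step(self, prompt, h, z, a):
        # Token sequence: [task prompt, deterministic state, stochastic state, action].
        tokens = np.stack([prompt, h, z, a])
        ctx = self.attend(tokens).mean(axis=0)  # pooled context vector
        h_next = np.tanh(ctx @ self.W_det)
        mean, log_std = np.split(ctx @ self.W_prior, 2)
        z_next = mean + np.exp(log_std) * rng.normal(size=mean.shape)
        return h_next, z_next

model = PromptRSSMStep()
prompt = rng.normal(size=D)  # task-specific prompt embedding (assumed learned)
h, z, a = np.zeros(D), np.zeros(D), rng.normal(size=D)
h, z = model.step(prompt, h, z, a)
```

In a full world model, a step like this would be unrolled over offline trajectories, with a posterior over `z` trained against observations and a separate reward head, as the abstract's three-module decomposition suggests.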