Semi-Offline Reinforcement Learning for Optimized Text Generation

Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan

CoRR (2023)

Abstract
In reinforcement learning (RL), there are two major settings for interacting with the environment: online and offline. Online methods explore the environment at significant time cost, while offline methods obtain reward signals efficiently by sacrificing exploration capability. We propose semi-offline RL, a novel paradigm that smoothly transitions from the offline to the online setting, balances exploration capability against training cost, and provides a theoretical foundation for comparing different RL settings. Based on the semi-offline formulation, we present the RL setting that is optimal in terms of optimization cost, asymptotic error, and overfitting error bound. Extensive experiments show that our semi-offline approach is efficient and yields performance comparable to, and often better than, state-of-the-art methods.
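The interpolation between the offline and online settings described above can be sketched as a per-position mixing rule. This is only an illustrative sketch, not the paper's exact formulation: the function names and the Bernoulli mixing probability `p_online` are our assumptions.

```python
import random

def semi_offline_rollout(offline_tokens, sample_model_token, p_online):
    """Illustrative sketch: build a trajectory by mixing logged data
    with online samples.

    At each position, with probability p_online the token is sampled
    from the model (online exploration); otherwise it is copied from
    the static dataset (offline, no environment interaction).
    p_online = 0 recovers the purely offline setting,
    p_online = 1 the purely online one.
    """
    trajectory = []
    for offline_tok in offline_tokens:
        if random.random() < p_online:
            # Explore: let the model generate the next token,
            # conditioned on the partially built trajectory.
            trajectory.append(sample_model_token(trajectory))
        else:
            # Exploit logged data: reuse the offline token as-is.
            trajectory.append(offline_tok)
    return trajectory
```

Sweeping `p_online` between 0 and 1 trades off exploration capability against training cost, which is the dial the semi-offline formulation makes explicit.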
Keywords
text, generation, learning, semi-offline