Online Hyper-parameter Tuning in Off-policy Learning via Evolutionary Strategies
arXiv (2020)
Abstract
Off-policy learning algorithms are known to be sensitive to the choice of hyper-parameters. However, unlike near-on-policy algorithms, whose hyper-parameters can be optimized via, e.g., meta-gradients, similar techniques cannot be straightforwardly applied to off-policy learning. In this work, we propose a framework that applies Evolutionary Strategies to online hyper-parameter tuning in off-policy learning. Our formulation draws close connections to meta-gradients and leverages the strengths of black-box optimization over relatively low-dimensional search spaces. We show that our method outperforms state-of-the-art off-policy learning baselines with static hyper-parameters, as well as recent prior work, across a wide range of continuous control benchmarks.
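As a rough illustration of the black-box approach the abstract describes (not the paper's actual algorithm), a minimal Evolutionary Strategies update on a low-dimensional hyper-parameter vector might look like the sketch below. The `toy_fitness` function is a hypothetical stand-in for agent performance under a given hyper-parameter setting; in the paper's setting, fitness would instead come from evaluating the off-policy learner.

```python
import random

def es_step(theta, fitness, sigma=0.1, lr=0.05, population=32, rng=None):
    """One Evolutionary Strategies update on a hyper-parameter vector.

    Samples Gaussian perturbations of `theta`, scores each with `fitness`
    (here a toy function; in practice, e.g., recent agent return), and
    moves `theta` along the score-weighted average of the perturbations.
    A baseline (the unperturbed score) is subtracted to reduce variance.
    """
    rng = rng or random.Random(0)
    base = fitness(theta)
    grad = [0.0] * len(theta)
    for _ in range(population):
        eps = [rng.gauss(0.0, 1.0) for _ in theta]
        score = fitness([t + sigma * e for t, e in zip(theta, eps)]) - base
        for i, e in enumerate(eps):
            grad[i] += score * e / (population * sigma)
    return [t + lr * g for t, g in zip(theta, grad)]

# Hypothetical stand-in for agent performance, peaking at (0.5, 0.9).
def toy_fitness(hp):
    return -((hp[0] - 0.5) ** 2 + (hp[1] - 0.9) ** 2)

rng = random.Random(0)
hp = [0.1, 0.1]
for _ in range(200):
    hp = es_step(hp, toy_fitness, rng=rng)
```

Because ES only needs fitness evaluations, not gradients through the learning process, it sidesteps the difficulty of differentiating through off-policy updates, which is the strength the abstract alludes to.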
Keywords
evolutionary strategies, learning, hyper-parameter, off-policy