Quality with Just Enough Diversity in Evolutionary Policy Search
arXiv (2024)
Abstract
Evolution Strategies (ES) are effective gradient-free optimization methods
that can be competitive with gradient-based approaches for policy search. ES
rely only on the total episodic scores of the solutions in their population,
from which they estimate fitness gradients for their updates, with no access
to true gradient information. However, this makes them sensitive to deceptive
fitness landscapes, and they tend to explore only one way to solve a problem.
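For concreteness, a standard score-based estimator of this kind (in the style of OpenAI-ES; shown here as an illustration, not necessarily the exact variant used in the paper) perturbs the policy parameters $\theta$ with Gaussian noise and averages the episodic scores $F$:

$$
\nabla_\theta\, \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\big[F(\theta + \sigma\epsilon)\big]
\;\approx\; \frac{1}{n\sigma}\sum_{i=1}^{n} F(\theta + \sigma\epsilon_i)\,\epsilon_i,
$$

so the update direction is built from scores alone, with no backpropagation through the policy or the environment.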
Quality-Diversity (QD) methods such as MAP-Elites introduce additional
information in the form of behavior descriptors (BD) and return a population
of diverse solutions, which helps exploration but means that a large part of
the evaluation budget is not spent on finding the best-performing solution.
Here we show that behavior information can also be leveraged to find the best
policy, by identifying promising search areas that can then be explored
efficiently with ES. We introduce the framework of Quality with Just Enough
Diversity (JEDi), which learns the relationship between behavior and fitness
to focus evaluations on solutions that matter. When the goal is to reach
higher fitness values, JEDi outperforms both QD and ES methods on hard
exploration tasks such as mazes and on complex control problems with large
policies.
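To make the idea concrete, the following is a minimal sketch of a JEDi-style loop under toy assumptions: the `evaluate` rollout, the Gaussian-process surrogate, the UCB-style goal selection, and the goal-weighted ES score are all illustrative stand-ins, not the paper's exact algorithm or API.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy setup: a "policy" is a parameter vector theta; a rollout returns a
# scalar fitness and a 2D behavior descriptor (BD). All names are illustrative.

def evaluate(theta):
    bd = np.tanh(theta[:2])                               # stand-in for, e.g., a final position
    fitness = -np.sum((bd - np.array([0.8, -0.3])) ** 2)  # unknown to the optimizer
    return fitness, bd

def es_step(theta, goal, w=1.0, sigma=0.1, lr=0.05, n=32, rng=None):
    """One vanilla ES update on a goal-weighted score (an assumed mixing rule)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n, theta.size))
    scores = []
    for e in eps:
        f, bd = evaluate(theta + sigma * e)
        scores.append(f - w * np.sum((bd - goal) ** 2))   # pull ES toward the goal BD
    scores = np.asarray(scores)
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)
    return theta + lr * (eps.T @ scores) / (n * sigma)    # score-based gradient estimate

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(8)
bds, fits = [], []

for outer in range(10):
    f, bd = evaluate(theta)
    bds.append(bd)
    fits.append(f)

    # Learn the behavior -> fitness relationship with a surrogate (a GP here).
    gp = GaussianProcessRegressor().fit(np.array(bds), np.array(fits))

    # Pick a promising behavior goal: predicted fitness plus an uncertainty bonus.
    cands = rng.uniform(-1.0, 1.0, size=(256, 2))
    mu, std = gp.predict(cands, return_std=True)
    goal = cands[np.argmax(mu + 0.5 * std)]               # simple UCB-style choice

    # Spend the ES evaluation budget aimed at that region of behavior space.
    for _ in range(20):
        theta = es_step(theta, goal, rng=rng)

print(f"best observed fitness: {max(fits):.4f}")
```

The point the sketch captures is the division of labor: the surrogate over behavior space decides where evaluations should go, while plain ES does the local optimization, so diversity is used only as much as needed to locate a promising region.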