Inverse Reinforcement Learning with Sub-optimal Experts
CoRR (2024)
Abstract
Inverse Reinforcement Learning (IRL) techniques deal with the problem of
deducing a reward function that explains the behavior of an expert agent who is
assumed to act optimally in an underlying unknown task. In several problems of
interest, however, it is possible to observe the behavior of multiple experts
with different degrees of optimality (e.g., racing drivers whose skills range
from amateur to professional). For this reason, in this work, we extend the
IRL formulation to problems where, in addition to demonstrations from the
optimal agent, we can observe the behavior of multiple sub-optimal experts.
Given this problem, we first study the theoretical properties of the class of
reward functions that are compatible with a given set of experts, i.e., the
feasible reward set. Our results show that the presence of multiple sub-optimal
experts can significantly shrink the set of compatible rewards. Furthermore, we
study the statistical complexity of estimating the feasible reward set with a
generative model. To this end, we analyze a uniform sampling algorithm that
turns out to be minimax optimal whenever the sub-optimal experts' performance
levels are sufficiently close to that of the optimal agent.
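The core intuition, that adding sub-optimal experts shrinks the feasible reward set, can be illustrated numerically on a toy problem. The sketch below is a hypothetical setup, not the paper's construction: a single-state MDP (bandit) with four actions, where a reward vector is compatible with the optimal expert (playing action 0) iff action 0 achieves the maximum reward, and compatible with a sub-optimal expert (playing action 1, with suboptimality gap at most `xi`) iff the expert's action is within `xi` of the maximum. Monte Carlo sampling over uniform reward vectors then estimates the relative volume of each feasible set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: single-state MDP (bandit) with 4 actions,
# reward vectors drawn uniformly from [0, 1]^4.
n_actions, xi = 4, 0.1
samples = rng.uniform(size=(100_000, n_actions))

# Compatible with the optimal expert (plays action 0):
#   r[0] must be the maximum entry of r.
opt_ok = samples[:, 0] >= samples.max(axis=1) - 1e-12

# Compatible with a sub-optimal expert (plays action 1, gap at most xi):
#   max(r) - r[1] <= xi.
sub_ok = samples.max(axis=1) - samples[:, 1] <= xi

vol_opt_only = opt_ok.mean()          # relative volume, one expert
vol_both = (opt_ok & sub_ok).mean()   # relative volume, both experts

print(f"optimal expert only:      {vol_opt_only:.3f}")
print(f"plus sub-optimal expert:  {vol_both:.3f}")
```

By symmetry, the optimal expert alone leaves roughly a quarter of the reward space feasible, while the extra constraint from the sub-optimal expert cuts this down substantially, mirroring the shrinkage result stated in the abstract.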