Active Learning of Reward Dynamics from Hierarchical Queries

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019

Cited by 34 | Views 42
Abstract
Enabling robots to act according to human preferences across diverse environments is a crucial task, extensively studied by both roboticists and machine learning researchers. To achieve it, human preferences are often encoded by a reward function that the robot optimizes. This reward function is generally static in the sense that it does not vary with time or with the interactions. Unfortunately, such static reward functions do not always adequately capture human preferences, especially in non-stationary environments: human preferences change in response to the emergent behaviors of the other agents in the environment. In this work, we propose learning reward dynamics that can adapt in non-stationary environments with several interacting agents. We define reward dynamics as a tuple of reward functions, one for each mode of interaction, together with mode-utility functions governing transitions between the modes. Reward dynamics thereby encode not only different human preferences but also how those preferences change. Our contribution is in the way we adapt preference-based learning into a hierarchical approach that aims at learning not only reward functions but also how they evolve based on interactions. We derive a probabilistic observation model of how people respond to the hierarchical queries. Our algorithm leverages this model to actively select hierarchical queries that maximize the volume removed from a continuous hypothesis space of reward dynamics. We empirically demonstrate that the learned reward dynamics can match human preferences accurately.
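The query-selection idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions, the linear per-mode reward model, the logistic response model, and all names (`response_prob`, `min_volume_removed`, `select_query`) are illustrative assumptions. The continuous hypothesis space of reward dynamics is approximated by a finite sample set, and a query is chosen to maximize the worst-case (over the two possible answers) volume removed from that set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): 2 interaction
# modes, 3 trajectory features per mode, 200 sampled hypotheses.
N_MODES, N_FEATS, N_SAMPLES = 2, 3, 200

# Each hypothesis stacks one reward-weight vector per mode; the paper's
# continuous hypothesis space is approximated by unit-norm samples.
hypotheses = rng.normal(size=(N_SAMPLES, N_MODES * N_FEATS))
hypotheses /= np.linalg.norm(hypotheses, axis=1, keepdims=True)

def response_prob(w, phi_a, phi_b, mode):
    """Probabilistic observation model (assumed logistic): probability the
    human prefers option A over B given the active mode's reward weights."""
    w_m = w[mode * N_FEATS:(mode + 1) * N_FEATS]
    return 1.0 / (1.0 + np.exp(-(w_m @ (phi_a - phi_b))))

def min_volume_removed(query, hyps):
    """Worst-case hypothesis volume a query removes, over both possible
    answers, approximated on the sample set."""
    phi_a, phi_b, mode = query
    p = np.array([response_prob(w, phi_a, phi_b, mode) for w in hyps])
    # Answer "A" down-weights hypotheses by (1 - p); answer "B" by p.
    return min(np.sum(1.0 - p), np.sum(p))

def select_query(candidates, hyps):
    """Active selection: ask the query whose worst-case answer still
    removes the most hypothesis volume."""
    return max(candidates, key=lambda q: min_volume_removed(q, hyps))

# Example: pick the most informative of 20 random pairwise queries.
candidates = [(rng.normal(size=N_FEATS), rng.normal(size=N_FEATS),
               int(rng.integers(N_MODES))) for _ in range(20)]
best = select_query(candidates, hypotheses)
```

In a full loop, the human's answer would reweight the hypothesis samples by the corresponding response probabilities before the next query is selected.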
Keywords
static reward functions, non-stationary environments, reward dynamics, mode-utility functions, human preferences, hierarchical queries, continuous hypothesis space, active learning, robots, probabilistic observation model