Flexible online adaptation of learning strategy using EEG-based reinforcement signals in real-world robotic applications.

ICRA (2020)

Cited by 10
Abstract
Flexible adaptation of the learning strategy depending on online changes in the user's current intents is highly relevant in human-robot collaboration. In our previous study, we proposed an intrinsic interactive reinforcement learning approach for human-robot interaction, in which a robot learns its action strategy from intrinsic human feedback that is generated in the human's brain as a neural signature of the human's implicit evaluation of the robot's actions. Our approach has an inherent property that allows robots to adapt their behavior to online changes in the human's current intents. Such flexible adaptation is possible because robot learning is updated in real time by the human's online feedback. In this paper, the adaptivity of robot learning is tested on eight subjects who change their current control strategy by adding a new gesture to the previously used gestures. The learning progress is evaluated by analyzing the learning phases before and after the new control gesture is added. The results show that the robot can adapt the previously learned policy to online changes in the user's intents. In particular, the learning progress is interrelated with the classification performance of the electroencephalograms (EEGs) that are used to measure the human's implicit evaluation of the robot's actions.
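The abstract describes an interactive reinforcement learning loop in which the scalar reward is not given explicitly but decoded from the user's EEG as an implicit evaluation of the robot's last action. The sketch below is only a minimal illustration of that idea under simplified assumptions, not the authors' implementation: the gesture-to-action task, the bandit-style Q update, and the stand-in decode_eeg_feedback function (simulating a noisy ErrP-like classifier) are all hypothetical choices made for this example.

```python
# Minimal sketch (not the authors' implementation) of interactive Q-learning
# in which the reward is derived from an EEG-based classifier that detects
# the human's implicit evaluation of the robot's last action. All names,
# parameters, and the task layout are illustrative assumptions.

import numpy as np

GESTURES = ["wave", "point", "stop"]        # user gestures observed by the robot (states)
ACTIONS = ["approach", "retreat", "grasp"]  # robot actions to be learned
N_S, N_A = len(GESTURES), len(ACTIONS)

rng = np.random.default_rng(0)
Q = np.zeros((N_S, N_A))      # action-value table, updated online
alpha, epsilon = 0.3, 0.1     # learning rate and exploration rate


def decode_eeg_feedback(state: int, action: int) -> float:
    """Stand-in for the EEG classifier: returns +1 if the (simulated) human
    implicitly approves of the action, -1 otherwise. In a real system this
    would be the output of an error-potential classifier on one EEG epoch."""
    desired = state % N_A                  # simulated hidden user intent
    correct = action == desired
    if rng.random() < 0.2:                 # imperfect decoding: 20% label flips
        correct = not correct
    return 1.0 if correct else -1.0


def select_action(state: int) -> int:
    """Epsilon-greedy action selection over the current Q estimates."""
    if rng.random() < epsilon:
        return int(rng.integers(N_A))
    return int(np.argmax(Q[state]))


# Online learning loop: each trial the user shows a gesture, the robot acts,
# and the EEG-derived reward immediately updates the policy. Adding a new
# gesture later only extends the state space; the same update rule then
# adapts the previously learned policy to the changed intent.
for trial in range(300):
    state = int(rng.integers(N_S))         # observed gesture
    action = select_action(state)
    reward = decode_eeg_feedback(state, action)
    # Bandit-style update (single-step task, no successor state).
    Q[state, action] += alpha * (reward - Q[state, action])

print("Learned greedy policy (gesture -> action):")
for s, g in enumerate(GESTURES):
    print(f"  {g:>6} -> {ACTIONS[int(np.argmax(Q[s]))]}")
```

Because the reward arrives trial by trial, the same update continues to run after the user changes strategy, which is the property the paper evaluates across its two learning phases.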
Keywords
learned policy, learning phases, current control strategy, robot learning, intrinsic human feedback, human-robot interaction, intrinsic interactive reinforcement learning approach, human-robot collaboration, flexible adaptation, real-world robotic applications, reinforcement signals, flexible online adaptation, learning progress