Learning Symbolic and Subsymbolic Temporal Task Constraints from Bimanual Human Demonstrations
arXiv (2024)
Abstract
Learning task models of bimanual manipulation from human demonstration and
their execution on a robot should take temporal constraints between actions
into account. This includes constraints on (i) the symbolic level such as
precedence relations or temporal overlap in the execution, and (ii) the
subsymbolic level such as the duration of different actions, or their starting
and end points in time. Such temporal constraints are crucial for temporal
planning, reasoning, and the exact timing for the execution of bimanual actions
on a bimanual robot. In our previous work, we addressed the learning of
temporal task constraints on the symbolic level and demonstrated how a robot
can leverage this knowledge to respond to failures during execution. In this
work, we propose a novel model-driven approach for the combined learning of
symbolic and subsymbolic temporal task constraints from multiple bimanual human
demonstrations. Our main contributions are a subsymbolic foundation of a
temporal task model that describes temporal nexuses of actions in the task
based on distributions of temporal differences between semantic action
keypoints, as well as a method based on fuzzy logic to derive symbolic temporal
task constraints from this representation. This complements our previous work
on learning comprehensive temporal task models by integrating symbolic and
subsymbolic information based on a subsymbolic foundation, while still
maintaining the symbolic expressiveness of our previous approach. We compare
our proposed approach with our previous purely symbolic approach and show that
it reproduces and even outperforms its results. Additionally, we show how the
subsymbolic temporal task constraints can synchronize otherwise unimanual
movement primitives for bimanual behavior on a humanoid robot.
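To illustrate the kind of pipeline the abstract describes, the following sketch shows how distributions of temporal differences between action keypoints could be turned into a symbolic precedence relation via a fuzzy membership function. This is a minimal, hypothetical illustration, not the paper's method: the action names, keypoint values, and the membership function are all invented for the example.

```python
import statistics

# Hypothetical demonstrations: start/end keypoints (in seconds) of two
# actions, as might be segmented from bimanual human demonstrations.
# All names and numbers here are invented for illustration.
demos = [
    {"pour_start": 1.0, "pour_end": 3.2, "hold_start": 0.5, "hold_end": 4.0},
    {"pour_start": 1.2, "pour_end": 3.0, "hold_start": 0.4, "hold_end": 3.8},
    {"pour_start": 0.9, "pour_end": 3.1, "hold_start": 0.6, "hold_end": 4.1},
]

def diff_distribution(key_a, key_b):
    """Subsymbolic level: distribution of temporal differences
    between two semantic action keypoints across demonstrations."""
    diffs = [d[key_a] - d[key_b] for d in demos]
    return statistics.mean(diffs), statistics.stdev(diffs)

def degree_after(mean_diff, width=0.5):
    """Fuzzy membership for the qualitative relation 'A after B':
    a simple linear ramp around a zero time difference (invented
    shape, standing in for a proper fuzzy-logic formulation)."""
    lo, hi = -width, width
    if mean_diff <= lo:
        return 0.0
    if mean_diff >= hi:
        return 1.0
    return (mean_diff - lo) / (hi - lo)

# Symbolic level: derive a precedence relation from the distribution.
mean_d, std_d = diff_distribution("pour_start", "hold_start")
mu = degree_after(mean_d)
relation = "pour starts after hold" if mu > 0.5 else "no clear precedence"
print(f"mean start difference: {mean_d:.2f}s, membership: {mu:.2f} -> {relation}")
```

The same difference distributions could also drive the subsymbolic side directly, e.g., by timing the start of one movement primitive relative to the other's keypoint when executing on the robot.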