Language and Sketching: An LLM-driven Interactive Multimodal Multitask Robot Navigation Framework
arXiv (2023)
Abstract
Socially aware navigation systems have evolved to adeptly avoid various
obstacles while performing multiple tasks, such as point-to-point navigation,
human following, and human guiding. However, a prominent gap persists: in
Human-Robot Interaction (HRI), communicating commands to robots still demands
intricate mathematical formulations, and the transition between tasks lacks
the intuitive control and user-centric interactivity one would desire. In this
work, we propose an LLM-driven interactive multimodal multitask robot
navigation framework, termed LIM2N, to address this new challenge in the
navigation field. We achieve this by first introducing a multimodal
interaction framework in which language and hand-drawn inputs serve as
navigation constraints and control objectives. Next, a reinforcement learning
agent is built to handle multiple tasks with the received information.
Crucially, LIM2N creates smooth cooperation among the reasoning over
multimodal input, multitask planning, and the adaptation and processing of the
intelligent sensing modules in this complicated system. Extensive experiments
in both simulation and the real world demonstrate that LIM2N better
understands user needs and provides an enhanced interactive experience.
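
The abstract gives no implementation details, but the pipeline it describes, where an LLM turns a language command and a hand-drawn sketch into constraints for a multitask navigation policy, can be outlined concretely. The Python below is a minimal hypothetical sketch, not the paper's code: the NavigationConstraints schema, parse_language, and parse_sketch are assumed stand-ins for the framework's LLM and sketch-understanding modules.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple


class Task(Enum):
    # The three tasks named in the abstract.
    POINT_TO_POINT = "point_to_point"
    HUMAN_FOLLOWING = "human_following"
    HUMAN_GUIDING = "human_guiding"


@dataclass
class NavigationConstraints:
    # Assumed schema: what the multimodal front end hands to the RL agent.
    task: Task
    goal: Optional[Tuple[float, float]] = None      # (x, y) target, if any
    keep_out_zones: List[list] = field(default_factory=list)  # polygons from the sketch


def parse_language(command: str) -> Task:
    """Toy stand-in for the LLM: map a free-form command to a task type."""
    text = command.lower()
    if "follow" in text:
        return Task.HUMAN_FOLLOWING
    if "guide" in text or "lead" in text:
        return Task.HUMAN_GUIDING
    return Task.POINT_TO_POINT


def parse_sketch(strokes: List[list]) -> List[list]:
    """Toy stand-in for sketch understanding: treat closed strokes as keep-out zones."""
    return [s for s in strokes if len(s) >= 3 and s[0] == s[-1]]


def build_constraints(command: str, strokes: List[list],
                      goal: Optional[Tuple[float, float]] = None) -> NavigationConstraints:
    # Fuse the two modalities into one constraint object for the policy.
    return NavigationConstraints(
        task=parse_language(command),
        goal=goal,
        keep_out_zones=parse_sketch(strokes),
    )


if __name__ == "__main__":
    strokes = [[(0, 0), (1, 0), (1, 1), (0, 0)]]    # one closed region drawn by the user
    c = build_constraints("Please follow me, but avoid the marked area.", strokes)
    print(c.task, len(c.keep_out_zones))            # Task.HUMAN_FOLLOWING 1

In a full system, the keyword matching above would be replaced by an actual LLM call and the RL agent would consume the resulting constraints at every control step; the point of the sketch is only the separation between multimodal parsing and the downstream multitask policy.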