Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention
CVPR 2019, pp. 12527-12537 (arXiv:1812.04155).
Abstract:
We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects […]