Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

CVPR 2019, pp. 12527-12537. arXiv:1812.04155.


Abstract:

We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task in which an agent with visual perception is guided via language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects a…
