Visual Perception Generalization for Vision-and-Language Navigation via Meta-Learning

IEEE Transactions on Neural Networks and Learning Systems (2023)

Cited by 11 | Viewed 47
Abstract
Vision-and-language navigation (VLN) is a challenging task that requires an agent to navigate real-world environments by understanding natural-language instructions together with visual information received in real time. Prior works have implemented VLN tasks in continuous environments or on physical robots, all of which use a fixed camera configuration imposed by the limitations of existing datasets, such as a 1.5-m mounting height and a 90° horizontal field of view (HFOV). However, real-life robots built for different purposes carry diverse camera configurations, and the resulting gap in visual information makes it difficult to transfer learned navigation skills directly between robots. In this brief, we propose a visual perception generalization strategy based on meta-learning, which enables the agent to adapt quickly to a new camera configuration. In the training phase, we first localize the generalization problem to the visual perception module and then compare two meta-learning algorithms for better generalization in seen and unseen environments. One uses the model-agnostic meta-learning (MAML) algorithm, which requires few-shot adaptation; the other is a metric-based meta-learning method with a feature-wise affine transformation (AT) layer. Experimental results on the VLN-CE dataset demonstrate that our strategy successfully adapts learned navigation skills to new camera configurations, and that the two algorithms show complementary advantages in seen and unseen environments, respectively.
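The two adaptation mechanisms named in the abstract can be illustrated concretely. The sketch below is a minimal, hypothetical illustration, not the authors' released code: it assumes a small PyTorch convolutional visual encoder, inserts a feature-wise affine transformation (AT) layer that rescales and shifts each feature channel, and shows one MAML-style inner-loop step that adapts the encoder's parameters on a few support examples from a new camera configuration. All names here (`VisualEncoder`, `maml_inner_step`, `lr_inner`, the loss choice) are assumptions for illustration only.

```python
# Hedged sketch of the two adaptation ideas from the abstract; not the
# paper's actual implementation. Requires PyTorch >= 2.0 for torch.func.
import torch
import torch.nn as nn
from torch.func import functional_call


class AffineTransform(nn.Module):
    """Feature-wise affine transformation (AT): per-channel scale and
    shift of a convolutional feature map, gamma * x + beta."""

    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gamma * x + self.beta


class VisualEncoder(nn.Module):
    """Toy CNN encoder with an AT layer after each conv block
    (stand-in for the agent's visual perception module)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, stride=2, padding=1)
        self.at1 = AffineTransform(32)
        self.conv2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.at2 = AffineTransform(64)
        self.head = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.at1(self.conv1(x)))
        x = torch.relu(self.at2(self.conv2(x)))
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)


def maml_inner_step(model, loss_fn, support_x, support_y, lr_inner=0.01):
    """One MAML inner-loop step: returns adapted parameters computed on
    the few-shot support set, keeping the graph (create_graph=True) so a
    second-order outer update could backpropagate through it."""
    params = dict(model.named_parameters())
    preds = functional_call(model, params, (support_x,))
    loss = loss_fn(preds, support_y)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {name: p - lr_inner * g
            for (name, p), g in zip(params.items(), grads)}


if __name__ == "__main__":
    model = VisualEncoder()
    # Fake few-shot support batch from a new camera configuration;
    # MSE against target features is a stand-in adaptation loss.
    support_x = torch.randn(4, 3, 64, 64)
    support_y = torch.randn(4, 128)
    adapted = maml_inner_step(model, nn.MSELoss(), support_x, support_y)
    # Run the adapted encoder on a query observation.
    query_feats = functional_call(model, adapted, (torch.randn(1, 3, 64, 64),))
    print(query_feats.shape)  # torch.Size([1, 128])
```

Note the division of labor this sketch suggests: the MAML path rewrites all encoder parameters from a few examples, while the metric-based path could instead adapt only the lightweight `gamma`/`beta` parameters of the AT layers, leaving the backbone fixed.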
Keywords
Navigation, Visual perception, Task analysis, Visualization, Cameras, Robot vision systems, Adaptation models, Embodied agent, meta-learning, vision-and-language navigation (VLN), visual perception generalization