A Dual Semantic-Aware Recurrent Global-Adaptive Network For Vision-and-Language Navigation
arXiv (2023)
Abstract
Vision-and-Language Navigation (VLN) is a realistic but challenging task that
requires an agent to locate the target region using verbal and visual cues.
While significant advancements have been achieved recently, there are still two
broad limitations: (1) The explicit information mining for significant guiding
semantics concealed in both vision and language is still under-explored; (2)
Previous structured-map methods average the historical appearance of
visited nodes, ignoring both the distinctive contribution of each image
and the retention of salient information during reasoning. This work proposes a
dual semantic-aware recurrent global-adaptive network (DSRG) to address the
above problems. First, DSRG introduces an instruction-guidance linguistic module
(IGL) and an appearance-semantics visual module (ASV) to boost linguistic and
visual semantic learning, respectively. For the memory mechanism, a global
adaptive aggregation module (GAA) is devised for explicit panoramic observation
fusion, and a recurrent memory fusion module (RMF) is introduced to supply
implicit temporal hidden states. Extensive experimental results on the R2R and
REVERIE datasets demonstrate that our method achieves better performance than
existing methods. Code is available at https://github.com/CrystalSixone/DSRG.