DriveLM: Driving with Graph Visual Question Answering
CoRR (2023)
Abstract
We study how vision-language models (VLMs) trained on web-scale data can be
integrated into end-to-end driving systems to boost generalization and enable
interactivity with human users. While recent approaches adapt VLMs to driving
via single-round visual question answering (VQA), human drivers reason about
decisions in multiple steps. Starting from the localization of key objects,
humans estimate object interactions before acting. Our key insight is that the
proposed Graph VQA task, which models graph-structured reasoning through
perception, prediction, and planning question-answer pairs, provides a
suitable proxy for mimicking this human reasoning process. We
instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose
a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA
and end-to-end driving. The experiments demonstrate that Graph VQA provides a
simple, principled framework for reasoning about a driving scene, and
DriveLM-Data provides a challenging benchmark for this task. Our DriveLM-Agent
baseline performs end-to-end autonomous driving competitively in comparison to
state-of-the-art driving-specific architectures. Notably, its benefits are
pronounced when it is evaluated zero-shot on unseen objects or sensor
configurations. We hope this work can serve as a starting point and shed new
light on how to apply VLMs to autonomous driving. To facilitate future
research, all code, data, and models are publicly available.
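
To make the Graph VQA formulation concrete, below is a minimal sketch of how a scene's question-answer pairs might be organized as a directed graph whose nodes span the perception, prediction, and planning stages, with edges encoding which answers a later question depends on. All names here (`QANode`, `QAGraph`, the example questions) are illustrative assumptions, not the actual DriveLM-Data schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: node and edge names are assumptions,
# not the DriveLM-Data format.

@dataclass
class QANode:
    """One question-answer pair tied to a reasoning stage."""
    node_id: str
    stage: str          # "perception" | "prediction" | "planning"
    question: str
    answer: str
    parents: list = field(default_factory=list)  # ids of prerequisite QAs

@dataclass
class QAGraph:
    """Graph-structured QA for a single driving scene."""
    nodes: dict = field(default_factory=dict)

    def add(self, node: QANode) -> None:
        self.nodes[node.node_id] = node

    def topological_order(self):
        """Answer parents before children, mirroring the
        perception -> prediction -> planning reasoning chain
        (assumes the QA graph is acyclic)."""
        visited, order = set(), []

        def visit(nid: str) -> None:
            if nid in visited:
                return
            visited.add(nid)
            for parent in self.nodes[nid].parents:
                visit(parent)
            order.append(nid)

        for nid in self.nodes:
            visit(nid)
        return [self.nodes[n] for n in order]

# Toy scene: a key object is localized (perception), its motion is
# estimated (prediction), and the ego plan conditions on both answers.
g = QAGraph()
g.add(QANode("q1", "perception", "What objects are ahead?",
             "A pedestrian at the crosswalk."))
g.add(QANode("q2", "prediction", "Will the pedestrian cross?",
             "Yes, they are stepping off the curb.", parents=["q1"]))
g.add(QANode("q3", "planning", "What should the ego vehicle do?",
             "Slow down and yield.", parents=["q1", "q2"]))

for node in g.topological_order():
    print(f"[{node.stage}] {node.question} -> {node.answer}")
```

Traversing the graph in topological order yields the multi-step chain the abstract describes: object localization first, interaction estimation next, and the driving decision last.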