Toward Grounded Commonsense Reasoning
CoRR (2023)
Abstract
Consider a robot tasked with tidying a desk with a meticulously constructed
Lego sports car. A human may recognize that it is not appropriate to
disassemble the sports car and put it away as part of the "tidying." How can a
robot reach that conclusion? Although large language models (LLMs) have
recently been used to enable commonsense reasoning, grounding this reasoning in
the real world has been challenging. To reason in the real world, robots must
go beyond passively querying LLMs and actively gather information from the
environment that is required to make the right decision. For instance, after
detecting that there is an occluded car, the robot may need to actively
perceive the car to know whether it is an advanced model car made out of Legos
or a toy car built by a toddler. We propose an approach that leverages an LLM
and vision language model (VLM) to help a robot actively perceive its
environment to perform grounded commonsense reasoning. To evaluate our
framework at scale, we release the MessySurfaces dataset which contains images
of 70 real-world surfaces that need to be cleaned. We additionally illustrate
our approach with a robot on 2 carefully designed surfaces. We find an average
12.9% improvement on the robot experiments over baselines that do not use
active perception. The
dataset, code, and videos of our approach can be found at
https://minaek.github.io/grounded_commonsense_reasoning.
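The active-perception loop the abstract describes — an LLM proposing what to check, a VLM answering from newly gathered views, and the answers grounding the final decision — can be sketched roughly as follows. This is a minimal illustration with stubbed LLM/VLM calls; every function name here is hypothetical and not the paper's actual API or prompts.

```python
# Hypothetical sketch of grounded commonsense reasoning: an LLM stub proposes
# a clarifying question about an uncertain object, a VLM stub answers it from
# an actively gathered view, and the answer grounds the cleanup decision.

def llm_propose_question(obj, facts):
    """Stub LLM: ask about any object we have no facts for yet."""
    if obj not in facts:
        return f"What is the {obj} made of?"
    return None  # enough information gathered


def vlm_answer(question, view):
    """Stub VLM: 'answers' by reading a pre-baked view annotation."""
    return view[question]


def decide_action(obj, facts):
    """Stub decision rule: leave carefully built items alone."""
    if "Lego" in facts.get(obj, ""):
        return "leave in place"
    return "put away"


def grounded_reasoning(obj, views):
    """Actively gather views until the LLM stub has no more questions."""
    facts = {}
    for view in views:
        question = llm_propose_question(obj, facts)
        if question is None:
            break
        facts[obj] = vlm_answer(question, view)
    return decide_action(obj, facts)


# A closer look reveals the car is a meticulously built Lego model,
# so "tidying" should not disassemble it.
views = [{"What is the car made of?": "an advanced Lego model"}]
print(grounded_reasoning("car", views))  # -> leave in place
```

The point of the sketch is the control flow, not the stubs: the robot queries the LLM, acts to gather the specific observation the LLM asked for, and only then commits to an action.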
Keywords
reasoning