
Affordance-Based Active Belief: Recognition Using Visual And Manual Actions

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016)

Abstract
This paper presents an active, model-based recognition system. It applies information-theoretic measures within a belief-driven planning framework to recognize objects from the history of visual and manual interactions and to select the most informative actions. A generalization of the aspect graph is used to construct forward models of objects that account for visual transitions. We use populations of these models to define the belief state of the recognition problem. This paper focuses on the impact of the belief-space and object-model representations on recognition efficiency and performance. A benchmarking system is introduced to execute controlled experiments in a challenging mobile manipulation domain. It offers a large population of objects that remain ambiguous from single-sensor geometry or from visual or manual actions alone. Results are presented for recognition performance on this dataset using locomotive, pushing, and lifting controllers as the basis for active information gathering on single objects. An information-theoretic approach that is greedy over the expected information gain is used to select informative actions, and its performance is compared to a sequence of random actions.
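The greedy action-selection step described in the abstract can be illustrated with a minimal sketch. It assumes a discrete belief over candidate object models and, for each action, an observation-likelihood matrix derived from forward models such as the paper's aspect-graph generalization; the function names and array shapes here are illustrative assumptions, not taken from the paper. The expected information gain of an action is the prior belief entropy minus the expected posterior entropy after observing the action's outcome:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a discrete belief distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_info_gain(belief, likelihood):
    """Expected reduction in belief entropy from executing one action.

    belief:     shape (M,)   -- P(model m), current belief over object models
    likelihood: shape (M, O) -- P(observation o | model m) under this action,
                                 as predicted by the forward models
    """
    h_prior = entropy(belief)
    # Predictive distribution over observations: P(o) = sum_m P(o|m) P(m)
    p_obs = belief @ likelihood
    eig = 0.0
    for o, p_o in enumerate(p_obs):
        if p_o == 0:
            continue
        # Posterior belief after observing o (Bayes' rule)
        posterior = belief * likelihood[:, o] / p_o
        eig += p_o * (h_prior - entropy(posterior))
    return eig

def greedy_action(belief, likelihoods):
    """Pick the action whose predicted observations maximize expected info gain."""
    gains = [expected_info_gain(belief, lik) for lik in likelihoods]
    return int(np.argmax(gains)), gains
```

For example, with belief = np.array([0.5, 0.3, 0.2]) and one (3, O) likelihood matrix per candidate action (push, lift, reposition), greedy_action returns the index of the action expected to shrink belief entropy the most, which can then be compared against a random-action baseline as in the paper's experiments.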