Visual grasp affordances from appearance-based cues

ICCV Workshops (2011)

Cited by 26
Abstract
In this paper, we investigate the prediction of visual grasp affordances from 2-D measurements. Appearance-based estimation of grasp affordances is desirable when 3-D scans are unreliable due to clutter or material properties. We develop a general framework for estimating grasp affordances from 2-D sources, including local texture-like measures as well as object-category measures that capture previously learned grasp strategies. Local approaches to estimating grasp positions have been shown to be effective in real-world scenarios, but are unable to impart object-level biases and can be prone to false positives. We describe how global cues can be used to compute continuous pose estimates and corresponding grasp point locations, using a max-margin optimization for category-level continuous pose regression. We provide a novel dataset to evaluate visual grasp affordance estimation; on this dataset we show that a fused method outperforms either local or global methods alone, and that continuous pose estimation improves over discrete output models.
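The fusion idea in the abstract — letting a global, object-level cue bias a local texture-based grasp score map and suppress its false positives — can be illustrated with a minimal sketch. This is not the paper's actual method; `fuse_grasp_scores`, the linear mixing weight `alpha`, and the toy score maps are all assumptions for illustration:

```python
import numpy as np

def fuse_grasp_scores(local_scores, global_prior, alpha=0.5):
    """Hypothetical fusion of a local (texture-based) grasp score map
    with a global, object-level prior over grasp locations.

    Both maps are normalized to sum to 1, then mixed linearly;
    the convex weight `alpha` is an assumption, not from the paper.
    """
    local = local_scores / (local_scores.sum() + 1e-9)
    prior = global_prior / (global_prior.sum() + 1e-9)
    return alpha * local + (1.0 - alpha) * prior

# Toy example: the local detector fires strongly on a spurious
# texture patch at (3, 3); the object-level prior concentrates
# mass on the true grasp region at (1, 1).
local = np.zeros((4, 4))
local[1, 1] = 0.6   # true grasp point, moderate local score
local[3, 3] = 0.9   # false positive from local texture cue
prior = np.zeros((4, 4))
prior[1, 1] = 1.0   # global cue favors the true location

fused = fuse_grasp_scores(local, prior)
best = np.unravel_index(np.argmax(fused), fused.shape)
```

In this toy setting the local map alone would select the false positive at (3, 3), while the fused map recovers (1, 1) — the object-level bias the abstract argues local methods lack.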
Keywords
visual grasp affordance estimation, continuous pose estimation, object-category measure, texture-like measure, grasp position estimation, appearance-based estimation, pose estimation, appearance-based cue, image texture, 2-D measurement, computer model, material properties, pipelines, estimation, detectors, point location, three dimensional, computational modeling, false positive