MetaCOG: Learning a Metacognition to Recover What Objects Are Actually There

arXiv (Cornell University)(2021)

Abstract
Humans not only form representations about the world based on what we see, but also learn meta-cognitive representations about how our own vision works. This enables us to recognize when our vision is unreliable (e.g., when we realize that we are experiencing a visual illusion) and enables us to question what we see. Inspired by this human capacity, we present MetaCOG: a model that increases the robustness of object detectors by learning representations of their reliability, and does so without feedback. Specifically, MetaCOG is a hierarchical probabilistic model that expresses a joint distribution over the objects in a 3D scene and the outputs produced by a detector. When paired with an off-the-shelf object detector, MetaCOG takes detections as input and infers the detector's tendencies to miss objects of certain categories and to hallucinate objects that are not actually present, all without access to ground-truth object labels. When paired with three modern neural object detectors, MetaCOG learns useful and accurate meta-cognitive representations, resulting in improved performance on the detection task. Additionally, we show that MetaCOG is robust to varying levels of error in the detections. Our results are a proof-of-concept for a novel approach to the problem of correcting a faulty vision system's errors. The model code, datasets, results, and demos are available: https://osf.io/8b9qt/?view_only=8c1b1c412c6b4e1697e3c7859be2fce6
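The abstract's core idea—inferring which detections reflect real objects, and estimating a detector's miss and hallucination tendencies, without ground-truth labels—can be illustrated with a hypothetical toy sketch. This is a simple majority-vote heuristic over multiple views of a static scene, far simpler than the paper's hierarchical probabilistic model; all names here are illustrative, not from the paper's code.

```python
from collections import Counter

def infer_scene_objects(detections_per_view, presence_threshold=0.5):
    """Toy heuristic (not MetaCOG itself): a category detected in a
    majority of views of a static scene is judged actually present;
    rarely-detected categories are treated as likely hallucinations.

    detections_per_view: list of per-view category lists.
    Returns (inferred_present, per-category detection rate)."""
    n_views = len(detections_per_view)
    # Count each category once per view it appears in.
    counts = Counter(cat for view in detections_per_view for cat in set(view))
    rates = {cat: c / n_views for cat, c in counts.items()}
    present = {cat for cat, r in rates.items() if r >= presence_threshold}
    return present, rates

views = [["chair", "table"],
         ["chair", "table", "dog"],   # "dog" appears in only one view
         ["chair", "table"],
         ["chair"]]                   # "table" missed in this view
present, rates = infer_scene_objects(views)
# "chair" and "table" are inferred present; the one-off "dog" is
# flagged as a likely hallucination (detection rate 0.25).
```

The actual model replaces this hard vote with joint Bayesian inference over 3D scene contents and per-category miss/hallucination rates, but the sketch shows why cross-view consistency carries signal even without labels.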
Keywords
metacognition, objects, learning, recover