Searching Scenes by Abstracting Things.

arXiv: Computer Vision and Pattern Recognition(2016)

Abstract
In this paper we propose to represent a scene as an abstraction of 'things'. We start from 'things' as generated by modern object proposals, and we investigate their immediately observable properties, and those only: position, size, aspect ratio, and color. Where the recent successes and excitement of the field lie in object identification, we represent the scene composition independent of object identities. We make three contributions in this work. First, we study the simple observable properties of 'things', which we call the things syntax. Second, we propose translating the things syntax into abstract linguistic statements and study their descriptive power for retrieving scenes. Third, we propose querying scenes with abstract block illustrations and study their effectiveness in discriminating among different types of scenes. The benefit of abstract statements and block illustrations is that we generate them directly from the images, without any prior learning as in standard attribute learning. Surprisingly, we show that even though we use the simplest of features from the layout of 'things' and no learning at all, we can still retrieve scenes reasonably well.
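The "things syntax" the abstract describes amounts to reading off a handful of observable properties per proposal box. A minimal sketch of such a feature extractor is shown below; the function name `things_syntax`, the (x, y, w, h) box convention, and the exact feature layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def things_syntax(image, boxes):
    """Compute simple observable properties of object-proposal boxes:
    normalized center position, relative size, aspect ratio, and mean
    RGB color. `image` is an HxWx3 uint8 array; `boxes` are (x, y, w, h)
    in pixels. Layout of the feature vector is an assumption."""
    H, W = image.shape[:2]
    feats = []
    for x, y, w, h in boxes:
        cx, cy = (x + w / 2) / W, (y + h / 2) / H   # normalized center position
        size = (w * h) / (W * H)                    # area relative to the image
        aspect = w / h                              # width-to-height aspect ratio
        patch = image[y:y + h, x:x + w]
        color = patch.reshape(-1, 3).mean(axis=0) / 255.0  # mean RGB in [0, 1]
        feats.append([cx, cy, size, aspect, *color])
    return np.array(feats)

# Toy example: a 100x100 black image with one red patch and one proposal box.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[20:60, 10:40, 0] = 255                     # red region
f = things_syntax(img, [(10, 20, 30, 40)])      # one 7-dim feature row
```

Because these features need no trained model, they can be computed directly on any image, which matches the abstract's point that the representation requires no learning beforehand.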