
Spatial Perception by Object-Aware Visual Scene Representation

IEEE International Conference on Computer Vision (ICCV), 2019 (CCF A)

Seoul National University

Cited: 2 | Views: 20
Abstract
Spatial perception is a fundamental ability that autonomous mobile robots need to move robustly and safely in the real world. Recent advances in SLAM have enabled single-camera systems to build 3D maps of the world while concurrently tracking their own location and orientation. However, such systems often fail to track themselves within the map and cannot recognize previously visited places, because they lack reliable descriptions of the observed scenes. We present a spatial perception framework that uses an object-aware visual scene representation to enhance these spatial abilities. The proposed representation compensates for the aberrations of conventional geometric scene representations by fusing them with semantic features extracted from perceived objects. We implemented this framework on a mobile robot platform to validate its performance in home environments. Further evaluations were conducted on the ScanNet dataset, which provides large-scale photo-realistic 3D indoor scenes. Extensive tests show that our framework generates maps more reliably by reducing tracking failures, and better recognizes overlap in the map.
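The fusion idea in the abstract, combining a geometric scene descriptor with semantic features from detected objects, can be illustrated with a minimal sketch. This is not the authors' implementation; the descriptors, the weighting parameter `alpha`, and the cosine-similarity scoring are hypothetical choices standing in for whatever the paper actually uses:

```python
import numpy as np

def fuse_descriptor(geom, sem, alpha=0.5):
    """Fuse a geometric descriptor with a semantic object histogram.

    geom: geometric feature vector for the view (e.g., a bag-of-words
          histogram of local visual features).
    sem:  histogram of object classes detected in the scene.
    alpha: weight given to the geometric part (hypothetical parameter).
    """
    geom = np.asarray(geom, dtype=float)
    sem = np.asarray(sem, dtype=float)
    # Normalize each part so neither modality dominates the fused vector.
    geom = geom / (np.linalg.norm(geom) + 1e-12)
    sem = sem / (np.linalg.norm(sem) + 1e-12)
    return np.concatenate([alpha * geom, (1.0 - alpha) * sem])

def similarity(d1, d2):
    """Cosine similarity between two fused scene descriptors."""
    denom = np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12
    return float(np.dot(d1, d2) / denom)

# Two views of the same place: similar geometry, same objects visible.
a = fuse_descriptor([0.9, 0.1, 0.0], [2, 0, 1])
b = fuse_descriptor([0.8, 0.2, 0.1], [2, 1, 1])
# A different place: different geometry and a different object set.
c = fuse_descriptor([0.0, 0.1, 0.9], [0, 3, 0])

print(similarity(a, b) > similarity(a, c))  # True: same place scores higher
```

In a loop-closure or relocalization pipeline, candidate keyframes whose fused similarity exceeds a threshold would be passed to geometric verification; the semantic half makes the score more robust when local appearance alone is ambiguous.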
Key words
spatial perception, visual SLAM, relocalization, scene representation, mobile robots
Related Papers

Visual Perception Framework for an Intelligent Mobile Robot

17th International Conference on Ubiquitous Robots (UR), 2020

Cited 5

Chat Paper

【Key points】: This paper proposes a framework that uses an object-aware visual scene representation to enhance the spatial perception of mobile robots, improving the reliability of map building and the accuracy of place recognition.

【Method】: The authors create an object-aware visual scene representation by fusing conventional geometric scene representations with semantic features extracted from perceived objects.

【Experiments】: The framework was implemented on a mobile robot platform and validated in home environments. Further evaluation on the ScanNet dataset shows that it reduces tracking failures and better recognizes overlapping regions of the map.