AutoLay: Benchmarking Amodal Layout Estimation for Autonomous Driving

IROS 2020

Abstract
Given an image or a video captured from a monocular camera, amodal layout estimation is the task of predicting semantics and occupancy in bird's eye view. The term amodal implies that we also reason about entities in the scene that are occluded or truncated in image space. While several recent efforts have tackled this problem, there is a lack of standardization in task specification, datasets, and evaluation protocols. We address these gaps with AutoLay, a dataset and benchmark for amodal layout estimation from monocular images. AutoLay encompasses driving imagery from two popular datasets: KITTI [1] and Argoverse [2]. In addition to fine-grained attributes such as lanes, sidewalks, and vehicles, we also provide semantically annotated 3D point clouds. We implement several baselines and bleeding-edge approaches, and release our data and code.
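To make the task definition concrete (a monocular image mapped to per-cell semantics and occupancy in bird's eye view), the following is a minimal sketch of a generic encoder-decoder in PyTorch. The architecture, class names, grid size, and class count are illustrative assumptions and are not taken from the AutoLay paper or its released code.

```python
# Hypothetical sketch: a generic encoder-decoder mapping a monocular image to a
# bird's-eye-view (BEV) semantic layout grid. Names and sizes are illustrative
# assumptions, not the AutoLay authors' method.
import torch
import torch.nn as nn

class MonoToBEVLayout(nn.Module):
    """Predict per-cell class logits on a BEV grid from a single image."""
    def __init__(self, num_classes: int = 4, bev_size: int = 128):
        super().__init__()
        self.bev_size = bev_size
        # Image-space feature extractor (stand-in for any CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((16, 16)),
        )
        # Decoder upsamples a latent grid into BEV logits; here the image-to-BEV
        # view change is learned implicitly (real systems often add explicit
        # geometric projection).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image)   # (B, 128, 16, 16)
        logits = self.decoder(feats)  # (B, num_classes, 128, 128)
        return nn.functional.interpolate(
            logits, size=(self.bev_size, self.bev_size),
            mode="bilinear", align_corners=False)

# Usage: per-cell scores over the BEV grid, e.g. road / lane / sidewalk / vehicle.
model = MonoToBEVLayout(num_classes=4)
bev_logits = model(torch.randn(1, 3, 256, 512))  # shape (1, 4, 128, 128)
```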
Keywords
AutoLay, amodal layout estimation, bird's eye view, image space, monocular images