Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs

Electronic Letters on Computer Vision and Image Analysis (2018)

Abstract
The ZED camera is a binocular vision system that provides 3D perception of the world. It can be applied to autonomous robot navigation, virtual reality, tracking, motion analysis, and other tasks. This paper proposes a mathematical error model for the depth data estimated by the ZED camera at its several resolutions of operation. To do so, the ZED is attached to an NVIDIA Jetson TK1 board, forming an embedded system that processes the raw data acquired by the ZED from a 3D checkerboard. Corners are extracted from the checkerboard using RGB data, and these points are reconstructed in 3D using the disparity data computed by the ZED camera, yielding a partially ordered, regularly distributed (in 3D space) point cloud of corners whose coordinates are computed by the device software. These corners also have known ideal world (3D) positions with respect to a coordinate frame origin set empirically on the pattern. The given (computed) coordinates from the camera data and the known (ideal) coordinates of each corner can thus be compared to estimate the error between the given and ideal point locations of the detected corner cloud. Subsequently, using a curve-fitting technique, we obtain the equations that model the RMS (root mean square) error. This procedure is repeated for several resolutions of the ZED sensor and at several distances. Results show that the sensor is effective up to a maximum distance of approximately sixteen meters, in real time, which allows its use in robotic and other online applications.
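As a rough illustration of the evaluation pipeline described in the abstract, the sketch below (not from the paper) computes the RMS error between reconstructed and ideal 3D corner positions and fits a polynomial error model over distance with NumPy. The function names, the quadratic default degree, and the use of np.polyfit are assumptions for illustration only, not the authors' implementation or their fitted model.

```python
import numpy as np

def rms_error(measured, ideal):
    """RMS Euclidean error between reconstructed and ideal 3D corner
    positions; both arrays have shape (N, 3), one row per corner."""
    residuals = np.linalg.norm(measured - ideal, axis=1)
    return np.sqrt(np.mean(residuals ** 2))

def fit_error_model(distances, rms_values, degree=2):
    """Fit a polynomial e(z) to RMS errors measured at several
    checkerboard distances z. The quadratic default reflects the
    typical growth of stereo depth error with distance; the paper's
    exact fitted form may differ."""
    coeffs = np.polyfit(distances, rms_values, degree)
    return np.poly1d(coeffs)
```

In such a setup, rms_error would be evaluated once per resolution and distance, and fit_error_model would then yield the per-resolution error equation as a function of depth.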
Keywords
Sensor Systems, 3D and Stereo