Efficient Encoding and Decoding Extended Geocodes for Massive Point Cloud Data.

BigComp 2019

Abstract
With the development of mobile surveying and mapping technologies, point cloud data has been emerging in a variety of applications, including robot navigation, self-driving drones/vehicles, and three-dimensional (3D) urban space modeling. In addition, there is an increasing demand for database management systems that share and reuse point cloud data, rather than treating it as archive files as in traditional uses and applications. However, database scalability needs to be explored to process and manage a massive volume of point cloud data defined in a 3D (X, Y, Z) coordinate system. The typical approach to handling big data and distributing it across multiple nodes is data partitioning. Geohashing is a popular way to convert a latitude/longitude spatial point into a code/string and has been used for storing data into grid buckets. Many methods of handling big geospatial data, especially in NoSQL databases, are based on geohashing techniques. In this paper, we propose an efficient method to encode/decode 3D point clouds in a Discrete Global Grid System (DGGS), which represents the Earth as hierarchical sequences of equal-area/volume tessellations, similar to geohash. The current base36 geohash has difficulty handling high-resolution 3D point clouds for data storage, filtering, integration, and analytics because of its limitations on cell size and unequal cell areas. We employ DGGS-based Morton codes with more than 64 bits for precise 3D point cloud coordinates and compare encoding/decoding performance between two implementations: one using strings and one using a combination of bit interleaving and lookup tables.
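The paper's own implementation is not reproduced on this page. The following is a minimal sketch of the bit-interleaving-plus-lookup-table idea mentioned in the abstract, assuming quantized integer cell indices per axis and an illustrative 24-bit-per-axis resolution (a 72-bit Morton code, i.e., more than 64 bits); the names `SPREAD_LUT`, `morton_encode`, and `morton_decode` are hypothetical, not from the paper.

```python
# Sketch: 3D Morton encoding/decoding via bit interleaving with an 8-bit lookup table.
# Assumed setup (not from the paper): COORD_BITS = 24 bits per axis -> 72-bit code.

COORD_BITS = 24  # bits per axis; 3 * 24 = 72-bit Morton code

def _spread_byte(b: int) -> int:
    """Spread the 8 bits of a byte so each bit is followed by two zero bits."""
    s = 0
    for i in range(8):
        s |= ((b >> i) & 1) << (3 * i)
    return s

# Lookup table: precomputed spread value for every possible byte (0..255).
SPREAD_LUT = [_spread_byte(b) for b in range(256)]

def spread(v: int) -> int:
    """Spread a COORD_BITS-bit integer byte by byte using the lookup table."""
    out = 0
    for byte_index in range(COORD_BITS // 8):
        byte = (v >> (8 * byte_index)) & 0xFF
        out |= SPREAD_LUT[byte] << (24 * byte_index)  # 8 spread bits span 24 bits
    return out

def morton_encode(x: int, y: int, z: int) -> int:
    """Interleave x, y, z bits: x in bit 0, y in bit 1, z in bit 2 of each triple."""
    return spread(x) | (spread(y) << 1) | (spread(z) << 2)

def morton_decode(code: int) -> tuple[int, int, int]:
    """Inverse of morton_encode: gather every third bit for each axis."""
    def gather(c: int) -> int:
        v = 0
        for i in range(COORD_BITS):
            v |= ((c >> (3 * i)) & 1) << i
        return v
    return gather(code), gather(code >> 1), gather(code >> 2)

# Example: quantized (x, y, z) cell indices round-trip through the code.
x, y, z = 0xABCDEF, 0x123456, 0x654321
assert morton_decode(morton_encode(x, y, z)) == (x, y, z)
```

Python's arbitrary-precision integers make the above-64-bit case straightforward to sketch; a string-based variant would instead concatenate per-level cell digits, which is the alternative the abstract benchmarks against.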
Keywords
Three-dimensional displays, Geospatial analysis, Face, Earth, Laser radar, Encoding, Databases