Improving Progressive Retrieval for HPC Scientific Data using Deep Neural Network.

ICDE (2023)

Abstract
As the disparity between compute and I/O on high-performance computing systems continues to widen, it has become increasingly difficult to perform post-hoc data analytics on full-resolution scientific simulation data due to the high I/O cost. Error-bounded data decomposition and progressive data retrieval frameworks have recently been developed to address this challenge by decomposing data before storage and reading only part of the decomposed data when necessary. However, the performance of the progressive retrieval framework suffers from over-pessimistic error control theory: the achieved maximum error of the recomposed data is significantly lower than the required error, so more data than necessary is fetched for recomposition, incurring additional I/O overhead. To tackle this issue, we propose a DNN-based progressive retrieval framework that better identifies the minimum amount of data to be retrieved. Our contributions are as follows: 1) we provide an in-depth investigation of the recently developed progressive retrieval framework; 2) we propose two prediction-model designs (named D-MGARD and E-MGARD) to estimate the retrieved data size from a given error bound; 3) we evaluate our proposed solutions using scientific datasets generated by real-world simulations from two domains. Evaluation results demonstrate the effectiveness of our solution in accurately predicting the retrieval data size, as well as its advantages over the traditional approach in reducing I/O overhead. Based on our evaluation, our solution reads significantly less data than the traditional approach (5%-40% with D-MGARD, 20%-80% with E-MGARD).
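The abstract does not specify the predictor's architecture or inputs, so the following is only a minimal sketch of the general idea: a small feed-forward regressor that maps a requested error bound (plus a few hypothetical per-dataset features) to the fraction of decomposed data to retrieve. The feature choices, layer sizes, and the class name RetrievalSizePredictor are illustrative assumptions, not the authors' D-MGARD or E-MGARD models.

```python
# Illustrative sketch only: a small DNN that predicts how much decomposed data
# to fetch for a requested error bound. All inputs and shapes are assumptions.
import torch
import torch.nn as nn


class RetrievalSizePredictor(nn.Module):
    def __init__(self, in_features: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # predicted fraction in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Hypothetical query features: log10(error bound), value range, standard
# deviation, and number of decomposition levels for the target dataset.
model = RetrievalSizePredictor()
features = torch.tensor([[-3.0, 1.0, 0.25, 8.0]])
predicted_fraction = model(features)
print(predicted_fraction.item())  # fraction of decomposed bytes to read
```

In practice such a regressor would be trained offline on (error bound, observed retrieval size) pairs collected from the progressive retrieval framework, then queried at analysis time to decide how much data to fetch.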
Keywords
High-performance computing, lossy compression, scientific data management, deep learning