R2D2: Reducing Redundancy and Duplication in Data Lakes
Proc. ACM Manag. Data (2023)
Abstract
Enterprise data lakes often suffer from substantial amounts of duplicate and
redundant data, with data volumes ranging from terabytes to petabytes. This
leads to both increased storage costs and unnecessarily high maintenance costs
for these datasets. In this work, we focus on identifying and reducing
redundancy in enterprise data lakes by addressing the problem of 'dataset
containment'. To the best of our knowledge, this is one of the first works that
addresses table-level containment at a large scale.
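The abstract does not spell out a formal definition of dataset containment, but the usual column-set reading is that a table T1 is contained in T2 if every column of T1 has a matching column in T2 whose value set is a superset. A minimal, hypothetical sketch of that check (the tables and helper below are illustrative, not from the paper):

```python
def is_contained(t1, t2):
    """Column-set containment check (illustrative, not the paper's algorithm):
    every column of t1 must match some column of t2 whose value set
    is a superset of t1's values for that column.
    Tables are modeled as dicts mapping column name -> list of values."""
    for values in t1.values():
        if not any(set(values) <= set(v2) for v2 in t2.values()):
            return False
    return True

# A yearly extract is contained in the full table it was derived from.
orders_2023 = {"id": [1, 2], "total": [9.5, 3.0]}
orders_all = {"id": [1, 2, 3], "total": [9.5, 3.0, 7.2], "year": [2023, 2023, 2024]}
print(is_contained(orders_2023, orders_all))  # True
```

Checking every pair of tables this way is quadratic in the number of tables and reads full contents, which is exactly what the hierarchical pruning below is designed to avoid.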
We propose R2D2: a three-step hierarchical pipeline that efficiently
identifies almost all instances of containment by progressively reducing the
search space in the data lake. It first builds (i) a schema containment graph,
followed by (ii) statistical min-max pruning, and finally, (iii) content-level
pruning. We further propose minimizing the total storage and access costs by
optimally identifying redundant datasets that can be deleted (and reconstructed
on demand) while respecting latency constraints.
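To illustrate step (ii), here is a hedged sketch of how min-max pruning can discard candidate pairs using only per-column summary statistics, before any content-level comparison. The stats layout and helper name are assumptions for illustration; the paper's actual statistics may differ:

```python
def minmax_prunes(stats_small, stats_big):
    """Return True if the candidate pair can be safely discarded:
    if the would-be contained table's [min, max] range on any shared
    column escapes the container's range, containment is impossible.
    stats_* map column name -> (min, max) summaries."""
    for col, (lo, hi) in stats_small.items():
        if col in stats_big:
            big_lo, big_hi = stats_big[col]
            if lo < big_lo or hi > big_hi:
                return True  # value range escapes the container: prune
    return False

small = {"price": (5.0, 90.0)}
big = {"price": (0.0, 100.0)}
print(minmax_prunes(small, big))  # False -> survives to content-level pruning
```

The appeal of this filter is that min/max statistics are cheap to collect (e.g., from Parquet footers or catalog metadata) and sound: it never prunes a true containment pair, only pairs that cannot possibly be one.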
We implement our system on Azure Databricks clusters using Apache Spark for
enterprise data stored in ADLS Gen2, and on AWS clusters for open-source data.
In contrast to modified existing baselines, which are either inaccurate or take
several days to run, our pipeline can process an enterprise customer's data lake
at the TB scale in approximately 5 hours with high accuracy. We present theoretical
results as well as extensive empirical validation on both enterprise (scale of
TBs) and open-source datasets (scale of MBs - GBs), which showcase the
effectiveness of our pipeline.