Settling Time vs. Accuracy Tradeoffs for Clustering Big Data
arXiv (2024)
Abstract
We study the theoretical and practical runtime limits of k-means and k-median
clustering on large datasets. Since effectively all clustering methods are
slower than the time it takes to read the dataset, the fastest approach is to
quickly compress the data and perform the clustering on the compressed
representation. Unfortunately, there is no universal best choice for
compressing the number of points - while random sampling runs in sublinear time
and coresets provide theoretical guarantees, the former does not enforce
accuracy while the latter is too slow as the numbers of points and clusters
grow. Indeed, it has been conjectured that any sensitivity-based coreset
construction requires super-linear time in the dataset size. We examine this
relationship by first showing that there does exist an algorithm that obtains
coresets via sensitivity sampling in effectively linear time - within
log-factors of the time it takes to read the data. Any approach that
significantly improves on this must then resort to practical heuristics,
leading us to consider the spectrum of sampling strategies across both real and
artificial datasets in the static and streaming settings. Through this, we show
the conditions in which coresets are necessary for preserving cluster validity
as well as the settings in which faster, cruder sampling strategies are
sufficient. As a result, we provide a comprehensive theoretical and practical
blueprint for effective clustering regardless of data size. Our code is
publicly available and has scripts to recreate the experiments.
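To make the compress-then-cluster pipeline described above concrete, here is a minimal illustrative sketch (not the authors' implementation) contrasting the two sampling strategies the abstract compares: uniform random sampling and a simple sensitivity-sampling coreset, each followed by weighted k-means. The function names and the specific sensitivity upper bound used here (cost share plus inverse cluster size, computed from a fast k-means++ seeding) are assumptions chosen for illustration, not the paper's linear-time construction.

```python
# Hypothetical sketch of compress-then-cluster for k-means.
# Assumes: numpy and scikit-learn; sensitivity bound s(x) ~ cost(x)/total_cost + 1/|cluster(x)|.
import numpy as np
from sklearn.cluster import KMeans, kmeans_plusplus


def uniform_sample(X, m, rng):
    """Uniformly subsample m points; each sampled point represents n/m originals."""
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx], np.full(m, len(X) / m)


def sensitivity_coreset(X, k, m, rng):
    """Sample proportionally to a sensitivity upper bound, reweight by 1/(m*p)."""
    # Rough solution from k-means++ seeding only (no Lloyd iterations).
    centers, _ = kmeans_plusplus(X, n_clusters=k, random_state=0)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, k) squared distances
    assign = d2.argmin(1)
    cost = d2[np.arange(len(X)), assign]
    cluster_sizes = np.bincount(assign, minlength=k)
    # Standard sensitivity upper bound for k-means.
    s = cost / max(cost.sum(), 1e-12) + 1.0 / cluster_sizes[assign]
    p = s / s.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    return X[idx], 1.0 / (m * p[idx])


def cluster_compressed(X, k, m, method="coreset", seed=0):
    """Compress X to m weighted points, cluster the compression, evaluate on full data."""
    rng = np.random.default_rng(seed)
    sample, w = (uniform_sample(X, m, rng) if method == "uniform"
                 else sensitivity_coreset(X, k, m, rng))
    km = KMeans(n_clusters=k, n_init=5, random_state=seed)
    km.fit(sample, sample_weight=w)
    # Cost of the resulting centers on the *full* dataset.
    full_cost = ((X[:, None, :] - km.cluster_centers_[None]) ** 2).sum(-1).min(1).sum()
    return km.cluster_centers_, full_cost
```

Both branches run the same weighted k-means on the compressed representation; they differ only in how the m points and their weights are chosen, which is exactly the accuracy-versus-runtime tradeoff the paper studies.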