Sensitivity Sampling for k-Means: Worst Case and Stability Optimal Coreset Bounds
arXiv (2024)
Abstract
Coresets are arguably the most popular compression paradigm for center-based
clustering objectives such as k-means. Given a point set P, a coreset
Ω is a small, weighted summary that preserves the cost of all candidate
solutions S up to a (1±ε) factor. For k-means in
d-dimensional Euclidean space, the cost of a solution S is ∑_{p∈P} min_{s∈S} ‖p−s‖^2.
A very popular method for coreset construction, both in theory and practice,
is Sensitivity Sampling, where points are sampled in proportion to their
importance. We show that Sensitivity Sampling yields optimal coresets of size
Õ(k/ε^2 · min(√k, ε^-2)) for worst-case
instances. Uniquely among all known coreset algorithms, for well-clusterable
data sets with Ω(1) cost stability, Sensitivity Sampling gives coresets
of size Õ(k/ε^2), improving over the worst-case lower
bound. Notably, Sensitivity Sampling does not have to know the cost stability
in order to exploit it: It is appropriately sensitive to the clusterability of
the data set while being oblivious to it.
We also show that any coreset for stable instances consisting of only input
points must have size Ω(k/ε^2). Our results for Sensitivity
Sampling also extend to the k-median problem, and more general metric spaces.
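The core scheme the abstract describes, sampling points in proportion to their importance and reweighting by inverse sampling probability, can be sketched as follows. This is a simplified illustration, not the paper's exact construction: it uses each point's cost under a fixed reference solution as a stand-in for its sensitivity (the true sensitivity function also includes a uniform per-cluster term so that zero-cost points remain sampleable), and all function names are our own.

```python
import numpy as np

def kmeans_cost(P, S, weights=None):
    """Cost of solution S on a (weighted) point set P: the sum of
    squared distances from each point to its nearest center."""
    # Pairwise squared distances, shape (len(P), len(S)).
    d2 = ((P[:, None, :] - S[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)
    return nearest.sum() if weights is None else (weights * nearest).sum()

def sensitivity_sample(P, S, m, rng):
    """Draw a weighted coreset of m points from P.

    Each point is sampled with probability proportional to its cost
    under the reference solution S (a crude proxy for its sensitivity)
    and reweighted by the inverse of m times its sampling probability,
    so the weighted coreset cost is an unbiased cost estimator.
    """
    d2 = ((P[:, None, :] - S[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    probs = d2 / d2.sum()
    idx = rng.choice(len(P), size=m, replace=True, p=probs)
    weights = 1.0 / (m * probs[idx])
    return P[idx], weights
```

One sanity check of the reweighting: evaluated on the same reference solution S used for sampling, each sampled point contributes exactly (total cost)/m, so the weighted coreset cost reproduces cost(P, S) up to floating-point error. The (1±ε) guarantee for *all* candidate solutions is what requires the coreset-size bounds analyzed in the paper.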