Near-Optimal Sample Complexity Bounds for Maximum Likelihood Estimation of Multivariate Log-concave Densities.

COLT 2018

Abstract
We study the problem of learning multivariate log-concave densities with respect to a global loss function. We obtain the \emph{first} upper bound on the sample complexity of the maximum likelihood estimator (MLE) for a log-concave density on $\mathbb{R}^d$, for all $d \geq 4$. Prior to this work, no finite sample upper bound was known for this estimator in more than $3$ dimensions. In more detail, we prove that for any $d \geq 1$ and $\varepsilon > 0$, given $\widetilde{O}_d((1/\varepsilon)^{(d+3)/2})$ samples drawn from an unknown log-concave density $f_0$ on $\mathbb{R}^d$, the MLE outputs a hypothesis $h$ that with high probability is $\varepsilon$-close to $f_0$ in squared Hellinger loss. A sample complexity lower bound of $\Omega_d((1/\varepsilon)^{(d+1)/2})$ was previously known for any learning algorithm that achieves this guarantee. We thus establish that the sample complexity of the log-concave MLE is near-optimal, up to an $\widetilde{O}(1/\varepsilon)$ factor.
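
As a quick sanity check (a worked calculation, not part of the original abstract), the stated $\widetilde{O}(1/\varepsilon)$ gap follows by comparing the exponents of the upper and lower bounds:

$$\frac{(1/\varepsilon)^{(d+3)/2}}{(1/\varepsilon)^{(d+1)/2}} = (1/\varepsilon)^{\frac{d+3}{2} - \frac{d+1}{2}} = \frac{1}{\varepsilon},$$

so the MLE's sample complexity exceeds the known lower bound by at most a $\widetilde{O}(1/\varepsilon)$ factor, where $\widetilde{O}(\cdot)$ hides polylogarithmic factors.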