Tight bounds on ℓ1 approximation and learning of self-bounding functions

Theoretical Computer Science (2020)

Abstract
We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube {0,1}^n. Informally, a function f : {0,1}^n → R is self-bounding if for every x ∈ {0,1}^n, f(x) upper bounds the sum of all the n marginal decreases in the value of the function at x. Self-bounding functions include such well-known classes of functions as submodular and fractionally-subadditive (XOS) functions. They were introduced by Boucheron et al. (2010) in the context of concentration of measure inequalities. Our main result is a nearly tight ℓ1-approximation of self-bounding functions by low-degree juntas. Specifically, all self-bounding functions can be ε-approximated in ℓ1 by a polynomial of degree Õ(1/ε) over 2^{Õ(1/ε)} variables. We show that both the degree and junta-size are optimal up to logarithmic terms. Previous techniques considered stronger ℓ2 approximation and proved nearly tight bounds of Θ(1/ε^2) on the degree and 2^{Θ(1/ε^2)} on the number of variables. Our bounds rely on the analysis of noise stability of self-bounding functions together with a stronger connection between noise stability and ℓ1 approximation by low-degree polynomials. This technique can also be used to get tighter bounds on ℓ1 approximation by low-degree polynomials and a faster learning algorithm for halfspaces.
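The self-bounding condition stated informally in the abstract can be made concrete by brute force on small cubes: for every point x, sum the n marginal decreases f(x) − min_b f(x with coordinate i set to b) and check that the sum is at most f(x). The sketch below is illustrative only (the helper name `is_self_bounding` and the example functions are not from the paper); it uses a small coverage-style function, since submodular functions such as coverage are self-bounding, per the abstract.

```python
# Minimal sketch (assumed helper, not from the paper): exhaustively check
# the self-bounding property over the Boolean hypercube {0,1}^n.
from itertools import product

def is_self_bounding(f, n):
    """Return True iff, for every x in {0,1}^n, the sum of the n marginal
    decreases of f at x is at most f(x)."""
    for x in product((0, 1), repeat=n):
        total_decrease = 0.0
        for i in range(n):
            # Smallest value obtainable by re-setting coordinate i.
            lo = min(f(x[:i] + (b,) + x[i + 1:]) for b in (0, 1))
            total_decrease += f(x) - lo
        if total_decrease > f(x) + 1e-12:  # tolerance for float noise
            return False
    return True

# Coverage-style (submodular, hence self-bounding) example:
# counts how many of two "areas" are covered by the chosen elements.
def coverage(x):
    return float((x[0] or x[1]) + (x[1] or x[2]))

print(is_self_bounding(coverage, 3))                      # -> True
# Scaling an AND breaks the property: at x = (1,1) the two marginal
# decreases sum to 4 > f(x) = 2.
print(is_self_bounding(lambda x: 2.0 * x[0] * x[1], 2))   # -> False
```

The exhaustive check costs Θ(n·2^n) evaluations, so it is only a sanity tool for tiny n; the paper's results concern approximating such functions, not testing them.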
Keywords
PAC learning, Submodular function, XOS function, Fourier analysis, Noise stability, Polynomial approximation