Approximate Submodular Functions and Performance Guarantees

arXiv: Data Structures and Algorithms (2018)

Abstract
We consider the problem of maximizing non-negative, non-decreasing set functions. Although most recent work focuses on exploiting submodularity, several objectives encountered in practice turn out not to be submodular. Nonetheless, the greedy algorithms designed for submodular functions are often applied to these non-submodular objectives as well. Here, we propose to address the original problem by approximating the non-submodular function, and we analyze the incurred error as well as the resulting performance trade-offs. To quantify the approximation error, we introduce the notion of a δ-approximation of a function, which we use to define the space of submodular functions that lie within a given approximation error. We provide necessary conditions for the existence of such δ-approximation functions, which need not be unique, and we characterize the resulting subspace, which we refer to as the region of submodularity. Furthermore, since submodular functions are known to admit different sub-optimality guarantees, we generalize the dependence of those guarantees on a δ-approximation into the notion of greedy curvature. Finally, we use this notion to simplify some existing results and to determine, with linear complexity, tightened sub-optimality bounds for objective functions commonly used in practical setups, and we validate them on real data.
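To make the setting concrete, the sketch below shows the standard greedy heuristic for maximizing a non-negative, non-decreasing set function under a cardinality constraint, which is the generic routine the abstract refers to. It is not the paper's δ-approximation analysis; the objective used in the example is an illustrative, assumed non-submodular function, not one from the paper.

```python
# Minimal sketch of the standard greedy heuristic for maximizing a
# non-negative, non-decreasing set function f subject to |S| <= k.
# This is a generic illustration, not the paper's delta-approximation
# machinery; the objective below is a made-up example.

def greedy_maximize(f, ground_set, k):
    """Greedily pick up to k elements, each time adding the element
    with the largest marginal gain f(S ∪ {e}) - f(S)."""
    selected = set()
    for _ in range(k):
        best_elem, best_gain = None, 0.0
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:  # no element yields a positive gain
            break
        selected.add(best_elem)
    return selected


if __name__ == "__main__":
    # Illustrative monotone (but not submodular) objective: a weighted
    # coverage-like score with a supermodular coupling between items 1 and 3.
    weights = {1: 3.0, 2: 2.0, 3: 2.5, 4: 1.0}

    def f(S):
        base = sum(weights[e] for e in S)
        bonus = 0.5 if {1, 3} <= S else 0.0
        return base + bonus

    print(greedy_maximize(f, set(weights), k=2))  # -> {1, 3}
```

For submodular objectives this greedy routine enjoys the classical (1 - 1/e) guarantee; the paper's contribution is to quantify, via the δ-approximation and greedy curvature, how such guarantees degrade when the objective is only approximately submodular.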