Non-Monotone Adaptive Submodular Maximization

IJCAI (2015)

Cited by 40
Abstract
A wide range of AI problems, such as sensor placement, active learning, and network influence maximization, require sequentially selecting elements from a large set with the goal of optimizing the utility of the selected subset. Moreover, each element that is picked may provide stochastic feedback, which can be used to make smarter decisions about future selections. Finding efficient policies for this general class of adaptive optimization problems can be extremely hard. However, when the objective function is adaptive monotone and adaptive submodular, a simple greedy policy attains a 1 - 1/e approximation ratio in terms of expected utility. Unfortunately, many practical objective functions are naturally non-monotone; to our knowledge, no existing policy has provable performance guarantees when the assumption of adaptive monotonicity is lifted. We propose the adaptive random greedy policy for maximizing adaptive submodular functions, and prove that it retains the aforementioned 1 - 1/e approximation ratio for functions that are also adaptive monotone, while it additionally provides a 1/e approximation ratio for non-monotone adaptive submodular functions. We showcase the benefits of adaptivity on three real-world network data sets using two non-monotone functions, representative of two classes of commonly encountered non-monotone objectives.
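To illustrate the selection scheme the abstract refers to, the sketch below shows a minimal random-greedy loop in Python. It is an assumption-laden illustration, not the authors' exact adaptive random greedy policy: `marginal_gain` and `observe` are hypothetical oracles standing in for the problem-specific expected-gain computation and the stochastic feedback revealed after each pick.

```python
import random

def adaptive_random_greedy(elements, marginal_gain, observe, k, rng=None):
    """Illustrative random-greedy selection loop (hypothetical interface).

    elements      : iterable of candidate items
    marginal_gain : callable(item, state) -> float, estimated gain of `item`
                    given the feedback observed so far (hypothetical oracle)
    observe       : callable(item) -> feedback revealed after picking `item`
                    (hypothetical oracle)
    k             : cardinality budget
    """
    rng = rng or random.Random()
    remaining = list(elements)
    state = {}          # item -> observed stochastic feedback
    picked = []
    for _ in range(k):
        if not remaining:
            break
        # Rank remaining items by marginal gain under the current observations.
        ranked = sorted(remaining,
                        key=lambda e: marginal_gain(e, state), reverse=True)
        # Keep the top-k items with positive gain and pad with dummies
        # (None) up to size k; drawing a dummy means skipping this round.
        top = [e for e in ranked[:k] if marginal_gain(e, state) > 0]
        padded = top + [None] * (k - len(top))
        choice = rng.choice(padded)
        if choice is None:
            continue
        picked.append(choice)
        remaining.remove(choice)
        # The observed feedback updates the state and drives later picks,
        # which is what makes the policy adaptive.
        state[choice] = observe(choice)
    return picked
```

The randomization over the top-k candidates, rather than always taking the single best one, is what yields guarantees without adaptive monotonicity in the paper's analysis; the sketch above only mirrors that structure under the stated assumptions.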