Interior Point Solving for LP-based prediction+optimisation

NeurIPS 2020

Abstract

Solving optimization problems is the key to decision making in many real-life analytics applications. However, the coefficients of the optimization problems are often uncertain and dependent on external factors, such as future demand or energy or stock prices. Machine learning (ML) models, especially neural networks, are increasingly be…
Introduction
  • Combinatorial optimization is used for decision making, with the aim of maximizing a predefined objective.
  • In many real-world problems the coefficients of the objective function are uncertain and must be predicted from historical data using a Machine Learning (ML) model, such as stock price prediction for portfolio optimization [5].
  • The authors consider combinatorial optimization problems that can be formulated as a mixed integer linear program (MILP).
  • The central challenge is how to compute gradients through the MILP-based task loss, given that a MILP is discrete and its Linear Programming (LP) relaxation is not twice differentiable (the regret sketch after this list makes the task loss concrete).
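To make the task loss concrete: in this predict-and-optimize literature (e.g. SPO [12, 18]), prediction quality is typically measured by the regret of deciding with the predicted coefficients instead of the true ones. A minimal sketch in our own notation (not quoted from the paper), written for a minimization problem over a feasible set F:

```latex
% Regret of optimizing with predicted costs \hat{c} when the true costs are c.
% Minimization convention; for a maximization problem the sign is flipped.
\[
  x^\star(c) \in \operatorname*{arg\,min}_{x \in \mathcal{F}} c^\top x,
  \qquad
  \mathrm{regret}(\hat{c}, c) \;=\; c^\top x^\star(\hat{c}) \;-\; c^\top x^\star(c) \;\ge\; 0 .
\]
```

Training end-to-end on this loss requires a gradient of x*(ĉ) with respect to ĉ, which is exactly what a discrete MILP does not provide and what the interior-point-based relaxation discussed below is meant to supply.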
Highlights
  • There has recently been growing interest in data-driven decision making
  • We consider combinatorial optimization problems that can be formulated as a mixed integer linear program (MILP)
  • We want to train a neural network model to predict the coefficients of the MILP in a way such that the parameters of the neural network are determined by minimizing a task loss, which takes the effect of the predicted coefficients on the MILP output into consideration
  • The central challenge is how to compute the gradients from the MILP-based task loss, given that MILP is discrete and the Linear Programming (LP) relaxation is not twice differentiable
  • We develop IntOpt, a well-founded interior point based approach, which computes the gradient by differentiating the homogeneous self-dual formulation of the LP using a log barrier (a simplified sketch of this idea follows this list)
  • The model is evaluated on MILP optimization problems; a continuous relaxation is taken before solving and differentiating the optimization problems
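The sketch below illustrates the core idea on a stripped-down version of the problem: add a log barrier to an LP in standard equality form, solve the resulting smooth problem with Newton's method, and differentiate its optimality conditions via the implicit function theorem. This is not the authors' homogeneous self-dual embedding (which additionally handles infeasible and unbounded instances); the function names, the toy instance, and the barrier weight `lam` are ours.

```python
import numpy as np

def solve_barrier_lp(c, A, b, x0, lam=0.1, iters=50, tol=1e-10):
    """Feasible-start Newton for  min c.x - lam*sum(log x)  s.t.  A x = b, x > 0.
    x0 must satisfy A x0 = b and x0 > 0; a toy stand-in for an interior point solver."""
    m, n = A.shape
    x, y = x0.copy(), np.zeros(m)
    for _ in range(iters):
        # Newton step on the barrier KKT residual F = [c - lam/x + A^T y, A x - b]
        J = np.block([[np.diag(lam / x**2), A.T], [A, np.zeros((m, m))]])
        F = np.concatenate([c - lam / x + A.T @ y, A @ x - b])
        step = np.linalg.solve(J, -F)
        dx, dy = step[:n], step[n:]
        # Fraction-to-boundary damping keeps the iterate strictly positive.
        neg = dx < 0
        t = min(1.0, 0.99 * np.min(-x[neg] / dx[neg])) if np.any(neg) else 1.0
        x, y = x + t * dx, y + t * dy
        if np.linalg.norm(t * dx) < tol:
            break
    return x, y

def dx_dc(x, y, A, lam):
    """Implicit function theorem on F(x, y; c) = 0:
    J * d(x, y)/dc = -dF/dc = -[I; 0], so dx/dc is the top block of the solve."""
    m, n = A.shape
    J = np.block([[np.diag(lam / x**2), A.T], [A, np.zeros((m, m))]])
    dxy = -np.linalg.solve(J, np.vstack([np.eye(n), np.zeros((m, n))]))
    return dxy[:n, :]

# Toy usage: an LP over the unit simplex (sum x = 1, x >= 0) with predicted costs c.
A, b = np.ones((1, 4)), np.array([1.0])
c = np.array([1.0, 2.0, 0.5, 1.5])
x, y = solve_barrier_lp(c, A, b, x0=np.full(4, 0.25), lam=0.05)
jac = dx_dc(x, y, A, lam=0.05)   # 4x4 Jacobian, ready for the chain rule through a task loss
```

The barrier weight acts as a smoothing parameter: as `lam` approaches 0 the barrier solution approaches a vertex of the LP and the Jacobian degenerates, which is why a strictly positive barrier/damping term is kept when differentiating.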
Methods
  • The authors evaluate the Interior point based approach (IntOpt) on three predict-and-optimize problems.
  • The authors compare it with a two-stage approach, the QPTL [29] approach and the SPO approach [12, 18].
  • Training is carried out over the relaxed problem, but the authors evaluate the objective and the regret on the test data by solving the discrete MILP to optimality (see the toy regret-evaluation sketch after this list).
  • The neural network and the MILP model have been implemented using PyTorch 1.5.0 [23] and the Gurobi solver [14], respectively.
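A minimal sketch of that evaluation protocol on a toy 0/1 knapsack instance; the brute-force solver and the numbers below are illustrative stand-ins, not the paper's MILP solver or benchmark data:

```python
import itertools
import numpy as np

def solve_knapsack(values, weights, capacity):
    """Brute-force 0/1 knapsack, standing in for solving the discrete MILP to optimality."""
    best_x, best_val = np.zeros(len(values), dtype=int), -np.inf
    for bits in itertools.product([0, 1], repeat=len(values)):
        x = np.array(bits)
        if weights @ x <= capacity and values @ x > best_val:
            best_x, best_val = x, values @ x
    return best_x

def test_regret(true_vals, pred_vals, weights, capacity):
    """Regret: value lost by optimizing the predicted objective and then
    evaluating the chosen solution under the true objective (maximization)."""
    x_pred = solve_knapsack(pred_vals, weights, capacity)
    x_true = solve_knapsack(true_vals, weights, capacity)
    return true_vals @ x_true - true_vals @ x_pred

# Toy check with made-up numbers.
true_vals = np.array([10.0, 6.0, 3.0])
pred_vals = np.array([4.0, 9.0, 3.0])      # imperfect predictions
weights = np.array([4.0, 3.0, 2.0])
print(test_regret(true_vals, pred_vals, weights, capacity=5.0))   # -> 1.0
```

During training, the same predicted coefficients would instead feed the relaxed (continuous) problem so that gradients can flow; the discrete solve above is only used to report test objectives and regret.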
Conclusion
  • Of the three experimental setups considered, the first has a fairly simple prediction task and optimization task, whereas the optimization tasks of the other two are challenging
  • The approach appears not to perform as well as SPO on the easier problem, but yields better results on the challenging problems.
  • The authors point out that the SPO approach can take advantage of any blackbox solver as an optimization oracle, whereas both QPTL and IntOpt use interior point algorithms to solve the optimization problem.
  • Techniques to tighten the LP relaxation can potentially further benefit the approach
Tables
  • Table1: Comparison among approaches for the Knapsack Problem (maximisation); the number in brackets is the standard deviation across 10 runs
  • Table2: Comparison among approaches for the Energy Scheduling problems
  • Table3: Comparison among IntOpt variants for the Energy Scheduling problem
  • Table4: Comparison among approaches for the shortest path problem (minimization)
Related Work
  • Several approaches focus on differentiating an argmin optimization problem so that it fits within neural network layers. For convex optimization problems, the KKT conditions map the coefficients to the set of solutions, so the KKT conditions can be differentiated, using the implicit function theorem, for argmin differentiation. Following this idea, Amos and Kolter [3] developed a PyTorch-compatible differentiable layer for Quadratic Programming (QP) optimization. In a similar way, Agrawal et al. [2] introduce a differentiable layer for convex cone programs using LSQR [21]. Agrawal et al. [1] proposed to use this framework for general convex optimization problems after transforming them into convex cone programs.
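As a concrete illustration of this implicit-function-theorem recipe, here is a sketch restricted to an equality-constrained QP, where the KKT conditions reduce to a single linear system (inequality constraints and general cones need the full machinery of [3, 1, 2]; the function name and example are ours):

```python
import numpy as np

def qp_layer(Q, q, A, b):
    """Equality-constrained QP:  min 0.5 x'Qx + q'x  s.t.  A x = b.
    Its KKT conditions form the linear system K [x; y] = [-q; b], so the solution
    and the Jacobian dx/dq come from solves with the same KKT matrix K."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-q, b]))
    x = sol[:n]
    # Differentiate K [x; y] = [-q; b] with respect to q:  K d[x; y]/dq = [-I; 0].
    dxy_dq = np.linalg.solve(K, np.vstack([-np.eye(n), np.zeros((m, n))]))
    return x, dxy_dq[:n, :]

# Example: minimize 0.5*||x||^2 + q.x subject to sum(x) = 1.
x, dx_dq = qp_layer(np.eye(3), np.array([-0.2, 0.1, 0.4]),
                    np.ones((1, 3)), np.array([1.0]))
```

With inequality constraints the KKT system is no longer linear (complementarity enters), which is what OptNet [3] and the cone-program layers [2, 1] are designed to handle.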
Funding
  • This research received funding from the Flemish Government (AI Research Program) and FWO Flanders project G0G3220N
References
  • Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J Zico Kolter. Differentiable convex optimization layers. In Advances in neural information processing systems, pages 9562–9574, 2019.
  • Akshay Agrawal, Shane Barratt, Stephen Boyd, Enzo Busseti, and Walaa M Moursi. Differentiating through a cone program. arXiv preprint arXiv:1904.09043, 2019.
  • Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 136–145. JMLR.org, 2017.
  • Erling D. Andersen and Knud D. Andersen. The mosek interior point optimizer for linear programming: An implementation of the homogeneous algorithm. High Performance Optimization, pages 197–232, 2000. doi: 10.1007/978-1-4757-3216-0_8.
  • Yoshua Bengio. Using a financial training criterion rather than a prediction criterion. International Journal of Neural Systems, 8(04):433–443, 1997.
  • Dimitris Bertsimas and John Tsitsiklis. Introduction to linear programming. Athena Scientific, 1, 1997.
  • Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
  • Maxime C Cohen, Ngai-Hang Zachary Leung, Kiran Panchamgam, Georgia Perakis, and Anthony Smith. The impact of linear optimization on promotion planning. Operations Research, 65(2):446–468, 2017.
  • Emir Demirovic, Peter J Stuckey, James Bailey, Jeffrey Chan, Chris Leckie, Kotagiri Ramamohanarao, and Tias Guns. An investigation into prediction+ optimisation for the knapsack problem. In International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pages 241–257.
  • Emir Demirovic, Peter J Stuckey, James Bailey, Jeffrey Chan, Christopher Leckie, Kotagiri Ramamohanarao, and Tias Guns. Dynamic programming for predict+optimise. 2020.
  • Priya L. Donti, Brandon Amos, and J. Zico Kolter. Task-based end-to-end model learning in stochastic optimization. In Advances in Neural Information Processing Systems, pages 5484–5494, 2017.
  • Adam N Elmachtoub and Paul Grigas. Smart "predict, then optimize". arXiv preprint arXiv:1710.08005, 2017.
  • Aaron Ferber, Bryan Wilder, Bistra Dilkina, and Milind Tambe. Mipaal: Mixed integer program as a layer. In AAAI, pages 1504–1511, 2020.
  • LLC Gurobi Optimization. Gurobi optimizer reference manual, 2020. URL http://www.gurobi.com.
  • Georgiana Ifrim, Barry O’Sullivan, and Helmut Simonis. Properties of energy-price forecasts for scheduling. In International Conference on Principles and Practice of Constraint Programming, pages 957–972.
  • Jure Leskovec and Julian J Mcauley. Learning to discover social circles in ego networks. In Advances in neural information processing systems, pages 539–547, 2012.
  • Chun Kai Ling, Fei Fang, and J. Zico Kolter. What game are we playing? end-to-end learning in normal and extensive form games. In IJCAI 2018: 27th International Joint Conference on Artificial Intelligence, pages 396–402, 2018.
  • Jayanta Mandi, Tias Guns, Emir Demirovic, and Peter. J Stuckey. Smart predict-and-optimize for hard combinatorial optimization problems. In AAAI 2020: The Thirty-Fourth AAAI Conference on Artificial Intelligence, volume 34, pages 1603–1610, 2020.
  • James Martens and Ilya Sutskever. Training deep and recurrent networks with hessian-free optimization. In Neural networks: Tricks of the trade, pages 479–535.
  • Hugo Morais, Péter Kádár, Pedro Faria, Zita A Vale, and HM Khodr. Optimal scheduling of a renewable micro-grid in an isolated load area using mixed-integer linear programming. Renewable Energy, 35(1):151–156, 2010.
  • Christopher C Paige and Michael A Saunders. Lsqr: An algorithm for sparse linear equations and sparse least squares. ACM Transactions on Mathematical Software (TOMS), 8(1):43–71, 1982.
  • Christos H Papadimitriou and Kenneth Steiglitz. Combinatorial optimization: algorithms and complexity. Courier Corporation, 1998.
  • Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019.
  • Ping-Qi Pan. Linear programming computation. Springer, 2014.
  • Marin Vlastelica Pogancic, Anselm Paulus, Vit Musil, Georg Martius, and Michal Rolinek. Differentiation of blackbox combinatorial solvers. In ICLR 2020: Eighth International Conference on Learning Representations, 2020.
  • Mohammad Hossein Rafiei and Hojjat Adeli. A novel machine learning model for estimation of sale prices of real estate units. Journal of Construction Engineering and Management, 142 (2):04015066, 2016.
  • Helmut Simonis, Barry O’Sullivan, Deepak Mehta, Barry Hurley, and Milan De Cauwer. CSPLib problem 059: Energy-cost aware scheduling. http://www.csplib.org/Problems/prob059.
  • Po-Wei Wang, Priya Donti, Bryan Wilder, and Zico Kolter. Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In ICML 2019: Thirty-sixth International Conference on Machine Learning, pages 6545–6554, 2019.
  • Bryan Wilder, Bistra Dilkina, and Milind Tambe. Melding the data-decisions pipeline: Decisionfocused learning for combinatorial optimization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 1658–1665, 2019.
  • Stephen J Wright. Primal-dual interior-point methods, volume 54. SIAM, 1997.
  • Xiaojie Xu, Pi-Fang Hung, and Yinyu Ye. A simplified homogeneous and self-dual linear programming algorithm and its implementation. Annals of Operations Research, 62(1):151–171, 1996.

For more details refer to https://github.com/JayMan91/NeurIPSIntopt
Authors
Jayanta Mandi