Gradient descent procedure for solving linear programming relaxations of combinatorial optimization problems in parallel mode on extra large scale

arXiv (2020)

Abstract
Linear programming (LP) relaxation is a standard technique for solving hard combinatorial optimization (CO) problems. Here we present a gradient descent algorithm that exploits the special structure of some LP relaxations induced by CO problems. The algorithm can be run in parallel and was implemented as a CUDA C/C++ program for execution on a GPU. We demonstrate the efficiency of the algorithm by solving a fractional 2-matching problem. Our results show that a fractional 2-matching problem with 100,000 nodes is solved by our algorithm on a modern GPU on the scale of a second, whereas solving the same problem with the simplex method would take more than an hour. The algorithm can be modified to solve more complicated LP relaxations derived from CO problems.
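As a rough illustration of the kind of method the abstract describes (not the authors' actual implementation), the sketch below performs one projected-gradient step over the edge variables of an LP relaxation with box constraints 0 <= x_e <= 1, executed elementwise on the GPU. The gradient that encodes the problem-specific structure (for example, the degree-2 constraints of fractional 2-matching handled via penalty or dual terms) is assumed to be computed elsewhere; the kernel name, step size, and data layout are hypothetical.

    // Hedged sketch: one projected-gradient step on LP variables in [0, 1].
    // "grad" is assumed to be filled by a separate, problem-specific kernel.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void projected_gradient_step(float* x, const float* grad,
                                            float step, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float v = x[i] - step * grad[i];    // plain gradient step
            x[i] = fminf(1.0f, fmaxf(0.0f, v)); // project back onto [0, 1]
        }
    }

    int main() {
        const int n = 1 << 20;                  // number of LP (edge) variables
        float *x, *grad;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&grad, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 0.5f; grad[i] = 0.1f; } // dummy data

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        projected_gradient_step<<<blocks, threads>>>(x, grad, 0.01f, n);
        cudaDeviceSynchronize();

        printf("x[0] after one step: %f\n", x[0]);
        cudaFree(x); cudaFree(grad);
        return 0;
    }

In practice such a step would be iterated, with the gradient recomputed each round from the current violation of the relaxation's constraints; the elementwise update and projection are what make the method embarrassingly parallel on a GPU.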
Keywords
linear programming relaxations, combinatorial optimization problems, gradient descent procedure, linear programming