Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
JOURNAL OF GLOBAL OPTIMIZATION (2020)
Abstract
This paper proposes an inertial Bregman proximal gradient method for minimizing the sum of two possibly nonconvex functions. The method incorporates two different inertial steps and adopts Bregman regularization in solving the subproblem. Under some general parameter constraints, we prove subsequential convergence: every cluster point of the generated sequence is a stationary point of the considered problem. To relax these parameter constraints, we further propose a nonmonotone line search strategy that makes the parameter selection more flexible. The subsequential convergence of the proposed method with line search is established. When the line search is monotone, we prove the stronger global convergence and a linear convergence rate under the Kurdyka–Łojasiewicz framework. Moreover, numerical results on SCAD and MCP nonconvex penalty problems are reported to demonstrate the effectiveness and superiority of the proposed methods and the line search strategy.
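The abstract describes the method only at a high level; the following minimal Python sketch illustrates the kind of iteration it refers to: two inertial (extrapolation) points, a gradient step at one of them, and a Bregman-regularized subproblem anchored at the other. All names and parameter choices here are hypothetical, and the Euclidean kernel phi(x) = 0.5*||x||^2 is assumed, under which the Bregman subproblem reduces to an ordinary proximal map; the paper's actual scheme, parameter constraints, and line search are more general.

```python
import numpy as np

def prox_mcp(v, step, lam=1.0, gamma=2.5):
    """Closed-form proximal map of the nonconvex MCP penalty (standard
    firm-thresholding formula, valid for step < gamma).  Used here only
    as an illustrative choice of the nonsmooth term g."""
    out = v.copy()
    small = np.abs(v) <= step * lam
    mid = (~small) & (np.abs(v) <= gamma * lam)
    out[small] = 0.0
    out[mid] = np.sign(v[mid]) * (np.abs(v[mid]) - step * lam) / (1.0 - step / gamma)
    return out  # entries with |v| > gamma*lam are left unchanged

def inertial_bregman_pg(grad_f, prox_g, x0, step=0.1, alpha=0.3, beta=0.3,
                        max_iter=500, tol=1e-8):
    """One common form of an inertial proximal gradient iteration with two
    inertial terms (a hypothetical sketch, not the paper's exact algorithm)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(max_iter):
        d = x - x_prev
        y = x + alpha * d          # inertial point for the gradient evaluation
        z = x + beta * d           # inertial point anchoring the prox step
        # With phi = 0.5*||.||^2, the Bregman subproblem
        #   argmin_x  g(x) + <grad_f(y), x> + D_phi(x, z) / step
        # reduces to the ordinary proximal map computed below.
        x_new = prox_g(z - step * grad_f(y), step)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x_prev, x = x, x_new
    return x

# Tiny usage example: MCP-regularized least squares,
#   min_x  0.5*||A x - b||^2 + MCP(x).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
x_hat = inertial_bregman_pg(lambda x: A.T @ (A @ x - b), prox_mcp,
                            np.zeros(100), step=1.0 / L)
```

With alpha = beta the two inertial points coincide and the scheme collapses to a single-extrapolation method; keeping them distinct is what the abstract means by "two different inertial steps".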
Keywords
Nonconvex, Nonsmooth, Inertial proximal gradient method, Bregman regularization, Kurdyka–Łojasiewicz property, Global convergence