Convergence Rate Bounds for the Mirror Descent Method: IQCs and the Bregman Divergence

arXiv (2022)

Abstract
This paper is concerned with convergence analysis for the mirror descent (MD) method, a well-known algorithm in convex optimization. An analysis framework based on integral quadratic constraints (IQCs) is constructed to analyze the convergence rate of the MD method with strongly convex objective functions, in both continuous time and discrete time. In both settings, we formulate the problem of finding convergence rates of the MD algorithm as a feasibility problem involving linear matrix inequalities (LMIs). In particular, in continuous time we show that the Bregman divergence, which is commonly used as a Lyapunov function for this algorithm, is a special case of the class of Lyapunov functions associated with the Popov criterion, when the latter is applied to an appropriate reformulation of the problem. Applying the Popov criterion, alone or in combination with other IQCs, can therefore lead to convergence rate bounds with reduced conservatism. We also illustrate via examples that the derived convergence rate bounds can be tight.
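To make the setup concrete, the following is a minimal sketch (not taken from the paper) of the discrete-time MD update on the probability simplex, using the negative-entropy mirror map, whose Bregman divergence is the KL divergence. The objective, step size, and domain here are illustrative assumptions; the snippet only shows the role the Bregman divergence plays as a Lyapunov-like quantity, which the abstract connects to the Popov criterion.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step, n_iters):
    """Mirror descent on the probability simplex with the
    negative-entropy mirror map (exponentiated-gradient update)."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x]
    for _ in range(n_iters):
        x = x * np.exp(-step * grad(x))  # gradient step in the dual (mirror) space
        x = x / x.sum()                  # Bregman projection back onto the simplex
        trajectory.append(x)
    return trajectory

def kl_divergence(p, q):
    """Bregman divergence generated by negative entropy: KL(p || q)."""
    return float(np.sum(p * np.log(p / q)))

# Illustrative strongly convex objective f(x) = x' Q x / 2 over the simplex.
Q = np.diag([1.0, 2.0, 3.0])
grad = lambda x: Q @ x

# Interior minimizer from the KKT conditions: x*_i proportional to 1 / Q_ii.
x_star = (1.0 / np.diag(Q)) / np.sum(1.0 / np.diag(Q))

traj = mirror_descent_simplex(grad, np.ones(3) / 3, step=0.1, n_iters=200)
# The Bregman divergence to the minimizer decreases along the iterates,
# mirroring its role as a Lyapunov function in the continuous-time analysis.
print([round(kl_divergence(x_star, x), 6) for x in traj[::50]])
```

The printed divergences shrink toward zero, which is the qualitative behavior the Lyapunov argument certifies; the paper's contribution is to bound the rate of this decrease via LMI feasibility problems rather than to propose the update itself.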
Keywords
mirror descent method, IQCs, convergence