Stochastic Extragradient with Random Reshuffling: Improved Convergence for Variational Inequalities
International Conference on Artificial Intelligence and Statistics (2024)
Abstract
The Stochastic Extragradient (SEG) method is one of the most popular
algorithms for solving finite-sum min-max optimization and variational
inequality problems (VIPs) appearing in various machine learning tasks.
However, existing convergence analyses of SEG focus on its with-replacement
variants, while practical implementations of the method randomly reshuffle
components and sequentially use them. Unlike the well-studied with-replacement
variants, SEG with Random Reshuffling (SEG-RR) lacks established theoretical
guarantees. In this work, we provide a convergence analysis of SEG-RR for three
classes of VIPs: (i) strongly monotone, (ii) affine, and (iii) monotone. We
derive conditions under which SEG-RR achieves a faster convergence rate than
SEG with uniform with-replacement sampling. In the monotone setting, our
analysis of SEG-RR guarantees convergence to arbitrary accuracy without
large batch sizes, a strong requirement of the classical with-replacement
SEG. As a byproduct of our results, we provide convergence guarantees for
Shuffle Once SEG (which shuffles the data only once, at the beginning of the
algorithm) and the Incremental Extragradient (which does not shuffle the data
at all). We supplement our analysis with experiments that empirically validate
the superior performance of SEG-RR over the classical with-replacement SEG.
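
To make the sampling scheme concrete, below is a minimal NumPy sketch of SEG-RR on a strongly monotone, finite-sum quadratic VIP. The component operators F_i, the step sizes gamma1 and gamma2, and the problem dimensions are hypothetical choices for illustration (not the paper's experimental setup): each epoch reshuffles the n components once and then performs one extragradient step (extrapolation followed by an update at the extrapolated point) per component, in the shuffled order.

import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 5

# Hypothetical component operators F_i(w) = A_i w + b_i; the averaged
# operator is strongly monotone because every A_i is made positive definite.
A = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(n)]
A = [0.5 * (M + M.T) + d * np.eye(d) for M in A]  # symmetrize, shift spectrum
b = [rng.standard_normal(d) for _ in range(n)]

def F(i, w):
    return A[i] @ w + b[i]

def seg_rr(w0, gamma1, gamma2, epochs):
    w = w0.copy()
    for _ in range(epochs):
        perm = rng.permutation(n)          # reshuffle once per epoch
        for i in perm:                     # pass through components sequentially
            w_half = w - gamma1 * F(i, w)  # extrapolation step
            w = w - gamma2 * F(i, w_half)  # update at the extrapolated point
    return w

# Solution of the averaged operator: (1/n) sum_i (A_i w + b_i) = 0.
w_star = np.linalg.solve(sum(A), -sum(b))
w = seg_rr(np.zeros(d), gamma1=0.05, gamma2=0.05, epochs=200)
print(np.linalg.norm(w - w_star))  # distance to the VIP solution

Replacing the reshuffled pass with independent uniform draws of i at every step recovers the classical with-replacement SEG that the paper compares against; shuffling only before the first epoch gives Shuffle Once SEG, and fixing perm = range(n) gives the Incremental Extragradient.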