Learned Finite-Time Consensus for Distributed Optimization
arXiv (2024)
Abstract
Most algorithms for decentralized learning employ a consensus or diffusion
mechanism to drive agents to a common solution of a global optimization
problem. Generally this takes the form of linear averaging, at a rate of
contraction determined by the mixing rate of the underlying network topology.
For very sparse graphs this can yield a bottleneck, slowing down the
convergence of the learning algorithm. We show that a sequence of matrices
achieving finite-time consensus can be learned for unknown graph topologies in
a decentralized manner by solving a constrained matrix factorization problem.
We demonstrate numerically the benefit of the resulting scheme in both
structured and unstructured graphs.
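The abstract's notion of finite-time consensus can be illustrated with a classic hand-constructed example (not the paper's learned scheme): on a d-dimensional hypercube of n = 2^d nodes, averaging each node with its neighbor along one dimension per step gives a sequence of d mixing matrices whose product is exactly the averaging matrix (1/n)𝟙𝟙ᵀ. A minimal sketch, assuming NumPy:

```python
import numpy as np

def hypercube_finite_time_sequence(d):
    """Mixing matrices W_1..W_d for a d-dimensional hypercube (n = 2**d nodes).

    W_k averages every node with its neighbor across dimension k, so the
    product W_d ... W_1 equals the exact averaging matrix (1/n) * ones((n, n)):
    consensus is reached in exactly d steps, not asymptotically.
    """
    n = 2 ** d
    mats = []
    for k in range(d):
        W = np.zeros((n, n))
        for i in range(n):
            j = i ^ (1 << k)  # neighbor of node i across dimension k
            W[i, i] = 0.5
            W[i, j] = 0.5
        mats.append(W)
    return mats

d = 3
n = 2 ** d
mats = hypercube_finite_time_sequence(d)
product = np.linalg.multi_dot(mats)
# The product is the exact averaging matrix: finite-time consensus in d steps.
assert np.allclose(product, np.ones((n, n)) / n)
```

The paper's contribution, per the abstract, is to learn such a sequence for an unknown topology via decentralized constrained matrix factorization rather than relying on a known structured graph like the hypercube above.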