Zeroth-Order Non-Convex Optimization for Cooperative Multi-Agent Systems With Diminishing Step Size and Smoothing Radius.

IEEE Control Syst. Lett. (2023)

Abstract
We study a class of zeroth-order distributed optimization problems in which each agent controls a partial vector and observes a local cost that depends on the joint vector of all agents, and the agents communicate with each other subject to time delay. We propose a gradient descent-based algorithm that uses two-point gradient estimators with diminishing smoothing parameters and diminishing step sizes, and we establish its convergence rate to a first-order stationary point for general nonconvex problems. A byproduct of the diminishing step-size and smoothing-parameter scheme, as opposed to a fixed-parameter scheme, is that the algorithm requires no information about the local cost functions. This makes the method appealing in practice: it can optimize an unknown (black-box) global function without prior knowledge of its smoothness parameters, while its performance adaptively matches the problem instance parameters.
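To illustrate the core idea described in the abstract, the following is a minimal single-agent sketch of gradient descent driven by a two-point zeroth-order gradient estimator with diminishing step size and smoothing radius. The specific schedules (1/sqrt(t) and 1/t^{1/4}), function names, and test function are illustrative assumptions for this sketch, not the paper's exact choices, and the distributed, delayed-communication aspects are omitted.

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, rng):
    """Estimate the gradient of f at x from two function evaluations
    along a random unit direction (two-point zeroth-order estimator)."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)  # random unit direction
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

def zo_descent(f, x0, num_iters=1000, seed=0):
    """Zeroth-order descent with diminishing step size and smoothing radius.
    Schedules below are assumed for illustration only."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(1, num_iters + 1):
        eta = 1.0 / np.sqrt(t)      # diminishing step size (assumed schedule)
        delta = 1.0 / t ** 0.25     # diminishing smoothing radius (assumed schedule)
        g = two_point_grad_estimate(f, x, delta, rng)
        x -= eta * g
    return x

if __name__ == "__main__":
    # Example: minimize a simple nonconvex test function using only function values.
    f = lambda x: np.sum(x ** 2) + 0.5 * np.sin(5 * x[0])
    print(zo_descent(f, x0=[2.0, -1.5]))
```

Because both the step size and the smoothing radius shrink over time, the sketch needs no knowledge of the function's smoothness constants, which mirrors the black-box property highlighted in the abstract.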
Keywords
optimization,zeroth-order,non-convex,multi-agent