Distributed Online Optimization in Dynamic Environments Using Mirror Descent.

IEEE Trans. Automat. Contr. (2018)

Abstract
This work addresses decentralized online optimization in nonstationary environments. A network of agents aims to track the minimizer of a global, time-varying, convex function. The minimizer follows known linear dynamics corrupted by unknown, unstructured noise. At each time step, the global function (which could be a tracking error) can be cast as a sum of a finite number of local functions, each of which is assigned to one agent in the network. Moreover, the local functions become available to agents sequentially, and agents have no prior knowledge of the future cost functions. Therefore, agents must communicate with each other to build an online approximation of the global function. We propose a decentralized variant of the celebrated mirror descent algorithm, in which agents perform a consensus step to track the global function and take into account the dynamics of the global minimizer. To measure the performance of the proposed online algorithm, we compare it to its offline counterpart, where the global functions are available a priori. The gap between the two losses is defined as dynamic regret. We establish a regret bound that scales inversely in the spectral gap of the network and captures the deviation of the minimizer sequence from the given dynamics. We show that our framework subsumes a number of results in distributed optimization.
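The update described in the abstract combines three ingredients per round: a consensus step over the network, a mirror descent step on the local gradient, and propagation through the known linear dynamics of the minimizer. A minimal sketch of one such round is below, using the Euclidean mirror map (so the mirror step reduces to a plain gradient step); the names `W`, `A`, `eta`, and the quadratic local costs in the demo are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def dome_step(X, W, A, grads, eta):
    """One illustrative round of decentralized online mirror descent.

    X:     (n_agents, d) current local iterates, one row per agent
    W:     (n_agents, n_agents) doubly stochastic consensus matrix
    A:     (d, d) known linear dynamics of the moving minimizer
    grads: (n_agents, d) gradients of each agent's local cost at X
    eta:   step size
    """
    Y = W @ X            # consensus: each agent averages its neighbors
    Y = Y - eta * grads  # mirror-descent step (Euclidean mirror map)
    return Y @ A.T       # push estimates through the known dynamics

# Demo (hypothetical setup): 3 agents on a complete graph track the
# minimizer of f(x) = sum_i ||x - b_i||^2 / 2, i.e. the mean of the b_i,
# with static dynamics A = I and a decaying step size.
b = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
n, d = b.shape
W = np.full((n, n), 1.0 / n)  # complete-graph averaging
A = np.eye(d)                 # static minimizer for this toy example
X = np.zeros((n, d))
for t in range(200):
    grads = X - b             # gradient of each local quadratic at X
    X = dome_step(X, W, A, grads, eta=1.0 / (t + 1))
```

With the decaying step size, every agent's iterate approaches the global minimizer `b.mean(axis=0)`; a time-varying `A` would instead make the agents chase a moving target, which is the regime the dynamic-regret bound addresses.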
Keywords
Heuristic algorithms, Cost function, Mirrors, Convex functions, Euclidean distance, Benchmark testing