Randomized Neural Networks Based Decentralized Multi-Task Learning via Hybrid Multi-Block ADMM

IEEE Transactions on Signal Processing (2021)

Abstract
In multi-task learning (MTL), related tasks learn jointly to improve generalization performance. To exploit the high learning speed of feed-forward neural networks (FNNs), we apply the randomized single-hidden-layer FNN (RSF) to the MTL problem, where the output weights of the RSFs for all tasks are learned collaboratively. We first present the RSF-based MTL problem in the centralized setting, which is solved by the proposed MTL-RSF algorithm. Since the data sets of different tasks are often geo-distributed, we then study decentralized machine learning. We formulate the decentralized RSF-based MTL problem as a majorized multi-block optimization with coupled bi-convex objective functions. To solve this problem, we propose the DMTL-RSF algorithm, a hybrid Jacobian and Gauss-Seidel proximal multi-block alternating direction method of multipliers (ADMM). Further, to reduce the computation load of DMTL-RSF, we present DMTL-RSF with first-order approximation (FO-DMTL-RSF). Theoretical analysis shows that, under certain conditions, the proposed decentralized algorithms are guaranteed to converge to a stationary point. Simulations demonstrate the convergence of the presented algorithms and show that they can outperform existing MTL methods. Moreover, adjusting the dimension of the hidden feature space exposes a trade-off between communication load and learning accuracy for DMTL-RSF.
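To make the RSF building block concrete, the sketch below shows a randomized single-hidden-layer FNN for a single task: the hidden-layer weights are drawn at random and kept fixed, and only the output weights are learned via a regularized least-squares solve. This is a minimal illustration of the general RSF idea, not the paper's MTL-RSF or DMTL-RSF algorithms; the function names, the `hidden_dim` and `reg` parameters, and the tanh activation are assumptions for illustration.

```python
import numpy as np

def rsf_fit(X, Y, hidden_dim=100, reg=1e-2, seed=None):
    """Fit a randomized single-hidden-layer FNN (RSF) for one task.

    X: (n_samples, n_features) inputs; Y: (n_samples, n_outputs) targets.
    Returns the fixed random hidden layer (W, b) and the learned output weights beta.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-layer parameters are drawn at random and never trained.
    W = rng.standard_normal((n_features, hidden_dim))
    b = rng.standard_normal(hidden_dim)
    H = np.tanh(X @ W + b)  # hidden feature map
    # Output weights via regularized least squares (closed-form ridge solution).
    beta = np.linalg.solve(H.T @ H + reg * np.eye(hidden_dim), H.T @ Y)
    return (W, b), beta

def rsf_predict(hidden_params, beta, X):
    W, b = hidden_params
    return np.tanh(X @ W + b) @ beta
```

In the decentralized setting described above, the closed-form solve for the output weights would be replaced by iterative multi-block ADMM updates exchanged among task nodes, with the hidden dimension controlling the size of the messages and hence the communication-accuracy trade-off noted in the abstract.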
Keywords
Task analysis, Convex functions, Training, Neural networks, Approximation algorithms, Signal processing algorithms, Load modeling, Multi-task learning, randomized feed-forward neural networks, decentralized optimization