Variance Reduction Can Improve Trade-Off in Multi-Objective Learning

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Many machine learning problems today involve multiple objective functions and are often tackled within the multi-objective learning (MOL) framework. Although MOL algorithms have obtained many encouraging results, a recent theoretical study [1] revealed that gradient-based MOL methods (e.g., MGDA, CAGrad) all exhibit an inherent trade-off between optimization convergence speed and conflict-avoidance ability. To this end, we develop an improved stochastic variance-reduced multi-objective gradient correction method for MOL that achieves an $\mathcal{O}(\varepsilon^{-1.5})$ sample complexity. In addition, the proposed method simultaneously improves the theoretical guarantees for conflict avoidance and convergence rate over prior stochastic gradient-based MOL methods in the non-convex setting. We further validate the effectiveness of the proposed method empirically on popular multi-task learning (MTL) benchmarks.
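To illustrate the two ingredients the abstract combines, the sketch below pairs a STORM-style variance-reduced gradient estimate (a recursive correction using gradients at consecutive iterates on the same sample) with the two-objective min-norm subproblem that MGDA-type methods solve to pick a common descent direction. This is a minimal toy on two quadratic objectives with hypothetical hyperparameters (`beta`, `lr`, noise level), not the authors' exact algorithm or benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy quadratic objectives f_i(x) = 0.5 * ||x - target_i||^2
# (hypothetical example; the Pareto set is the segment between a and b).
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def grads_same_sample(x_new, x_old, target, noise=0.1):
    """Stochastic gradients at two points sharing one noise sample,
    as required by STORM-style variance reduction."""
    z = noise * rng.standard_normal(x_new.shape)
    return (x_new - target) + z, (x_old - target) + z

def min_norm_2(g1, g2):
    """Min-norm point of the convex hull of {g1, g2}:
    the closed-form two-objective MGDA subproblem."""
    diff = g1 - g2
    denom = diff @ diff
    if denom < 1e-12:
        return 0.5 * (g1 + g2)
    gamma = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return gamma * g1 + (1.0 - gamma) * g2

x = np.zeros(2)
beta, lr = 0.5, 0.2          # illustrative hyperparameters, not tuned
d1, _ = grads_same_sample(x, x, a)   # initialize per-objective estimates
d2, _ = grads_same_sample(x, x, b)
for _ in range(200):
    x_new = x - lr * min_norm_2(d1, d2)
    # STORM-style recursive update: d <- g(x_new) + (1 - beta) * (d - g(x)),
    # applied per objective; the correction term cancels shared noise.
    g1n, g1o = grads_same_sample(x_new, x, a)
    g2n, g2o = grads_same_sample(x_new, x, b)
    d1 = g1n + (1 - beta) * (d1 - g1o)
    d2 = g2n + (1 - beta) * (d2 - g2o)
    x = x_new
```

After the loop, `x` sits near the Pareto segment between `a` and `b`: the min-norm combination keeps either objective from dominating, while the recursive correction shrinks the gradient noise relative to plain stochastic gradients.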
Key words: Multi-objective learning, Multi-task learning