Improving Multimodal Learning with Multi-Loss Gradient Modulation
CoRR (2024)
Abstract
Learning from multiple modalities, such as audio and video, offers
opportunities for leveraging complementary information, enhancing robustness,
and improving contextual understanding and performance. However, combining such
modalities presents challenges, especially when modalities differ in data
structure, predictive contribution, and the complexity of their learning
processes. It has been observed that one modality can potentially dominate the
learning process, hindering the effective utilization of information from other
modalities and leading to sub-optimal model performance. To address this issue,
most previous works propose assessing the unimodal contributions and
dynamically adjusting the training to equalize them. We improve
upon previous work by introducing a multi-loss objective and further refining
the balancing process, allowing it to dynamically adjust the learning pace of
each modality in both directions, acceleration and deceleration, with the
ability to phase out balancing effects upon convergence. We achieve superior
results across three audio-video datasets: on CREMA-D, models with ResNet
backbone encoders surpass the previous best by 1.9%, while models with other
backbone encoders deliver improvements of at least 2.8% across different fusion
methods. On AVE, improvements start at 2.7%, and on UCF101, gains reach up to
6.1%.
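The balancing idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual method: the function name, the mean-based formula, and the `alpha` parameter are all assumptions for exposition. It shows the two properties the abstract emphasizes: modulation works in both directions (a dominant modality is decelerated, a lagging one accelerated), and the effect phases out (all coefficients go to 1.0) once the modalities' contributions converge.

```python
# Hypothetical sketch of per-modality gradient modulation.
# Each modality has its own loss in a multi-loss objective; the
# coefficients below would scale each modality's loss (or gradients),
# e.g. total = sum(c_m * L_m for each modality m) + L_fused.

def modulation_coefficients(unimodal_scores, alpha=1.0):
    """Return one modulation coefficient per modality.

    unimodal_scores: per-modality performance proxies (e.g. accuracies).
    A modality scoring above the mean gets a coefficient < 1 (decelerate),
    one below the mean gets > 1 (accelerate). When all scores are equal,
    every coefficient is exactly 1.0, i.e. the balancing effect phases out.
    alpha controls the strength of the modulation (an assumed knob).
    """
    mean = sum(unimodal_scores) / len(unimodal_scores)
    return [1.0 - alpha * (s - mean) / max(mean, 1e-8)
            for s in unimodal_scores]
```

For example, with audio clearly dominating video (scores 0.9 vs. 0.5), the audio coefficient drops below 1 and the video coefficient rises above 1; with equal scores both coefficients are exactly 1, so training reduces to the unmodulated multi-loss objective at convergence.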