Gradient-Based Meta-Learning Using Adaptive Multiple Loss Weighting and Homoscedastic Uncertainty

2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE), 2023

Abstract
Model-agnostic meta-learning schemes use gradient descent to learn task commonalities and obtain initialization parameters for the meta-model, allowing it to adapt rapidly to new tasks with only a few training samples. Such schemes have therefore become the mainstream meta-learning approach for few-shot learning problems. This study addresses the challenge of task uncertainty in few-shot learning and proposes an improved meta-learning approach, which first enables a task-specific learner to select the initial parameters that minimize the loss on a new task, then generates weights by comparing meta-loss differences, and finally introduces the homoscedastic uncertainty of each task to weight the diverse losses. Our model performs better on few-shot learning tasks than previous meta-learning approaches and is more robust to the choice of initial learning rates and query sets.
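The homoscedastic-uncertainty loss weighting that the abstract refers to is commonly formulated (following Kendall et al.) as a total loss of the form sum over tasks of exp(-s_i) * L_i + s_i, where s_i is a learned log-variance per task. The sketch below illustrates this standard formulation; it is an assumption about the weighting scheme, not the paper's exact method, and the function name is hypothetical.

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses via homoscedastic uncertainty.

    Each loss L_i is scaled by exp(-s_i), so tasks whose learned
    log-variance s_i is large (high uncertainty) are down-weighted;
    the additive s_i term penalizes inflating the uncertainty to
    trivially shrink the loss. This follows the common Kendall-style
    formulation, not necessarily the paper's exact variant.
    """
    assert len(losses) == len(log_vars)
    total = 0.0
    for loss, s in zip(losses, log_vars):
        total += math.exp(-s) * loss + s
    return total
```

For example, with two task losses of 1.0 and 2.0 and both log-variances at 0.0, the combined loss is simply 3.0; raising one task's log-variance shrinks that task's contribution while paying the additive penalty.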
Keywords
meta-learning,homoscedastic uncertainty,meta-loss