M2SGD - Learning to Learn Important Weights.

CVPR Workshops (2020)

Abstract
Meta-learning concerns rapid knowledge acquisition. One popular approach casts optimisation itself as a learning problem, and it has been shown that learnt neural optimisers update base learners more quickly than their handcrafted counterparts. In this paper, we learn an optimisation rule that sparsely updates the learner parameters and removes redundant weights. We present Masked Meta-SGD (M2SGD), a neural optimiser that not only updates learners quickly, but also removes 83.71% of the weights of ResNet20s. We release our code at https://github.com/Nic5472K/CLVISION2020_CVPR_M2SGD.
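The abstract describes a Meta-SGD-style update combined with a learnt sparsifying mask. A minimal sketch of that idea, assuming a per-parameter learnt step size `alpha` and a binary mask `mask` (illustrative names; this is not the authors' exact rule, which in the paper is produced by a learnt neural optimiser):

```python
import numpy as np

def masked_meta_sgd_step(theta, grad, alpha, mask):
    """One sparse update: parameters where mask == 0 are left untouched,
    mimicking an optimiser that updates weights sparsely and can prune
    the masked-out ones entirely."""
    return theta - mask * (alpha * grad)

rng = np.random.default_rng(0)
theta = rng.normal(size=5)              # base-learner parameters
grad = rng.normal(size=5)               # loss gradient w.r.t. theta
alpha = np.full(5, 0.1)                 # learnt per-parameter step sizes (Meta-SGD)
mask = np.array([1., 0., 1., 0., 1.])   # learnt sparse mask (hypothetical values)

new_theta = masked_meta_sgd_step(theta, grad, alpha, mask)
```

In the paper the mask is learnt rather than fixed; here it simply shows that masked parameters receive no update, which is what allows the corresponding weights to be removed.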
Keywords
neural optimiser, knowledge acquisition, learning problem, optimisation rule, learner parameters, masked meta-SGD, meta-learning