Maximum Entropy Models for Fast Adaptation

arXiv (2020)

Abstract
Deep Neural Networks have shown great promise on a variety of downstream tasks, but their ability to adapt to new data and tasks remains a challenging problem. A model's ability to perform few-shot adaptation to a novel task is important for the scalability and deployment of machine learning models. Recent work has shown that the learned features in a neural network follow a normal distribution [41], which results in a strong prior on the downstream task. This implicit overfitting to data from the training tasks limits the ability to generalize and adapt to unseen tasks at test time, and highlights the importance of learning task-agnostic representations from data. In this paper, we propose a regularization scheme that places a max-entropy prior on the learned features of a neural network, so that the extracted features make minimal assumptions about the training data. We evaluate our method on adaptation to unseen tasks through experiments in four distinct settings, and find that it compares favourably against multiple strong baselines across all of them.
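The abstract does not specify the exact form of the regularizer. As a minimal sketch, one common way to realize a max-entropy prior on learned features is to add an entropy bonus on softmax-normalized feature activations to the training loss; the function name and the weight `lambda_ent` below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a max-entropy feature regularizer.
# It maximizes the Shannon entropy of softmax-normalized features,
# penalizing peaked (low-entropy) representations that would encode
# a strong prior from the training tasks.

def max_entropy_regularizer(features: torch.Tensor) -> torch.Tensor:
    """Return the negative mean entropy of softmax(features).

    Minimizing this term pushes each sample's feature distribution
    toward uniform, i.e. toward maximum entropy.
    """
    p = F.softmax(features, dim=-1)
    entropy = -(p * torch.log(p + 1e-8)).sum(dim=-1)  # per-sample entropy
    return -entropy.mean()  # minimize negative entropy == maximize entropy

# Illustrative use inside a training step (lambda_ent is a made-up weight):
#   loss = task_loss + lambda_ent * max_entropy_regularizer(backbone(x))
```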
Keywords
fast adaptation, maximum entropy models