Context-guided entropy minimization for semi-supervised domain adaptation.

Neural Networks: the official journal of the International Neural Network Society (2022)

Abstract
Semi-supervised domain adaptation has been widely studied, with various approaches addressing domain shift using labeled source-domain data combined with scarcely labeled target-domain data. Model adaptation, which follows a paradigm of source pre-training and target fine-tuning, is a promising direction: it removes the need for simultaneous access to data from both domains and thus helps preserve data privacy. Among model adaptation methods, Entropy Minimization (EM) is commonly incorporated to encourage low-density separation on target samples. However, EM tends to force models into over-confident predictions, which can cause model collapse and degraded performance. In this paper, we first study the over-confidence of EM with a quantitative analysis, which shows the importance of capturing dependencies among labels. To address this issue, we propose to guide EM via longitudinal self-distillation. Specifically, we produce a dynamic "teacher" label distribution during training by constructing a graph on the target data and performing pseudo-label propagation, encouraging the "teacher" distribution to capture contextual category dependencies based on the global data structure. EM is then guided longitudinally by distilling the learned label distribution, counteracting brute-force over-confidence. Extensive experiments demonstrate the effectiveness of our method.
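The two ingredients the abstract describes, an EM objective softened by distillation toward a propagated "teacher" distribution, and pseudo-label propagation on a graph over target data, can be illustrated with a short sketch. This is not the authors' implementation: the function names (guided_em_loss, propagate_labels), the hyperparameters (alpha, k, n_iters, gamma), and the specific label-spreading scheme are assumptions chosen for illustration.

```python
# Minimal sketch of entropy minimization guided by a graph-propagated
# teacher distribution. All names and hyperparameters are illustrative,
# not taken from the paper.
import torch
import torch.nn.functional as F

def guided_em_loss(logits, teacher_probs, alpha=1.0):
    """Entropy minimization with a self-distillation guidance term.

    logits:        (N, C) model outputs on unlabeled target samples
    teacher_probs: (N, C) propagated pseudo-label distribution (detached)
    alpha:         weight of the distillation term (assumed hyperparameter)
    """
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    # Standard entropy minimization: push predictions toward low entropy.
    entropy = -(probs * log_probs).sum(dim=1).mean()
    # Distillation toward the teacher distribution: keeps predictions from
    # collapsing to over-confident one-hot outputs.
    distill = F.kl_div(log_probs, teacher_probs, reduction="batchmean")
    return entropy + alpha * distill

def propagate_labels(features, probs, k=10, n_iters=20, gamma=0.5):
    """One common form of pseudo-label propagation on a k-NN graph
    (label spreading); the paper's exact construction may differ."""
    f = F.normalize(features, dim=1)
    sim = f @ f.T                                # cosine similarities
    topk = sim.topk(k, dim=1)
    W = torch.zeros_like(sim).scatter_(
        1, topk.indices, topk.values.clamp(min=0))
    W = (W + W.T) / 2                            # symmetrize the affinity graph
    d_inv_sqrt = W.sum(dim=1).clamp(min=1e-8).rsqrt()
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # normalized adjacency
    Y = probs.clone()
    for _ in range(n_iters):                     # iterative label spreading
        Y = gamma * (S @ Y) + (1 - gamma) * probs
    return Y.detach()                            # teacher is not backpropagated
```

In this reading, the KL term acts as a soft constraint: entropy minimization still sharpens predictions, but only in directions consistent with the label structure propagated over the target graph, which matches the abstract's stated goal of combating brute-force over-confidence.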