Robust Local Preserving and Global Aligning Network for Adversarial Domain Adaptation

IEEE Transactions on Knowledge and Data Engineering (2023)

Cited 19 | Views 62
Abstract
Unsupervised domain adaptation (UDA) requires source-domain samples with clean ground-truth labels during training. Accurately labeling a large number of source-domain samples is time-consuming and laborious, so an alternative is to train on samples with noisy labels. However, training with noisy labels can greatly degrade UDA performance. In this paper, we address the problem of learning UDA models with access only to noisy labels and propose a novel method called the robust local preserving and global aligning network (RLPGA). RLPGA improves robustness to label noise in two ways. One is learning a classifier with a robust information-theoretic loss function. The other is constructing two adjacency weight matrices and two negative weight matrices, via the proposed local preserving module, to preserve the local topological structure of the input data. We conduct a theoretical analysis of the robustness of RLPGA and prove that the robust information-theoretic loss and the local preserving module help reduce the empirical risk on the target domain. A series of empirical studies demonstrates the effectiveness of the proposed RLPGA.
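The abstract does not give the exact construction of the adjacency and negative weight matrices. As a hedged illustration only, a common way to encode local topology of this kind is a Gaussian-weighted k-nearest-neighbor adjacency matrix, with a complementary "negative" matrix marking non-neighbor pairs; the function name, the kNN rule, and the Gaussian kernel below are assumptions, not the paper's definition:

```python
import numpy as np

def adjacency_and_negative_weights(X, k=5, sigma=1.0):
    """Generic sketch of local-topology weights (NOT the RLPGA formulas).

    W[i, j] > 0 with a Gaussian weight if j is among the k nearest
    neighbors of i (symmetrized); N[i, j] = 1 for non-neighbor pairs.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        # Indices of the k nearest neighbors of sample i (excluding itself).
        nbrs = np.argsort(d2[i])[1:k + 1]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)            # symmetrize the neighbor graph
    N = (W == 0).astype(float)        # non-neighbor pairs get weight 1
    np.fill_diagonal(N, 0.0)          # no self-pairs
    return W, N
```

Such matrices are typically used in graph-regularization terms that pull neighboring samples' representations together and push non-neighbors apart; how RLPGA combines its two pairs of matrices is specified in the full paper, not here.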
Keywords

Wasserstein distance, unsupervised domain adaptation, noisy label, representation learning, adversarial learning