DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning
CVPR 2024
Abstract
Federated learning (FL) has emerged as a powerful paradigm for learning from
decentralized data, and federated domain generalization further considers that
the test dataset (target domain) is absent from the decentralized training
data (source domains). However, most existing FL methods assume that domain
labels are provided during training, and their evaluation imposes an explicit
constraint on the number of domains, which must strictly match the number of
clients. Such restrictions may be impractical in the real world, since they
underutilize numerous edge devices and require additional cross-client domain
annotations, which also raises potential privacy risks. In this paper, we
propose an efficient and novel approach, called Disentangled Prompt Tuning
(DiPrompT), which tackles the above restrictions by learning adaptive prompts
for domain generalization in a distributed manner. Specifically, we first
design two types of prompts: a global prompt that captures general knowledge
shared across all clients, and domain prompts that capture domain-specific
knowledge. Together, they remove the restriction of a one-to-one mapping
between source domains and local clients. Furthermore, a dynamic query metric
is introduced to automatically search for the most suitable domain label for
each sample; it consists of a two-substep text-image alignment based on prompt
tuning and requires no labor-intensive annotation. Extensive experiments on
multiple datasets demonstrate that DiPrompT achieves superior domain
generalization performance over state-of-the-art FL methods when domain labels
are not provided, and even outperforms many centralized learning methods that
use domain labels.
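To make the abstract's two key ideas more concrete, below is a minimal PyTorch sketch of (a) a global prompt plus per-domain prompts and (b) a similarity-based domain query over text-image alignments. It assumes a CLIP-like backbone that maps images and prompted text into a shared embedding space; all names (DisentangledPrompts, query_domain, the dimensions) are hypothetical illustrations, not the authors' implementation, and the paper's actual two-substep alignment is more elaborate than this single argmax.

```python
# Sketch only: hypothetical names and shapes, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledPrompts(nn.Module):
    """Two prompt types from the abstract: one global prompt shared by all
    clients, plus K domain prompts for K latent source domains. K need not
    equal the number of clients, removing the one-to-one mapping restriction."""
    def __init__(self, n_domains: int, prompt_len: int, embed_dim: int):
        super().__init__()
        # Global prompt: general knowledge across all clients.
        self.global_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        # Domain prompts: domain-specific knowledge, one prompt per latent domain.
        self.domain_prompts = nn.Parameter(torch.randn(n_domains, prompt_len, embed_dim) * 0.02)

def query_domain(image_feats: torch.Tensor, domain_text_feats: torch.Tensor) -> torch.Tensor:
    """Dynamic query metric (sketch): assign each sample the latent domain
    whose prompted text embedding aligns best with its image embedding,
    replacing manual domain labels.
    image_feats: (B, D) image embeddings; domain_text_feats: (K, D)."""
    sims = F.normalize(image_feats, dim=-1) @ F.normalize(domain_text_feats, dim=-1).T  # (B, K)
    return sims.argmax(dim=-1)  # inferred domain index per sample

# Example usage with stand-in features (hypothetical shapes):
prompts = DisentangledPrompts(n_domains=4, prompt_len=16, embed_dim=512)
img = torch.randn(8, 512)        # stand-in for CLIP image embeddings
txt = torch.randn(4, 512)        # stand-in for prompted text embeddings, one per domain
labels = query_domain(img, txt)  # (8,) inferred latent-domain indices
```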