Federated Adversarial Domain Hallucination for Privacy-Preserving Domain Generalization

IEEE TRANSACTIONS ON MULTIMEDIA (2024)

Abstract
Domain generalization aims to reduce the vulnerability of deep neural networks to out-of-domain distribution shifts. With recent and increasing data privacy concerns, federated domain generalization, where multiple domains are distributed across different local clients, has become an important research problem and brings new challenges for learning domain-invariant information from separated domains. In this paper, we address the problem of federated domain generalization from the perspective of domain hallucination. We propose a novel federated domain hallucination learning framework, with no data exchange between clients other than model weights, based on the idea that a domain hallucination that enlarges the prediction uncertainty of the global model is more likely to transform samples into an unseen domain. Such hallucinations are achieved by generating samples that maximize the entropy of the global model and minimize the cross-entropy of the local model, where the latter loss is introduced to preserve the sample semantics. By training the local models with the learned domain hallucinations, the final model is expected to be more robust to unseen domain shifts. We perform extensive experiments on three object classification benchmarks and one medical image segmentation benchmark. The proposed method outperforms state-of-the-art methods on all the benchmarks, demonstrating its effectiveness.
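The hallucination objective described above combines two terms: the entropy of the global model's prediction (maximized, to push samples toward unseen domains) and the cross-entropy of the local model's prediction (minimized, to preserve semantics). A minimal NumPy sketch of that combined loss follows; the function names, the sign convention (minimizing `-H + lam * CE` with respect to the hallucination generator), and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Shannon entropy of the global model's prediction (to be maximized)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def cross_entropy(probs, labels):
    # Cross-entropy of the local model's prediction against the true labels
    # (to be minimized, keeping the hallucinated sample's semantics)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def hallucination_loss(global_logits, local_logits, labels, lam=1.0):
    # Illustrative objective for the hallucination generator:
    # minimize  -H(global prediction) + lam * CE(local prediction, y)
    # `lam` is a hypothetical trade-off weight, not from the paper.
    h = entropy(softmax(global_logits))
    ce = cross_entropy(softmax(local_logits), labels)
    return float((-h + lam * ce).mean())
```

Under this sketch, a hallucinated sample that leaves the global model maximally uncertain (uniform prediction) while keeping the local model confident on the correct class attains a lower loss than one the global model classifies confidently, which matches the intuition in the abstract.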
Key words
Domain shift, domain generalization, federated learning, privacy preserving