SALAD: Source-free Active Label-Agnostic Domain Adaptation for Classification, Segmentation and Detection

2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

Abstract
We present a novel method, SALAD, for the challenging vision task of adapting a pre-trained "source" domain network to a "target" domain, with a small annotation budget in the "target" domain and a shift in the label space. Further, the task assumes that the source data is not available for adaptation, due to privacy concerns or otherwise. We postulate that such systems need to jointly optimize the dual task of (i) selecting a fixed number of samples from the target domain for annotation and (ii) transferring knowledge from the pre-trained network to the target domain. To do this, SALAD consists of a novel Guided Attention Transfer Network (GATN) and an active learning function, H-AL. The GATN enables feature distillation from the pre-trained network to the target network, complemented by the target samples mined by H-AL using transferability and uncertainty criteria. SALAD has three key benefits: (i) it is task-agnostic and can be applied across various visual tasks such as classification, segmentation and detection; (ii) it can handle shifts in output label space from the pre-trained source network to the target domain; (iii) it does not require access to source data for adaptation. We conduct extensive experiments across three visual tasks, viz. digit classification (MNIST, SVHN, VISDA), synthetic (GTA5) to real (CityScapes) image segmentation, and document layout detection (PubLayNet to DSSE). We show that our source-free approach, SALAD, yields an improvement of 0.5%-31.3% (across datasets and tasks) over prior adaptation methods that assume access to large amounts of annotated source data for adaptation. Code is available here.
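The abstract describes a joint loop: spend a small annotation budget on target samples chosen by an active learning criterion, and train the target network with a distillation signal from the frozen source network. Below is a minimal sketch of that idea, not the authors' code: the toy networks, the entropy-based selection function, and the L2 feature-matching loss are illustrative assumptions standing in for H-AL and the Guided Attention Transfer Network, respectively.

```python
# Minimal sketch of a source-free active adaptation loop (illustrative only).
# All names (SmallNet, select_by_uncertainty, loss weights) are assumptions,
# not the SALAD implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy backbone standing in for either the source or target network."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.features(x)
        return self.classifier(feats), feats

def select_by_uncertainty(model, pool, budget):
    """Pick `budget` unlabeled target samples with the highest predictive
    entropy (a simple stand-in for H-AL's transferability + uncertainty)."""
    model.eval()
    with torch.no_grad():
        logits, _ = model(pool)
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy.topk(budget).indices

# Frozen pre-trained source network; trainable target network.
source_net = SmallNet(num_classes=10).eval()
for p in source_net.parameters():
    p.requires_grad_(False)
target_net = SmallNet(num_classes=12)          # target label space may differ

optimizer = torch.optim.SGD(target_net.parameters(), lr=1e-2)
unlabeled_pool = torch.randn(500, 32)          # placeholder target data
idx = select_by_uncertainty(target_net, unlabeled_pool, budget=16)
labeled_x = unlabeled_pool[idx]
labeled_y = torch.randint(0, 12, (16,))        # annotations for the selected samples

for step in range(100):
    # Supervised loss on the few annotated target samples.
    logits, tgt_feats = target_net(labeled_x)
    ce_loss = F.cross_entropy(logits, labeled_y)

    # Feature-distillation loss from the frozen source network
    # (plain L2 matching stands in for guided attention transfer).
    with torch.no_grad():
        _, src_feats = source_net(labeled_x)
    distill_loss = F.mse_loss(tgt_feats, src_feats)

    loss = ce_loss + 0.1 * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that no source data appears anywhere in the loop: the source network contributes only frozen features, which is the source-free constraint the abstract emphasizes.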
Key words
Algorithms: Machine learning architectures, formulations, and algorithms (including transfer)