ConvLoRA and AdaBN based Domain Adaptation via Self-Training
CoRR (2024)
Abstract
Existing domain adaptation (DA) methods often involve pre-training on the
source domain and fine-tuning on the target domain. For multi-target domain
adaptation, maintaining a dedicated, separately fine-tuned network for each
target domain, each retaining all the pre-trained model parameters, is
prohibitively expensive. To address this limitation, we propose Convolutional Low-Rank
Adaptation (ConvLoRA). ConvLoRA freezes pre-trained model weights, adds
trainable low-rank decomposition matrices to convolutional layers, and
backpropagates gradients through these matrices, greatly reducing the
number of trainable parameters. To further boost adaptation, we utilize
Adaptive Batch Normalization (AdaBN), which computes target-specific running
statistics, and use it alongside ConvLoRA. Our method has fewer trainable
parameters and performs better than or on par with large, independently
fine-tuned networks (with less than 0.9% trainable parameters) when tested
on segmentation of the Calgary-Campinas dataset, which contains brain
MRI images. Our approach is simple yet effective, and can be applied to any
deep learning-based architecture that uses convolutional and batch
normalization layers. Code is available at:
https://github.com/aleemsidra/ConvLoRA.
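The core idea described above (freeze the pre-trained convolution, train only a low-rank update to its weight) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation (see the linked repository for that): the class name `ConvLoRA2d`, the rank/scaling hyperparameters, and the particular A/B factorization of the kernel are illustrative assumptions.

```python
# Hedged sketch of a ConvLoRA-style adapter (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLoRA2d(nn.Module):
    """Wraps a frozen Conv2d and adds a trainable low-rank weight update.

    The update is parameterized as B @ A, where
      A: (r, in_ch * kh * kw)  and  B: (out_ch, r),
    reshaped to the conv kernel shape. With B initialized to zero, the
    wrapped layer initially behaves exactly like the frozen convolution.
    """

    def __init__(self, conv: nn.Conv2d, r: int = 4, alpha: float = 1.0):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad = False  # freeze pre-trained weights

        out_ch, in_ch, kh, kw = conv.weight.shape
        self.lora_A = nn.Parameter(torch.randn(r, in_ch * kh * kw) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_ch, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out_ch, in_ch, kh, kw = self.conv.weight.shape
        # Merge the low-rank update into the frozen kernel.
        delta = (self.lora_B @ self.lora_A).view(out_ch, in_ch, kh, kw)
        w = self.conv.weight + self.scale * delta
        return F.conv2d(
            x, w, self.conv.bias,
            stride=self.conv.stride, padding=self.conv.padding,
            dilation=self.conv.dilation, groups=self.conv.groups,
        )
```

For the AdaBN part, no extra parameters are needed: one common recipe is to reset each `BatchNorm2d` layer's running statistics and forward target-domain batches in train mode so that target-specific means and variances are accumulated, while all weights stay frozen.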