Approximate Nullspace Augmented Finetuning for Robust Vision Transformers
arXiv (2024)
Abstract
Enhancing the robustness of deep learning models, particularly in the realm
of vision transformers (ViTs), is crucial for their real-world deployment. In
this work, we provide a finetuning approach to enhance the robustness of vision
transformers inspired by the concept of nullspace from linear algebra. Our
investigation centers on whether a vision transformer can exhibit resilience to
input variations akin to the nullspace property in linear mappings, implying
that perturbations sampled from this nullspace do not influence the model's
output when added to the input. Firstly, we show that for many pretrained ViTs,
a non-trivial nullspace exists due to the presence of the patch embedding
layer. Secondly, although the nullspace is strictly a linear-algebra concept,
we demonstrate that approximate nullspace elements can be synthesized for the
non-linear blocks of ViTs via an optimisation strategy. Finally,
we propose a finetuning strategy for ViTs wherein we augment the training data
with synthesized approximate nullspace noise. After finetuning, we find that
the model demonstrates robustness to adversarial and natural image
perturbations alike.
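To make the two core ideas concrete, here is a small numpy sketch. All dimensions, weights, the toy non-linear block, and the optimisation loop are illustrative stand-ins chosen for this sketch, not the paper's actual method or a real ViT: part 1 computes an exact nullspace for a linear patch-embedding-style map via SVD, and part 2 synthesizes an approximate nullspace element for a non-linear map by optimisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- 1. Exact nullspace of a linear "patch embedding" ------------------
# Toy stand-in: a linear map from flattened patch pixels (16*16*3 = 768
# dims) down to a 384-dim embedding, so a non-trivial nullspace must exist.
W = rng.standard_normal((384, 768))
_, s, Vt = np.linalg.svd(W)          # full SVD: Vt is 768 x 768
null_basis = Vt[s.size:]             # right singular vectors beyond rank(W)

x = rng.standard_normal(768)                      # a flattened patch
noise = null_basis.T @ rng.standard_normal(384)   # element of the nullspace
assert np.allclose(W @ x, W @ (x + noise))        # embedding is unchanged

# --- 2. Approximate nullspace element for a non-linear block -----------
# Toy two-layer block f(v) = B tanh(A v); it has no exact nullspace in
# general. We optimise a fixed-norm perturbation n so that f(x + n) ~= f(x)
# via projected gradient descent (step size and iteration count are guesses).
A = rng.standard_normal((768, 768)) / 768**0.5
B = rng.standard_normal((64, 768)) / 768**0.5
f = lambda v: B @ np.tanh(A @ v)

target = f(x)
n = rng.standard_normal(768)
n /= np.linalg.norm(n)               # constrain ||n|| = 1 (avoid n -> 0)
for _ in range(1000):
    t = np.tanh(A @ (x + n))
    # gradient of ||f(x+n) - f(x)||^2 with respect to n (chain rule)
    grad = A.T @ ((1 - t**2) * (2 * B.T @ (B @ t - target)))
    n -= 0.05 * grad
    n /= np.linalg.norm(n)           # project back onto the unit sphere

residual = np.linalg.norm(f(x + n) - target)  # residual after optimisation
```

Fixing the perturbation norm during the optimisation rules out the trivial solution n = 0; the paper's actual objective and constraints may differ from this sketch.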