SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning
arXiv (2022)
Abstract
Recent years have witnessed tremendous success in Self-Supervised Learning
(SSL), which has been widely utilized to facilitate various downstream tasks in
Computer Vision (CV) and Natural Language Processing (NLP) domains. However,
attackers may steal such SSL models and commercialize them for profit, making
it crucial to verify the ownership of the SSL models. Most existing ownership
protection solutions (e.g., backdoor-based watermarks) are designed for
supervised learning models and cannot be used directly since they require that
the models' downstream tasks and target labels be known and available during
watermark embedding, which is not always possible in the domain of SSL. To
address such a problem, especially when downstream tasks are diverse and
unknown during watermark embedding, we propose a novel black-box watermarking
solution, named SSL-WM, for verifying the ownership of SSL models. SSL-WM maps
watermarked inputs of the protected encoders into an invariant representation
space, which causes any downstream classifier to produce expected behavior,
thus allowing the detection of embedded watermarks. We evaluate SSL-WM on
numerous tasks in both the CV and NLP domains, using different SSL models,
including contrastive-based and generative-based ones. Experimental results demonstrate that
SSL-WM can effectively verify the ownership of stolen SSL models in various
downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning,
pruning, and input preprocessing attacks. Lastly, SSL-WM can also evade
detection from evaluated watermark detection approaches, demonstrating its
promising application in protecting the ownership of SSL models.
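The abstract only outlines the mechanism. As a rough illustration of the idea of mapping watermarked inputs into an invariant representation space, the toy NumPy sketch below uses a hypothetical linear "encoder" and an additive trigger pattern (none of these names or details come from the paper): it fine-tunes the encoder so that trigger-stamped inputs collapse toward a single representation, which is what would make any downstream classifier assign them the same expected label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained SSL encoder: a single linear map W.
# (Hypothetical; the real paper uses deep encoders in CV and NLP.)
d_in, d_out = 16, 8
W = rng.normal(size=(d_out, d_in)) * 0.1

def encode(W, X):
    return X @ W.T

# Hypothetical watermark trigger: a fixed additive input pattern.
trigger = rng.normal(size=d_in) * 0.5

X = rng.normal(size=(64, d_in))
X_wm = X + trigger  # watermarked (trigger-stamped) inputs
n = X_wm.shape[0]

def spread(W, X_wm):
    # Mean squared distance of watermarked representations to their mean;
    # small spread = the inputs occupy an (almost) invariant region.
    Z = encode(W, X_wm)
    return float(((Z - Z.mean(axis=0)) ** 2).mean())

spread_before = spread(W, X_wm)

# Embed the watermark by gradient descent on the invariance loss
# L = mean((Z - mean(Z))^2); the analytic gradient w.r.t. W is
# (2 / (n * d_out)) * (Z - mu)^T @ X_wm.
lr = 0.05
for _ in range(500):
    Z = X_wm @ W.T
    mu = Z.mean(axis=0, keepdims=True)
    grad = (2.0 / (n * d_out)) * (Z - mu).T @ X_wm
    W -= lr * grad

spread_after = spread(W, X_wm)
print(spread_before, spread_after)
```

In a black-box verification step, the owner would then query a suspect downstream classifier with trigger-stamped inputs and check whether its predictions agree far more often than on clean inputs; the sketch above only covers the embedding side.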