Blind Backdoors in Deep Learning Models

Eugene Bagdasaryan
Other Links: arxiv.org

Abstract:

We investigate a new method for injecting backdoors into machine learning models, based on poisoning the loss computation in the model-training code. Our attack is blind: the attacker cannot modify the training data, nor observe the execution of his code, nor access the resulting model. We develop a new technique for blind backdoor training…
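The abstract sketches the mechanism: the attacker's code quietly replaces the benign loss with one that also optimizes a backdoor objective, so the backdoor is trained in without touching the data. Below is a minimal illustrative sketch, assuming a PyTorch image classifier, a hypothetical pixel-patch trigger add_trigger, and a fixed blend weight alpha; these names and the fixed weight are assumptions for illustration, and the paper itself balances the two objectives via multi-objective optimization rather than a constant weight.

    import torch
    import torch.nn.functional as F

    # Hypothetical trigger (assumption, not the paper's exact trigger):
    # stamp a small white patch in the image corner and relabel every
    # input to the attacker-chosen target class.
    def add_trigger(images, target_class, patch_size=4):
        poisoned = images.clone()
        poisoned[:, :, :patch_size, :patch_size] = 1.0  # white square patch
        labels = torch.full((images.size(0),), target_class,
                            dtype=torch.long, device=images.device)
        return poisoned, labels

    # Compromised loss computation: the only thing the blind attacker
    # controls. It blends the main-task loss with a backdoor-task loss
    # computed on triggered copies of the current batch.
    def blind_backdoor_loss(model, images, labels, target_class=0, alpha=0.5):
        loss_main = F.cross_entropy(model(images), labels)
        bd_images, bd_labels = add_trigger(images, target_class)
        loss_backdoor = F.cross_entropy(model(bd_images), bd_labels)
        # alpha is a stand-in for the paper's multi-objective balancing
        return alpha * loss_main + (1 - alpha) * loss_backdoor

Because the attack is blind, this loss code is all the attacker ships: it must synthesize backdoored inputs on the fly from whatever batch arrives, keeping main-task accuracy high without ever seeing the dataset or the trained model.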
