Customizing Triggers with Concealed Data Poisoning

Tony Z. Zhao
Tony Z. Zhao
Shi Feng
Shi Feng
Cited by: 0|Bibtex|Views22
Other Links: arxiv.org

Abstract:

Adversarial attacks alter NLP model predictions by perturbing test-time inputs. However, it is much less understood whether, and how, predictions can be manipulated with small, concealed changes to the training data. In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a des...More

Code:

Data:

Full Text
Your rating :
0

 

Tags
Comments