Customizing Triggers with Concealed Data Poisoning
Abstract:
Adversarial attacks alter NLP model predictions by perturbing test-time inputs. However, it is much less understood whether, and how, predictions can be manipulated with small, concealed changes to the training data. In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.
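The abstract only sketches the attack at a high level. Below is a minimal, self-contained illustration of the general idea, not the paper's implementation: poison examples are crafted by searching for token substitutions whose training gradient aligns with the gradient of the adversary's objective on trigger-containing inputs, while the trigger tokens themselves are banned from the poison text. The model, vocabulary, token ids, and hyperparameters are all hypothetical placeholders.

```python
# Hedged sketch of gradient-aligned poison crafting on a toy classifier.
# This is NOT the authors' code; every name and constant here is illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 100, 16, 2
TRIGGER_TOKENS = [7, 8]   # hypothetical token ids standing in for a trigger phrase
TARGET_LABEL = 1          # label the adversary wants whenever the trigger is present


class BagOfEmbeddings(torch.nn.Module):
    """Toy victim model: mean of token embeddings followed by a linear layer."""
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.out = torch.nn.Linear(EMBED_DIM, NUM_CLASSES)

    def forward(self, token_ids):
        return self.out(self.embed(token_ids).mean(dim=1))


def grad_of(model, token_ids, labels):
    """Gradient of the cross-entropy loss w.r.t. the model parameters."""
    loss = F.cross_entropy(model(token_ids), labels)
    return torch.autograd.grad(loss, list(model.parameters()))


def craft_poison(model, steps=20, length=6):
    """Greedy token swaps that maximize alignment between the poison example's
    training gradient and the adversary's gradient on trigger inputs.
    Trigger tokens are excluded so the poison stays 'concealed'."""
    trigger_inputs = torch.tensor([TRIGGER_TOKENS + [3, 4, 5, 6]])
    adv_labels = torch.tensor([TARGET_LABEL])
    g_adv = grad_of(model, trigger_inputs, adv_labels)

    poison = torch.randint(0, VOCAB_SIZE, (length,))
    poison_label = torch.tensor([TARGET_LABEL])

    for step in range(steps):
        pos = step % length                      # cycle through positions
        best_tok, best_score = poison[pos].item(), -float("inf")
        for tok in range(VOCAB_SIZE):
            if tok in TRIGGER_TOKENS:
                continue                         # never mention the trigger
            candidate = poison.clone()
            candidate[pos] = tok
            g_p = grad_of(model, candidate.unsqueeze(0), poison_label)
            # A gradient step on the poison lowers the adversarial loss roughly
            # in proportion to this dot product, so larger is better for the attacker.
            score = sum((a * b).sum() for a, b in zip(g_adv, g_p)).item()
            if score > best_score:
                best_score, best_tok = score, tok
        poison[pos] = best_tok
    return poison


if __name__ == "__main__":
    victim = BagOfEmbeddings()
    print("poison token ids:", craft_poison(victim).tolist())
```

A real attack would target a full NLP model and use a more efficient search than this brute-force scan over the vocabulary, but the gradient-alignment objective above captures why crafted poison examples can steer predictions on the trigger phrase without ever containing it.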