Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces
Abstract:
Adversarial attacks on text are mostly substitution-based methods that replace words or characters in the original text to produce successful attacks. Recent methods use pre-trained language models as the substitute generator. In Chinese, however, such methods are not directly applicable, since Chinese text must first be segmented into words. In this p...
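As background for the substitution-based attacks the abstract describes, the following is a minimal toy sketch of a greedy character-substitution attack. The classifier, the candidate table, and all names here are hypothetical stand-ins for illustration, not the paper's actual method or its language-model substitute generator.

```python
# Hypothetical sketch of a substitution-based adversarial attack on Chinese text.
# The classifier and the candidate table are toy stand-ins, not the paper's method.

from typing import Callable, Dict, List

def toy_classifier(text: str) -> float:
    """Toy sentiment score: fraction of characters in a small positive lexicon."""
    positive = {"好", "棒", "优"}
    return sum(ch in positive for ch in text) / max(len(text), 1)

# Hand-written single-character substitution candidates (e.g. homophones).
CANDIDATES: Dict[str, List[str]] = {
    "好": ["号", "耗"],
    "棒": ["榜"],
}

def greedy_attack(text: str, score: Callable[[str], float]) -> str:
    """Greedily apply substitutions that lower the classifier's score."""
    best = text
    for i, ch in enumerate(text):
        for sub in CANDIDATES.get(ch, []):
            cand = best[:i] + sub + best[i + 1:]
            if score(cand) < score(best):
                best = cand
    return best

adv = greedy_attack("这部电影很好很棒", toy_classifier)
print(adv)  # both "好" and "棒" are replaced, driving the score to 0
```

A real attack would replace the toy scorer with the victim model's output probability and draw candidates from a masked language model rather than a fixed table; the greedy search loop stays the same.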