Learning under p-Tampering Attacks.

ISAIM 2018

Abstract
Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algorithms using a special case of Valiant's malicious noise, called $p$-tampering, in which the adversary may change each training example with independent probability $p$, but only using correct labels. They showed the power of such attacks by increasing the error probability in the so-called "targeted" poisoning model, in which the adversary's goal is to increase the loss of the generated hypothesis on a particular test example. At the heart of their attack was an efficient algorithm to bias the average output of any bounded real-valued function through $p$-tampering. In this work, we present new attacks for biasing the average output of bounded real-valued functions, improving upon the biasing attacks of MM16. Our improved biasing attacks directly imply improved $p$-tampering attacks against learners in the targeted poisoning model. As a bonus, our attacks come with considerably simpler analysis than previous attacks. We also study the possibility of PAC learning under $p$-tampering attacks in the non-targeted (aka indiscriminate) setting, where the adversary's goal is to increase the risk of the generated hypothesis (for a random test example). We show that PAC learning is possible under $p$-tampering poisoning attacks essentially whenever it is possible in the realizable setting without the attacks. We further show that PAC learning under "no-mistake" adversarial noise is not possible if the adversary can choose which examples to tamper with (still limited to only a $p$ fraction of them) and substitute them with adversarially chosen ones. Our formal model for such "bounded-budget" tampering attackers is inspired by the notions of (strong) adaptive corruption in secure multi-party computation.
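For intuition about the biasing primitive the abstract refers to, the sketch below simulates a greedy $p$-tampering adversary against a bounded function $f: \{0,1\}^n \to [0,1]$: each bit is drawn uniformly, but with independent probability $p$ the adversary replaces it with the value that maximizes the conditional expectation of $f$. This is only an illustrative toy, not the algorithm analyzed in the paper; the choice of $f$, the exact-enumeration helper conditional_mean, and the parameters are assumptions made for the demo.

```python
import itertools
import random

# Hypothetical illustration of p-tampering bias on a bounded function
# f: {0,1}^n -> [0,1]; not the paper's algorithm.

def conditional_mean(f, prefix, n):
    """Exact E[f] over uniform completions of the fixed prefix (small n only)."""
    rest = n - len(prefix)
    total = 0.0
    for tail in itertools.product([0, 1], repeat=rest):
        total += f(tuple(prefix) + tail)
    return total / (2 ** rest)

def tampered_sample(f, n, p, rng):
    """Draw one input under a greedy p-tampering adversary biasing f upward."""
    x = []
    for _ in range(n):
        honest_bit = rng.randint(0, 1)
        if rng.random() < p:
            # Tampering allowed: pick the bit with the larger conditional mean.
            m0 = conditional_mean(f, x + [0], n)
            m1 = conditional_mean(f, x + [1], n)
            x.append(0 if m0 >= m1 else 1)
        else:
            x.append(honest_bit)
    return tuple(x)

if __name__ == "__main__":
    rng = random.Random(0)
    n, p, trials = 8, 0.2, 5000
    f = lambda x: sum(x) / len(x)  # bounded in [0, 1], mean 0.5 without tampering
    honest = sum(f(tuple(rng.randint(0, 1) for _ in range(n)))
                 for _ in range(trials)) / trials
    attacked = sum(f(tampered_sample(f, n, p, rng)) for _ in range(trials)) / trials
    print(f"honest mean ~ {honest:.3f}, tampered mean ~ {attacked:.3f}")
```

Under these assumptions, the tampered mean rises to roughly 0.5 + p/2, illustrating how tampering with only a $p$ fraction of the input already shifts the average output of a bounded function.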