Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks
arXiv (2022)
Abstract
Autonomous agents deployed in the real world need to be robust against
adversarial attacks on sensory inputs. Robustifying agent policies requires
anticipating the strongest attacks possible. We demonstrate that existing
observation-space attacks on reinforcement learning agents have a common
weakness: while effective, their lack of information-theoretic detectability
constraints makes them detectable by automated means or human inspection.
Detectability is undesirable to adversaries as it may trigger security
escalations. We introduce ϵ-illusory, a novel form of adversarial
attack on sequential decision-makers that is both effective and of
ϵ-bounded statistical detectability. We propose a novel dual ascent
algorithm to learn such attacks end-to-end. Compared to existing attacks, we
empirically find ϵ-illusory attacks to be significantly harder to detect with
automated methods, and a small study with human participants (IRB approval
under reference R84123/RE001) suggests they are similarly harder to detect for
humans. Our findings suggest the need for better anomaly detectors, as well as
effective hardware- and system-level defenses. The project website can be found
at https://tinyurl.com/illusory-attacks.
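The dual ascent algorithm mentioned above alternates between improving the attack and tightening a multiplier that enforces the ϵ-bound on detectability. The toy sketch below only illustrates that alternating primal/dual structure; the linear attack objective, the quadratic detectability proxy, the budget, and the step sizes are hypothetical stand-ins, not the paper's learned, end-to-end formulation.

import torch

eps = 0.05            # detectability budget ϵ (hypothetical value)
lam = 0.0             # dual variable (Lagrange multiplier)
primal_lr, dual_lr = 1e-2, 1e-1

# Observation perturbation being learned (toy 4-dimensional stand-in).
delta = torch.zeros(4, requires_grad=True)

def attack_objective(d):
    # Toy stand-in for the adversary's gain from perturbation d.
    return (d * torch.tensor([1.0, -0.5, 2.0, 0.3])).sum()

def detectability(d):
    # Toy stand-in for a statistical-detectability measure; grows
    # with perturbation magnitude, mimicking a divergence bound.
    return 0.5 * (d ** 2).sum()

for step in range(500):
    # Primal step: ascend the Lagrangian of the constrained problem
    #   max attack_objective(δ)  s.t.  detectability(δ) <= ϵ.
    lagrangian = attack_objective(delta) - lam * (detectability(delta) - eps)
    (grad,) = torch.autograd.grad(lagrangian, delta)
    with torch.no_grad():
        delta += primal_lr * grad
    # Dual step: raise lam when the budget is violated, relax it otherwise.
    lam = max(0.0, lam + dual_lr * (detectability(delta).item() - eps))

print(f"detectability = {detectability(delta).item():.4f} "
      f"(budget {eps}), lam = {lam:.2f}")

At convergence the perturbation sits on the constraint boundary: the multiplier lam grows until the detectability term is driven down to the budget ϵ, which is the mechanism by which dual ascent trades attack effectiveness against statistical detectability. In the paper's setting both terms would be estimated from agent trajectories rather than closed-form toys.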