Exploring DNN Robustness Against Adversarial Attacks Using Approximate Multipliers
CoRR (2024)
Abstract
Deep Neural Networks (DNNs) have advanced in many real-world applications,
such as healthcare and autonomous driving. However, their high computational
complexity and vulnerability to adversarial attacks are ongoing challenges. In
this letter, approximate multipliers are used to explore improvements in DNN
robustness against adversarial attacks. By uniformly replacing accurate
multipliers with state-of-the-art approximate ones in DNN layer models, we
explore the DNNs' robustness against various adversarial attacks in a feasible
time. Results show an accuracy drop of up to 7% while improving robust
accuracy by up to 10%.
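The uniform-replacement strategy described above can be sketched in a few lines: every exact multiply inside a layer's dot products is swapped for an approximate one. The truncation-based `approx_mul` below is an illustrative stand-in of my own choosing, not the specific state-of-the-art hardware multipliers evaluated in the letter; the `drop_bits` parameter and both function names are assumptions for this sketch.

```python
import numpy as np

def approx_mul(a, b, drop_bits=4):
    """Illustrative approximate multiplier: zero the lowest `drop_bits`
    bits of each operand before multiplying. A generic truncation
    scheme, NOT the multipliers studied in the letter."""
    mask = ~((1 << drop_bits) - 1)
    return (a & mask) * (b & mask)

def approx_dense(x, w):
    """Fully connected layer in which every accurate multiply is
    uniformly replaced by the approximate one, mirroring the
    uniform-replacement idea in the abstract."""
    m, k = x.shape
    k2, n = w.shape
    assert k == k2
    out = np.zeros((m, n), dtype=np.int64)
    for i in range(m):
        for j in range(n):
            out[i, j] = sum(approx_mul(int(x[i, t]), int(w[t, j]))
                            for t in range(k))
    return out

# Exact vs. approximate output for a toy integer input
x = np.array([[17, 33]])
w = np.array([[16], [16]])
print(x @ w)               # exact:       [[800]]
print(approx_dense(x, w))  # approximate: [[768]]
```

The small, systematic error introduced by the approximate multiplies (768 vs. 800 here) is the kind of perturbation that can lower clean accuracy slightly while making adversarial gradients less reliable.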