Mind the Scaling Factors: Resilience Analysis of Quantized Adversarially Robust CNNs

2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)

Abstract
As more deep learning algorithms enter safety-critical application domains, the importance of analyzing their resilience against hardware faults cannot be overstated. Most existing works focus on bit-flips in memory, fewer focus on compute errors, and almost none study the effect of hardware faults on adversarially trained convolutional neural networks (CNNs). In this work, we show that adversarially trained CNNs are more susceptible to failure due to hardware errors than vanilla-trained models. We identify large differences between the quantization scaling factors of CNNs that are resilient to hardware faults and those that are not. As adversarially trained CNNs learn robustness against input attack perturbations, their internal weight and activation distributions open a backdoor for injecting large-magnitude hardware faults. We propose a simple weight decay remedy for adversarially trained models that maintains adversarial robustness and hardware resilience in the same CNN. We improve the fault resilience of an adversarially trained ResNet56 by 25% on large-scale bit-flip benchmarks on activation data, while slightly improving accuracy and adversarial robustness.
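The mechanism the abstract describes, that the same bit-flip translates into a proportionally larger real-valued error when the quantization scaling factor is larger, can be illustrated with a minimal sketch. The sketch below assumes uniform symmetric INT8 quantization; the `quantize` and `flip_bit` helpers, the activation value, the bit position, and the two scale values are illustrative assumptions, not the authors' fault-injection setup.

```python
import numpy as np

def quantize(x, scale, bits=8):
    # Uniform symmetric quantization: real value -> signed integer grid.
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.int8(np.clip(np.round(x / scale), qmin, qmax))

def flip_bit(q, bit):
    # Flip one bit of the two's-complement representation (a hardware fault).
    return np.int8(np.uint8(q) ^ np.uint8(1 << bit))

# Two hypothetical scaling factors: a small one, as in a vanilla-trained CNN,
# and a 10x larger one, as in an adversarially trained CNN whose weight and
# activation distributions are wider.
activation = 1.5
for scale in (0.02, 0.2):
    q = quantize(activation, scale)
    q_faulty = flip_bit(q, bit=6)                 # flip a high-order bit
    error = abs(int(q_faulty) - int(q)) * scale   # dequantized error magnitude
    print(f"scale={scale}: integer {q} -> {q_faulty}, real-valued error {error:.2f}")
```

Running this shows the identical bit-flip producing a dequantized error of 1.28 at the small scale but 12.8 at the large one, a 10x gap driven entirely by the scaling factor. This is consistent with the paper's weight decay remedy, which keeps the learned distributions, and hence the derived scaling factors, small.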
Keywords
resilience analysis, deep learning, bit-flips, adversarially trained convolutional neural networks, quantization scaling factors, hardware resilience, fault resilience, safety-critical application, adversarially robust CNN quantization, adversarially trained ResNet56, input attack perturbations