Detecting Adversarial Samples with Neuron Coverage
2021 IEEE International Conference on Computer Science, Artificial Intelligence and Electronic Engineering (CSAIEE), 2021
Abstract
Deep learning technologies have shown impressive performance in many areas. However, deep learning systems can be deceived by intentionally crafted inputs, that is, adversarial samples. This inherent vulnerability limits their application in safety-critical domains such as automatic driving and military applications. As a defense measure, various approaches have been proposed to det...
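The abstract is cut off before the method is described, but the neuron coverage metric named in the title is a standard notion: the fraction of a network's neurons whose (layer-wise scaled) activation exceeds a threshold for a given input. The sketch below is illustrative only and is not taken from this paper; the function name, the per-layer min-max scaling, and the default threshold of 0.5 are assumptions in the style of the DeepXplore coverage definition.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons whose scaled activation exceeds `threshold`.

    `activations` is a list of per-layer activation arrays for one input.
    Each layer is min-max scaled to [0, 1] before thresholding, following
    the DeepXplore-style coverage definition (an assumption here, not
    necessarily this paper's exact formulation).
    """
    fired, total = 0, 0
    for layer in activations:
        a = np.asarray(layer, dtype=float)
        lo, hi = a.min(), a.max()
        # A constant layer carries no ranking information; count it as unfired.
        scaled = (a - lo) / (hi - lo) if hi > lo else np.zeros_like(a)
        fired += int((scaled > threshold).sum())
        total += a.size
    return fired / total

# Example: one layer with activations [0, 1, 2, 3] scales to
# [0, 1/3, 2/3, 1]; two of four neurons exceed 0.5.
cov = neuron_coverage([[0.0, 1.0, 2.0, 3.0]])
```

A detector along these lines would compare coverage statistics of an input against those observed on benign data, flagging outliers as potential adversarial samples.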
Keywords
Deep learning, Measurement, Costs, Computational modeling, Neurons, Feature extraction, Security