Guessing Outputs of Dynamically Pruned CNNs Using Memory Access Patterns

IEEE Computer Architecture Letters (2021)

Abstract
Dynamic activation pruning of convolutional neural networks (CNNs) is a class of techniques that reduces both runtime and memory usage in CNN implementations by skipping unnecessary or low-impact computations in convolutional layers. However, since dynamic pruning produces a different sequence of memory accesses depending on the input to the CNN, these techniques potentially open the door to inference-phase side-channel attacks that may leak private data with each input. We demonstrate a memory-based attack that infers a dynamically pruned CNN’s outputs for various victim CNN models and datasets. We find that an attacker can train their own machine learning model to guess victim image classifications from the victim’s memory access patterns with significantly better than random-chance accuracy. Moreover, unlike previous related work, our attack: 1) continually leaks user data for each input, and 2) does not require adversarial presence during victim training.
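The abstract's core idea, that input-dependent memory access patterns under dynamic pruning can be used to predict the victim's output class, can be illustrated with a toy sketch. This is not the paper's actual pipeline; the access-count model, the noise level, and the nearest-centroid attack classifier are all illustrative assumptions.

```python
# Hypothetical sketch: an attacker observes per-layer memory-access
# counts that vary with the victim's input under dynamic activation
# pruning, then trains a simple nearest-centroid classifier to guess
# the victim's output class. All quantities below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_layers = 4, 8

# Assumption: each output class induces a characteristic mean
# access-count profile across layers, plus per-input noise.
class_profiles = rng.uniform(100, 1000, size=(n_classes, n_layers))

def observe(label, n):
    """Simulate n noisy memory-access traces for inputs of one class."""
    return class_profiles[label] + rng.normal(0, 30, size=(n, n_layers))

# Attacker's training set: traces collected on inputs with known labels.
X_train = np.vstack([observe(c, 50) for c in range(n_classes)])
y_train = np.repeat(np.arange(n_classes), 50)

# Nearest-centroid "attack model": one mean trace per class.
centroids = np.vstack([X_train[y_train == c].mean(axis=0)
                       for c in range(n_classes)])

def guess(trace):
    # Predict the class whose centroid is closest to the observed trace.
    return int(np.argmin(np.linalg.norm(centroids - trace, axis=1)))

# Evaluate on fresh victim traces; accuracy should beat 1/n_classes.
X_test = np.vstack([observe(c, 25) for c in range(n_classes)])
y_test = np.repeat(np.arange(n_classes), 25)
acc = float(np.mean([guess(t) == y for t, y in zip(X_test, y_test)]))
print(f"attack accuracy: {acc:.2f} (chance = {1/n_classes:.2f})")
```

Under these assumptions the attack is trivially easy because the synthetic class profiles are well separated; the paper's contribution is showing that real pruned-CNN access patterns carry enough signal for the same kind of inference.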
Keywords
Side-channel attacks, machine learning, artificial neural networks