Enabling Secure in-Memory Neural Network Computing by Sparse Fast Gradient Encryption

2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)

Cited by 25 | Viewed 55
Abstract
Neural network (NN) computing is energy-consuming on traditional computing systems, owing to the inherent memory-wall bottleneck of the von Neumann architecture and Moore's Law approaching its end. Non-volatile memories (NVMs) have been demonstrated as promising alternatives for constructing computing-in-memory (CiM) systems to accelerate NN computing. However, NVM-based NN computing systems are vulnerable to confidentiality attacks because the weight parameters persist in memory after the system is powered off, enabling an attacker with physical access to extract the well-trained NN models. The goal of this work is to find a solution for thwarting confidentiality attacks. We define and model the weight encryption problem, and then propose an effective framework, comprising a sparse fast gradient encryption (SFGE) method and a runtime encryption scheduling (RES) scheme, that guarantees the confidentiality of NN models with negligible performance overhead. Experiments demonstrate that by encrypting only an extremely small proportion of the weights (e.g., 20 weights per layer in ResNet-101), the NN models can be strictly protected.
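The SFGE idea described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch under our own assumptions (the function names, the per-layer count k, and the step size epsilon are illustrative, not the paper's implementation): a single backward pass identifies the weights each layer's loss is most sensitive to, a fast-gradient-style perturbation "encrypts" them, and the saved original values serve as the decryption key.

import torch

def select_and_encrypt(model, loss_fn, x, y, k=20, epsilon=0.5):
    # One backward pass locates the weights the loss is most sensitive to.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    key = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad.view(-1)
            idx = g.abs().topk(min(k, g.numel())).indices
            w = p.data.view(-1)
            key[name] = (idx, w[idx].clone())  # the decryption key
            # FGSM-style step: a tiny, sparse perturbation along the
            # gradient sign, chosen to maximally degrade accuracy.
            w[idx] += epsilon * g[idx].sign()
    return key

def decrypt(model, key):
    # Restore the original weights from the stored key.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in key:
                idx, original = key[name]
                p.data.view(-1)[idx] = original

In a deployment along the lines the abstract suggests, only the small key (indices plus original values) would need secure storage, while the perturbed weights could safely persist in NVM.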
Keywords
RES scheme,SFGE method,secure in-memory neural network computing,runtime encryption scheduling scheme,sparse fast gradient encryption method,weight encryption problem,NVM-based NN computing systems,computing-in-memory systems,nonvolatile memories,Moore's Law,von Neumann architecture,memory wall bottleneck