Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware

2021 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)

Abstract
Stealing trained machine learning (ML) models is a new and growing concern due to the model's development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows a new style of attack, for the first time, on ML models running on embedded devices by abusing the scan-chain infrastructure. We i...
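The attack vector named in the abstract is the scan-chain test infrastructure, which serializes internal flip-flop state through a dedicated test port. As general background only (this sketch is not the paper's specific attack; the `scan_out` function, the register contents, and the "weight" interpretation are all hypothetical), shifting the chain exposes one internal bit per clock cycle:

```python
# Minimal illustration of scan-chain readout (hypothetical, not the paper's
# attack): a scan chain daisy-chains the design's flip-flops, so an attacker
# with test-port access can serially shift out registers that are normally
# hidden from the functional interface.

def scan_out(flip_flops):
    """Shift the chain once per clock; each shift exposes one bit at scan-out."""
    bits = []
    chain = list(flip_flops)
    for _ in range(len(chain)):
        bits.append(chain[-1])      # bit at the end of the chain appears on scan-out
        chain = [0] + chain[:-1]    # remaining bits move one position toward the output
    return bits

# Suppose an 8-bit register holds a (hypothetical) quantized NN weight.
weight = 0b10110101
flip_flops = [(weight >> i) & 1 for i in range(7, -1, -1)]  # MSB sits first in the chain

observed = scan_out(flip_flops)                  # bits arrive LSB-first at scan-out
recovered = int("".join(map(str, reversed(observed))), 2)
```

Here `recovered` equals `weight`: the serial bitstream, reversed, reconstructs the register exactly, which is why unprotected scan access leaks any on-chip state.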
Keywords
Design automation,Network topology,Perturbation methods,Neurons,Machine learning,Mathematical models,Hardware