ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via Synthesis Tuning

DAC (2023)

Abstract
Oracle-less machine learning (ML) attacks have broken various logic locking schemes. Regular synthesis, which is tailored for area-power-delay optimization, yields netlists in which key-gate localities are vulnerable to learning. We therefore call for security-aware logic synthesis. We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe generator, employing adversarially trained models that can predict state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracies drop to around 50% for ALMOST-synthesized circuits, all without undermining design optimization.
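The abstract describes a simulated-annealing search over synthesis recipes, guided by a model that predicts attack accuracy. The sketch below illustrates that idea under stated assumptions: the transform names, the `predict_attack_accuracy` surrogate (here a toy stand-in for the paper's adversarially trained predictor), and all parameter values are hypothetical, not taken from the paper. The objective drives predicted attack accuracy toward 50%, i.e. a coin flip.

```python
import math
import random

# Hypothetical synthesis transforms (ABC-style names, chosen for illustration).
TRANSFORMS = ["rewrite", "refactor", "balance", "resub", "rewrite -z"]

def predict_attack_accuracy(recipe):
    """Stand-in for the adversarially trained predictor.
    Returns a value in [0.5, 1.0]; the real system would use an ML regressor
    trained on attack outcomes over recipes and key-gate localities."""
    return 0.5 + 0.5 * abs(hash(tuple(recipe)) % 100 - 50) / 50

def anneal(steps=1000, recipe_len=8, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing over synthesis recipes: minimize the distance of
    the predicted attack accuracy from 50% (no better than random guessing)."""
    rng = random.Random(seed)
    recipe = [rng.choice(TRANSFORMS) for _ in range(recipe_len)]
    best, best_score = list(recipe), predict_attack_accuracy(recipe)
    temp = t0
    for _ in range(steps):
        # Propose a neighbor: swap one transform in the recipe.
        cand = list(recipe)
        cand[rng.randrange(recipe_len)] = rng.choice(TRANSFORMS)
        cur = abs(predict_attack_accuracy(recipe) - 0.5)
        new = abs(predict_attack_accuracy(cand) - 0.5)
        # Accept improvements always; accept worse moves with
        # temperature-dependent probability (Metropolis criterion).
        if new < cur or rng.random() < math.exp((cur - new) / max(temp, 1e-9)):
            recipe = cand
        score = predict_attack_accuracy(recipe)
        if abs(score - 0.5) < abs(best_score - 0.5):
            best, best_score = list(recipe), score
        temp *= cooling
    return best, best_score
```

In the actual framework the accepted recipe would be handed to the synthesis tool and the resulting netlist re-evaluated; the surrogate model simply makes the search loop cheap enough to explore wide recipe ranges.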
Keywords
adversarial learning,attacks,synthesis tuning,oracle-less