Towards Verification-Aware Knowledge Distillation for Neural-Network Controlled Systems: Invited Paper

2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019

Cited by 16
Abstract
Neural networks are widely used in many applications ranging from classification to control. While these networks are composed of simple arithmetic operations, they are challenging to formally verify for properties such as reachability due to the presence of nonlinear activation functions. In this paper, we make the observation that Lipschitz continuity of a neural network not only can play a major role in the construction of reachable sets for neural-network controlled systems but also can be systematically controlled during training of the neural network. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new and easier-to-verify network. Experimental results show that our method can substantially improve reachability analysis of neural-network controlled systems for several state-of-the-art tools.
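The abstract does not give the paper's bound, but a standard way to upper-bound the Lipschitz constant of a feedforward network with 1-Lipschitz activations (such as ReLU) is the product of the spectral norms of its weight matrices; penalizing or clipping these norms during training is one common way such a constant can be "systematically controlled". The sketch below is an illustrative assumption, not the paper's method:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper-bound the Lipschitz constant of a feedforward network.

    For 1-Lipschitz activations (e.g. ReLU), the product of the
    spectral norms (largest singular values) of the layer weight
    matrices upper-bounds the end-to-end Lipschitz constant.
    This is a generic bound, not the specific one used in the paper.
    """
    bound = 1.0
    for W in weights:
        # spectral norm of W = its largest singular value
        bound *= np.linalg.svd(W, compute_uv=False)[0]
    return bound

# Toy two-layer example: scaled identity weights with norms 2 and 3
W1 = 2.0 * np.eye(4)
W2 = 3.0 * np.eye(4)
print(lipschitz_upper_bound([W1, W2]))  # 6.0
```

A distillation framework like the one described could add such a bound as a regularization term on the student network, trading a small accuracy loss for a network that reachability tools handle more easily.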
Keywords
trained network, easier-to-verify network, verification-aware knowledge distillation framework, neural-network controlled systems, reachability analysis, Lipschitz continuity