Safe Design of Stable Neural Networks for Fault Detection in Small UAVs

Computer Safety, Reliability, and Security. SAFECOMP 2022 Workshops (2022)

Abstract
Stability of a machine learning model is the extent to which the model can continue to operate correctly despite small perturbations in its inputs. A formal way to measure stability is the Lipschitz constant of the model, which makes it possible to evaluate how small perturbations in the inputs affect the variations of the outputs. Variations in the outputs may lead to large errors in regression tasks or unintended class changes in classification tasks. Verifying the stability of ML models is crucial in many industrial domains such as aeronautics, space, and automotive. It has been recognized that data-driven models are intrinsically highly sensitive to small perturbations of their inputs. Therefore, designing methods for verifying the stability of ML models is important for manufacturers developing safety-critical products. In this work, we focus on Small Unmanned Aerial Vehicles (UAVs), which are at the forefront of new technology solutions for intelligent systems. However, real-time fault detection/diagnosis in such UAVs remains a challenge, from data collection to prediction tasks. This work presents an application of neural networks to detect elevon positioning faults in real time. We show the efficiency of a formal method based on the Lipschitz constant for quantifying the stability of neural network models. We also present how this method can be coupled with spectral normalization constraints at the design phase to control the internal parameters of the model and make it more stable while keeping a high level of performance (accuracy-stability trade-off).
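A minimal sketch of the two ideas summarized above, not the authors' implementation: for a feed-forward network with 1-Lipschitz activations (e.g. ReLU), the product of the spectral norms of the weight matrices gives an upper bound on the network's Lipschitz constant, and spectral normalization applied at design time keeps that bound small. The layer sizes, class count, and model names below are hypothetical, assuming a small fully connected fault-detection network on tabular inputs.

```python
import torch
import torch.nn as nn

def lipschitz_upper_bound(model: nn.Sequential) -> float:
    """Product of the spectral norms of the Linear layers' weights.

    For a feed-forward net with 1-Lipschitz activations, this product is a
    (generally loose) upper bound on the network's Lipschitz constant.
    """
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Largest singular value of the weight matrix (L2 operator norm).
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

# Unconstrained model: stability can only be quantified a posteriori.
plain_model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),  # e.g. nominal vs. elevon positioning fault
)

# Design-time control: spectral normalization keeps each weight matrix's
# spectral norm close to 1, so the product bound also stays close to 1.
constrained_model = nn.Sequential(
    nn.utils.parametrizations.spectral_norm(nn.Linear(20, 64)), nn.ReLU(),
    nn.utils.parametrizations.spectral_norm(nn.Linear(64, 64)), nn.ReLU(),
    nn.utils.parametrizations.spectral_norm(nn.Linear(64, 2)),
)

print("unconstrained bound:", lipschitz_upper_bound(plain_model))
print("constrained bound:  ", lipschitz_upper_bound(constrained_model))
```

The bound computed this way is conservative; the accuracy-stability trade-off mentioned in the abstract comes from tuning how strictly the spectral norms are constrained during training.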
Keywords
Safety, Stability, Lipschitz constant, Verification, Machine learning, Neural networks, UAV, Tabular data, Adversarial attacks