
Neural Dynamics for Improving Optimiser in Deep Learning with Noise Considered

CAAI Transactions on Intelligence Technology (2023)

Abstract
As deep learning evolves, neural network structures become increasingly sophisticated, bringing a series of new optimisation challenges. For example, deep neural networks (DNNs) are vulnerable to a variety of attacks. Training neural networks under privacy constraints is one way to alleviate privacy leakage, and a common approach is to add noise to the gradient. However, existing optimisers converge weakly in the presence of the increased noise during training, which results in low optimiser robustness. To stabilise and improve the convergence of DNNs, the authors propose a neural dynamics (ND) optimiser, inspired by the zeroing neural dynamics that originate from zeroing neural networks. The authors first analyse the relationship between DNNs and control systems, and then construct the ND optimiser to update the network parameters. Moreover, the proposed ND optimiser alleviates the non-convergence that can arise when noise is added to the gradient in different scenarios. Furthermore, experiments are conducted on different neural network structures, including ResNet18, ResNet34, Inception-v3, MobileNet, and a long short-term memory (LSTM) network. Comparative results on the CIFAR, YouTube Faces, and R8 datasets demonstrate that the ND optimiser improves the accuracy and stability of DNNs under both noise-free and noise-polluted conditions. The source code is publicly available at https://github.com/LongJin-lab/ND.
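The abstract does not state the ND update rule itself, so the following is only a loose toy sketch of the two ingredients it mentions: Gaussian noise added to the gradient (as in privacy-preserving training) and a zeroing-neural-dynamics-style update that drives an error function exponentially towards zero. Everything below (the quadratic problem, `noisy_grad`, `znd_step`, and all parameter values) is a hypothetical stand-in for illustration, not the authors' method; their actual implementation is in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))   # toy design matrix (assumption)
w_true = rng.normal(size=5)    # toy ground-truth parameters (assumption)
b = A @ w_true                 # consistent system, so the minimum loss is 0

def loss(w):
    # Toy quadratic loss L(w) = 0.5 * ||A w - b||^2.
    r = A @ w - b
    return 0.5 * r @ r

def noisy_grad(w, noise_std=0.1):
    # Privacy-style perturbation: true gradient plus Gaussian noise.
    return A.T @ (A @ w - b) + noise_std * rng.normal(size=w.shape)

def sgd_step(w, lr=0.01, noise_std=0.1):
    # Baseline gradient-descent step on the noisy gradient.
    return w - lr * noisy_grad(w, noise_std)

def znd_step(w, gamma=2.0, dt=0.05, noise_std=0.1):
    # ZND-flavoured step (a sketch, NOT the authors' ND update rule):
    # take the loss as the zeroing error e(t) = L(w(t)) and impose the
    # zeroing dynamic de/dt = -gamma * e. The chain rule gives
    # grad . dw/dt = -gamma * L, whose minimum-norm solution is
    # dw/dt = -gamma * L * grad / ||grad||^2; an explicit Euler step
    # with step size dt discretises that flow.
    g = noisy_grad(w, noise_std)
    return w - dt * gamma * loss(w) * g / (g @ g + 1e-12)

w_sgd = np.zeros(5)
w_znd = np.zeros(5)
for _ in range(200):
    w_sgd = sgd_step(w_sgd)
    w_znd = znd_step(w_znd)
print(f"final loss, noisy SGD:  {loss(w_sgd):.6f}")
print(f"final loss, ZND sketch: {loss(w_znd):.6f}")
```

In the noise-free continuous limit the imposed dynamic gives L(t) = L(0) * exp(-gamma * t), which is the exponential convergence property zeroing neural dynamics are known for; under gradient noise the update's magnitude scales with the current loss, which is one intuition for the improved stability the paper reports.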
Keywords
deep learning,deep neural networks