# Almost Surely Stable Deep Dynamics

NeurIPS 2020

Abstract

We introduce a method for learning provably stable deep neural network based dynamic models from observed data. Specifically, we consider discrete-time stochastic dynamic models, as they are of particular interest in practical applications such as estimation and control. However, these aspects exacerbate the challenge of guaranteeing stability…


Introduction

- Stability is a critical requirement in the design of physical systems. White-box models based on first principles can explicitly account for stability in their design.
- Deep neural networks (DNNs) are flexible function approximators, well suited for modeling complicated dynamics.
- Their black-box design makes both physical interpretation and stability analysis challenging.
- This paper focuses on the construction of provably stable DNN-based dynamic models.
- These models are amenable to standard deep learning architectures and training practices, while retaining the asymptotic behavior of the underlying dynamics.

Highlights

- Stability is a critical requirement in the design of physical systems
- This paper focuses on the construction of provably stable deep neural network (DNN)-based dynamic models
- The examples here use low-dimensional state spaces for convenient visualization; the method is not restricted to this setting, since the dynamic model is based on DNNs and can handle any state dimension
- We have developed a framework for constructing neural network dynamic models with provable global stability guarantees
- We showed how convexity can be exploited to give a closed-form stable dynamic model, and extended this approach to implicitly defined stable models

Methods

- The code for the methods is available here: https://github.com/NPLawrence/stochastic_dynamics.
- Further details about the experiments and models can be found in Appendix D.
- The authors give an example dealing with a chaotic system in Appendix C.
- The examples here use low-dimensional state spaces for convenient visualization; the authors note that the method is not restricted to this setting, since the dynamic model is based on DNNs and can handle any state dimension
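
The core idea of the method is to make an arbitrary learned network stable by construction. The sketch below illustrates that projection idea under simplifying assumptions: a fixed quadratic Lyapunov function V(x) = ||x||² and a hypothetical nominal network (`nominal_dynamics` is illustrative; the paper itself learns a convex Lyapunov function and treats stochastic dynamics):

```python
import numpy as np

def nominal_dynamics(x):
    # Hypothetical stand-in for a learned DNN f_hat; on its own it need
    # not be stable (the linear part has spectral radius > 1).
    W = np.array([[1.2, 0.3], [-0.4, 1.1]])
    return 2.0 * np.tanh(W @ x)

def stable_step(x, beta=0.9):
    """Project the nominal prediction so that V(x) = ||x||^2 decays:
    V(x_next) <= beta * V(x) for every x, giving geometric convergence."""
    y = nominal_dynamics(x)
    vx, vy = float(x @ x), float(y @ y)
    if vy <= beta * vx:
        return y  # nominal prediction already satisfies the decrease condition
    # Rescale y so that ||s * y||^2 = beta * ||x||^2.
    s = np.sqrt(beta * vx / vy)
    return s * y

x = np.array([3.0, -2.0])
for _ in range(200):
    x = stable_step(x)
# After many steps the state has contracted toward the origin.
print(np.linalg.norm(x))
```

Because V decreases by a factor beta at every step regardless of what the nominal network outputs, stability holds globally by construction, not as a property verified after training.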

Results

- Results are shown in Figure 3 and correspond to the matrix A in Eq. (16).
- In the first experiment, the authors use training data from the system (16) in which there is no noise.
- The MDN gives very small variance in its predictions.
- The predicted mean refers to the dynamics defined by feeding the means back through the MDN as 'states'.
- The two plots show predictions corresponding to the system (16) with B = 0.1, where the last plot uses implicit dynamics.
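
For context, a mixture density network (MDN) head maps a raw network output to mixture weights, means, and standard deviations; the "predicted mean" above is the mixture mean fed back through the model. A minimal one-dimensional sketch (illustrative only, not the paper's architecture):

```python
import numpy as np

def mdn_params(raw, k):
    """Split a raw network output vector of length 3k into the parameters
    of a k-component 1-D Gaussian mixture (a minimal sketch)."""
    logits, mu, log_sigma = raw[:k], raw[k:2 * k], raw[2 * k:3 * k]
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()                # softmax -> mixture weights, sum to 1
    sigma = np.exp(log_sigma)     # positivity of std devs via exp
    return pi, mu, sigma

def mdn_mean(pi, mu):
    # The mixture mean: this is what gets fed back as the next 'state'.
    return float(pi @ mu)

raw = np.array([0.0, 0.0, 1.0, -1.0, 0.1, 0.1])  # toy output, k = 2
pi, mu, sigma = mdn_params(raw, 2)
print(mdn_mean(pi, mu))  # -> 0.0 (equal weights, symmetric means)
```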

Conclusion

- The authors have developed a framework for constructing neural network dynamic models with provable global stability guarantees.
- The authors showed how convexity can be exploited to give a closed-form stable dynamic model, and extended this approach to implicitly defined stable models.
- The latter case can be reduced to a one-dimensional root-finding problem, making a robust and cheap implementation straightforward.
- Interesting avenues for future work include applications to control and reinforcement learning.
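
The one-dimensional root-finding problem mentioned above can be solved robustly with plain bisection. A generic sketch (the scalar equation `g` here is a hypothetical stand-in, not the paper's actual implicit-model equation):

```python
def bisect(g, lo, hi, tol=1e-10, max_iter=200):
    """Find a root of g on [lo, hi], assuming g(lo) and g(hi) differ in sign.
    Bisection halves the bracket each iteration, so it is cheap and cannot
    diverge -- hence 'a robust and cheap implementation'."""
    glo = g(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        gmid = g(mid)
        if abs(gmid) < tol or hi - lo < tol:
            return mid
        if (glo < 0) == (gmid < 0):
            lo, glo = mid, gmid   # root lies in the upper half
        else:
            hi = mid              # root lies in the lower half
    return 0.5 * (lo + hi)

# Toy example: solve s^3 + s - 1 = 0 on [0, 1].
root = bisect(lambda s: s**3 + s - 1.0, 0.0, 1.0)
print(round(root, 6))  # ≈ 0.682328
```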

Objectives

- The authors' goal is to construct a DNN representation of f with global stability guarantees about the origin.

Related work

- Our work is most similar in spirit to that of Manek and Kolter [30]. However, their approach targets deterministic, continuous-time systems, whereas this paper is concerned with learning from noisy discrete measurements x_t, x_{t+1}, … (rather than observations of the functions x(·) and ẋ(·)). Discrete-time systems with stochastic elements require completely different analysis.
- Lyapunov stability theory has been deployed in several other recent machine learning and reinforcement learning works. Richards et al. [35] introduce a general neural network structure for representing Lyapunov functions; the approach is used to estimate the largest region of attraction for a fixed deterministic, discrete-time system.
- Umlauft and Hirche [39] consider the stability of nonlinear stochastic models under certain state transition distributions, but their approach is constrained to provably stable stochastic dynamics under a quadratic Lyapunov function.
- Khansari-Zadeh and Billard [25] consider Gaussian mixture models for learning continuous-time dynamical systems but only enforce stability of the means.
- Wang et al. [40] develop dynamical models in which the latent dynamics and observations follow Gaussian processes; stability analysis is later given by Beckers and Hirche [5, 6].
- In reinforcement learning, [7, 17, 13] utilize Lyapunov stability to perform safe policy updates within an estimated region of attraction.

Funding

- We gratefully acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and Honeywell Connected Plant.

References

- Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and J Zico Kolter. Differentiable convex optimization layers. In Advances in Neural Information Processing Systems, pages 9558–9570, 2019.
- Brandon Amos and J Zico Kolter. OptNet: Differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 136–145. JMLR.org, 2017.
- Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 146–155. JMLR.org, 2017.
- Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models. In Advances in Neural Information Processing Systems, pages 688–699, 2019.
- Thomas Beckers and Sandra Hirche. Equilibrium distributions and stability analysis of Gaussian process state space models. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 6355–6361. IEEE, 2016.
- Thomas Beckers and Sandra Hirche. Stability of Gaussian process state space models. In 2016 European Control Conference (ECC), pages 2275–2281. IEEE, 2016.
- Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. In Advances in neural information processing systems, pages 908–918, 2017.
- Christopher M Bishop. Mixture density networks. Technical report, Aston University, 1994.
- Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
- Nicoletta Bof, Ruggero Carli, and Luca Schenato. Lyapunov theory for discrete time systems. arXiv preprint arXiv:1809.05289, 2018.
- Fabio Bonassi, Enrico Terzi, Marcello Farina, and Riccardo Scattolini. LSTM neural networks: Input to state stability and probabilistic safety verification. arXiv preprint arXiv:1912.04377, 2019.
- Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Ya-Chien Chang, Nima Roohi, and Sicun Gao. Neural Lyapunov control. In Advances in Neural Information Processing Systems, pages 3240–3249, 2019.
- Tian Qi Chen and David K Duvenaud. Neural networks with cheap differential operators. In Advances in Neural Information Processing Systems, pages 9961–9971, 2019.
- Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in neural information processing systems, pages 6571–6583, 2018.
- Yize Chen, Yuanyuan Shi, and Baosen Zhang. Optimal control via neural networks: A convex approach. arXiv preprint arXiv:1805.11835, 2018.
- Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A Lyapunov-based approach to safe reinforcement learning. In Advances in neural information processing systems, pages 8092–8101, 2018.
- Laurent El Ghaoui, Fangda Gu, Bertrand Travacca, and Armin Askari. Implicit deep learning. arXiv preprint arXiv:1908.06315, 2019.
- Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
- David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
- Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1): 014004, 2017.
- Moritz Hardt, Tengyu Ma, and Benjamin Recht. Gradient descent learns linear dynamical systems. The Journal of Machine Learning Research, 19(1):1025–1068, 2018.
- R Kalman and J Bertram. Control system analysis and design via the second method of Lyapunov: (i) continuous-time systems, (ii) discrete-time systems. IRE Transactions on Automatic Control, 4(3):112–112, 1959.
- Hassan K Khalil. Nonlinear systems. Prentice-Hall, 2002.
- S Mohammad Khansari-Zadeh and Aude Billard. Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Transactions on Robotics, 27(5):943–957, 2011.
- Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Frank Kozin. A survey of stability of stochastic systems. Automatica, 5(1):95–112, 1969.
- Harold J Kushner. On the stability of stochastic dynamical systems. Proceedings of the National Academy of Sciences of the United States of America, 53(1):8, 1965.
- Harold J Kushner. A partial history of the early development of continuous-time nonlinear stochastic systems theory. Automatica, 50(2):303–334, 2014.
- Gaurav Manek and J Zico Kolter. Learning stable deep dynamics models. In Advances in Neural Information Processing Systems, pages 11126–11134, 2019.
- John Miller and Moritz Hardt. Stable recurrent models. arXiv preprint arXiv:1805.10369, 2018.
- Samet Oymak. Stochastic gradient descent learns state equations with nonlinear activations. arXiv preprint arXiv:1809.03019, 2018.
- Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019.
- Yuzhen Qin, Ming Cao, and Brian DO Anderson. Lyapunov criterion for stochastic systems and its applications in distributed computation. IEEE Transactions on Automatic Control, 2019.
- Spencer M Richards, Felix Berkenkamp, and Andreas Krause. The Lyapunov neural network: Adaptive stability certification for safe learning of dynamic systems. arXiv preprint arXiv:1808.00924, 2018.
- AJ Roberts. Modify the improved Euler scheme to integrate stochastic differential equations. arXiv preprint arXiv:1210.0933, 2012.
- Walter Rudin. Principles of mathematical analysis, volume 3. McGraw-Hill, New York, 1964.
- Charlie Tang and Russ R Salakhutdinov. Learning stochastic feedforward neural networks. In Advances in Neural Information Processing Systems, pages 530–538, 2013.
- Jonas Umlauft and Sandra Hirche. Learning stable stochastic nonlinear dynamical systems. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3502–3510. JMLR.org, 2017.
- Jack Wang, Aaron Hertzmann, and David J Fleet. Gaussian process dynamical models. In Advances in neural information processing systems, pages 1441–1448, 2006.
- Juliang Yin, Deng Ding, Zhi Liu, and Suiyang Khoo. Some properties of finite-time stable stochastic nonlinear systems. Applied Mathematics and Computation, 259:686–697, 2015.
- Heiga Zen and Andrew Senior. Deep mixture density networks for acoustic modeling in statistical parametric speech synthesis. In 2014 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 3844–3848. IEEE, 2014.
- Qianggong Zhang, Yanyang Gu, Michalkiewicz Mateusz, Mahsa Baktashmotlagh, and Anders Eriksson. Implicitly defined layers in neural networks. arXiv preprint arXiv:2003.01822, 2020.
